At Sketch we’re always working on improving the speed and responsiveness of our platform. For example, we recently talked about speed improvements to the web app’s Inspector. But the most visible part of this is rendering on the Canvas in the Mac app — turning Sketch layers into pixels on your screen in a fast and accurate way. This pipeline is something we pay a lot of attention to and we’re continuously improving.
These changes typically roll out progressively over multiple updates, so there’s rarely a single release where performance improves dramatically. But go back a few updates and you’d definitely notice the difference.
So today we’re going to do just that — compare Sketch today to versions from a few months ago, to see how far we’ve come with our rendering performance.
But first, some quick background.
Turning layers into rendering commands
Our rendering stack has lots of moving parts. So for this post, we’re going to focus on one rendering subsystem — the part that iterates over a Sketch document tree and turns it into CoreGraphics primitive calls. Essentially, this takes your tree of layers and converts it into calls like ‘fill this path’ or ‘draw that stroke’.
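To make that concrete, here’s a minimal sketch in Swift of what such a traversal looks like. The types are simplified stand-ins we’ve made up for illustration; the real document tree is far richer:

```swift
import CoreGraphics

// A simplified stand-in for a document layer; the real tree is far richer.
enum Layer {
    case shape(path: CGPath, fill: CGColor)
    case group(children: [Layer])
}

// Walk the tree, turning each layer into CoreGraphics primitive calls.
func render(_ layer: Layer, into context: CGContext) {
    switch layer {
    case .shape(let path, let fill):
        context.addPath(path)       // "fill this path"
        context.setFillColor(fill)
        context.fillPath()
    case .group(let children):
        for child in children {
            render(child, into: context)
        }
    }
}
```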
When we originally wrote this subsystem in 2010, we used Objective-C. Now we decided the moment had come to take a step back, modernize, and see what we could improve — without compromising on rendering quality.
Switching from Objective-C to Swift
The first thing we did was rewrite the code in Swift. This is not something we did on a whim — it’s typically a bad idea to rewrite code that works just for the sake of it. You risk introducing subtle new bugs in the process. But this time we had a few good reasons for the change.
Firstly, Swift’s interface for communicating with CoreGraphics is a lot nicer, and we thought Swift’s more powerful type system could come in handy. And of course, it’s also the preferred language for Apple’s platforms. Finally, rewriting the code also forced us to really revisit every class and line, which helped us make many other small improvements.
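As a quick taste of the difference: in Objective-C, CoreGraphics is a set of free functions that take the context as their first argument, while in Swift the same operations read as ordinary method calls:

```swift
import CoreGraphics

func drawOutline(of path: CGPath, in context: CGContext) {
    // Objective-C equivalent: CGContextAddPath(ctx, path);
    //                         CGContextSetLineWidth(ctx, 2);
    //                         CGContextStrokePath(ctx);
    context.addPath(path)
    context.setLineWidth(2)
    context.setStrokeColor(red: 0, green: 0, blue: 0, alpha: 1)
    context.strokePath()
}
```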
Simplifying a complex process
Turning layers into rendering commands seems simple at first. When you encounter a shape, draw that shape, maybe add a border and a shadow, and then move on to the next shape. But it’s more complicated than that.
In some cases — such as when using masks — you have to look ahead to see what comes next, or remember what came before. And for individual layers, drawing fills and borders naively can lead to unwanted aliasing effects.
There are other examples, but they all have one thing in common — they complicate and slow down what would otherwise be a series of simple commands. All of this meant we had to take a more complex route.
The tricky part? We don’t always need to use this complex route — sometimes we can take shortcuts which result in much better performance. The real challenge is knowing when to take each route.
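Masks are a good example of why. In Sketch, a mask affects the sibling layers that come after it, so the renderer can’t treat each layer in isolation. Here’s a rough sketch of that idea in Swift, using hypothetical types of our own:

```swift
import CoreGraphics

// Hypothetical types for illustration; not Sketch's actual classes.
struct SiblingLayer {
    var path: CGPath
    var fill: CGColor
    var isMask: Bool
}

// Once a mask appears, every later sibling draws clipped to its path,
// so the renderer has to carry that state forward as it iterates.
func draw(siblings: [SiblingLayer], in context: CGContext) {
    context.saveGState()
    for layer in siblings {
        if layer.isMask {
            context.addPath(layer.path)
            context.clip()
        } else {
            context.addPath(layer.path)
            context.setFillColor(layer.fill)
            context.fillPath()
        }
    }
    context.restoreGState()
}
```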
Improving off-screen bitmaps
After rewriting the code in Swift, we started looking at these rendering complications and working out how to deal with them effectively. We suspected we could win a lot of performance here, because the Mac app had incomplete information at the time of drawing — so it was doing more work than it strictly needed to. We were particularly interested in how the app used off-screen bitmaps.
The Mac app uses off-screen bitmaps a lot during rendering, particularly when it needs to composite multiple drawings together first. This happens, for example, when you use an opacity value below 100%, apply a mask, or add an effect like a blur.
When the app uses an off-screen bitmap, it has to allocate memory for it, draw into it, draw the resulting bitmap into its final destination, then clean up after itself. As you can imagine, these steps incur a slight cost for handling those pixels, compared to just drawing output directly into the destination.
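Spelled out in code, those steps look roughly like this. This is just a sketch using an explicit bitmap context, not the app’s actual implementation:

```swift
import CoreGraphics

// The four steps of an off-screen pass, spelled out.
func drawViaOffscreenBitmap(into destination: CGContext,
                            size: CGSize,
                            content: (CGContext) -> Void) {
    // 1. Allocate memory for the bitmap.
    guard let offscreen = CGContext(data: nil,
                                    width: Int(size.width),
                                    height: Int(size.height),
                                    bitsPerComponent: 8,
                                    bytesPerRow: 0,
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return }

    // 2. Draw into it.
    content(offscreen)

    // 3. Draw the resulting bitmap into its final destination.
    if let image = offscreen.makeImage() {
        destination.draw(image, in: CGRect(origin: .zero, size: size))
    }

    // 4. Clean up: the bitmap's memory is released along with `offscreen`.
}
```

Skipping the allocation, the extra compositing and the cleanup whenever we can is where the savings in the rest of this post come from.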
Off-screen bitmaps in action
Let’s take a specific example that commonly calls for an off-screen bitmap — opacity shared across multiple layers:
If you’re experienced with Sketch, you’ll know how to achieve each of the two results above. The first is two squares, each with its opacity set to 50%. The second is a group containing two squares, with the group’s opacity set to 50%.
Now let’s translate these to drawing primitives. In the first example, we draw two semi-transparent squares, one after the other, so the first stays visible through the second. In the second example, both squares are first drawn fully opaque into an off-screen bitmap, which is then composited as one image, with the opacity applied across the entire result.
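In CoreGraphics terms, the two cases look roughly like this. A transparency layer stands in for the off-screen bitmap here, and the colors and rectangles are made up for the example:

```swift
import CoreGraphics

// Case 1: two squares, each with its own 50% opacity.
// The first square stays visible where the second overlaps it,
// so the overlap reads darker.
func drawSeparateOpacities(in context: CGContext) {
    context.setFillColor(red: 0, green: 0, blue: 0, alpha: 0.5)
    context.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
    context.fill(CGRect(x: 60, y: 0, width: 100, height: 100))
}

// Case 2: a group at 50% opacity. Both squares draw fully opaque into an
// off-screen buffer, which then composites as one image at 50% opacity.
func drawGroupOpacity(in context: CGContext) {
    context.saveGState()
    context.setAlpha(0.5)
    context.beginTransparencyLayer(auxiliaryInfo: nil)
    context.setFillColor(red: 0, green: 0, blue: 0, alpha: 1)
    context.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
    context.fill(CGRect(x: 60, y: 0, width: 100, height: 100))
    context.endTransparencyLayer()
    context.restoreGState()
}
```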
If the group contained only a single square, though, the off-screen bitmap route would be pointless. The result would look no different, but performance would take a hit if we approached it in this naive way. So for optimum performance, we have to look ahead and make sure there’s more than one layer in the group before we take the off-screen route.
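Once that look-ahead information is at hand, the check itself is tiny. Something like this, with made-up names:

```swift
import CoreGraphics

// Only pay for the off-screen pass when the group actually composites
// more than one child; otherwise fold the group opacity into the fill.
func drawGroup(of squares: [CGRect], opacity: CGFloat, in context: CGContext) {
    if squares.count == 1 {
        // Shortcut: the result is identical, with no off-screen bitmap.
        context.setFillColor(red: 0, green: 0, blue: 0, alpha: opacity)
        context.fill(squares[0])
        return
    }
    context.saveGState()
    context.setAlpha(opacity)
    context.beginTransparencyLayer(auxiliaryInfo: nil)
    context.setFillColor(red: 0, green: 0, blue: 0, alpha: 1)
    squares.forEach { context.fill($0) }
    context.endTransparencyLayer()
    context.restoreGState()
}
```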
Modeling complications
This example is simple — but the real challenge is having enough information to figure out beforehand whether we really need an off-screen bitmap, or if the direct route is possible.
We decided to invest some time in building a data structure to capture all these details. Building an additional tree is expensive, but we thought the cost would be worth it if it could help us optimize our rendering — such as by skipping unnecessary off-screen bitmaps.
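As a rough idea of the kind of information such a tree could capture (the types and the exact rules here are ours, for illustration only):

```swift
import CoreGraphics

// Hypothetical input and output types, for illustration only.
struct DocumentLayer {
    var opacity: CGFloat
    var hasMask: Bool
    var hasBlur: Bool
    var children: [DocumentLayer]
}

struct RenderNode {
    var needsOffscreenPass: Bool
    var children: [RenderNode]
}

// One bottom-up pass over the document tree, so that every look-ahead
// question becomes a cheap flag check by the time we draw.
func plan(_ layer: DocumentLayer) -> RenderNode {
    let children = layer.children.map(plan)
    let compositesMultipleDrawings = children.count > 1
    let needsOffscreen = layer.hasMask
        || layer.hasBlur
        || (layer.opacity < 1 && compositesMultipleDrawings)
    return RenderNode(needsOffscreenPass: needsOffscreen, children: children)
}
```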
We started with small increments. In Sketch 72, we used the new tree for the individual layers we were rendering. In 73, we extended it to entire groups. And in 74, we built the tree for the entire part of the document that needs rendering, in a single pass. Each step unlocked further optimizations — and by the time Sketch 75 rolled around, we had enough information to skip expensive paths wherever possible.
Going back to the example above — we know designers have plenty of valid reasons for putting individual layers in semi-transparent groups, rather than applying opacity to the shape directly. In Symbols, for example, you may make the Symbol Source a black glyph, then give the instances a specific tint. And now, the code is smart enough to look inside a Symbol and detect if we can render the shape directly, with the correct fill, and bypass off-screen bitmaps altogether.
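In spirit, that detection could look something like this. The types are hypothetical, and the real Symbol pipeline involves much more:

```swift
import CoreGraphics

// A hypothetical model of a Symbol instance with a tint override.
struct SymbolInstance {
    var sourceShapes: [CGPath]
    var tint: CGColor?
}

func draw(_ instance: SymbolInstance, in context: CGContext) {
    // Fast path: a single shape plus a tint collapses to one direct fill.
    if instance.sourceShapes.count == 1, let tint = instance.tint {
        context.addPath(instance.sourceShapes[0])
        context.setFillColor(tint)
        context.fillPath()
        return
    }
    // Otherwise, composite the instance off-screen and tint the result
    // (omitted here).
}
```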
Showing results
While each off-screen bitmap on its own is cheap, they add up fast. So these optimizations really make a difference — as you can see from the graph below.
This graph also shows another important point — there is a broad variety of documents out there, and designers choose to stack effects in those documents in different ways. There’s no silver bullet, but some documents show a really impressive drop.
We haven’t finished yet, either. In future updates you’ll see further improvements that will make Sketch work and feel faster. Keep an eye out for those in future posts.