
Tech Talk: Speeding up the Inspector in our web app

We dig deep into our recent web Inspector performance improvements and explain how we achieved 2.3x faster speeds

We recently rolled out some big improvements to the Inspector in our web app — a feature that’s included as part of your Sketch subscription at no extra cost. And one of the biggest changes is a significant performance improvement.

Our biggest focus is always on giving you the best experience. Our users push the limits of what’s possible with Sketch every day, and that inevitably means larger and more complex documents. So we worked hard to improve loading times in the Inspector, to help everyone inspect designs in the browser more efficiently — and give ourselves more headroom for new features. Now we want to dive into the technical story behind how we achieved it.

Focusing on pre-processing

To keep the performance of the Inspector as high as possible, we pre-process Sketch files (through an AWS Lambda function) to extract just the data we need. Sketch documents can be large, but because you only inspect a single Artboard at a time, we can load just the data for that Artboard and keep the interaction performant. This pre-processing was the step we wanted to speed up.
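In outline, the lambda’s entry point looks something like the sketch below. We’re assuming the official github.com/aws/aws-lambda-go package; every other name here (the request shape, InspectorData, the extraction steps) is a hypothetical stand-in, and the real function does far more:

package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
)

// ExtractRequest identifies the document and Artboard to pre-process.
type ExtractRequest struct {
	DocumentID string `json:"documentId"`
	ArtboardID string `json:"artboardId"`
}

// InspectorData is the trimmed-down result the web Inspector consumes.
type InspectorData struct {
	// layers, styles, measurements, …
}

func handleExtract(ctx context.Context, req ExtractRequest) (*InspectorData, error) {
	// 1. Fetch the Sketch file from storage (omitted here).
	// 2. Parse the JSON for the requested Artboard.
	// 3. Return only the data the Inspector needs.
	return &InspectorData{}, nil
}

func main() {
	lambda.Start(handleExtract)
}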

The first thing to do when you want to improve performance is to measure it. Our web Inspector’s preprocessor is written in Go, and luckily for us, Go comes with great profiling tools out of the box. If you already have a benchmark test in your application, Go makes it incredibly easy to profile CPU usage:

go test -bench BenchmarkParse -cpuprofile out.pprof

The resulting file (out.pprof) can then be analysed using go tool pprof out.pprof.
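For example, inside the interactive prompt, top lists the functions consuming the most CPU time, and list gives a line-by-line breakdown of a single function (parsePoint is just an example name here):

go tool pprof out.pprof
(pprof) top
(pprof) list parsePoint

The web command renders the same profile as a call graph in the browser, which makes hotspots easy to spot (it needs Graphviz installed).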

Parsing JSON

There’s probably a joke to be made here about how many performance bottlenecks boil down to someone, somewhere parsing JSON. And because our file format is basically a folder containing JSON files (and an open spec, but more on that in a future post), it’s also at the heart of what we’re talking about today.

We found that most of the loading time was spent inside various JSON-parsing functions (using jsoniter), rather than the processing we do with this data afterwards. This wasn’t really surprising; the test document we used consisted of 750MB of uncompressed JSON data, and it takes time to parse that from plain text back into some sort of structure.

As we mentioned above, because the web app lets you inspect a single Artboard at a time, we don’t need to parse the entire Sketch document. With that in mind, our original approach was to parse the JSON into a generic map[string]interface{}. This way, we could discard the data for Artboards we weren’t using before processing anything further, reducing the JSON we had to work with to a fraction of the original load. Then we used mapstructure to convert the data that was left into meaningful structs which the frontend could use directly.
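Concretely, the old pipeline looked something like the sketch below, assuming jsoniter (github.com/json-iterator/go) and mapstructure (github.com/mitchellh/mapstructure). The surrounding function and most names are stand-ins, although do_objectID is the identifier key the file format really uses:

// Step 1: decode the whole page into a generic map.
var page map[string]interface{}
if err := jsoniter.Unmarshal(data, &page); err != nil {
	return nil, err
}

// Step 2: keep only the Artboard being inspected, dropping everything else.
var target map[string]interface{}
layers, _ := page["layers"].([]interface{})
for _, l := range layers {
	layer, _ := l.(map[string]interface{})
	if layer["do_objectID"] == artboardID {
		target = layer
		break
	}
}

// Step 3: convert what's left into typed structs with mapstructure.
var artboard Artboard
if err := mapstructure.Decode(target, &artboard); err != nil {
	return nil, err
}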

Checking our assumptions

When we first made this optimization, it intuitively felt like the right decision. But it’s always good to check your assumptions with hard data, right? To find out for sure how fast parsing into a map was compared to parsing straight into structs, we wrote benchmarks to measure both.
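The benchmarks were along these lines. This is a simplified sketch: the toy Layer type and the test fixture are stand-ins for the real document structs and data.

package inspector

import (
	"os"
	"testing"

	jsoniter "github.com/json-iterator/go"
)

// Layer is a toy stand-in for the real document structs.
type Layer struct {
	ObjectID string  `json:"do_objectID"`
	Name     string  `json:"name"`
	Layers   []Layer `json:"layers"`
}

func BenchmarkParseMap(b *testing.B) {
	data, err := os.ReadFile("testdata/page.json") // hypothetical fixture
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var page map[string]interface{}
		if err := jsoniter.Unmarshal(data, &page); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkParseStructs(b *testing.B) {
	data, err := os.ReadFile("testdata/page.json")
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var page Layer
		if err := jsoniter.Unmarshal(data, &page); err != nil {
			b.Fatal(err)
		}
	}
}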

A graph comparing the time taken to process a file when parsing into a map versus parsing directly into structs. Parsing into structs is 2.9x faster, at 2.23 seconds.

As it turned out, decoding JSON straight into structs (2.23 seconds) was almost three times faster than turning it into a map first (6.51 seconds). It was clear that our first task would be to rip out the mapstructure library we’d used. That way, we could parse everything into structs directly and trim away the pages and Artboards we didn’t need afterwards. Once we’d rewritten a big chunk of our lambda, we were ready for an initial comparison:

A graph comparing the time taken to process a real document when parsing into a map versus parsing directly into structs. Parsing into structs is 1.6x faster, at 3.35 seconds.

The result — a change from 5.27s to 3.35s — was a very nice speed improvement! But we were on a roll, and we couldn’t stop there. Maybe we could utilize Go’s famous concurrency features to speed things up even more?

Unfortunately… not really. In most cases the web Inspector lambda processes only a single JSON file inside the Sketch document — after all, we only look at a single page or Artboard at a time. Not wanting to admit defeat, we turned our attention to the main document.json. This file contains Shared Text and Layer Styles that we need to reference — maybe there was a quick win there? A few lines of code later and it was clear that the savings… were slightly less than 8ms. Oh well, not every idea yields a perfect result!
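For completeness, the experiment looked roughly like the sketch below, with the types and byte slices as hypothetical stand-ins: document.json is parsed on its own goroutine while the main goroutine parses the page.

var wg sync.WaitGroup
var styles SharedStyles // shared Text and Layer Styles from document.json
var stylesErr error

wg.Add(1)
go func() {
	defer wg.Done()
	// Parse document.json concurrently…
	stylesErr = jsoniter.Unmarshal(documentJSON, &styles)
}()

// …while the main goroutine parses the page JSON.
var page Page
pageErr := jsoniter.Unmarshal(pageJSON, &page)

wg.Wait()
if pageErr != nil || stylesErr != nil {
	// handle errors…
}

With only two files in play, and one of them small, there simply wasn’t much work to parallelize.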

Parsing points

As the concurrency features couldn’t help us, we went back to the profiler and noticed many regex-related functions showing up. We knew we used a regex in one place: parsing points. We represent each point as an "{x, y}" string, which we parsed with a regex and converted into floats:

re := regexp.MustCompile(`{([\w\.\-]+),\s?([\w\.\-]+)}`)
parts := re.FindAllStringSubmatch(pointString, -1)
x, _ := strconv.ParseFloat(parts[0][1], 64)
y, _ := strconv.ParseFloat(parts[0][2], 64)
return Point{X: x, Y: y}

Looking at our large test document, we noticed that it contained more than a million points — lots of layers, with coordinates and points describing vector paths. This felt like another potential bottleneck. Regexes are very convenient, but not always the fastest, so we ripped out the regex and replaced it with some handcrafted string parsing instead:

var point Point
pointString = strings.TrimLeft(pointString, "{")
pointString = strings.TrimRight(pointString, "}")
parts := strings.Split(pointString, ",")
x, _ := strconv.ParseFloat(parts[0], 64)
y, _ := strconv.ParseFloat(parts[1], 64)
point.X = x
point.Y = y
return point

It was slightly faster, but we could do better. That’s when inspiration struck: we realized that many of these points were the same. Why is that? Well, the Sketch file format describes all vector points in unit coordinates (where the coordinate system goes from {0,0} to {1,1}). So we checked, and indeed, in our test document almost 70% of all the points were "{0, 0}", "{0, 1}", "{1, 0}" or "{1, 1}". This was great news: it meant we could cheat!

var point Point
switch pointString {
case "{0, 0}":
	point.X = 0
	point.Y = 0
case "{1, 0}":
	point.X = 1
	point.Y = 0
// [...]
default:
	// parse string
}

But had it made a difference? Well, as it turns out, “cheating” is very good for performance. We got a nice 560ms reduction in execution time, from 3.34s down to 2.78s. Processing was now almost twice as fast as when we started.

A graph comparing the time taken to process a real document when parsing into a map, parsing into structs, and parsing into structs combined with the individual point-parsing optimizations. The combined improvements are 1.9x faster than the original method, at 2.78 seconds.

Deploying the changes

This felt like a good moment to stop and test what the performance improvements would look like on actual hardware, rather than on our development machine (an M1 MacBook Air in this case). We use a lot of Macs server-side to process Sketch documents, but an even larger part of our infrastructure runs Linux on AWS.

Two graphs comparing the old and new methods on processing time and memory usage. The new method is 2.3x faster and uses 3.7x less memory.

Finally, we ran a test on our AWS test server and compared the results against the old process. The improvements were clear, backing up all our earlier tests on the M1 MacBook Air.

We were really pleased with the result — a 2.3x improvement in execution speed and a 3.7x improvement in memory usage was even more than we’d hoped for. After a round of virtual high fives, we rolled it out to all our users. You can quite clearly see the moment we switched to the new code:

A graph showing the real-world processing time of documents before and after the new method is introduced into production. The new method cuts the peak time down from around 19 seconds to around 7 seconds.

Very few documents are as big and complicated as the one we used in all these tests, but it’s often useful to work with an extreme case when making improvements like this. A look at our metrics showed the p99 latency (the time within which 99% of all documents are processed) dropped from 6.25s to 1.97s, a solid 3.2x improvement.

We now process the vast majority of Sketch documents in less than two seconds. When you consider that a few hundred milliseconds of that time are spent pulling the Sketch file down from our storage servers, we’re really pleased with this result. We hope you’ve noticed the improvements in your day-to-day work, and that they’re making your time in Sketch that little bit smoother.
