I've been writing a lot of code recently, so I thought it might be fun to write about a code optimization technique that saved me a lot of time yesterday: short-circuiting. This story is true, potentially informative, and almost certainly also an extended metaphor for something else going on in my life this week.





Short-circuiting is all about finishing something as quickly as possible. In programming terms, it means ending an evaluation as soon as the outcome is known, before every condition has been checked. This saves time, making the check faster and cheaper to execute. If the check runs thousands or millions of times every time the program is run, even very small improvements in speed can be significant.
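In Python, for instance, the `and` operator short-circuits. Here's a small illustrative sketch (the function and the `calls` list are mine, just to make the skipped evaluation visible):

```python
calls = []

def space_is_open(n):
    # Record every evaluation so we can see which checks actually ran.
    calls.append(n)
    return n > 0

# `and` short-circuits: the first operand is False,
# so the second call never happens.
result = space_is_open(-1) and space_is_open(5)
```

After this runs, `calls` contains only `-1`; the second check was never evaluated.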

Let's take an example of a decision that most people have been fortunate enough to experience at least once:

Would you like chocolate, or vanilla?

To make this decision, I need to take three mental steps:

Do I like chocolate? Do I like vanilla? Do I like chocolate MORE than I like vanilla?

In a normal decision-making process I have to take all three steps before I arrive at an answer:

Yes Yes Yes ANSWER = chocolate.

If my brain were coded with short-circuit functionality, the process could be shorter:

Yes ANSWER = chocolate

As soon as I have an answer, I just go with it. I don't even bother to hear about the other options, because I'm already satisfied.
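That short-circuited decision process might be sketched like this (a hypothetical helper, not code from my actual project):

```python
def choose_flavor(likes_chocolate, likes_vanilla):
    # Take the first flavor I like and stop asking questions.
    if likes_chocolate:
        return "chocolate"
    if likes_vanilla:
        return "vanilla"
    return "no ice cream, thanks"
```

If I like chocolate, the vanilla question is never even asked.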



My ice-cream metaphor doesn't map exactly onto code short-circuiting, because my example consisted of positive choices. Real short-circuiting is usually about stopping as soon as you hit a question to which you can say "NO" - rather than a question to which you can say "YES."

For a real example, I need to explain a bit about the program I'm writing. The heart of it involves placing objects onto a grid. Some objects fill 2x2 grid spaces, some fill 1x2, and some fill 2x1. As users drag objects around on the touch-screen, I constantly show them where their selected object would land if it were released from that point - which means I am evaluating drop locations continuously - exactly the sort of repeated check that needs good optimization.

First Pass

My original function was organized like this:

If I'm a 2x2 object...

If bottom-left space is open...

And bottom-right space is open...

And top-left space is open...

And top-right space is open...

Then put down object.

If I'm a 1x2 object...

If bottom-right space is open...

And top-right space is open...

Then put down object.

If I'm a 2x1 object...

If bottom-right space is open...

And bottom-left space is open...

Then put down object.

A 2x2 object would evaluate 5 things, a 1x2 object would evaluate 4 things, and a 2x1 object would evaluate 5 things. That's not terrible, but it could be better.
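In code, that first pass might look something like this. It's a Python-style sketch: the kind strings and the set of space names are my own representation, not the project's actual data structures.

```python
def can_place_first_pass(kind, open_spaces):
    # kind is "2x2", "1x2", or "2x1"; open_spaces is a set of
    # space names such as {"bottom_right", "top_left"}.
    if kind == "2x2":
        if ("bottom_left" in open_spaces
                and "bottom_right" in open_spaces
                and "top_left" in open_spaces
                and "top_right" in open_spaces):
            return True
    if kind == "1x2":
        if ("bottom_right" in open_spaces
                and "top_right" in open_spaces):
            return True
    if kind == "2x1":
        if ("bottom_right" in open_spaces
                and "bottom_left" in open_spaces):
            return True
    return False
```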

Second Pass

The first thing I noticed was that in all cases I had to evaluate the bottom-right space. This is a consequence of the coordinate system I'm using, and how the assets are aligned to it. So we rewrite the function like this:

If the bottom-right space is open...

If I'm a 2x2 object...

If bottom-left space is open...

And top-left space is open...

And top-right space is open...

Then put down object.

If I'm a 1x2 object...

And top-right space is open...

Then put down object.

If I'm a 2x1 object...

And bottom-left space is open...

Then put down object.

This doesn't speed things up at all if I can put an object down - my evaluation count is still 5,4,5 for the three cases. But if that first space is filled, I can stop right away! Instead of 5,4,5, my evaluation count drops to 1,1,1 in that fail case. I haven't improved the success case, but that failure is now up to 80% faster!
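The same sketch, with the shared check hoisted to the top (again, the kind strings and set of space names are my own illustrative representation):

```python
def can_place_second_pass(kind, open_spaces):
    # Hoisting the shared bottom-right check means a filled
    # bottom-right space fails after a single test.
    if "bottom_right" in open_spaces:
        if kind == "2x2":
            if ("bottom_left" in open_spaces
                    and "top_left" in open_spaces
                    and "top_right" in open_spaces):
                return True
        if kind == "1x2" and "top_right" in open_spaces:
            return True
        if kind == "2x1" and "bottom_left" in open_spaces:
            return True
    return False
```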

Third Pass

The next thing I noticed was that the top-right (or bottom-left) space being filled is a failure condition for 2 of my 3 object types. So let's pull those checks out, like so:

If the bottom-right space is open...

And the top-right space is open OR I'm a 2x1 object...

And the bottom-left space is open OR I'm a 1x2 object...

If I'm a 2x2 object...

If the top-left space is open...

Then put down object

Otherwise...

Then put down object

Immediately you can see that combining checks to take advantage of short-circuiting also makes the function much more compact - we're at just 8 lines and 7 conditions to check, down from 14 lines and 11 conditions initially. Our success cases still require 5, 4, 5 checks - so we haven't added any overhead, our simple failure condition (bottom-right filled) is still short-circuiting after just 1 check, and we've added 4 additional short-circuit opportunities.
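As a sketch (same caveat as earlier: kind strings and a set of open space names are my own illustrative representation):

```python
def can_place_third_pass(kind, open_spaces):
    # The OR clauses let 2x1 and 1x2 objects skip the spaces they
    # don't occupy, while any filled required space short-circuits
    # to failure immediately.
    if ("bottom_right" in open_spaces
            and ("top_right" in open_spaces or kind == "2x1")
            and ("bottom_left" in open_spaces or kind == "1x2")):
        if kind == "2x2":
            if "top_left" in open_spaces:
                return True
        else:
            return True
    return False
```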

Fourth Pass (final)

There is just one more place where we can improve things - that final check which does one thing if we're a 2x2 object, and another thing if we're not.

If the bottom-right space is open...

And the top-right space is open OR I'm a 2x1 object...

And the bottom-left space is open OR I'm a 1x2 object...

And the top-left space is open OR I'm a 2x1 object OR I'm a 1x2 object...

Then put down object

This reduces the success condition for 2x2 objects all the way down to just 4 checks - so we've actually improved upon the original function. Unfortunately, we've increased the maximum possible checks in the 2x1 case to 6, and the maximum number of checks in the 1x2 case to 7. But those are worst-case losses, and in practice this change optimizes the run-time of this function by nearly 35%.
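The final form collapses into a single boolean expression, relying entirely on `and`/`or` short-circuit evaluation (one last sketch with my own illustrative names):

```python
def can_place(kind, open_spaces):
    # Each required space either is open, or is irrelevant to this
    # object kind; the first failed clause ends the evaluation.
    return ("bottom_right" in open_spaces
            and ("top_right" in open_spaces or kind == "2x1")
            and ("bottom_left" in open_spaces or kind == "1x2")
            and ("top_left" in open_spaces
                 or kind == "2x1"
                 or kind == "1x2"))
```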

Short-circuiting can grant huge efficiency gains, but it is also very dangerous. It's dangerous because it promotes a focus on making decisions quickly - rather than considering all of the data first. There's no need to hear all of the data if you are certain that a small set is sufficient - but that's rarely true in the real world. That's one reason the real world is governed by legal systems which rest upon the judgement of real human beings - instead of anonymous decision-making programs.