⚠️ Warning: this is an old article and may include information that’s out of date. ⚠️
I stumbled across this concept recently and I thought I’d share it, because I don’t generally see this pattern being used. More importantly, I also share test results that show that maybe it’s not always a good idea to use this pattern…
The problem with Switch statements
The basic switch statement in JavaScript looks something like this:
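A minimal sketch, where the function name and string values are just placeholders for real work:

    function doSomethingSwitch(value) {
      var result;
      switch (value) {
        case 'A':
          result = 'handled A';
          break; // break out, otherwise execution falls through to the next case
        case 'B':
          result = 'handled B';
          break;
        case 'C':
          result = 'handled C';
          break;
        default:
          result = 'no match';
      }
      return result;
    }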
So what’s wrong with this? The JS engine has to examine a series of unrelated cases until it finds the one that matches, executes its code, then breaks out of the switch because the job is done (this is why it’s important to break!). In the example above it had to step through case A and case B before finally reaching case C. What’s worse, if the value doesn’t match any of the cases, the JS engine has to step through ALL of them before it lands on default, the fallback case.
Actually, it’s not so bad as long as there are a limited number of cases; a handful of checks is no big deal. The problem grows as the number of cases increases (some of you may know this as O(n)). What happens when there are 10 cases? Then there are potentially 10 checks (assuming what ends up being executed is the default). 100 cases? Then potentially 100 checks.
What would be better is a way to reduce the number of checks. One option is to put the most frequently used cases at the top. That alleviates some of the pain, but you still pay for extra processing while the JS engine checks each case. Ideally we’d avoid this extra processing altogether.
An alternative: The hash table
There is a way to avoid this extra processing! It works by sending execution directly to where it needs to go, without any unnecessary checking of unrelated cases.
You can do this using a hash. In JavaScript we accomplish this with an object:
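The same example as a lookup, sketched with the same placeholder names; the hasOwnProperty check stands in for the default case:

    // Each key maps directly to its result: no cases to step through
    var lookup = {
      'A': 'handled A',
      'B': 'handled B',
      'C': 'handled C'
    };

    function doSomethingLookup(value) {
      // Jump straight to the matching entry, or fall back to the default
      return lookup.hasOwnProperty(value) ? lookup[value] : 'no match';
    }

If each branch needs to run code rather than just return a value, the object’s values can be functions that you call after the lookup.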
There we go! No extra case checking here. We’ve led the JS straight to the code we want to execute!
Performance improvement…?
So… this hash lookup seems faster in theory, but what about in practice? Unfortunately, I ended up with some mixed results…
I created a simple performance test on jsperf.com and got these results:
Browsers tested: Chrome 6.0.490.1 dev, Safari 5.0, Opera 10.61, Firefox 3.6.3, IE6, IE7, IE8, Mobile Safari (iOS4 on iPhone 3GS), and Android (2.2 on Nexus One).
- Ops/sec = operations per second. Higher is better.
- Chrome, Safari, Opera, and Firefox were tested on Mac OS X 10.6.4 (2.53GHz Intel Core i5); IE tests were run on Windows 7 64-bit (2.4GHz quad core).
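For a sense of what the ops/sec numbers mean, a micro-benchmark can be sketched along these lines; this is only an illustration using the placeholder functions from the sketches above, not the actual jsperf test case:

    // Rough estimate of operations per second for a function
    function opsPerSecond(fn, iterations) {
      var start = new Date().getTime();
      for (var i = 0; i < iterations; i++) {
        fn('C'); // 'C' is the last explicit case, so it's a slow path for the switch
      }
      var seconds = (new Date().getTime() - start) / 1000;
      return iterations / seconds;
    }

    // opsPerSecond(doSomethingSwitch, 1000000);
    // opsPerSecond(doSomethingLookup, 1000000);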
The Results
From the results, it looks like the hash optimization only benefits Chrome, IE6-IE8, and Android. That’s quite a specific sampling. My guess is that the other browsers have implemented some sort of switch statement optimization that actually turns the hash approach into an antipattern.
More info
Although I first read about this online, unsurprisingly this trick also appears in Nicholas Zakas’s High Performance JavaScript, in the section on “Lookup Tables” (p. 72).