The Great Browser JavaScript Showdown

If you could run this on Vista SP1 when it is released, that’d be great. SP1 improves on JavaScript performance in IE7.

I fail to see how this benchmark is useful for anything but “intellectual curiosity”. Of what use is it to leave out DOM manipulation and events? That is what almost all code is about; unless you write a scientific application that is busy “on the inside”, web applications by far spend ALMOST ALL their time manipulating the interface - i.e. CSS, the DOM, events.

Thanks for the work, I’ll keep a bookmark to your site - I may have no right to be so negative (not having paid a dime or done anything else for you), but I decided to post because few others did and I really fear a lot of people, especially the less technical but more marketing ones, might misinterpret those results.

Here are the results I got on my MacBook (1.83 GHz Intel Core 2 Duo, 2 GB 667 MHz DDR2 SDRAM, Mac OS X 10.5.1):

IE7 (Parallels - XP Pro SP 2)
94 s

Safari 3.0.4
22.3 s

WebKit 5523.10.6
10.0 s

Firefox 2.0.0.11
36.7 s

Firefox 3.0b1
22.2 s

Opera 9.25
NaN s

That said, I have heard of people working around it by shoving all their strings into an array and joining them all at the same time, but that’s quite a pain when you have to go to such lengths just for one browser.

I’ve been doing this for a bit now, and I feel the code is actually easier to read than doing string concats. I also fail to see how it’s a pain - maybe one extra line, regardless of the number of concatenations.
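For anyone who hasn’t seen the idiom, here is a minimal sketch of the array-join workaround being described (the function and variable names are purely illustrative, not code from the benchmark):

    // Naive repeated concatenation: each += can allocate a new, ever-larger string,
    // which is what makes IE7 crawl on the string tests.
    function buildWithConcat(rows) {
      var html = "";
      for (var i = 0; i < rows.length; i++) {
        html += "<li>" + rows[i] + "</li>";
      }
      return html;
    }

    // Array-join version: roughly one extra line, and only one large string is built, at the end.
    function buildWithJoin(rows) {
      var parts = [];
      for (var i = 0; i < rows.length; i++) {
        parts.push("<li>" + rows[i] + "</li>");
      }
      return parts.join("");
    }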

That aside, I haven’t encountered a real-world scenario where the string bug would cause any major side-effect; perhaps only the slightest pause on a reasonable machine. I mean really… who’s doing tens of thousands of concats!!! The access benchmarks were much more alarming to me.

Another good example would be Redfin, which is an AJAX-based real estate portal. If you search a locality, it shows a map with the locations of all the results and, below the map, the details of each result. It constructs that table by appending several hundred strings. - MSDN JScript blog

Several hundred times for said functionality? Quite minimal compared to 20-30k times.

However, I’m curious: does this mean that when you are doing DOM manipulation (createElement, appendChild, etc.) you are indeed only creating strings and using concatenation to append them to some node in the DOM? That seems rather ridiculous, really.
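For contrast, here is a rough sketch of the two approaches that question distinguishes (the names are hypothetical); string concatenation only comes into play on the innerHTML path:

    // Build the markup as one big string and assign it via innerHTML -
    // this is the path where string concatenation matters.
    function addItemsViaInnerHTML(container, values) {
      var parts = [];
      for (var i = 0; i < values.length; i++) {
        parts.push("<li>" + values[i] + "</li>");
      }
      container.innerHTML = parts.join("");
    }

    // Build the same content with DOM methods - no string concatenation at all.
    function addItemsViaDOM(container, values) {
      for (var i = 0; i < values.length; i++) {
        var item = document.createElement("li");
        item.appendChild(document.createTextNode(values[i]));
        container.appendChild(item);
      }
    }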

I think the graphs you made are great. The reasons people have listed as to why they dislike them seem to lack any thought in my perspective. Thanks for the article! Kudos.

I think it’s absolutely typical for our web era to ignore all kinds of performance issues: use big image files, link to thousands of lines of JavaScript code, and make a website as bulky as possible. Let’s keep real-world machines in mind: if the benchmark machine is a high-end one, don’t even think of what a normal machine would need. Maybe it’s time to change minds again?

Hi Jeff,
I tried out the benchmark on my system on different browsers including Flock and Firefox 3 beta 2, and found them to be much faster than Firefox 2 (barring IE7 due to the string test). The interesting thing is that Flock 1.0.3 is based on Firefox 2, but is still over 10% faster overall. I have recorded my results on my blog (http://abaditya.wordpress.com/2007/12/20/firefox-3-beta-2-javascript-benchmark-plus-why-flock-is-faster-than-firefox-2/) and also published a Google spreadsheet with my results (http://spreadsheets.google.com/pub?key=pCUHgnruBnILDFJ7TXTEMQg).

Michael Hasenstein: The point is to help optimize browser javascript engines. Utility to the public (if there is any) is a side benefit.

As a direct result of the existence of this benchmark, the current nightly builds of WebKit have immensely faster javascript execution. If the data had been cluttered with other parts of the browser it would have been much less clear where optimizations were needed.

Other benchmarks can be (and have been!) created to test rendering, layout, DOM mutation, etc…

David Smith: Je suis d’accord (Agreed). I said in my last paragraph why I was posting. Had there been no Digg/Slashdot effect attracting the masses, I would have kept quiet (as per my second-to-last sentence). I’m all for what is being done here for development! But with all the attention… okay, no reason to repeat myself.

AMD Athlon 64 3200+ laptop, Windows XP Pro x64 Edition, 2 GB RAM

Firefox 2.0.0.11 x64 15542.6 ms
Firefox 2.0.0.11 Official 26503.6 ms
IE 6.0 32-bit 47474.0 ms
IE 6.0 64-bit Crashed in controlflow-recursive
Opera 9.25 13537.8 ms *
Opera 9.50 Beta 11470.2 ms
Safari 3 Beta 13295.4 ms

  * Opera 9.25 had problems with fasta, dna and format-tofte, so I added up the
    other times and came to 11134.6 ms. I added in the numbers from Opera 9.50
    for the three failure cases on the guess that 9.50 should be faster than 9.25
    and came up with the result as shown.

It should be interesting to see what a Firefox 3.0 x64 build can do. I haven’t
had the time to setup the build environment for 3.0 but maybe sometime next
year.

Quite amazing that this was even published. A very quick look at the string function routines quickly shows that they use non-standard calls, and IE doesn’t support them. Once the JavaScript is corrected, IE outperforms Firefox.

A JavaScript string is not a simple character array and should not be accessed as one. Many calls in the suspect script (written by Mozilla) attempt to do so with expressions like “data[i]”, when they really should be calling charCodeAt. Once you change all the invalid expressions like “data[i]” to “data.charCodeAt(i)”, the code runs fine in IE.
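For readers who haven’t seen the construct, a hedged sketch of the difference (the helper names are placeholders, not code from the benchmark). Note that data[i] yields a one-character string where it is supported, while charCodeAt(i) yields a numeric code, so the surrounding code has to expect the right type:

    // Non-portable: indexing a string like an array is not part of ECMAScript 3,
    // and IE6/IE7 return undefined for it.
    function nthCharNonPortable(data, i) {
      return data[i];
    }

    // Standard alternatives:
    function nthChar(data, i) {
      return data.charAt(i);       // one-character string (what data[i] yields elsewhere)
    }
    function nthCharCode(data, i) {
      return data.charCodeAt(i);   // numeric character code, for when the value feeds arithmetic
    }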

rlk (December 20, 2007 10:45 AM):

Out of curiosity, are you doing all the changes in the DOM live? Because that will tend to be ridiculously slow, as each change is going to cause the rendering engine to kick in and reflow the document. If you are doing them live, try something like this: take the root node of where you are going to make the changes, pull it out of the DOM (removeChild()), make all your changes, then put it back in.
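A minimal sketch of that detach/modify/reattach pattern, assuming a list rebuilt from an array of strings (the element id and helper name are hypothetical):

    function rebuildList(listId, values) {
      var list = document.getElementById(listId);
      var parent = list.parentNode;
      var next = list.nextSibling;       // remember where to put it back
      parent.removeChild(list);          // detach: the changes below won't trigger reflows

      while (list.firstChild) {
        list.removeChild(list.firstChild);
      }
      for (var i = 0; i < values.length; i++) {
        var item = document.createElement("li");
        item.appendChild(document.createTextNode(values[i]));
        list.appendChild(item);
      }

      parent.insertBefore(list, next);   // reattach: a single reflow when it goes back in
    }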

You will be assimilated.

The bitops-bitwise-and test appears to really be a test of JavaScript global variable access speed. The same test code wrapped in a function with all local variables is blindingly fast in Firefox on a Mac in comparison. I was getting 53ms instead of around 2900ms. So the test does show a performance issue but not what it claims to be testing.
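For illustration, a hedged sketch of the difference being described (the loop count and starting value here are arbitrary, not necessarily the benchmark’s exact numbers):

    // Globals: every read and write of bitwiseAndValue goes through the global object.
    var bitwiseAndValue = 4294967296;
    for (var i = 0; i < 600000; i++) {
      bitwiseAndValue = bitwiseAndValue & i;
    }

    // The same work wrapped in a function with locals, which the engine can keep in fast slots.
    function bitwiseAndLocal() {
      var value = 4294967296;
      for (var j = 0; j < 600000; j++) {
        value = value & j;
      }
      return value;
    }
    bitwiseAndLocal();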

Slightly off topic…

I heard a podcast on conversations network by one of the main guys at Google, talking about their toolkit that allows you to write your JS app in Java (strong types, debugging etc. etc.) and then cross compile it into optimised JS. I think that if I were going to write a large JS app I’d give this a good look, much richer and easier to develop a large app without so much overhead, and the toolkit handles the browser incompatibilities and JS optimisation without you having to worry about it.

Also, JavaScript slows things down in another way: simply loading it as text over and over again. The new version of Rails has some configuration options that gather up Script.aculo.us and the other scripts into one combined file automatically. David Heinemeier Hansson says that this has given a significant performance improvement at 37signals, almost for free (cf. another podcast on the Conversations Network where he talks about this).

Zoips, I will need to take a look at it. That could account for the problem.

As some other posters observed, the stacking chart format obscures some details - it gives the overall performance, but you can’t see the key areas where there are anomalies between the browsers.

A quick charting exercise in Excel reveals:

  1. IE - strings[*], base64, validate-input, and tagcloud are the weaknesses
  2. Firefox - 3d, bitops, bitwise-and, date are the weaknesses
  3. Safari - no obvious areas of weakness - on par with the average, and occasionally better than the others.
  4. Opera - similar to Safari, except tends to be better more often than on par.

The only ‘weakness’ for Opera is unpack-code, but the relatively poor performance there is far outweighed in the grand scheme of things.

[*] See Robert McKee’s observation above about the test validity

Running the test on Leopard, Firefox 3b2 is slightly faster than Safari 3, and both are substantially faster than Opera; admittedly I’m using Opera 9.23 so may not get optimal results, but both FF and Safari come out ~1.5 times faster. Safari, strangely, took longer to complete the test – there was an inexplicable delay between finishing one section and starting the next, but the individual sections ran quickly…

@Robert McKee

Good catch. In a future version of the benchmark, we’ll avoid this construct (and also add checking for correct output from the tests). However, I think this affects only the string-base64 test, not the other string tests. Please let me know if you find other bugs in the tests.

“If Web 2.0 is built on a backbone of JavaScript, it’s largely possible only because of those crucial Moore’s Law performance improvements.”

Or maybe we’re seeing these huge improvements precisely because so much of the web uses javascript? Seems a lot more likely to me.

Jeff said, “I had to use a beta version of Opera to get something other than invalid (NaN) results for this benchmark, which coincidentally summarizes my opinion of Opera. Great when it works!”

I would have to agree with that - every time I’ve tried Opera, I’ve been disappointed - and I WANT it to work! However, that isn’t the reason I think Opera is going to fail. The reason Opera is going to fail is amply demonstrated when I try to use gmail, Google Notebook or Google Spreadsheets from Opera (I am running on Linux - haven’t tried lately from Windows) and am told it is unsupported. THAT is what makes Opera irrelevant - if Google is explicitly writing their apps to not support a browser, then that browser is for all intents and purposes dead, even if it doesn’t realize it yet.