The Day Performance Didn't Matter Any More

Eric Lippert has written about performance at various times, and he notes that performance isn’t interesting in the abstract; it’s interesting as a measure of how close you are to your particular goal. He also notes that if you’re going to optimize – i.e., worry about speed – not only should you know what you’re shooting for (“how slow is good enough?”), but you should also start by measuring, and then fix the things that are the slowest. As has been pointed out, the execution speed of any given lines of code is probably nothing in comparison to I/O speed, wire speed for Web pages, database speed, interacting with OMs/DOMs, and the drag effect of one’s dumb algorithms. :slight_smile:
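
To make the “measure first, then fix the slowest thing” advice concrete, here is a minimal sketch using Python’s built-in cProfile. The function names and workloads are made up for illustration; the point is that the profiler, not intuition, tells you where the time actually goes.

```python
# Minimal sketch of "measure first, then optimize the hotspots".
# fetch_rows/parse_rows/report are hypothetical stand-ins.
import cProfile
import pstats
import io


def fetch_rows():
    # Stand-in for I/O or database work, often the real bottleneck.
    return [str(i) for i in range(200_000)]


def parse_rows(rows):
    # Stand-in for CPU-bound parsing.
    return [int(r) for r in rows]


def report():
    return sum(parse_rows(fetch_rows()))


profiler = cProfile.Profile()
profiler.enable()
report()
profiler.disable()

# Print the few slowest functions by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```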

It’s also been noted, albeit less directly, that the choice of a language involves many factors; performance is just one of them, and probably pretty low on the list. Ease of development, deployment, maintenance, and appropriateness to the problem (no one is programming DHTML in C++, for example) probably count for a lot more. It’s hard to imagine a situation in which a group of programmers sits down with a chart like this one and uses it to decide what language to code in. I suspect the real benefit of these types of comparisons is to let people say “Whew, I’m glad to see that my language isn’t the slowest one.”

It should have practically fallen off the bottom by now!

Yet people are still weirdly fascinated by comparative performance, and some folks occasionally publish benchmarks! :slight_smile:

Not all the devices accessing the Internet are powerful high-end desktops. Many of them are PDAs, smartphones, set-top boxes, and so on, with limited memory and 200 MHz to 400 MHz processors.

On the other hand, it is true that hardware performance improves constantly, but requirements are constantly evolving too. Initially, JavaScript was used for user input validation and DHTML, but nowadays we are talking about real-time 3D JavaScript engines and JavaScript web servers :-0 .

So, IMHO, performance and efficiency always matter.

It is amazing what happens when you leave your old projects on the web for a few years.

Eventually it seems that even the most remote sites are seen by a few people.

I wrote that code the only way I knew how at the time. I must say, it sure is painful to see now, but at the same time, I remember being so proud that I was able to finish that project.

So why did I make it?

I was trying to help my mom out with a problem.

In 2003 she had a work laptop from 1999 that was not going to be replaced. It is the 333 MHz machine listed in the chart. Now, a four-year-old computer is normally not “that” bad. However, as the world moved on to Windows XP, a 333 MHz machine was not going to cut it.

For reference as to how much speeds changed in those four years: a year earlier I had bought a 2.2 GHz P4-M, and only a few months later I was running a 3.2 GHz P4 machine.

My goal was to show how bad a Celeron 333 MHz was in comparison to current machines. JavaScript was pretty much the only language that I knew well enough to write this type of program with an interface. I wasn’t going to code some C/C++ stuff up and hand my mom a program to run at the command line, and I didn’t own a Visual Studio license to make one in Visual Basic.

In the end, before she had the chance to use it to prove her case, she got some side work that required a laptop right away, so a Pentium M was bought.

Nonetheless, the program was finished, and I chose to leave it be.

If nothing else, it is sometimes fun to look at an antique and wonder why someone did it that way once upon a time.

Since then I have managed several web-based corporate solutions and written a few apps here and there.

Only a few have ever been polished up; the most recent one is http://errormessage.kazdan.com

Before you judge it, understand that the goal is to give someone who is not a computer expert a fighting chance to find the solution to an error code. It is about 95% complete at this point.

Oh, just for fun, I ran the benchmark test on a 2.4 GHz Core 2 Duo with 2 GB of RAM running Windows Vista Ultimate. It completed in 12.146 seconds in IE7. I suppose I probably should have closed the other 15 or so apps I have open… oh well.

Well, this is all interesting, but again it depends on what you do, so let us take the other extreme example: for me, speed is often paramount. I mainly deal with simulation and rendering. And yes, it does not matter much if one frame takes an hour to render, because, as you say, I can buy more boxes. However, remember that that one hour already represents quite optimized code.

Now consider me taking a 10× hit on a render time that is already at 24 machine-hours per second of output. A feature film runs roughly 60 × 60 × 120 seconds, so we are talking about 1,183 computer-years versus 11,830 computer-years.
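
As a quick back-of-the-envelope check, here is the same arithmetic worked out with the commenter’s own figures (24 machine-hours per second of output, a film estimated at 60 × 60 × 120 seconds); the exact year counts vary slightly with rounding.

```python
# Reproducing the commenter's estimate: 24 machine-hours per second of
# output, film length taken as 60 * 60 * 120 seconds, then a 10x slowdown.
HOURS_PER_SECOND_OF_OUTPUT = 24
FILM_SECONDS = 60 * 60 * 120            # the commenter's rough figure

hours = HOURS_PER_SECOND_OF_OUTPUT * FILM_SECONDS
years = hours / (24 * 365.25)

print(f"current cost:   ~{years:,.0f} computer-years")       # ~1,183
print(f"with a 10x hit: ~{years * 10:,.0f} computer-years")   # ~11,827
```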

On the other hand, there’s a lot of spare time for many other tasks: it doesn’t really matter if I waste a second here and there to draw nice load metrics or run robustness checks, even if those are 200× slowdown stuff, as long as they’re easy to do, since it doesn’t matter in the whole picture.

I prefer PHP. Easy, powerful, and fast.

As far as the Psyco JIT compiler for Python goes, I imagine its author hasn’t been able to implement all the crazy tricks that most JIT engines have in them. Steve Yegge covers some of them in one of his blog posts (see the five slides on JIT compilation):

http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html

Basically, with a JIT compiler running, you can do real-time analysis of how the program actually runs and change the program to take advantage of optimizations based on that. This can end up being more effective than ahead-of-time compiler optimizations, since those are based on guesses. If Psyco could do the kind of inline optimizations that the JVM can do, then we’d probably see Python get drastic speed increases.
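
To illustrate the profile-then-specialize idea, here is a deliberately toy sketch in Python. It is not how Psyco or the JVM are actually implemented; it just shows the shape of the trick: observe runtime behavior, swap in a fast path specialized for the common case, and keep a guard that falls back to the generic path when the assumption breaks.

```python
# Toy illustration of runtime specialization (not a real JIT).

def generic_add(a, b):
    # Fully general path: handles ints, floats, strings, lists, ...
    return a + b


def make_self_specializing_add(threshold=1000):
    observed = {"calls": 0, "all_ints": True}

    def specialized_int_add(a, b):
        # Fast path built under the assumption "both operands are ints",
        # guarded so it de-optimizes back to the generic version.
        if type(a) is int and type(b) is int:
            return a + b
        return generic_add(a, b)

    def profiling_add(a, b):
        # "Interpreter" path: record what kinds of operands actually show up.
        observed["calls"] += 1
        if not (type(a) is int and type(b) is int):
            observed["all_ints"] = False
        if observed["calls"] >= threshold and observed["all_ints"]:
            # "Recompile": from now on, callers get the specialized version.
            dispatcher["impl"] = specialized_int_add
        return generic_add(a, b)

    dispatcher = {"impl": profiling_add}

    def add(a, b):
        return dispatcher["impl"](a, b)

    return add


add = make_self_specializing_add()
for i in range(2000):
    add(i, i + 1)        # after 1000 int-only calls, the fast path takes over
print(add("a", "b"))      # guard catches the non-int case and falls back
```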

If we take out the ‘long’ test, we see a rather different picture: the performance of non-Python languages varies by more than a factor of 3 (instead of just under 2), and with Psyco, we now catch up to within a factor of 2 of Java.

However, I’m not just fiddling with the numbers. There’s solid justification for removing this test from the data: it is unfairly biased against Python, because in Python ‘long’ is an arbitrary-precision data type (equivalent to Java’s BigInteger library class, for example), whereas in the other languages it is simply a larger integer primitive. The authors claim to test 64-bit integer arithmetic, but there is no such type in Python.

It occurs to me that, as the test is described, the other languages would not actually calculate the same result. When the code gets to the first multiplication, it should evaluate 10,000,000,002 * 10,000,000,003, which won’t fit in 64 bits (since each operand - deliberately, mind - doesn’t fit in 32).
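
A quick sanity check of that overflow claim, using Python’s arbitrary-precision integers and the two operands mentioned above:

```python
# The first product in the described test already exceeds the range of a
# signed 64-bit integer, so fixed-width languages would wrap (or otherwise
# fail to compute the true mathematical result).
product = 10_000_000_002 * 10_000_000_003
int64_max = 2**63 - 1

print(product)              # 100000000050000000006
print(int64_max)            # 9223372036854775807
print(product > int64_max)  # True: the exact result does not fit in 64 bits
```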

Interesting read. The staggering performance loss associated with Python intrigues me in particular. The reason: I’ve been closely following the development process of CCP hf, an Icelandic gaming company that runs a multiplayer online game hosting up to forty-five thousand players at peak time… And its server is written entirely in Python O_O;;
Five years old now, the game has scaled admirably in light of all the hardware improvements that have occurred. However, it’s really struggling to keep its head above water, since Python’s Stackless implementation has really poor (read: non-existent) support for system-level multi-threading and multi-core execution. CCP hf has been beating their heads against the glass for about a year now, and I would be surprised if they haven’t come to regret their decision to deploy an interpreted-language solution back in 2002. They can only expand vertically so far – they’ve already got a tremendously powerful server farm – and Python’s limitations are preventing them from expanding horizontally across cores. CCP has ended up looking at alternative solutions like InfiniBand to fill in for Python’s shortcomings.
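
For anyone unfamiliar with the limitation being described, here is a small sketch assuming CPython’s global interpreter lock behavior (which Stackless inherits): adding threads to CPU-bound Python code does not add throughput, whereas adding processes can. Timings will of course vary by machine.

```python
# CPU-bound work does not parallelize across threads in CPython (the GIL
# serializes bytecode execution), but it does across processes.
import time
from threading import Thread
from multiprocessing import Process


def burn(n=5_000_000):
    total = 0
    for i in range(n):
        total += i * i
    return total


def timed(label, workers):
    start = time.time()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"{label}: {time.time() - start:.2f}s")


if __name__ == "__main__":
    timed("4 threads  ", [Thread(target=burn) for _ in range(4)])
    timed("4 processes", [Process(target=burn) for _ in range(4)])
    # On a multi-core machine the thread version takes roughly 4x the
    # single-thread time, while the process version scales across cores.
```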

I think performance still matters; we shouldn’t use new hardware as a crutch for producing sloppy, poorly performing code.

I think one thing to consider is the cost of maintaining code. C++ will be more difficult to maintain than VB.NET code, so there is a trade-off for the speed. Also, each language can be considered a tool; pick the right tool for the application. The good news is that whether you pick VB.NET (VB), Java, or C#, performance should be roughly the same. For most business applications, any of these languages should suffice. As processing demands increase, you want to get closer to the hardware, and that’s when C++, C, or even assembly come into play. Those will be specialized applications that demand the highest performance.

I think it boils down to what you know. If you have been using VB for a long time, there’s no need to learn Java; just develop in VB.NET. So, the path you choose for the next application should play to the strengths you built on previous applications, because the performance differences between C#, Java, and VB.NET don’t matter.

Would you be willing to run this test again and include Google Chrome?

And here we are, in the era of tracing JITs for JavaScript and near-native code execution with asm.js and Emscripten.

Pretty crazy for 10 years of improvement, right?


I always thought of Java as slow too, but wow… benchmarks run today prove otherwise. I ran the test and got ~1 second on my Core i5-7500, and about 4 seconds on my Moto G5+ (a 2017 midrange phone). Which has increased Java’s performance more, CPU speeds or Java improvements?