The Day Performance Didn't Matter Any More

OSNews published a nine-language performance roundup in early 2004. The results are summarized here:

This is a companion discussion topic for the original blog entry at:

David nails it. I’d venture to say that 80% to 90% of perf problems today are a result of badly written code, not the language the code is written in.

There are common perf problems that can occur in any language. Like when you see someone load a dataset of thousands of IDs and then iterate through each one loading the record from the database one at a time instead of doing a bulk load of all the data in one query. That won’t magically be quick because you used C++.
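That "load one record at a time" anti-pattern (often called the N+1 query problem) is easy to sketch. This is a hypothetical illustration using the stdlib sqlite3 module, not code from the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [(i, f"name{i}") for i in range(500)])
ids = list(range(500))

# Slow: one query per ID -- N separate round trips to the database
rows_slow = [conn.execute("SELECT * FROM records WHERE id = ?", (i,)).fetchone()
             for i in ids]

# Fast: one bulk query with an IN clause fetches everything at once
placeholders = ",".join("?" * len(ids))
rows_fast = conn.execute(
    f"SELECT * FROM records WHERE id IN ({placeholders})", ids).fetchall()
```

With a real database server, each round trip in the slow version also pays network latency, which no language choice can hide.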

Good article. I have tried to explain to many people what you have stated - performance doesn’t matter too much anymore from a language standpoint, and these days it’s all about scalability.

computers are fast enough today that most problems are not going to be limited by the language you choose

That’s right. And I’d say this has only really become true in the last 3 years-- we have enough brute force computing power to ignore the massive 100:1 perf penalty of interpreted code.

I remember a web app we wrote in 2003 that had hundreds of kilobytes of heavy-duty client JavaScript. Simply sorting a column of data with 50 rows took many seconds on the then-typical client P3-800 machines.

On today’s 2.5+ gigahertz machines, that same operation would probably be instantaneous.

Great post as always, but regarding this:

Well, our computers are now so fast that-- with very few exceptions-- we don’t care how much interpreted code costs any more.

Doesn’t it depend on where the code is interpreted? JavaScript is interpreted on the client-side so performance isn’t a problem. But in a web app, Python code (correct me if I am wrong) is interpreted on the server. When the code is executed on a server with thousands of simultaneous users, poor performance can definitely have an impact.

But in a web app, Python code (correct me if I am wrong) is interpreted on the server. When the code is executed on a server with thousands of simultaneous users, poor performance can definitely have an impact.

What’s happening, as others have pointed out, is that the overall percentage of CPU time spent on the language is very low now relative to the time spent on I/O. CPUs keep getting dramatically faster, but the disk, memory, and network interfaces aren’t!

Given the pace of CPU speed innovation, eventually all languages-- even the 100:1 perf interpreted type-- will perform roughly the same. It’s basically already happened as of a year ago.

Would a web app written in C++ scale better? Probably. But it’s also pretty trivial to buy another inexpensive white box PC, build a web farm, and double the number of users your web app supports.

Consider the numbers. A web server built in 2006 will support 2x the number of users compared to a 2003 server – and 100x the number of users compared to a 1996 server!
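The 100x figure is a back-of-the-envelope sketch, assuming CPU performance doubles roughly every 18 months (the popular reading of Moore's law; the doubling period is my assumption, not a figure from the post):

```python
# If performance doubles every `doubling_period` years, a span of
# `years` yields a speedup of 2 ** (years / doubling_period).
def speedup(years, doubling_period=1.5):
    return 2 ** (years / doubling_period)

# A decade of 18-month doublings: roughly a 100x speedup
print(f"1996 -> 2006: ~{speedup(10):.0f}x")
```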

I would be interested to see how IronPython ranks, since it is a dotNet language, and should, in theory at least, be compiled.

I notice that Delphi is STILL missing as if it wasn’t a real language. That gets annoying.

As far as VB 5 being compiled, I believe that was just the ocx controls. Applications, even up to the last non-.NET VB version, continued to be bytecode, if I understand things correctly.

VB.Net, however, is a real language at last and I can respect it finally.

The Python result is a bit surprising, because Python code is also bytecode-compiled, so one has to wonder: what’s the big difference with Java, for example? Maybe it’s this: Python has a number of properties that are not the best choices if speed were the #1 concern -

(a) the call interface is said to be slow, however I can’t really find good explanations on this so I don’t really know why. I think it’s this because when you define a function object:

def foo(): return "bar"

you get a function object whose call slot is of type ‘method-wrapper’. So when you call foo(), the interpreter goes through that call machinery and, somewhere along the way, constructs a stack frame object.
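One cheap way to observe that per-call frame construction, at least in CPython (`sys._getframe` is a CPython implementation detail, so this is a sketch, not portable Python):

```python
import sys

def foo():
    # return the frame object the interpreter constructed for this call
    return sys._getframe()

# Keep both frames alive so neither can be recycled between calls.
first, second = foo(), foo()
print(first is second)  # False: each call built its own frame object
```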

(b) iterator objects throw a StopIteration exception when the iteration is to be terminated; this affects every for loop. So when you really need speed, you might need a while loop which increments the index into the sequence.

(c) variable lookup is faster when done in the stack frame than in an instance:

class C:
    myList = []
    def A(self):
        for a in range(1000000):
            x = C.myList
    def B(self):
        localList = C.myList
        for a in range(1000000):
            x = localList

Here, C.B() will run faster than C.A(), because looking up localList in the stack frame is quicker than looking up C.myList on the class for every iteration.

There are more of these little things. It's just a language that wasn't created with performance in mind.

Joost, there may be a misunderstanding of terminology.

.NET, Java, and (presumably) Python are first compiled into byte-code, that is true.

However, the byte-code generated for a Python program is then interpreted. That means every byte-code instruction is re-evaluated whenever it is executed.

The byte-code generated for .NET and Java, on the other hand, is compiled to machine language, by a so-called “just-in-time compiler”. That means, during the first run through a loop Java will be as slow as Python, but during all subsequent runs it will be much faster since machine code is being executed.
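A toy stack-machine interpreter makes the difference tangible: every instruction pays the dispatch cost (the if/elif chain below) on every single execution, whereas a JIT pays the translation cost once and then runs native code. This is an illustrative sketch, not how CPython or the CLR is actually implemented:

```python
# Minimal bytecode interpreter for a stack machine with three opcodes.
def run(program):
    stack = []
    for op, arg in program:
        # dispatch overhead: re-decided for every instruction, every time
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]))  # 20
```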

(C# and Java use the same JIT compilation, by the way; one shouldn’t be listed as “compiled” and the other as “byte code”.)

What’s really surprising is how poorly Python fares even with the Psyco JIT compiler. I guess one guy can’t be reasonably expected to do as well as the research labs at Sun and Microsoft! However, my own benchmarks absolutely confirm the posted benchmarks: Python really is 100x slower than C#, and Psyco is still ~10x slower.

That’s for algorithms that are actually written in Python, of course… many so-called “Python programs” are really just stubs of Python scripts that call C DLLs for all performance-critical code.

Nice as always, I smell a good discussion coming up. :slight_smile:

Anyway, why is Visual J# and Visual C# performance not equal? I thought they were compiled to the same thing? But again, I have no experience with J#.

I assume Visual C# and Visual VB perform equally?

I am an old Java/ASP/C# programmer, but recently, at my current job, I have been developing in VB.Net 2003. I keep finding issues like Visual Studio add-in functions that are only available in C#. This is not a request for another C# vs. VB.Net discussion, but does Microsoft prioritize both languages equally? I ask because we had a discussion regarding a new project where I was hoping it would be implemented in C#, but since the company already has so much experience with VB6, they judged that the switch to VB.Net would be more straightforward. But at least I get the chance to develop. :slight_smile:

Regarding JavaScript, it is actually scary that there isn’t more focus on that language, since it is being used more and more. Microsoft is shipping a new browser and all people talk about is tabbed browsing and HTML standards. Not that I want Microsoft to make more of their own JavaScript and DOM extensions; they did enough damage on that front already. But as far as I know, with my limited knowledge, Netscape and Sun made JavaScript, but are they still following up?

I think the biggest difference between JavaScript/ECMAScript and other scripting languages is that you don’t really know what they are doing. Normal programming languages can be debugged and disassembled; you can measure down to little pieces what is happening. At the same time, you have different browsers that run it differently. For instance, IE 6.0 has some bad memory leaks, or a stupid way of handling memory when it comes to JS, where you have to restart the browser in order to get a clean run of your script. Ahh, don’t get me started on that browser… :frowning:

On the other hand, you also have the worst form of copy/paste among coders when it comes to JavaScript. I hate watching otherwise OK developers copy/paste some Dreamweaver image-mouseover script just because the language is “too messy”, “too clumsy”, or whatever other excuses they come up with.

As a general comment, this attitude that we no longer need to care about efficiency since computers are so amazingly fast today has been around basically ever since there were computers. And there has always been an opposite attitude that every cycle counts and we always need new features that require better efficiency.

The truth is really somewhere in the middle. Even on a 4.77 MHz PC interpreted batch files were “fast enough”… for SOME tasks! On the other hand, even on a multicore next-generation game console C/C++ will still be mandatory since the games are supposed to look even better than the previous generation.

As computers got faster, the number of tasks that could be done with less efficient (but more convenient) tools increased. But usually this also requires a better infrastructure with more and more powerful libraries that are written in C/C++.

The Python/C example above is paradigmatic: how fast do you think your JavaScript would be if the browser commands you’re using were also implemented in JavaScript? JIT compilation is now so good that most of the .NET/Java libraries are themselves written in C#/Java but interpreted languages still can’t do that today.

Sorry for the language, I am not much of an English writer. Anyway, by Visual VB I meant Visual VB.Net.

Great stuff! I’m also very surprised by the Python results. That would definitely make me think twice before using it.

In the old days I was always amazed at just how fast PowerBuilder used to run. It is a byte-code/interpreted language as well. Yet you could write pretty nice applications on machines as low as 486’s and still get great performance. I wonder if you took Powerbuilder and put it up against the likes of Java what kind of difference you would see?

C# looks good:)

IIRC and FWIW, Basic compilers just stuffed the interpreter and the p-code into a file with .exe on the end. I don’t remember how long that went on.

w/regard to the JS limits: my javakiddies decided to pull the sort capability from the JSP pages we built. We’re “side-grading” to Spring/iBATIS/etc., but the reality is that our customers won’t go out and buy new machines to run our stuff. Sorting 1,000s of rows in IE leads to hangs, or so the users think. AJAX style does it faster.

As a recent Python convert (coming from C#) I am neither surprised nor concerned about the results. There are so many other factors that should go into choosing a language. What about learning curve, tools, libraries, community support, readability, etc?

As for performance, what kind of app are you writing? A web app, where database access and transmission of the HTML are going to be a couple of orders of magnitude slower than any of these languages? A tool for parsing documents, where native regex support is of primary concern? A Monte Carlo simulation, where you are floating-point bound?

Looking at performance in a vacuum is a fun discussion exercise, but without a context to evaluate all of the criteria, I think it’s not that useful.

I agree with Jeff’s main thesis: computers are fast enough today that most problems are not going to be limited by the language you choose. Rather the architecture, design and quality of code will have a much more dramatic effect.

While it’s true that the relative performance gap between scripting languages and C will remain large for a long time, so many apps are limited by UI input, network performance, I/O, and other components that have not followed the dramatic increases in CPU speed, that focusing on this one issue is a little silly.

the choice of a language involves many factors, and performance is just one, and probably pretty low on the list

And as of the last 2-3 years, performance hardly matters any more in practical terms. That’s why I cited the benchmarks above. So if it was low on the list of priorities before, it should have practically fallen off the bottom by now!

I notice that Delphi is STILL missing as if it wasn’t a real language. That gets annoying.

That would have been a good point a few years ago but by now, I think we can consider Delphi a dead language, thanks to Borland’s tireless efforts to kill it in the marketplace…

As with so many benchmarks, there’s value in looking at the details. The benchmark at is really measuring the performance of the script to repeatedly update the values of fields in a form (hidden fields in this case) of a web-browser document.

The inner loops of this benchmark update the hidden form fields ‘cycle’ and ‘count’ of the ‘benchmark’ form. If those field updates are changed to updates of regular global variables, the benchmark finishes in under 1 second, roughly 50 times faster than when it is updating the form fields.

Now, benchmarking field updates may well be of interest to someone, but it should be clear that this benchmark is not measuring looping, math, or algorithms.

Unfortunately, we don’t have the historical data to see how well the older machines ran the script when modified to update non-document-field variables.

Regarding Server Scalability, I think the real measurement you want to look at is something like cost per 1000 users.

For example, writing something in C++ instead of Python might mean you need one less server to scale out, so you save the cost of a server and associated licenses.

But the cost to write that C++, with equivalent security, robustness and feature list, will WAY more than outweigh the savings of that solitary server.