That whole article rests on the assumption that the software is only ever used once, on a single product. That is a business scenario software is especially bad at.
Deploying multi-server solutions is not exactly cheap. Doing it multiple times on multiple solutions is very expensive.
Furthermore, it is always done at the end of a project, when the deadline has long since passed, so it is stressful too.
I’d say that with both developers and hardware it comes down to the law of diminishing returns. For example, if a website cannot cope with 10 users, then throwing more hardware at it won’t fix it. And on the other hand, when adding more developers to optimise code, there comes a point where you get less and less benefit.
One of these days (soon, I hope), we’ll convince customers that gathering a dozen or more people in one place for a design review is not a smart use of development funds. If ten of those people need to fly (business class for those wearing suits) to Chicago or Birmingham or ?, need rental cars, need hotel rooms, need to eat, etc., for a review, perhaps the money might be better spent. A terabyte hard drive costs less than most domestic airfares. A fully loaded server costs less than an international airfare. Doing the formal parts of design review via virtual meetings (assuming you can spend the savings on hardware) might mean higher throughput via extra servers, or redundant mirror systems where appropriate.
On the optimization front, I’ve found unrolling loops by hand very effective when the N in do 1,N is a constant. The index of Knuth’s Volume 3 has 13 references under “optimization, loop”, mostly in the exercises.
Just a guess, but I suspect there’s plenty of scope for optimizing anything written in Java.
Perhaps irrelevant, but whether I use an ATM in NYC, or Glasgow, or Auckland, I see much the same response (cash out), and crashes are almost unheard of. Large databases are being queried, correct answers are almost always returned, and in a short time. I don’t think I want my bank to move to Java.
This would only work if you run the hardware that the software runs on. If you send out software to customer sites that is slower than the last version or than the competition, it just looks like a poor-quality product in my eyes. If the software is made for the desktop, it will often cost more in hardware upgrades than was spent on the software in the first place (e.g. Vista/Office 2007).
I think you have to pay attention to performance metrics, but not necessarily do anything about them if they are not too bad. If you are trying to wring the last 10% or 20% out of something, you should give up, since it’s not worth it. However, I often find that optimising bits of unloved code gets speed-ups of over 10x (which is still only worth it if that code is frequently used or is causing a problem).
Throwing more hardware at the problem is, most of the time, only a temporary fix used by people who don’t (yet) understand the root cause of the problem, and it rarely works as a long-term solution in my view.
Well, up to a point I agree with you. But remember that future hardware speed-ups seem to come from adding more cores, and (for most problems) you have to do some good old-fashioned programming to keep the amount of serialized code to a minimum in order to scale beyond a certain number of cores.
Although, for a lot of web applications the add-more-hardware approach will get you far (but only so far…).
My friend (and IT exec extraordinaire) Dave Berk has a response to this article posted at his blog. http://www.lump.org/?p=135 He hits on many of the same ideas mentioned by others.
–Most software is usually not very fast to start with. Good optimization can often get you much better than the linear gains you get at best by adding hardware.
–Lots of common software gets exponentially slower as you increase its usage / throughput.
Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture was a good idea.
Just as an aside, to give you an interesting benchmark—on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore’s law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there’s approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.
The myth that it doesn’t matter what your processor architecture is—that Moore’s law will take care of you—is totally false.
JakeBrake is thinking along similar lines as me. Throwing hardware at a software problem has its place in smaller, locally hosted data facilities. When you’re running in a hardened facility, the leasing of space, power, etc. begins to hurt. One could argue that the time and labor needed to design and implement a new server, along with the hardware costs, space, power (and don’t forget disk if you’re running on a SAN; fibre channel disk isn’t cheap!), can easily negate the time it takes a programmer to fix bad code. But again, that also assumes you know the problem can be fixed with code.
And some time ago Ballmer was hopping around like a monkey, making fun of all schizophrenics, clapping and shouting in coordinated fashion: developers, developers, developers, developers!!
In some scenarios your analysis holds true; in others it doesn’t. Sometimes you can’t just throw hardware at the problem, especially if the problem is I/O bound rather than CPU bound.
Also, the performance difference between well-designed code and poorly designed code can be extreme, and more than can be made up for by simply throwing hardware at it. Someone’s already mentioned the web DB scenario, and there are others.
I would say get your good guys to design the most critical/sensitive bits of the system fairly well in the first place. This also reduces the need for later optimisation, as there is less to gain.
Some kinds of inefficiency are inexcusable: a product of stupidity or ignorance rather than rapid development. Fast code doesn’t need to be complex/bloated/unintuitive/difficult. Someone who’s made a habit of writing good code can often produce better, faster code in less time than someone who happily writes piles of slow, repetitive, disorganized crap.
Finally someone has put this into perspective with actual numbers. Nice job! There is no excuse for giving developers inadequate tools that slow them down.
It’s like giving a carpenter a wooden hammer instead of a metal hammer. Takes 2x as long to pound in the nail.
Also, look at any engineer (CAD, mechanical, electrical): they never have slow PCs. Partly that’s because of the CAD software, but it also proves they don’t have time to wait either, so why should we? For the longest time, my wife had a better PC than I did, as she is a mechanical engineer. It really pissed me off that I was programming and had to wait for MS Word to pop up, or for a VS compile to take a minute, because I had an fing 5400 rpm hard drive at 2 of my jobs.
I ninja-stealth upgraded my own computers at a previous job where we had restrictions like that. I basically gutted the inside of the PC so on the outside it looked like a normal…
When you work for a company that gives you only 2 gigs of RAM, or a 5400 rpm hard drive, or an inadequate processor, and they refuse to give you a developer-grade laptop or desktop, I say I’m all for the ninja-stealth. If they are not smart enough to provide the tools needed, then you have to do what you have to do.
You don’t come to work to wait for shit to load. You come to work to code, and to do that as efficiently as possible.
One good post, then 3-4 bleeding-obvious posts on this blog. Over and over again. That’s how you build web properties. (kicks himself)
Optimization is for losers. Just ask Microsoft how they fared with throwing more hardware at Vista and its inefficiencies. I hear that Vista has eroded Microsoft’s standing, brand and profits (somewhat) and allowed a company called Apple to gain a strong foothold in the market with their little super-efficient OS X (running on super-efficient hardware).
Jeff, with all due respect, you should take the time to read up on subjects you write on before posting such simplistic articles.