Hardware is Cheap, Programmers are Expensive

Because fast hardware keeps getting cheaper, 'we' start doing slow things like using XML or programming in Ruby, because it shortens time to market.

Most developers never read database documentation.

Fast hardware makes Vista Aero possible.

Clearly, hardware is cheap, and programmers are expensive. Whenever you’re provided an opportunity to leverage that imbalance, it would be incredibly foolish not to.

That might be true for application software that just runs on a few servers.

It is generally not true for device drivers and other system software that runs on millions of desktop PCs (Vista, anyone?).

It is rarely true for memory-, weight-, and power-constrained items like mobile phones, organisers, etc.

And it is total nonsense to apply it to embedded software that runs on the billions of TV remote controls, network switches, keyboards, mice, MP3 players, DVD players, televisions, washing machines, etc. If you're selling a hundred million units, then an extra penny saved on a component can hire you a lot of programmers, as can selling an extra 10% because your device is perceived to be faster/slicker/better than the alternatives.

I was brought into a place one time to check out why a government-mandated report on mortgage fee codes was failing. The report (written in C# against SQL Server by one of my junior colleagues) was now taking so long that it didn’t finish.

I walked in, looked at the table, and put an index on the fee code column. Instantly the report went from taking over 2 minutes to 45 seconds. Then I spent a few minutes looking at the query, made a simple change to it (eliminating an obvious cross join), and got it down below 30 seconds.

Then I looked at the C# code. My predecessor was concatenating strings together when formatting the report. I changed it to StringBuilder appends instead. The report now ran in 8 seconds. That was all under an hour's work, and it really wasn't hard at all.
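For anyone curious why that one change mattered so much: repeated string concatenation copies the whole buffer on every append, so building an n-row report costs O(n^2), while StringBuilder appends into a growable buffer in amortized O(1). A minimal sketch in Java (the original code was C#, where StringBuilder behaves the same way; the row data here is made up):

```java
// Sketch of the report-formatting fix. Both methods produce the same
// output; only the cost differs.
public class ReportFormat {
    static String slow(String[] rows) {
        String report = "";
        for (String row : rows) {
            report += row + "\n";  // allocates and copies a new string each time
        }
        return report;
    }

    static String fast(String[] rows) {
        StringBuilder report = new StringBuilder();
        for (String row : rows) {
            report.append(row).append('\n');  // appends into a growable buffer
        }
        return report.toString();
    }

    public static void main(String[] args) {
        String[] rows = {"fee code 001", "fee code 002"};
        System.out.println(slow(rows).equals(fast(rows)));  // same result, very different cost
    }
}
```

On a few thousand rows the difference is milliseconds versus minutes, which matches the kind of speedup described above.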

So, I agree that it’s well worth it to spend an hour or two getting an expert to look at the low-hanging fruit first. Sure, faster hardware would have got it done, but why?

On the Go programming list, they recently discussed an area where throwing more hardware at a problem may not be appropriate: if you have made a change to your Go-playing program, you may need to run it thousands of times against other programs and earlier versions of itself in order to convince yourself that the change is any good. In that case the code needs to be fast: the productivity gained from using a higher-level language is lost again in the additional time needed to regression-test it.

There might be cases where this also applies to other software: things that need to be profiled so heavily and continuously that it's better to use additional skill to get it right the first time than to just buy more hardware (after a certain point). Maybe weather prediction systems, climate models, or other simulation/prediction systems?

This is an interesting article, and one cannot argue either clause of your premise in isolation. I think, however, that much is missing from your list of factors with respect to the issue.

The initial cost of hardware (servers) is not the only cost. Yes, hardware is cheap, but it is a drop in the proverbial bucket compared to the total cost of ownership. I believe that over time your premise is eroded by the costs of:

  1. Licensing software such as Oracle or Websphere on additional servers
  2. Space/leasing costs
  3. Power and A/C costs
  4. Server admin/maintenance costs, and
  5. Most importantly, if one wants to add servers to improve performance, then one will probably need to upgrade networks and network appliances continuously to realize the gains, and
  6. Finally, not all performance issues are caused by software. Improper server configurations and under-powered network appliances can certainly contribute to performance issues.

I think your statements,

Always try to spend your way out of a performance problem first by throwing faster hardware at it. It’ll often be a quicker and cheaper way to resolve immediate performance issues than attempting to code your way out of it. Longer term, of course, you’ll do both.

… amount to bad advice. Example: I go out and buy "n" servers to "cure" a performance issue per your statements above, while incurring all the costs listed above. A day later, while the new servers are still being built out (incurring additional costs) and before going live, a server admin solves the problem in 15 minutes by increasing the Java heap size.
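For context, the kind of 15-minute fix described here is just a launch-flag change. `-Xms` and `-Xmx` are the standard JVM heap flags; the jar name and sizes below are hypothetical illustrations, not from the original story:

```shell
# Hypothetical sketch: raise the JVM's initial and maximum heap so the
# garbage collector stops thrashing. The flag names are standard JVM
# options; the sizes and jar name are illustrative only.
java -Xms1g -Xmx4g -jar report-server.jar
```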

Did you catch the same “virus” as this guy? :slight_smile:

http://articles.techrepublic.com.com/5100-10878_11-6044115.html?tag=search

Forgive the grammatical error of the opening sentence above. Thank you.

Jeff,

For once I disagree with you.

I think throwing hardware at the problem is like having the "Get Out of Jail Free" card in Monopoly. Certainly with Windows servers you can only expect to play this card once every 24 months to get a significant leap forward in terms of raw pace.

As for dev workstations, any software company that does not invest in tools/workstations/monitors/chairs/environment etc. for their developers (or indeed any serious PC power user) is simply being short-sighted.

Mike

Heh, try getting the public service to fork out for hardware for developers. We're developing on 5-year-old servers that are slow as hell. I've been trying to get us new development servers for 2 years now, with no success in sight.

In terms of effort, it's easier to hire more bums on seats than to have new servers purchased at a tenth of the cost.

Programming is not just about writing the best or fastest algorithms. The more important part is building the system/software itself: building it reliably, making it modular and expandable, going from the general to the specific, creating the pieces and tying them together. No hardware can do that.

You’ve gained some interest on Slashdot:

http://developers.slashdot.org/developers/08/12/20/1441203.shtml

Weak argument. Throwing more hardware at problems does not make them go away. It may delay them, but as an application scales, either:

  1. a combinatorial issue pops up, outstripping your ability to add hardware, or

  2. you just shift the problem to systems administration, who have to pay to maintain the hardware. Someone pays either way.

All-in-all, this is the sloppy sort of thinking that leads to such screwed up software. We don’t need good software so we don’t need good programmers and the cycle perpetuates.

The worst performance I have seen has come from middleware, despite all the DB bashing that goes on. Things like constant roundtripping to the DB, bubble sorts (!), constant re-rendering of the UI/web page etc.

Oh, and most of the time, the server is not the bottleneck. The network is. You can never go faster than your network. So keeping queries minimal and knowing when and how to cache are important. But that takes skill, which isn't very common amongst programmers.

Wow! I’m grossly underpaid, but then I don’t do much on my job and live in an inexpensive area. I really should upgrade my home computer but it would take a long time to reinstall everything on a new box. Figure at least an entire day wasted installing Visual Studio alone.

Hello,

In India, Programmers are cheap, hardware is pretty costly…

This is a pretty gross generalization.

True, spending 3 weeks to get 5% more out of an algorithm makes little financial sense.

But for poorly designed tables, indexes, and databases, this is very poor advice.

The problem here is that you can spend yourself into a problem too.

The classic example here is 4-, 8-, and 12-core processors. With the wrong design, the 4-core version is just as fast as the 8-core, and the 12-core is slower. So yes, the 12-core is cheaper than the 8-core, but only because it basically doesn't work for you.

Be sure to throw good hardware at the problem. If the good hardware happens to be cheap, then of course throw it at the problem.

Second, adding more cores on some OSes makes problems worse, not better. So make sure you use a good-quality OS that is built to scale and has developers working on fixing its scaling issues.

Be sure the problem scales. There's no point throwing 1000 cores at a problem that can only be processed single-threaded.
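This point is Amdahl's law: if a fraction s of the work is inherently serial, then p cores can never speed the whole job up by more than 1/(s + (1 - s)/p). A quick sketch in Java (the 10% serial fraction is an illustrative number, not from the comment above):

```java
// Amdahl's law: speedup(p) = 1 / (s + (1 - s) / p), where s is the
// serial fraction of the work. Even a small serial fraction caps what
// extra cores can buy.
public class Amdahl {
    static double speedup(double serialFraction, int cores) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
    }

    public static void main(String[] args) {
        // With 10% of the work serial, even 1000 cores give less than 10x.
        System.out.printf("8 cores:    %.2fx%n", speedup(0.10, 8));
        System.out.printf("1000 cores: %.2fx%n", speedup(0.10, 1000));
    }
}
```

With s = 1 (fully single-threaded), the formula gives a speedup of exactly 1 no matter how many cores you add, which is the point being made.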

Be sure to use a compiler that optimizes correctly. It's surprising how much difference it can make to spend some money on the Portland Group compiler and use it instead of MSVC or GCC.

I have had cases where people wasted hours trying to improve performance through altering code, only to find out that I was beating them when all I did was build the code they started with using a good compiler that applied, automatically, all the optimization methods they were attempting by hand.

Some compilers even rewrite algorithms, removing human error. Hand optimization can be a complete waste of time; in some cases it would be better in the long term to put your developers to work on the compiler itself.

It depends on the programmers you have working at your shop, but my impression is that the world has replaced programmers who know that qsort is a standard library feature with those who implement their own buggy bubble sort.
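For reference, the library sort being alluded to really is a one-liner in most languages. A Java equivalent of C's `qsort` (the fee values are made-up example data):

```java
import java.util.Arrays;

// Using the library sort instead of a hand-rolled bubble sort: one line,
// O(n log n), and already debugged by someone else.
public class UseLibrarySort {
    public static void main(String[] args) {
        int[] fees = {42, 7, 19, 3};
        Arrays.sort(fees);  // library sort, no hand-written loop needed
        System.out.println(Arrays.toString(fees));  // [3, 7, 19, 42]
    }
}
```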

I work in a shop with 70-80 developers, so any initiative costs a prohibitive amount of money in absolute dollars. At the new year kickoff event, the most popular request from the Devs [aside from more vacation time :slight_smile: ] was dual monitors.

Management had a cow when they found it would cost about $500 per seat for a second monitor and graphics card, or $40-50k to outfit the whole group. A pilot project with a selected group of developers was initiated to evaluate the productivity gains. As one could guess, a year after the initial outcry we are still waiting for dualies.

This is one of those wonderful articles that ends up at Facebook and other Web2.0 applications. They utilize an unbelievable amount of hardware to make up for some phenomenally poor design decisions that they’re now stuck with. Throwing hardware at the problem first is rarely the right answer.

Nice article.
I think it applies only when:

  1. you have a web based project.
  2. your code is THE shit and scales well (responds to the addition of new hardware).
  3. you have an optimized, tested and released product and it makes economic sense to add more servers than to pay for new developer.
  4. you own stackoverflow.com

Jeff: I think you may have nuked the fridge with this one.