Hardware is Cheap, Programmers are Expensive

Many of the comments here bring up, in different ways, that the economics of this aren’t quite that clear. For my tiny one-person company, sure, I can solve just about any problem with a few mouse clicks and some new hardware. But for my clients, for whom I do embedded product development, it totally depends on the cost of goods sold (COGS) and the number of units shipped. Sometimes saving pennies on hardware really pays off in high-volume applications. However, I also go with what I read decades ago in the first edition of Kernighan and Ritchie: first make it work, then make it fast.

My approach is to write code that performs as fast as the customer expects it to perform. Call me a minimalist. If the code is well written and useful, then someday the application will move to faster hardware and perform better.

I would argue that the cost of insufficient hardware is actually higher: if a developer has to wait a long time for operations that should be interactive (e.g., refactoring, rebuilding, running, restarting the environment), they’re more likely to make a quick switch to the web/mail/coffee, and the task ends up taking longer.

Also, developers may be afraid of anything that rocks the boat, like refactoring. I never refactor on my old laptop, only when I’m on a fast desktop.

How many $5000 MP3 players do you own? I mean, forget optimising, just add hardware… it weighs 10kg because there’s a large battery to power the fan to cool the fast processor, but the software is simple.

Even with server systems it can be misleading, because the cost of upgrading clients’ hardware can be very, very high. Even ten servers at $5k starts to add up; when you have one per store in a nationwide chain it can get really ugly really quickly, because roll-out costs are significant. I’ve also worked with clients who would pay $100,000 for an upgrade rather than shell out an extra $5k for faster disks or more memory in their single licensed server. My conclusion was that they were running many, many more copies of the server than they were paying for, but my boss convinced me they were just idiots. Sorry, constrained by bureaucracy. Software upgrade budget A, hardware budget B… which one has money in it?
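
As a back-of-envelope illustration of how roll-out costs dominate (the store count and per-site labour figure below are invented for the example):

```python
# Hypothetical nationwide chain with one server per store.
stores = 500                    # assumed store count
server_cost = 5_000             # the $5k box mentioned above
rollout_cost_per_site = 1_500   # assumed travel, labour, and downtime per store

total = stores * (server_cost + rollout_cost_per_site)
print(f"Total upgrade cost: ${total:,}")  # $3,250,000
```

Against a number like that, a developer-month spent making the software fit the existing boxes starts to look very cheap.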

I think we get a little lost in the semantics sometimes.

Certainly the ‘throw hardware at the problem’ approach isn’t a viable option in all scenarios, and it’s never the ideal solution; but when it is a viable option, it is often the cheapest, fastest short-term solution.

It’s unfortunate, sad even, that the state of development has come to this. Seems like every doubling of hardware power has been met with an even greater, pardon the expression, ‘crapping up’ of the software we run on it.

The developer as the most costly component of development has led to RAD IDEs that do so much hand-holding that any schlep can pump junk code to market that somehow, accidentally or otherwise, manages to do most of what it’s supposed to, at the expense of hardware. ‘Get it done’ walks all over ‘Do it well’. Dynamic memory allocation, automatic garbage collection, and that sort of stuff seemed kinda cool at the onset, but in retrospect I’d gladly give it all back if it would scare a few would-be developers away from the biz or make them clean up their act.

Sorry, I’m a bitter angry man who has cleaned up a lot of sloppy source code in his time.

I understand your point, Jeff, and I got a laugh out of it; but I do agree with a few of your readers who pointed out that there are often glaring issues in the code that a talented and experienced developer can quickly find and correct, often leading to striking performance improvements.

So, by all means give 100 monkeys VS08 and see what they come up with, but don’t publish it until you’ve had your more senior prime-ape check their work to make sure it’s not mucked up with their feces.

Wow I am bitter…

nice post. thanks.

Just one thing: no hardware will save you from code produced by stupid programmers.

So, it’s not about hiring cheap programmers and buying expensive hardware, but giving good tools to good programmers, so they won’t have to waste time doing unnecessary micro-optimizations.

It seems like you cannot count on Moore’s law anymore.

Until a few years ago, the CPU clock frequency increased all the time, …

Er…no. Moore’s Law relates to the number of transistors they can cram onto a chip (roughly doubling every couple of years), not to CPU clock frequency. Not only have they kept pace this year, but they are ahead of the pace.

Hypothesis: Hard drive space is cheap - no need to optimize our code.
Fact: MS Office for Windows 95/98 came supplied on floppy disks or an all-in-wonder CD that contained many EXTRAS as well… Today it comes on DVD and doesn’t contain nearly as many extras.

Your hypothesis is what has led to multiple generations of code bloat. I am truly glad not everyone thinks this way.

I ninja-stealth upgraded my own computers at a previous job where we had restrictions like that. I basically gutted the inside of the PC so on the outside it looked like a normal corporate box, but on the inside it had a modern mobo, PSU, CPU, hard drive etcetera. I just imaged the old drive over to the new one, and let it redetect all the new hardware.

There was a similar policy at the place I worked a few years ago. Around the time I quit, corporate decreed that everyone gets the same hardware regardless of job function, so as to save lots of money in tech support costs. There were a few developers who did the same thing, replacing the standard hardware with something better. Corporate’s response was to mandate a system app that detects unapproved software and hardware and emails tech support. The offender gets a similar email detailing the offense, with the addition that they will be fired if the situation is not corrected. One developer I talked to got those emails four or five times a day, soon after recompiling a company app. She quit soon after that, so I never found out if they managed to solve that problem.

Tim

  1. The first rule of Optimization Club is, you do not Optimize.
  2. The second rule of Optimization Club is, you do not Optimize without measuring (see the sketch after this list).
  3. If your app is running faster than the underlying transport protocol, the optimization is over.
  4. One factor at a time.
  5. No marketroids, no marketroid schedules.
  6. Testing will go on as long as it has to.
  7. If this is your first night at Optimization Club, you have to write a test case.
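
In the spirit of rule 2, a minimal measurement sketch; the two string-building functions are just stand-ins for whatever you suspect is slow:

```python
import timeit

def concat_with_plus(items):
    # Suspected hot spot: repeated string concatenation.
    out = ""
    for s in items:
        out += s
    return out

def concat_with_join(items):
    # Candidate optimisation: a single join.
    return "".join(items)

data = [str(i) for i in range(10_000)]

# Measure both before touching production code; let the numbers pick the winner.
for fn in (concat_with_plus, concat_with_join):
    t = timeit.timeit(lambda: fn(data), number=200)
    print(f"{fn.__name__}: {t:.3f}s for 200 runs")
```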

@Josh
Views shared by some very smart people.

Does Django scale?

Yes. Compared to development time, hardware is cheap, and so Django is designed to take advantage of as much hardware as you can throw at it.

That isn’t an argument against investing development time in optimisation. It’s an argument for using frameworks where the development time has already been invested. To be scalable, someone must have spent time on optimisation (probably from the start, at the architectural level). Django’s core business is creating the framework, so it makes sense for them to spend developer hours optimising it. The users of Django don’t have to spend that time, and can scale out by hardware. Scaling out isn’t the same as throwing hardware at a problem without thinking about optimisation bottlenecks.
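
As a small, hypothetical illustration of that scale-out story: Django’s cache layer will happily spread load across however many memcached boxes you buy, with no application changes. The host names below are invented, and the exact backend path varies by Django version.

```python
# settings.py -- sketch only; hosts are made up for the example.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        # Add entries here as you add hardware; keys are hashed across the nodes.
        "LOCATION": [
            "cache1.example.com:11211",
            "cache2.example.com:11211",
        ],
    }
}

# Used by Django's cache middleware (if enabled) to cache whole pages.
CACHE_MIDDLEWARE_SECONDS = 600
```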

There are other considerations apart from dollars - assuming it’s linear with CPU load, a script implemented in Ruby uses 50 times the energy of an equivalent program written in (not particularly optimised) C. If economic measures come into force to reduce the world’s carbon footprint, then the balance between cost and energy may shift.
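
A rough worked example (the wattage, runtime, and electricity price are assumptions; the 50x factor is the one claimed above):

```python
# Back-of-envelope: the same job in C vs. Ruby, assuming energy scales with CPU time.
c_cpu_hours = 1.0                   # assumed: the C version needs 1 CPU-hour per run
ruby_cpu_hours = 50 * c_cpu_hours   # the 50x factor
watts_per_core = 100                # assumed average draw of one busy core
price_per_kwh = 0.10                # assumed electricity price in dollars

for name, hours in (("C", c_cpu_hours), ("Ruby", ruby_cpu_hours)):
    kwh = hours * watts_per_core / 1000
    print(f"{name}: {kwh:.1f} kWh, ${kwh * price_per_kwh:.2f} per run")
# C:    0.1 kWh, $0.01 per run
# Ruby: 5.0 kWh, $0.50 per run
```

Pennies per run, but multiplied over millions of runs it becomes a line item - and a carbon footprint.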

Uh, yes. Programmer time is more important. That’s why we write in high level languages now, not assembly.

I think this is missing the point a fair bit. If you want to build something that is worth having a performance problem (a high-volume programming Q&A website, for example), then you need good developers anyway.

Once the site is becoming popular and you need to improve performance you can either (a) fire those good developers and get better hardware or (b) let those good developers improve the performance.

Looking at this problem in isolation from the need to have good people to design and build the app in the first place is kinda dumb. Sorry.

If a one-time investment of $4,000 on each programmer makes them merely 5% more productive, you’ll break even after the first year

… and their hardware will be out-of-date by then too.

Not disagreeing with post in general, though.
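
For what it’s worth, the arithmetic behind that quote only needs a rough salary figure; the $80,000 fully loaded cost below is my assumption, not a number from the post:

```python
hardware_cost = 4_000           # one-time spend per programmer, from the quote
productivity_gain = 0.05        # 5% more productive, from the quote
loaded_cost_per_year = 80_000   # assumed fully loaded cost of one developer

value_recovered_per_year = productivity_gain * loaded_cost_per_year  # $4,000 per year
print(hardware_cost / value_recovered_per_year)  # 1.0 -- break even after the first year
```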

I think this is bullshit. I have seen developers working in that style, and the effect was: by letting them add features on new hardware, you end up with a system ten times slower than the old system was on older hardware. And that for a system which is sold to customers - customers who don’t want to swap a big IBM machine for a newer one, who think they should be able to use their hardware at least until they’ve recouped the investment they made in it. Think about that. One good Indian programmer gets ~$36,000. Isn’t it cheaper to hire one for coding, one for optimization and one as a spare than one buggy American programmer at $100,000 who does no optimization and needs ever more modern computers?

I’d like to recommend Release It! from the Pragmatic Programmers. It amounts to a proper debunking of this lofty talk.

It’s noticeable that you skip the part where one would try to figure out whether capacity sinks linearly or exponentially, slowly or at a particular threshold… or how fast, and why, performance grows as you add resources.

Please bear in mind that probably the most rapidly growing clients at the moment are the iPhone and Android. These do not have incredible amounts of memory, they use a battery that is supposed to last for quite a while, they are convection cooled, and Moore’s law works on the display and battery side, not nearly as much in terms of cycles. School kids two to three years from now, all over the world, might be entirely happy to put such a device into a screen/keyboard/battery docking station. These things will run on solar.

Social responsibility and ethics are no small problem with your thinking.

Further, it points toward the deservedly infamous complete code rewrite down the road.

If you put your eggs in the ‘just add hardware’ basket, you are basically producing a design that is, in car-analogy terms, even worse than the SUV.

Niall Flanagan has it correct: it’s a very poor assumption that doubling hardware (for example) will double performance, or even raise it at all. And is that performance in terms of throughput or in terms of response time? Whilst you can sometimes add throughput, reducing response times is often a whole lot harder through hardware alone.

Many systems I see in my daily work won’t go any faster however much hardware you throw at the problem, because someone (usually inadvertently) programmed in a maximum level of performance - either out of incompetence, or because “we’ll never need more performance than X”.

For example, take a simple app that updates the exact same row in a table for every transaction, maybe a sequence number or a transaction count. At some point you will hit a wall, and beyond that point you can add all the extra disks and CPUs you want; it won’t make a jot of difference.
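
A sketch of that anti-pattern (table and column names invented; sqlite3 stands in for whatever database is involved): every writer funnels through the lock on one row, so extra disks and CPUs just mean more sessions queuing behind it.

```python
import sqlite3  # stand-in engine; the hot-row argument applies to any RDBMS

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('txn_count', 0)")

def record_transaction(conn):
    # Anti-pattern: every transaction updates the very same row, so all writers
    # serialise on that row's lock no matter how much hardware you add.
    with conn:
        conn.execute(
            "UPDATE counters SET value = value + 1 WHERE name = 'txn_count'"
        )

for _ in range(1000):
    record_transaction(conn)
print(conn.execute("SELECT value FROM counters").fetchone())  # (1000,)

# Common fixes: insert one row per event and aggregate later, or shard the
# counter across N rows and sum them on read.
```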

So many people are getting this wrong. If you’re selling to customers, then the cost of the hardware they require is multiplied by every customer, which is hopefully many times more people than the number of developers. But if you’re providing the service to customers, then throwing hardware at the problem can work, and not only that, it is the best solution, short and long-term, within reason (if 99.98% of your time is spent on linear searches on sorted data, then an optimization would cost less in all metrics than a hardware upgrade).
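
To make that parenthetical concrete, the kind of fix being described is often just a few lines; a sketch (data and lookups invented) using Python’s bisect on the same sorted list:

```python
import bisect
import random

haystack = sorted(random.randrange(1_000_000) for _ in range(100_000))
needles = [random.randrange(1_000_000) for _ in range(2_000)]

# O(n) per lookup: where the profiler says 99.98% of the time goes.
hits_linear = sum(1 for n in needles if n in haystack)

# O(log n) per lookup on the very same sorted data -- cheaper than new servers.
def contains(sorted_list, x):
    i = bisect.bisect_left(sorted_list, x)
    return i < len(sorted_list) and sorted_list[i] == x

hits_bisect = sum(1 for n in needles if contains(haystack, n))
assert hits_linear == hits_bisect
```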

The disk footprint of MS Office install discs is a good example. Why should they fit the installer on a few floppies? DVDs cost almost nothing at the margin compared to smaller media. If we’re using the installer size as a proxy for MS Office’s actual installed size, the complaint makes more sense, although with that said: would you pay $10 extra for a version of Office that has 90% of the HDD footprint? Or do you just, on principle, want optimal optimization against every metric (how soon it’s released, stability, price, disc size, memory size, working set, alignment, lack of redundancy, robustness, CPU time, correctness, GPU use, I/O subsystem use, security, usability, maintainability, and computational elegance), even when they contradict?

I agree with the general thesis; however, there’s a place where your entire foundational logic breaks, which is when you’re creating libraries (like we’re doing) for others to use. There you can multiply any performance gain by the number of users of your library, which makes even the slightest performance gain grow insanely in importance…

Anyway, great writeup. BTW, I remember a friend of mine kept the record for the number of simultaneous sprites, in fact for more than a year I think :wink:
Ypsilon from IT…
He was a couple of years older than me at the time and actually my hero (kind of) while we were kids…

Though when we (ADD, later to become Power Lords - me == Polterguy) crushed them in a copy party demo contest later that year - we were all of a sudden OK to talk with … :wink:

Those were the times…

For software which runs on a few machines, sure, it doesn’t pay to optimize too much. Take software which runs on thousands of machines (à la Google search), and suddenly the cost function favours employing a whole team to optimize it as much as possible.
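
A last bit of hedged arithmetic (fleet size and per-machine cost invented) to show how the cost function flips at that scale:

```python
machines = 20_000                # assumed size of the serving fleet
cost_per_machine_year = 3_000    # assumed hardware, power, and hosting per year
cpu_saving = 0.10                # a 10% CPU reduction from an optimisation push

saving_per_year = machines * cost_per_machine_year * cpu_saving
print(f"${saving_per_year:,.0f} per year")  # $6,000,000 -- easily pays for that team
```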