24 Gigabytes of Memory Ought to be Enough for Anybody

I agree with Kevin Krueger. I can’t believe so many people are getting upset at the sentence “Algorithms are for people who don’t know how to buy RAM”. I found it hilarious.

MOAR GIGABYTES!

I’d also point out that the alleged trade-off between “spend more time writing better code” and “just buy more memory” can, in some cases, be a way of hiding problems like memory leaks, where you are only delaying the inevitable. Disk speed is primarily an issue for latency-sensitive operations (like interactive database queries), but apps that have to churn through a lot of data ought to be batching up reads and issuing asynchronous I/O requests so that the OS can fetch or copy the data independently of the application. The need for a program to be mindful of its current working set doesn’t go away with more RAM; it just moves up the cache hierarchy to the CPU.
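
To make the batching point concrete, here is a minimal Python sketch of what I mean; it assumes Linux (os.posix_fadvise) and a hypothetical handle_chunk callback that does the real processing:

    import os

    CHUNK = 8 * 1024 * 1024  # read in 8 MB batches instead of tiny records

    def process_large_file(path, handle_chunk):
        # Batch up reads and hint the kernel so it can prefetch ahead of us.
        fd = os.open(path, os.O_RDONLY)
        try:
            size = os.fstat(fd).st_size
            # We will read sequentially, so let the OS read ahead aggressively.
            os.posix_fadvise(fd, 0, size, os.POSIX_FADV_SEQUENTIAL)
            offset = 0
            while offset < size:
                # Ask the kernel to start fetching the next batch while we are
                # still processing the current one.
                if offset + CHUNK < size:
                    os.posix_fadvise(fd, offset + CHUNK, CHUNK, os.POSIX_FADV_WILLNEED)
                data = os.pread(fd, CHUNK, offset)
                if not data:
                    break
                handle_chunk(data)
                offset += len(data)
        finally:
            os.close(fd)

The exact calls don’t matter; the point is that the application hands the OS big, predictable requests instead of making it guess.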

@Seth Heeren So, what about the 1 MHz, 32K machine I have in my basement that still works great? Busicalc is still quick, even after all these years.

RAM RAM everywhere and none of it ECC.

For what it’s worth, I think people are taking the algorithms vs. RAM comment way too seriously. I interpreted it as a comical statement made mostly in jest—am I wrong?

On an unrelated note, you are coming to my school in February (CMU Silicon Valley)! I am pretty psyched for that.

I don’t find memory to be a future-proof investment. I upgrade my system every year or two, and by then there’s usually some slight architectural change that renders my old RAM useless. It might be something small, like an increase in RAM speed, or going from DDR2 to DDR3, but either way, I’m forced to buy new RAM again.

Ummm, excuse me, but when did “algorithm” become synonymous with “optimisation”? An algorithm is a predefined set of steps to achieve an end result. Sounds like a program, doesn’t it? Algorithm = program. No algorithm = no program -> no computers, and any conversation about RAM disappears in a puff of smoke.

What you all appear to be talking about is “Optimisation is for people who don’t know how to buy RAM”.

Apparently a lot of people are incapable of detecting hyperbole.

RAM is cheap…now, but it wasn’t always this way. 20 years ago I was working at Novell, and they showed off their new testing computer.

It cost $1,000,000 just for the 1G of RAM it had. Yes, RAM cost $1,000 a MB at that time.

About a year later someone discovered how to increase RAM yields and prices fell through the floor. We might laugh about how lame computers were not so long ago, but it wasn’t always like this.

My computer only has 3.0 Gig of memory… but my phone has 14.5G!

Some 25 years ago, I was sysadminning a time-sharing computer that had a whole 2.4 gigs of disk… that was two drives the size of small washing machines! Last month, I bought my boss a 10G thumbdrive smaller than most of my fingernails…

I only just used that Bill Gates quote the other day http://www.matthewedmondson.info/2011/01/assumptions-in-software.html - sorry, I guess I’m guilty of spreading such rumours.

This RAM vs. algorithms quote is stretching the truth, of course. RAM is nothing special in the context of algorithms and optimization. It’s just another type of resource, and a limited one. As a programmer you should track the usage of these resources and plan to invest in increasing some of them.
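
In Python, for example, a few lines with the standard tracemalloc module will show where the memory actually goes (the list comprehension is just a stand-in workload):

    import tracemalloc

    tracemalloc.start()

    # Stand-in workload: replace with whatever part of your program you care about.
    data = [str(i) * 10 for i in range(100_000)]

    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:5]:
        print(stat)  # the five call sites that allocated the most memory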

So having twice as much RAM does not imply that you will see any improvement in performance. But… in most cases we did.

“Plenty of RAM is great for most applications”: that’s an 80% rule. There is no need to hold it up as a silver bullet. And we should all remember what happened with the GHz race in the CPU space. Ten years ago I could easily have said:

Algorithms are for people who do not know how to buy a new processor.

This sort of thinking has driven the whole industry for decades. Well, not anymore.

Your approach encourages this. I hate this blog post. Sorry.

I’m with you on this. Having recently switched from a Windows Mobile phone to an Android one with a very similar hardware specification (the same CPU speed, for sure), I find the responsiveness of the HTC wonderfully fluid compared to WMP. I admire the Android developers (and the Linux ones, I suppose) for building a non-bloated system. The WMP team tried to shoehorn a big-memory approach into a small device.

One big advantage of the smartphone surge is that it has refocused attention on performance, which is closely tied to using memory correctly.

How is power consumption for the i7 (versus the i5 or i3, I guess)? For me, lower is better.

Why is everyone so angry? Chill, the comment was in jest.

Check out Jim Gettys’ blog posts on bufferbloat to see how the unchecked addition of memory to routers on the Internet will be the cause of the next big slowdown.

http://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/

Regarding “Algorithms are for people who don’t know how to buy RAM”, one tip-off about the quote is that it was related by a computer science associate of Clay Shirky’s. Given that CS’s main focus is algorithms, that should be clue enough that there’s subtle wisdom in the statement.

I’ve been around the block a few times, got my master’s degree in CS, love algorithms, and still see the wisdom in this statement. Three real-world examples quickly sprang to my mind.

I remember all the time I spent in my CS studies learning mergesort algorithms for two tape-deck sources spooling to a streaming tape output. Do YOU know how to optimize sorting algorithms for 1K of memory and a trio of streaming tapes? No? Thank your RAM. Problem solved.
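
The modern leftover of that exercise is the k-way merge: keep one record per run in memory and stream everything else. A rough Python sketch, with ordinary files standing in for the tapes (run_paths and out_path are made-up names, and each input file is assumed to be already sorted):

    import heapq

    def merge_tapes(run_paths, out_path):
        # K-way merge of pre-sorted "tapes" (plain files here): memory use is
        # proportional to the number of tapes, not the total amount of data.
        runs = [open(p) for p in run_paths]
        try:
            with open(out_path, "w") as out:
                for line in heapq.merge(*runs):
                    out.write(line)
        finally:
            for r in runs:
                r.close()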

I also spent a lot of time learning a number of hairy hidden-line and hidden-surface algorithms to solve the visibility problem in computer graphics. My first graphics computer (an Amiga 1000) didn’t have enough working RAM to hold one screen’s worth of 24-bit pixels. Today? Ray trace or use a Z-buffer. Indeed, modern raster graphics hardware has Z-buffers, A-buffers, bump-map sources, texture sources, stencil buffers and on and on. Better algorithms? Nope; today we finally have enough RAM to dump all the crazy algorithms we used to have to employ. If you use Z-buffers today but don’t know the floating-horizon, Roberts, Warnock or Weiler-Atherton algorithms, then hold your tongue and thank your RAM.
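
And the Z-buffer really is that dumb: the whole “algorithm” is a per-pixel depth compare, paid for with a screen’s worth of extra memory. A toy Python sketch (resolution and names invented for illustration):

    WIDTH, HEIGHT = 640, 480
    FAR = float("inf")

    # One depth value per pixel: the memory cost that used to be prohibitive.
    depth = [[FAR] * WIDTH for _ in range(HEIGHT)]
    frame = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def plot(x, y, z, color):
        # Keep a fragment only if it is nearer than whatever is already stored
        # at that pixel; no sorting, no clever visibility analysis.
        if z < depth[y][x]:
            depth[y][x] = z
            frame[y][x] = color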

I spent several years of my career working on an advanced rendering solution that was predicated on the belief that RAM would remain prohibitively expensive and that better algorithms (essentially decomposing scenes into 2D affine sprites with asynchronous error-based updates) would save the day. Several years later, RAM prices had dropped considerably and our hardware was a footnote in history.

Yes, learn your algorithms. Study big O and little O notation and learn about the asymptotic performance of the algorithms you use and write day-to-day. But there are clearly areas where RAM is THE solution to various problems we encounter day-to-day.

If you think algorithms will save your butt every time, then you’re no better off than the fool who believes that RAM will save his butt every time.

@Steve Hollasch.

Well said. Use what works best.

Well, the quote “Algorithms are for people who don’t know how to buy RAM” is for people who have never heard of NP problems :)