24 Gigabytes of Memory Ought to be Enough for Anybody

"Algorithms are for people who don’t know how to buy RAM."
That’s all well and good until you:
a) Need to run software on a minimalist system (think phone or netbook, or even a budget desktop [relevant if you’re an organization trying to cut costs])
b) Need to write software that does some serious heavy lifting (database manipulation, simulations, some types of AI).
c) Realise that RAM consumes 20% of an average desktop’s power (this is most relevant if you’re running a server)
(a) in particular is becoming increasingly relevant - it’s the reason MS had to recommend XP for netbooks until Win7 came out. Efficient code can be used in places that bulky code can’t, and just because your dev system has 24 GB of RAM doesn’t mean that your users do.
While you realistically can’t optimise code until it’s perfect, efficient code will always be valued by users for its speed, and the reverse is also true: bloated code gets hammered (consider how much flak Nero has taken for being a 1+ GB CD burning package when competing packages are <10 MB).
This is all to say nothing of the emerging market that is netbooks and smartphones (tablets too) - getting more than 1 GB of RAM in those isn’t about to become common anytime soon, and their usage is increasing.

Worth pointing out that not all Core i7-branded processors have triple-channel memory controllers: i7-9xx processors support 1066 MHz triple-channel DDR3 memory, whereas i7-8xx processors have dual-channel 1333 MHz DDR3 memory controllers.

Oh come on, people, the algorithms aphorism is a funny little quip that obviously doesn’t hold 100% of the time. But there are a great many tasks where a cleverer algorithm exists, yet where it would be cheaper (in terms of total power draw) to just load the data into memory and do the work there.

For instance, sorting a large dataset. If you don’t have the RAM available to do it, fine, you’ll use some sort of external sort, involving at least a few round trips to secondary storage. If you do have the RAM, though, it’ll be faster to just do it in there.
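A minimal sketch of that difference (Python, assuming a made-up input file with one integer per line): with enough RAM you read everything and sort once; without it you sort fixed-size chunks, spill each sorted run to disk, and merge the runs back, paying for the extra trips to disk.

```python
import heapq
import os
import tempfile

CHUNK = 1_000_000  # number of lines assumed to fit comfortably in RAM

def sort_in_memory(path):
    # Plenty of RAM: read everything, sort once, done.
    with open(path) as f:
        return sorted(int(line) for line in f)

def sort_external(path):
    # Not enough RAM: sort fixed-size chunks, spill each sorted run to a
    # temp file, then k-way merge the runs (extra round trips to disk).
    run_paths = []
    with open(path) as f:
        while True:
            chunk = [int(line) for _, line in zip(range(CHUNK), f)]
            if not chunk:
                break
            chunk.sort()
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            tmp.writelines(f"{x}\n" for x in chunk)
            tmp.close()
            run_paths.append(tmp.name)
    run_files = [open(p) for p in run_paths]
    merged = list(heapq.merge(*(map(int, fh) for fh in run_files)))
    for fh in run_files:
        fh.close()
    for p in run_paths:
        os.remove(p)
    return merged
```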

Yes, this doesn’t hold 100% of the time. That I need to explain this is kind of sad.

There’s more to performance than just what a programmer can do by optimizing their code. This is particularly important for software whose performance also depends on user input, or for software that is supposed to scale. So the quote “Algorithms are for people who don’t know how to buy RAM” deserves a little more attention than some of you have been giving it.

It’s not stupid, and it’s not the worst thing you’ve ever heard. It’s actually, in fact, quite true!

There’s only so much a programmer can achieve in terms of code optimization, and much of it is well documented and easily understood. There are very few secrets concerning code optimization these days, particularly in well-known and proven areas of development. Code that is less than optimally written is pretty much understood these days as either a deliberate strategy (maintenance concerns, etc.), laziness on the programmer’s part, or inexperience - not the product of some secret knowledge available only to a few.

Optimization is, for the most part, taken out of our hands and placed in those of the compiler, the operating system, and the hardware. Those are the real agents of performance on our modern systems. Our say in the matter (assuming, of course, good-quality code) is pretty limited.

Now, with only a limited capacity for optimization, it’s pretty easy to understand that hardware scalability comes into play in terms of what one can or cannot do to actually increase the performance of our software. Instead of wasting time over-optimizing our code to reach some performance goal that may actually be unattainable, we greatly reduce costs and achieve much better results by increasing the capabilities of our hardware. And that’s where this quote fits in.

And what goes for RAM goes for the CPU, or any other relevant piece of hardware. A few microseconds gained by some very smart optimization technique that took weeks to achieve and introduced new maintenance problems cannot ever replace the elegance and simplicity of a hardware upgrade, and will never compete with it in software that is meant to scale.

One disadvantage of large amounts of RAM (that you don’t need) is that suspending/hibernating times are considerably longer.

Writing a 24 GB hibernation file takes much longer than writing the 4 GB of a typical notebook.
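Rough back-of-the-envelope numbers (assuming ~100 MB/s of sequential write throughput, which is a guess rather than a measurement):

```python
# Guessed sequential write speed for a laptop hard drive.
WRITE_MB_PER_S = 100

for ram_gb in (4, 24):
    seconds = ram_gb * 1024 / WRITE_MB_PER_S
    print(f"{ram_gb} GB hibernation image: ~{seconds:.0f} s (~{seconds / 60:.1f} min)")
# 4 GB  -> ~41 s
# 24 GB -> ~246 s, i.e. roughly four minutes
```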

I have had 10 GB for the past 2 years (Mac Pro, 8 cores) for VM purposes (live in OS X, work in W7) and I don’t look back, but upgrading to, say, 20 GB right now would probably end up being a waste… I’d rather get OWC SSDs :slight_smile:

If you have the money, why not? I have 12 GB in my i7 machine and it works great when I need to run a virtual machine or two. I haven’t needed more than that yet.

Future-proof? Not so much. The main thing with memory now is speed, not capacity, since capacity is already affordable. The RAM you bought runs at 1333 MHz, which is pretty standard, but 1600 MHz RAM is common, especially for i7s, and with overclocking you can go as high as 2200 MHz. I can see you doing a DDR3 upgrade for faster RAM before the move to DDR4.

I still have only 1 GB of RAM in both my computers.

Note +1, +24G! “Buying RAM is for people who don’t know how to write Algorithms.” – Me, just now.

I am SO using this as my signature now!

Having just upgraded my storage space after a recent memory upgrade, I generally find storage speed to be the limiting factor more than the amount of memory. 8 GB is more than enough for me, and anyway, I’m sticking with AMD.

Now if only I could replace this work iMac with something that can have more than 4 GB of RAM…

"It’s not stupid, it’s not the worst thing you ever heard. It’s actually quite, in fact, true!

There’s so much a programmer can achieve in terms of code optimization, much of which is well documented and easily understood. There’s very few secrets concerning code optimization these days. Particularly on well-known and proven areas of development. Any code displaying less than optimal optimization is pretty much understood these days as either a strategy (maintenance concerns, etc), laziness on behalf of the programmer, or inexperience. Not the product of some secret knowledge available only to a few."

And that’s another ridiculous point of view.

Better algorithms are not the same thing as what you call “optimization”. It is hard enough to create optimal algorithms, even considering that most of what you do at corporations is very simple and there are many examples in the literature you can use. But programmers, like yourself, are uneducated. “Optimization” is what you do to an already optimal algorithm, to speed it up on a given platform/hardware.

What use is buying 24 GB of RAM when you have a naive exponential algorithm? It is happily going to chew up all your memory and then a hundred more gigabytes, where a better, polynomial algorithm could live with 1 GB or less. It might not even do it right away - it could work fine for months and then suddenly explode when the input dataset grows past a threshold.
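A toy sketch of that failure mode (my example, not part of the original point): a naive subset-sum that materializes every subset versus a dynamic-programming version that only tracks the sums reachable so far.

```python
from itertools import combinations

def subset_sum_naive(nums, target):
    # Exponential: materializes every subset - 2**len(nums) of them.
    # With 40 numbers that is ~10**12 subsets; no 24 GB (or 240 GB) survives it.
    all_subsets = [c for r in range(len(nums) + 1)
                   for c in combinations(nums, r)]
    return any(sum(s) == target for s in all_subsets)

def subset_sum_dp(nums, target):
    # Pseudo-polynomial DP: only remembers which sums are reachable,
    # so memory grows with the number of distinct sums, not with 2**len(nums).
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable}
    return target in reachable

print(subset_sum_dp(list(range(1, 41)), 555))    # True, instantly, in kilobytes
# subset_sum_naive(list(range(1, 41)), 555)      # would exhaust any realistic RAM
```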

Usually, “optimizations” are done to minimize those pesky constants that big-O notation hides, often at the expense of memory, which is fine.
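For instance (again my own sketch): counting set bits byte by byte versus with a precomputed 256-entry table - the same asymptotic cost, but the table spends a little memory to shrink the constant.

```python
# Precompute the popcount of every possible byte value once (256 ints of memory).
BYTE_POPCOUNT = [bin(b).count("1") for b in range(256)]

def popcount_loop(data: bytes) -> int:
    # Straightforward: inspect every bit of every byte.
    total = 0
    for b in data:
        while b:
            total += b & 1
            b >>= 1
    return total

def popcount_table(data: bytes) -> int:
    # Still O(n) over the input, but one table lookup per byte
    # instead of up to eight shift-and-mask steps.
    return sum(BYTE_POPCOUNT[b] for b in data)

payload = bytes(range(256)) * 1000
assert popcount_loop(payload) == popcount_table(payload)
```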

Now saying that “optimization” is equal to “optimal algorithms” displays an unprecedented level of ignorance. It is a shame that people with no CS education are allowed to write non-toy software.

Question: Why buy a house with 1800sqft when you can buy one with 3600sqft?

Answer: You pay more up front, and you pay more every second of every day that you heat or cool the house to 69-72 °F.

“Algorithms are for people who don’t know how to buy RAM.”

That reminds me of the Jon Bentley book Programming Pearls, where he compares (something like) a 300 MHz DEC Alpha to a 2 MHz Radio Shack TRS-80 running a more efficient algorithm. For problems above a certain size the TRS-80 is faster; for problems above an even bigger size, the Alpha will effectively never finish.
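The crossover is easy to see with made-up numbers (these are not Bentley’s figures): a fast machine saddled with a cubic algorithm against a slow machine running a linear one.

```python
# Made-up throughputs, purely to show where the curves cross.
FAST_OPS = 1e9   # assumed: fast machine, cubic algorithm (n**3 operations)
SLOW_OPS = 1e4   # assumed: slow machine, linear algorithm (n operations)

def cubic_seconds(n):
    return n ** 3 / FAST_OPS

def linear_seconds(n):
    return n / SLOW_OPS

for n in (100, 316, 10_000, 1_000_000):
    print(f"n={n}: fast+cubic {cubic_seconds(n):,.3f} s, "
          f"slow+linear {linear_seconds(n):,.3f} s")
# n=100:       the fast machine wins (0.001 s vs 0.010 s)
# n~316:       break-even
# n=10,000:    the slow machine wins (1,000 s vs 1 s)
# n=1,000,000: the fast machine needs ~32 years, the slow one ~100 s
```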

If your problem is big enough then algorithms matter.

Maybe it should be: “Memory is for people who like getting stuff done.”? I’d rather spend time getting my code right than fixing someone else’s mistakes.

@Vostok4 You are absolutely right. More hardware is no excuse for bad code. Sure, I can make bad code run if I throw enough hardware at it, but imagine how well a decent piece of code will scale! I agree that the last little bit of optimization is often a waste considering the programmer time taken and the machine time saved, but the work to get to pretty good is worth it.

Being primarily a Linux user, I see great value in that much RAM. Linux is very aggressive about filesystem caching, so my disks will appear to really fly with that much space to work with.

BTW, the new sign in for posting comments is OK, but I miss orange.

Relax guys… obviously having more RAM won’t solve all problems. But it’s cheap and nice. And Jeff wants you to buy some RAM using his affiliate link. Chill out and go buy some RAM. Or don’t. Your choice.

I’m out of words. I’m quite angry that you are actually proposing this. I spend a fair amount of time arguing with newbie programmers who over-allocate and pay no attention to memory simply because “they can”. They obviously have no clue that their machine represents nothing close to reality, and as a result they write terrible code that they should be hanged for.

Your approach encourages this. I hate this blog post. Sorry.

But programmers, like yourself, are uneducated. “Optimization” is what you do to an already optimal algorithm, to speed it up on a given platform/hardware.

And with this you pretty much denounce your own lack of education - not just the fact that you choose to insult people you disagree with, but also the fact that you obviously do not really understand what you are talking about.

If you choose to call code optimization “what you do to an already optimal algorithm”, I feel obliged to point you to this: http://en.wikipedia.org/wiki/Code_optimization

In there you will hopefully learn that choosing the appropriate algorithm is also part of the optimization procedure. But contrary to what you want to suggest, the choice of an appropriate algorithm is not always straightforward, since you’ll often find yourself compromising on other aspects of performance: is it faster but uses more memory? Is it slower but has a smaller footprint? Since these algorithms are proven, you don’t have many chances (if any) of optimizing them any further. Yet when you are faced with these questions, the decision must be made about what you are willing to compromise on.
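A small illustration of that kind of compromise (my own sketch, not from the article): membership checks against a large collection of IDs with a hash set (fast lookups, more memory per element) versus binary search over a sorted list (slower lookups, tighter footprint).

```python
import bisect

known_ids = list(range(0, 3_000_000, 3))   # pretend dataset of valid IDs

# Option A: hash set - O(1) membership checks, but more memory per element.
id_set = set(known_ids)

def contains_set(x):
    return x in id_set

# Option B: sorted list + binary search - O(log n) checks, tighter footprint.
sorted_ids = known_ids                      # already sorted in this toy example

def contains_sorted(x):
    i = bisect.bisect_left(sorted_ids, x)
    return i < len(sorted_ids) and sorted_ids[i] == x

assert contains_set(999_999) and contains_sorted(999_999)
assert not contains_set(1_000_000) and not contains_sorted(1_000_000)
```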

And then there are those algorithms you design yourself for non-generic needs, where the level of optimization is decided by you as you develop. When will you stop optimizing your algorithm - when you are satisfied with the results, or when you cannot optimize any further? If the former, congratulations! You have made your way into the real world of professional programming. If the latter, unless there’s a concrete reason to spend your time on your algorithm (e.g. you are coding a performance-critical application), you will hardly find anyone sympathetic to you. Much less your boss.

And it’s in this context that the quote fits. RAM, CPU, hard drive - all can contribute to better performance in ways that your coding skills cannot, for the simple reason that the programming language’s semantics limit what you can do. It’s not an invitation to write bad code or to make poor decisions; it’s the reassurance that once you write good code and make good choices, the hardware will do more for your application than any extra bit of performance you can squeeze out of your code.

I hope, if you choose to reply to this, you avoid the insults. The level of emotional response you put into this topic has the opposite effect to the one you intend: it does not intimidate, and it makes you look insecure about your own thoughts.

Wow, I’m surprised by the amount of anger in response to this post. Sure, the quote “Algorithms are for people who don’t know how to buy RAM.” is provocative, and probably meant to be somewhat tongue-in-cheek, but I have to agree overall.

Most of us are already coding on top of many layers of abstraction, each of which comes at some cost in terms of efficiency. However, hardware is cheap and getting cheaper, and each layer helps us to be more productive as developers.

The overall response here really reinforces my opinion that as an industry we’re often too focused on the geeky details, such as optimizing for minimal RAM usage, while we fail so often at understanding the problem domain and our customers’ needs.

RAM is the new disk.
Processor cache is the new RAM.
Algorithms still matter. Actually, they matter more than before, because thanks to Google, users now expect each of their tasks to be completed as they type.
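A rough way to see the “processor cache is the new RAM” point (illustrative sketch; exact timings depend on the machine and the runtime): summing the same list in sequential versus random order does identical arithmetic, but the random order keeps missing the cache.

```python
import random
import time

N = 5_000_000
data = list(range(N))
sequential_order = list(range(N))
random_order = sequential_order[:]
random.shuffle(random_order)

def total(indices):
    s = 0
    for i in indices:
        s += data[i]
    return s

for name, order in (("sequential", sequential_order), ("random", random_order)):
    start = time.perf_counter()
    assert total(order) == N * (N - 1) // 2   # identical work either way
    print(f"{name:>10} access: {time.perf_counter() - start:.2f} s")
# The random-order pass is typically noticeably slower even though it performs
# exactly the same additions - the difference is cache misses, not arithmetic.
```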

Shades of gray

48 GB is for people who haven’t heard of algorithms
12 GB is for people who know how to avoid writing algorithms some of the time
8 GB is for people who know how not to write algorithms
4 GB is for people who know how to use algorithms (you write them on paper, not in memory).
2 GB is for marketing people who write required specs that they know would not work smoothly
32 MB is for people who know how to write algorithms, operating systems, and generally rock

In case you wondered: Jeff is in a class of his own :slight_smile: