24 Gigabytes of Memory Ought to be Enough for Anybody

1 minute of MP3 ≈ 1 MB.
A human life (120 years) is about 63,000,000 minutes.
That means for music you’d need roughly 63 TB of MP3, for life!
That should be reachable within ten years from now…
But then you also need to account for HD 3D video, 10 channels of audio, different angles… :slight_smile:
Life is complicated and takes more and more memory.
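
Checking that back-of-the-envelope math (assuming the common rule of thumb that 128 kbps MP3 comes to roughly 1 MB per minute):

```c
#include <stdio.h>

int main(void) {
    /* Assumption: ~1 MB per minute of 128 kbps MP3. */
    const double mb_per_minute = 1.0;
    const double minutes = 120.0 * 365.0 * 24.0 * 60.0; /* minutes in 120 years */
    const double total_mb = minutes * mb_per_minute;

    printf("minutes in 120 years: %.0f\n", minutes);      /* 63,072,000 */
    printf("lifetime of MP3: %.1f TB\n", total_mb / 1e6); /* ~63.1 TB */
    return 0;
}
```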

“Algorithms are for programmers who want to stay employed in the 21st C” - Me

Because desktop computers are on the decline. Laptops are becoming workstations, and tablets, consoles, and phones are dominating consumer computing.

So mostly you need to write good algorithms that scale well in the cloud or ones that work well in restricted devices or in a sandbox. Who cares that the desktop dinosaur can be monster powerful?

You definitely can’t future-proof ANY particular piece of hardware anymore. Your 24 GB of RAM might sound great right now, but in a year, when Intel stops making the i7 and moves on to something else, and that something else only supports DDR4 RAM… you’re basically SOL for upgrades with your RAM.

The new P67 / H67 motherboards, released just two weeks ago, can go to 32 GB once 8 GB DIMMs start appearing… but those will probably be a lot more expensive at first. :frowning:

http://ncix.com/products/index.php?sku=58007&vpn=P8H67-M%20EVO&manufacture=ASUS

“4 x DIMM, Max. 32 GB, DDR3 1333/1066 Non-ECC,Un-buffered Memory”

"According to Intel® SPEC, the Max. 32GB memory capacity can be supported with DIMMs of 8GB (or above). ASUS will update QVL once the DIMMs are available on the market. "

“otherwise unexciting Intel Core i7 platform upgrade”

Jeff, your article timing sucks. In case you missed it, there was another platform upgrade at the beginning of 2011. Namely, the Sandy Bridge platform upgrade, which introduced a host of changes and new features. The least exciting one there was actually the onboard graphics update, which still comes nowhere near the performance of standalone nVidia and AMD cards.

One of the major changes that has a direct bearing on this article is the new memory controller, which is significantly faster than the ones in the older i5/i7 processors.

Anandtech talks about the Sandy Bridge processors here: http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i5-2600k-i5-2500k-and-core-i3-2100-tested

@Javin and others:

“I am surprised Bill gates indeed said that…”

He didn’t. In fact, he thinks that quote is stupid.

My old home i7-920 has 6GB. (It’s one of the early four-slotters, but only using three, to actually be triple-channel.)

My newer office workstation has 9GB.

Nothing wrong with 24, especially if you “really need it” - but on the other hand, there’s not much point in buying RAM that won’t be “needed”… before it’s time for a newer CPU/MB.

Of course 24 GB will be enough!

Until the people who think that’s not practical come out with an incompatible “DDR 2.0” that can hold twice as much per stick next month, and it becomes mainstream in a year and a half :smiley:

Sharepoint 2010 eats RAM.

It’s funny to think that 10 years ago hard drives were about 24 GB. I suppose that 10 years from now 1 TB of memory will be enough for anybody.

Has anyone here heard of linear search versus binary search?

Especially those in the camp of throwing more RAM at the problem: you would also need to buy more powerful CPUs, SSDs, and so on, because with the wrong algorithm your computer needs to be 500 times faster.
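
A minimal sketch of that point in C (the array and key are made up for illustration): on a sorted array of about a million elements, linear search may examine every element, while binary search needs at most about 20 comparisons. No amount of extra RAM buys you that speedup.

```c
#include <stdio.h>
#include <stdlib.h>

/* Linear search: O(n) comparisons in the worst case. */
int linear_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

/* Binary search on sorted data: O(log n) comparisons. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int n = 1 << 20;                  /* ~1 million sorted ints */
    int *a = malloc(n * sizeof *a);
    if (!a) return 1;
    for (int i = 0; i < n; i++) a[i] = 2 * i;

    int key = 2 * (n - 1);            /* worst case for linear search */
    printf("linear: %d\n", linear_search(a, n, key)); /* ~2^20 probes */
    printf("binary: %d\n", binary_search(a, n, key)); /* <= 20 probes */
    free(a);
    return 0;
}
```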

As many have pointed out, optimization is not the same thing as a better algorithm.

We programmers don’t have the moral standing to decide whether to throw more RAM or a faster machine at a problem; that decision is the job of the systems engineer or project manager during deployment.

Gates may not have said the 640k thing, but Microsoft did say:

“Each 32-bit application can access up to 2 GB of addressable memory space, which is large enough to support even the largest desktop application.” – Microsoft Windows 98 training kit.

“To me, it’s more about no longer needing to think about memory as a scarce resource, something you allocate carefully and manage with great care. There’s just … lots.”

Maybe I’m not looking in the right places, but I rarely ever see examples of programmers managing memory like that. The most common error-handling routine for a failed malloc is to quit the process ([citation needed]), that is, if they even check for a failed malloc at all.
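
For reference, that “check and bail” pattern looks something like the sketch below. The xmalloc name is just the common convention for such a wrapper, not anything from the post:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Treat allocation failure as fatal: report and quit.
   This is the "most common error handling routine" described above. */
void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void) {
    char *buf = xmalloc(64);
    strcpy(buf, "allocation checked; process exits on failure");
    puts(buf);
    free(buf);
    return 0;
}
```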

I know I’m way late to this post and my comment will probably only be read by very few people, but to those telling others to “relax, the algorithm/RAM comment was in jest”: you obviously don’t read this blog much. I really like this blog, and Stack Overflow is a godsend, but Jeff constantly says these types of things. This wasn’t a one-time thing.

“Algorithms are for people who don’t know how to buy RAM.”

What!? A computer science professor said this!? Well, I guess having a doctorate doesn’t make you smart…

That professor obviously doesn’t understand the difference between time complexity and space complexity.

You’re not going to make selection sort (or bubble sort, for that matter) any faster by putting more RAM into your computer, nor are you going to speed up the Floyd-Warshall algorithm for finding all-pairs shortest paths in a weighted graph.
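
To make that concrete, here is a minimal Floyd-Warshall sketch (the graph is made up for illustration). The triple loop costs Θ(V³) time over Θ(V²) space; extra RAM does nothing about the cubic term:

```c
#include <stdio.h>

#define V   4
#define INF 1000000 /* "no edge" sentinel; small enough that INF + INF cannot overflow an int */

/* Floyd-Warshall all-pairs shortest paths: O(V^3) time, O(V^2) space. */
void floyd_warshall(int dist[V][V]) {
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main(void) {
    int dist[V][V] = {
        {0,   5,   INF, 10 },
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1  },
        {INF, INF, INF, 0  },
    };
    floyd_warshall(dist);
    printf("shortest 0 -> 3: %d\n", dist[0][3]); /* 5 + 3 + 1 = 9, beating the direct edge of 10 */
    return 0;
}
```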

I agree with the people above me, that is the most ignorant thing I’ve ever heard.

I need algorithms! I’m just a programmer, not a computer scientist or software engineer. I have to process hundreds of megabits per second. Look-up tables help a lot. More memory isn’t a help; I could have 24 TB. My problem is the cache: cache misses are killing me. However, I’ve improved my algorithm three times (after each “elegant/best” solution), and the improvements have been worth the effort.
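
To illustrate the kind of trick being described (a generic example, not the commenter’s actual code), here is the classic byte-wise popcount look-up table. It wins precisely because 256 bytes stays cache-resident; scale the same idea past the cache size and the misses eat the gain:

```c
#include <stdio.h>
#include <stdint.h>

/* Precomputed number of set bits for every possible byte value. */
static uint8_t popcount_lut[256];

static void init_lut(void) {
    for (int i = 0; i < 256; i++) {
        int c = 0;
        for (int b = i; b; b >>= 1) c += b & 1;
        popcount_lut[i] = (uint8_t)c;
    }
}

/* Count set bits in a 32-bit word with four table lookups
   instead of looping over all 32 bits. */
static int popcount32(uint32_t x) {
    return popcount_lut[x & 0xff]
         + popcount_lut[(x >> 8)  & 0xff]
         + popcount_lut[(x >> 16) & 0xff]
         + popcount_lut[(x >> 24) & 0xff];
}

int main(void) {
    init_lut();
    printf("popcount(0xF0F0F0F0) = %d\n", popcount32(0xF0F0F0F0u)); /* 16 */
    return 0;
}
```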

“I agree with the people above me, that is the most ignorant thing I’ve ever heard.”

That’s how I feel about a lot of the comments that get horribly offended at the idea that RAM might actually be a better solution than clever code in a lot of situations.

Let’s say you have a dataset of, oh, let’s say 4 GiB. You want to process this data in some way; each “chunk” of data is 64 KiB (65,536 chunks to process).

Now, assuming that processing each chunk incurs an overhead of another 256 KiB while processing, and the processed data also takes up 64 KiB (I’m just making up numbers here), this means that if you dump the whole dataset to RAM, process it using four threads, and then dump the entire resulting dataset to RAM as well before saving it back to disk, you will be using roughly 8 GiB of RAM, but writing the code is fast (and you can focus on making the actual processing fast).

Now, the “OMGZORZ!!1 RAM IS A RARE COMMODITY!” approach taken to its extreme would of course be to figure out an algorithm that lets you queue chunks of data so that you are using 4×64 + 4×256 KiB of RAM for processing, with a queue just long enough that it is never empty while still not wasting RAM.

The latter approach makes sense when the dataset is large enough that you can’t use the first one (or rather, where the development time for the latter is less than the hardware cost for the first). Once upon a time lots of problems had this issue because RAM was expensive; I remember writing little games in C + x86 asm and trying to squeeze as much info as possible out of every byte of RAM. These days it often doesn’t make sense solving these “problems” because they aren’t problems.
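
For what it’s worth, a minimal single-threaded sketch of the frugal, streaming approach, using the 64 KiB chunk size from the example above (process_chunk is a made-up stand-in for whatever the processing actually is). Memory use stays constant whether the input is 4 GiB or 4 TiB; most of the real complexity of the queued version comes from coordinating the four worker threads, which this sketch deliberately omits:

```c
#include <stdio.h>

#define CHUNK (64 * 1024) /* 64 KiB per chunk, as in the example above */

/* Hypothetical per-chunk transform; stands in for the real processing. */
static void process_chunk(unsigned char *buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        buf[i] ^= 0xFF;
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
        return 1;
    }
    FILE *in  = fopen(argv[1], "rb");
    FILE *out = fopen(argv[2], "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    /* One fixed 64 KiB buffer: read, transform, write, repeat. */
    unsigned char buf[CHUNK];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        process_chunk(buf, n);
        fwrite(buf, 1, n, out);
    }

    fclose(in);
    fclose(out);
    return 0;
}
```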

(Please don’t assume that I’m in favor of unnecessary bloat; I just don’t get why some people are obsessed with not “wasting” RAM and refuse to just get more of it.)

For people who are notorious for their mental ability (programmers), many of you are acting pretty stupid, making that silly flame-war fuss over the algorithm-RAM sentence. Do you think Atwood bases a single line of his code on THAT? Have you read more than one post on this blog? Go waste your time elsewhere, flameboys, because this isn’t worth a penny. I doubt you are even real coders anyway…

“Once upon a time lots of problems had this issue because RAM was expensive; I remember writing little games in C + x86 asm and trying to squeeze as much info as possible out of every byte of RAM. These days it often doesn’t make sense solving these “problems” because they aren’t problems.”

This is somewhat surprising coming from someone who has programmed in assembly. You must not have done much of it, because this doesn’t make much sense at all, especially for embedded development, where the cost of RAM isn’t as much of a factor as the hard limits the design imposes on it.

I do agree that there is a time-space trade-off: the more space an algorithm uses, the faster it can be, and vice versa. This is observed in the use of look-up tables and hash tables. However, indexing those tables can be rather complicated, depending on the accuracy you want from your hash function or other index method. It is also hard to analyze their efficiency, given their probabilistic performance and the fact that they rely on keeping the table sparsely loaded in order to be effective, due to the possibility of collisions. And this is just about storing the data; you may still want to search it, sort it, and do other forms of manipulation, none of which would be possible without algorithms.
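
Since look-up tables and hash tables keep coming up, here is a toy chained hash table in C to make that space-for-time knob concrete (all names, sizes, and keys are made up for illustration; strdup is POSIX, not ISO C). NBUCKETS is exactly the memory-versus-collisions tradeoff being described: more buckets cost more RAM but shorten the chains.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256 /* more buckets: more memory, fewer collisions */

struct node { char *key; int value; struct node *next; };
static struct node *buckets[NBUCKETS];

/* Simple multiplicative string hash (djb2); collisions are chained. */
static unsigned hash(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

static void put(const char *key, int value) {
    unsigned h = hash(key);
    struct node *n = malloc(sizeof *n);
    if (!n) exit(EXIT_FAILURE);
    n->key = strdup(key);
    n->value = value;
    n->next = buckets[h];
    buckets[h] = n;
}

static int *get(const char *key) {
    for (struct node *n = buckets[hash(key)]; n; n = n->next)
        if (strcmp(n->key, key) == 0)
            return &n->value;
    return NULL;
}

int main(void) {
    put("ram_gb", 24);
    put("cores", 8);
    int *v = get("ram_gb");
    if (v) printf("ram_gb = %d\n", *v); /* O(1) average lookup; worst case degrades with collisions */
    return 0;
}
```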

And in that post, you were only talking about the dataset itself and not about the code and other processes occupying the RAM at that time.

My main point is, you’re right that RAM is relatively inexpensive, and I wasn’t implying that 1 GB of RAM is a rare commodity or anything like that. However, the decision to develop or use an algorithm shouldn’t be based solely on how much RAM you have; any respected computer scientist would agree with that, and it’s shocking to me that a professor said otherwise.

And I’m not even going to waste my time commenting on the ridiculousness of Mdaj79’s comment.

I think you’re all dumb. The most intelligent thing said here was about pragmatism.

It isn’t that better algorithms mean less RAM, nor the opposite.

Every situation is different. There is no silver bullet. Stop thinking there is.