Moore's Law in Practical Terms

There are two popular formulations of Moore's Law:


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2006/12/moores-law-in-practical-terms.html

Moore’s Law is

  1. now effectively the benchmark for CPU manufacturers, making it a self-fulfilling prophecy.
  2. largely sustained by demand from software: not the highly specialized kind, but FPSs and RPGs, programs that sit idle in the system tray, not-so-optimized application development, and the Windows family.

“Whatever Intel gives, Microsoft takes away” still holds today.

But do you know anyone that uses Notepad to compose their term paper or report?

I did. Real men use LaTeX to write their term papers. Nothing else comes close if you want quality output and good looking formulas.

How do you measure performance? Some number crunching application now runs twice as fast? That’s nice, but only the geeky care. What can you do with a PC now that you could not do ten years ago? What would you like to do with a PC now that current PCs cannot do? Me, I am still stuck in the text world. I will occasionally paste a picture, but I do almost no graphics work on a computer. I enjoy the graphics that other people produce. Most of it is fluff, but occasionally I see something really good. Interactive graphing of algebraic equations was one.

What is the website field for? I put in a URL, hit submit, and I get some error message scolding me for trying to put a URL in the space for a website. What gives?

At around 10 million transistors we had enough computing power to start on (mainstream) garbage-collected languages; with 100 million we got fully debuggable, refactoring-capable IDEs. I wonder what 1 billion will bring!?

For years, it hasn’t been Microsoft doing the taking away, it’s been crappy antivirus and spyware… Microsoft’s finally back in the game of stealing ur megahurtz though, hopefully it’s worth it. =p (http://www.knitemare.org/cats/headoutpower.jpg)

Another decent indicator (but harder to research) is how much performance you can buy with a given amount of money (adjusted for inflation), either individual components or a whole system. That particular measure has probably increased faster than overall performance.
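Sketched out, the metric is simple; here’s a rough Python sketch with purely made-up prices, scores, and CPI figures, just to show the shape of the calculation:

    def perf_per_real_dollar(score, price, cpi_then, cpi_now):
        # express the purchase price in today's dollars before dividing
        real_price = price * (cpi_now / cpi_then)
        return score / real_price

    # hypothetical systems: an older box vs. a current one
    old = perf_per_real_dollar(score=100, price=2000, cpi_then=160, cpi_now=200)
    new = perf_per_real_dollar(score=800, price=1000, cpi_then=200, cpi_now=200)
    print(new / old)  # how many times more performance per inflation-adjusted dollar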

In his writings, Moore was quite explicit: this was exactly what he wanted to measure. Number of devices and feature size were implementation details.

Actually, the number of transistors per core isn’t doubling anymore; it’s all about multi-core processors now. So the average machine can’t take advantage of all this doubling unless it runs software which can actually utilize more cores, which is usually not the case.
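To make that concrete, here’s a rough sketch of what “software which can actually utilize more cores” means in practice, using Python’s multiprocessing module (the work function is just a stand-in for any CPU-bound task):

    from multiprocessing import Pool

    def crunch(n):
        # stand-in for some CPU-bound work
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [5_000_000] * 8
        # a single-threaded program would just loop: [crunch(n) for n in jobs]
        with Pool() as pool:                  # one worker process per available core
            results = pool.map(crunch, jobs)  # spread the jobs across those cores
        print(sum(results))

Unless an application is restructured along these lines, the extra cores (and all their extra transistors) mostly sit idle.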

Everyone I know who does LaTeX uses Emacs. Like MathML, it’s not nearly as user-friendly as HTML. (Not that I doubt you in this instance; anyone can learn to do anything by hand eventually.)

Charles, I’d say quick and easy home DVD authoring is one area that’s reached mainstream price in the last two years. No more need to suffer while editing and then let it render overnight. In photography, panoramic stitching has entered that realm.

Like MathML, it’s not nearly as user-friendly as HTML

Sorry, but I’ll have to disagree. LaTeX is a thing of beauty once you get the hang of it. It makes the impossible possible. I still use it whenever I can (whitepapers, documents that don’t require editing) and output beautifully typeset PDF that’s easy on the eyes. If anyone here still hasn’t discovered LaTeX, I urge you to give it a try. The initial setup can be a bitch, and there’s a bit of a learning curve, but once you typeset a good-sized document in it you will never go back to Word if you can do without it.

BTW, even if you use emacs, it won’t automate much for you. Maybe cross references, maybe matching curly braces, but that’s about it.
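For anyone who hasn’t seen it, a complete LaTeX document that typesets a formula is only a few lines; something like this should compile with pdflatex:

    \documentclass{article}
    \begin{document}
    The quadratic formula, typeset from plain text source:
    \[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
    \end{document}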

There’s a Mac tool that works great when you can’t use LaTeX, it’s called LaTeXiT. Basically it gives you LaTeX syntax for formulas and then you can drag and drop the result into your non-LaTeX document.

And your point is…?

How fast is software getting slower? I think you can build an OS like Windows/Linux and make it start at least 100 times faster.

Yes, Notepad is awesomely fast. But do you know anyone that uses Notepad to compose their term paper or report?

It’s a specious argument.

An anti-corollary is Gates’ Law: every 18 months, the speed of software halves:

http://en.wikipedia.org/wiki/Gates%27_Law

Herb Sutter claims that:
a. Moore’s law is fading away.
b. All programmers need to start learning to use concurrency in order to utilize the computing power that does arrive in the near future:
http://www.gotw.ca/publications/concurrency-ddj.htm

I have to disagree with your final conclusion that transistor count is a strong indicator of performance.

Transistor count is only vaguely linked to performance. For a simple example, imagine a single-core system vs. a dual-core system. The dual-core system will have almost exactly double the number of ‘transistors’, but will it get double the performance on your example of an office benchmark? Not a chance.
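A rough way to see why: assuming Amdahl’s law and supposing, purely for illustration, that only half of an office workload can run in parallel, the best a second core can do is

    \text{speedup} = \frac{1}{(1 - p) + p/n} = \frac{1}{0.5 + 0.5/2} \approx 1.33

nowhere near 2x, no matter how many extra transistors that second core represents.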

The benchmarks you cite are using a P4 2.0GHz vs. a P4 3.2GHz. This is, again, an office benchmark, so there are a lot more confounding variables. Even assuming that this is a pure CPU performance measurement, it doesn’t look right - you have a pair of P4 machines, one running at 1.6 times the clock speed, but somehow getting double the performance. What’s going on there? Is there a difference in cache? Chipset? Hard drive size? Memory types? There’s no way to know, and it’s probably a combination of these things.

There are even problems with the use of ‘transistors’ as a measurement, because what constitutes a transistor varies greatly from one hardware designer to another, and it generally isn’t their lowest-level primitive anyway. In certain situations, a transistor might occupy n units of space. If you combine transistors into a logic gate, they might take 1.5n units of space.

I’d say that at best, Moore’s Law is a tool for marketing and evangelism, and not something to be used for any sort of scientific measurement or prediction. Intel want you (and their shareholders) to believe that processor performance is ramping up; they’ll quietly ignore that period after the P4 3GHz where they took three years to increase their clock rate by 1GHz. Now performance-per-watt and multi-core are their marketing thrust, for the simple reason that throwing transistors at a single core was giving diminishing returns.

The vast majority of the transistor count of a single-core CPU was already cache. There have been plenty of well-thought-out experiments measuring the performance difference gained by increasing cache size, and you’ll find that it’s a rare day indeed that doubling your cache size doubles your performance.

“…they’ll quietly ignore that period after the P4 3GHz where they took three years to increase their clock rate by 1GHz…”

Right, but this is an Intel design issue: the huge 31-stage pipeline and the unfortunate “overclocked-by-manufacturer” tendency they pursued.

I would much rather go back to the Pentium days, when a discreet cooling profile was enough. My dev system is an AMD X2 3800+ at 35W and it provides silent performance, but it is relatively expensive. In that light, I would prefer Moore’s Law to be non-existent and let CPU architects measure their success by more than mere FLOPS.

A number of people need to review what “correlation” is: http://en.wikipedia.org/wiki/Correlation

Observing a correlation doesn’t imply causation or a “link” (as Ian calls it). And Hamilton Lovecraft, there are correlation coefficients other than 0 and 1. Nobody’s claiming it’s a 1 (which would mean transistor count is 100% determinative of performance, clearly false), just that it’s way higher than arguments based around (completely true) architecture differences and the differences in the rate of improvement of other hardware might lead you to believe, if you don’t actually examine the evidence.

The benchmarks you cite are using a P4 2.0GHz vs. a P4 3.2GHz. This is, again, an office benchmark, so there are a lot more confounding variables.

You know, Ian, it’s funny, because when I originally did this research I was trying to disprove the idea that growth in transistor count relates to performance in a modern PC.

Imagine my surprise when I found that the interval for performance doubling in SysMark2004 (26 months) almost exactly mirrors the 24-month doubling rate of transistors in CPUs. I did not expect this.
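(For anyone who wants to check that arithmetic against their own numbers: assuming exponential growth, two benchmark scores S_1 and S_2 measured \Delta t months apart imply a doubling interval of

    T = \Delta t \cdot \frac{\ln 2}{\ln(S_2 / S_1)}

so, for example, a score ratio of 2 measured 26 months apart gives T = 26 months on the nose.)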

I’m not saying it’s a strictly causal relationship (and I agree with all of the criticisms you list), but there’s plenty of evidence to support the idea that transistor count is a decent, if very rough, measure of overall performance.