Productivity Tip: Upgrade Your Pentium 4

In C# and the Compilation Tax, several commenters noted that they have "fast dual-core computers", and yet background compilation performance was unsatisfactory for them on large projects. It's entirely possible that this is Visual Studio's fault. However, I'd like to point out that not all dual core computers are created equal. Not by a long shot.


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2007/05/productivity-tip-upgrade-your-pentium-4.html

Even better, if you’re in an office with more than one computer, demand IncrediBuild and get ALL the CPUs working for you.

I think increasing the amount of RAM you have can be as good as, if not better than, increasing your CPU power. If you have the latest Core 2 Duo processor but only 512MB of RAM, you’re cutting yourself a bit short.

Upgrading my laptop from 512MB to 1.5GB of RAM provided a more noticeable responsiveness and speed improvement than did switching from a Pentium M to a Core Duo.

Um, for one thing they used Visual C++ 6.0, which has no parallel compilation to speak of.

(That benchmark study would have been much more useful if they also measured CPU utilization.)

@Mike Johnson

Jeff Atwood made some interesting points about the Raptors that agree with you.

Check out this blog entry:
http://www.codinghorror.com/blog/archives/000800.html

Also, if your time is this valuable, it probably makes sense to spend a little time figuring out where your bottlenecks really are. Sure, computers are “cheap” these days compared to developer time, but make sure you’re concentrating your upgrade money in the right areas.

CPU performance is one thing… but also measure your CPU utilization over time, as well as your memory utilization, cache miss rate, IO per second, disk utilization, and so on.

Depending on your application, you may realize larger performance gains by upgrading your disk(s) and memory, or your disk controller, than by just upgrading your CPU.
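
If you want a rough way to watch those numbers without firing up Perfmon, something like this little sketch works - it just samples a few of the standard Windows performance counters once a second while a build runs. Treat it as illustrative only; the counter and instance names are the stock English ones, and the 60-second window is arbitrary:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class BottleneckSampler
{
    static void Main()
    {
        // Standard Perfmon counters; names are the English defaults.
        PerformanceCounter cpu = new PerformanceCounter(
            "Processor", "% Processor Time", "_Total");
        PerformanceCounter availMem = new PerformanceCounter(
            "Memory", "Available MBytes");
        PerformanceCounter diskQueue = new PerformanceCounter(
            "PhysicalDisk", "Avg. Disk Queue Length", "_Total");

        // The first reading of a rate counter is always 0, so prime and discard it.
        cpu.NextValue();

        // Sample once a second for a minute while your build runs.
        for (int i = 0; i < 60; i++)
        {
            Thread.Sleep(1000);
            Console.WriteLine("CPU {0,5:F1}%  free mem {1,5:F0} MB  disk queue {2,4:F2}",
                cpu.NextValue(), availMem.NextValue(), diskQueue.NextValue());
        }
    }
}
```

If the CPU sits well under 100% during a full rebuild while the disk queue climbs, a faster processor isn’t the upgrade you’re looking for.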

"Isn’t the value of your time worth at least that?"

Apparently not according to my boss… :frowning:

And 2GB of DDR2-667 memory is less than $80 nowadays. There’s no point in having only 1GB anymore.

It also matters that Pentium 4s with HyperThreading are not really multi-core or multi-processor chips in any real fashion. Whereas an X2 or Core 2 actually has two physically separate processor cores on the one chip, a HyperThreaded Pentium 4 only pretends to have an extra core: it exposes a second logical processor that shares the single core’s execution resources. In practice it makes stuff faster, but the second ‘core’ is quite slow compared to the first in most real uses.

“Now if only WD would bring out Raptors in capacities greater than 160GB.”

Speed and size are always trade-offs. The super speed is probably the entire reason why they don’t have a bigger version.

But really, 160 GB is perfectly fine for an applications drive. Just keep all your media on a second drive.

Time to ditch my Pentium D 820 2.8GHz then… although my computer never felt slow when running VS2005 with middle-size (college) projects.

The only “problem” is energy efficiency, because this thing has a TDP of 130W! Sometimes in the summer with the stock cooler it would reach 140F. My friend’s Core 2 Duo E6600 has a TDP of 65W and it never goes above 95F. Totally insane…

Nick, all of these are Pentium D benchmarks, but they’re lumped under P4 because that’s what they’re made of. If you’re stuck on a P4HT, compiling large projects (or background compilation) must be extremely painful.

Pentium 4s with HyperThreading are not really multi-core or multi-processor chips in any real fashion.

That’s true. But most modern dual-core chips lumped under the Pentium 4 name (the Pentium D line) are true dual-core parts with 2 physical CPUs, no Hyperthreading present. The Pentium XE 965 is the rare exception; it is dual-core with Hyperthreading, so it appears as 4 logical CPUs.
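
(A quick, throwaway way to see what Windows actually reports - Environment.ProcessorCount gives the logical processor count, which is exactly why HyperThreading muddies these comparisons. The counts in the comment are what you’d expect for those particular chips:)

```csharp
using System;

class Cpus
{
    static void Main()
    {
        // Logical processors: 2 on a Pentium D or Core 2 Duo,
        // 2 on a single-core P4 with HyperThreading,
        // 4 on a Pentium XE 965 (2 cores x HyperThreading).
        Console.WriteLine("Logical processors: " + Environment.ProcessorCount);
    }
}
```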

Also, if your time is this valuable, it probably makes sense to spend a little time figuring out where your bottlenecks really are.

Yes, but we’re looking specifically at the CPU in these benchmarks. Upgrading disk and memory is always a good idea (particularly memory, now that DDR2 has gotten so cheap), but that’s not the point of this post.

The solution I’m looking at has about 50k lines of C# code and takes around 5 seconds for a full compile on a Pentium D 940.

C++ takes a long time to compile. If you want to save time compiling, port away from C++.

Also, ReSharper will help you avoid compiling your C# so often: it points out your mistakes as you type.

Use a Borland compiler to save time…

How much of this is the fault of the language itself?

One of my big bugbears with languages in the C family (C and C++) is their heavy reliance on files - source files that include header files, which generally require other header files, and so on. You can have the fastest CPU in the world with a ridiculous amount of memory and you’re still massively hampered by file access times. Even the #ifdef ___BLAH_INCLUDED include-guard trick, while it does cut down on the time taken to parse the file, still requires that the file be opened first (on modern systems this is by far the most significant cost).

Of course you can get around this (to a degree) with pre-compiled headers and so on, but at the ultimate cost of a much more complex compiler, along with all the potential issues that come with that. There is only so much optimisation you can do on this text-file-based model.

Alternative languages such as Delphi, which do not rely on re-parsing text-based include files, are so much faster in build times: even for a large application on a “slow” machine, the time taken to build and then execute is a fraction of the time taken to compile the C/C++ equivalent.

So how much kickback are you getting from AMD/Intel/Dell/HP for posting this?

Here’s a hard one to justify because I have no numbers to back it up, but I’d swear that my computer’s a lot faster running Vista (64 bit) with SuperFetch – it just doesn’t go to disk that often when running Visual Studio, so compiles are lightning fast. Totally unscientific, and I should be ashamed to bring it up without anything to back up my claims.

(Not that running 64-bit Vista as my primary computer is without its problems!)

Now to find a way to get rid of this Celeron at work…

“how much kickback are you getting from AMD/Intel/Dell/HP”

Wow, that’d be a hell of a kickback!

Although I kind of see the point. This article points to the CPU being the bottleneck. Are we sure that the compiler/IDE is as optimized as it can get? Maybe the reason we need a bigger minivan is because our IDE has a fat butt? Maybe we could get by with a VW Bug if our IDE/compiler wasn’t so fat?

Are there any benchmarks comparing GCC and OpenWatcom to the VC++ compiler? On different CPUs? Is it better to use NAnt/MSBuild to compile your products instead of the IDE?
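
No numbers from me either, but timing a command-line build against the IDE is cheap enough to try yourself. A rough sketch - the msbuild path and solution name below are just placeholders for whatever you actually build with:

```csharp
using System;
using System.Diagnostics;

class BuildTimer
{
    static void Main()
    {
        // Paths are hypothetical; point these at your own msbuild.exe and solution.
        ProcessStartInfo psi = new ProcessStartInfo(
            @"C:\Windows\Microsoft.NET\Framework\v2.0.50727\MSBuild.exe",
            @"MySolution.sln /t:Rebuild /p:Configuration=Release");
        psi.UseShellExecute = false;

        Stopwatch timer = Stopwatch.StartNew();
        using (Process build = Process.Start(psi))
        {
            build.WaitForExit();
        }
        timer.Stop();

        Console.WriteLine("Rebuild took {0:F1} seconds", timer.Elapsed.TotalSeconds);
    }
}
```

Run the same full rebuild from inside Visual Studio with a stopwatch and compare; at least then you’d know whether the IDE itself is adding to the tax.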