Building a PC, Part III - Overclocking

Excellent article!

I’ve built many systems, but always been afraid to overclock. I think I’ll try it tonight.

Oh, no HTML allowed. OK, here are the jokes:

In old Russia, two beggars sat next to each other in a district where few Jews lived. One held a sign saying “Please help the war veteran”, and the other held a sign saying “Please help a poor Jew”.

People passed by, and even those who didn’t intend to give money to either of them gave to the first beggar, just to upset the Jew. Finally, one day a good man passed by, gave money equally to both, and then said to the Jew: “Why don’t you change your sign? Don’t you understand that nobody will give you any money?” and walked away. As he went, the Jew turned to the other beggar and said: “Chaim, he would teach us business…”

Abe’s son arrived home from school puffing and panting, sweat rolling down his face.
“Dad, you’ll be so proud of me,” he said. “I saved a pound by running behind the bus all the way home!”
“Oy vey!” said Abe. “You could have run behind a taxi and saved ten.”

Very interesting post. Gamers who are planning to buy a new computer should read this. As for me, I don’t need that kind of computing power, since I’m into programming; my Pentium D 3.4 GHz machine is more than enough.

Sorry if I missed this point (reading from China, where I can’t see Flickr pics and it’s damn slow at times!).

Anyway, are you using RAS 3 memory, or RAS 2? I’ve found that RAS 2 (obviously) gives a big performance boost. I agree it seems the memory isn’t optimum for this system.

The memory effect, oddly enough, is due to the Quad CPU.

I took the memory out of my own system, which is a Duo running on the same 650i chipset, tried it in the Quad, and it still scored 5.3. In my system, that same memory scores 5.9! And it’s running at totally stock DDR2-800 speeds in both cases.

There are lots of forum posts from Quad owners wondering why their Windows Experience memory scores are so low. It’s probably partly the fault of the benchmark, too.

Jeff,

Thanks for showing that overclocking isn’t as complicated as it sounds (which is what I’d thought). Now I can overclock with confidence.

I fully agree with your insightful perspective; there’s actually a similar thread at Frontier Blog (http://www.hwswworld.com/wp).

Some background on the Core Temp program, how it works, and what safe temperatures are for your CPU:

http://www.overclockers.com/articles1378/

Great set of articles. What is the sound output from the case with all of the parts put in it?

The computer I have now has four 80mm case fans in it, and I can’t stand to leave it on at night.

I’d like to build a super fast, quiet computer, and I wonder: how quiet is this one?

As long as you don’t flip the fan switch to high or add extra fans, you won’t have to worry about hearing it at night.
With the three TriCool fans set to medium speed I was unable to hear them over my PSU or heatsink fan. On high speed they are significantly louder, but I never felt the need to run them faster than medium, and most of the time, even while gaming, low speed was more than enough.
I’m running four drives in mine, two of them Raptors (see specs below), and not being able to hear them at all should tell you something about how quiet this case is.
Intel 6600 CPU
EVGA 8600 GTS
2GB Kingston ValueRAM DDR2
Asus P5N-SLI
SoundBlaster Audigy 2
Plextor PX-755SA SATA DVD burner
150GB and 74GB Raptor drives
2x 400GB Seagate SATA drives
Antec P180 case

P.S. The bright blue HDD activity light on the front of the case might get on your nerves at night, though, so you may have to disconnect it or cover it up.

Beware overclocking.

You can certainly get a faster computer for less $$ if you overclock. Yet, overclocking can undermine the reliability of a computer in ways that won’t show up in initial stress testing.

One dark secret of the computer industry is that the switching elements in computers aren’t 100% reliable: sometimes you ask for a “1” and you get a “0”. High-end computers, such as Itanium servers and IBM mainframes, contain error-correction logic, but on commodity computers these errors simply go unnoticed.

The rate of errors is small, but some computers are orders of magnitude worse than others. People who build large clusters (thousands of computers) have to deal with these errors as a matter of course, but most people just blame software or human error.

Overclocking can increase the bit error rate. How bad this is depends on many factors, including luck. Some people who “successfully overclock” have random crashes and data corruption that they blame on something else.

Years ago I had an overclocked computer. I thought it worked great, except for the little problem that it segfaulted when I compiled the Linux kernel. I blamed the software; when I asked about it on the LKML, I was told it was a hardware problem… They were right: the kernel compiled just fine when I dropped the CPU speed.

A lot depends on the chip you’re running and how well you cool it. But the next time your BitTorrent client rejects a bad block it got from somebody, that block may well have been flipped by an overclocked computer that works “perfectly well”.
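To make that failure mode concrete: a pattern test is the classic way to catch silent bit flips. Here’s a minimal sketch of the idea in C, a toy version of what a real tool like Memtest86 does; the 64MB buffer size and the particular patterns are arbitrary choices of mine, not anything from the article:

#include <stdio.h>
#include <stdlib.h>

/* Toy memory stress test: fill a buffer with known patterns, read them
 * back, and count mismatches. Real testers use many more patterns,
 * address-dependent data, and run for hours; this only sketches the idea. */
int main(void)
{
    const size_t words = (64u * 1024 * 1024) / sizeof(unsigned int); /* 64MB */
    /* volatile so the compiler can't optimize the read-back away */
    volatile unsigned int *buf = malloc(words * sizeof(unsigned int));
    if (!buf) { perror("malloc"); return 1; }

    const unsigned int patterns[] = { 0x00000000u, 0xFFFFFFFFu,
                                      0xAAAAAAAAu, 0x55555555u };
    unsigned long errors = 0;

    for (int pass = 0; pass < 4; pass++) {
        unsigned int p = patterns[pass];
        for (size_t i = 0; i < words; i++)      /* write the pattern... */
            buf[i] = p;
        for (size_t i = 0; i < words; i++)      /* ...then read it back */
            if (buf[i] != p) {
                errors++;
                printf("bit flip at word %zu: got %08x, expected %08x\n",
                       i, buf[i], p);
            }
    }

    printf("%lu error(s) detected\n", errors);
    free((void *)buf);
    return errors ? 1 : 0;
}

A machine that boots and “seems to work ok” can still fail a loop like this once in a while under an aggressive overclock, which is exactly the kind of error that otherwise shows up as a mysterious crash or a corrupted download.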

I thought you guys might get a kick out of this…
http://www.dailytech.com/Intel+Rolls+Out+Overclockable+Core+2+Extreme+X7800+Notebook+Processor/article8059.htm

Looks like the new Core 2 Extreme X7800 Notebook Processor has an unlocked multiplier.

A friend of mine directed me to this article and said “why don’t you do this?”, so I enjoyed reading it. I do a lot of high-speed benchmarking under single-stage phase-change units or cascades, but a lot of the questions here are still relevant.

For the Core 2 architecture, ~1.5 vcore is about tops under air cooling, 1.55-1.6 vcore is safe for water, and I’ve even thrown 1.8 vcore at a chip under cascade, though I don’t recommend that for 24/7 usage. The Core 2 architecture is incredibly resilient: after multiple 4.3+GHz runs with ungodly voltages, my E6400 still purrs away at 3.6GHz/1.45 vcore, without even a hint of electromigration or degradation in performance.

In regards to temperatures, I’d suggest staying under 70 celsius at 100% load. Heat decreases a CPU’s lifetime, but CPUs last something like 10 years or more. Who cares if you knock three years off the lifetime by running it at 3.8GHz instead of 3.4GHz? Do you plan on using your current PC in seven years, or even three?
One last thing: stability. There’s a comment above that mentions “crashes and data corruption”. Higher clocks and tighter memory timings contribute to instability, while increasing vcore and vdimm helps counter those effects, up to a point. Cooling can help to an extent too, but mainly on processors, GPUs, and chipsets, not so much on memory modules. I personally stability-test with Super Pi 32M runs (two per core) and the Prime95 torture test (one per core), along with various Orthos blends running to thrash the CPU and memory subsystem. That sort of testing is meant to show the system is 99.999% stable for 24/7 operation; there is always a chance that something will go nuts inside the silicon, so keep that in mind. Hope some of that made sense. Overclocking has become an art form, and it’s always fun watching it reach mainstream users.
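The common thread in all of those tools is simple: run a long, deterministic computation over and over, and check that the answer never changes. Here’s a minimal sketch of that idea in C; the mixing function and iteration counts are arbitrary stand-ins of mine, and real tools like Prime95 verify known mathematical results while stressing the FPU and caches far harder:

#include <stdio.h>
#include <stdint.h>

/* Toy CPU consistency check in the spirit of Super Pi / Prime95:
 * a deterministic computation must yield the same result every run.
 * On a stable machine the checksum never changes; a marginal
 * overclock can make it drift. */
static uint64_t burn(void)
{
    uint64_t h = 0x9E3779B97F4A7C15ull;
    for (uint64_t i = 1; i <= 50000000; i++) {
        h ^= i * 0x100000001B3ull;    /* FNV-prime style mixing */
        h = (h << 13) | (h >> 51);    /* rotate to spread the bits */
    }
    return h;
}

int main(void)
{
    const uint64_t reference = burn();   /* first run sets the expected value */
    for (int run = 1; run <= 20; run++) {
        uint64_t got = burn();
        if (got != reference) {
            printf("run %d: checksum mismatch (%016llx != %016llx)\n", run,
                   (unsigned long long)got, (unsigned long long)reference);
            return 1;   /* something went nuts inside the silicon */
        }
    }
    printf("all runs matched: %016llx\n", (unsigned long long)reference);
    return 0;
}

As with Super Pi and Prime95, you would run one instance per core to load the whole CPU, and let it loop for hours rather than seconds before calling the system stable.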

I don’t understand these voltage things. How do I know whether my voltages are at a safe level? Mine are:

+3.3V rail: 3.26V
+5V rail: 5.36V
+12V rail: 14.55V

I overclocked my 3.2GHz Intel Prescott P4 to 3.24GHz. I guess overclocking that little doesn’t make much of a difference.

Thanks.
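For reference, the ATX specification allows roughly ±5% on the +3.3V, +5V, and +12V rails. By that rule the readings above work out to: 3.26V is fine (about -1.2%), 5.36V is slightly out of spec (about +7.2%), and 14.55V would be more than 20% high. A 12V reading that far out almost always means the motherboard sensor is misreporting rather than the rail actually sitting at 14.55V, so verify with a multimeter before worrying. A quick sketch of the arithmetic in C, using the readings quoted above:

#include <stdio.h>
#include <math.h>

/* Check measured PSU rail voltages against the ATX +/-5% tolerance.
 * The readings are the ones quoted in the comment above; motherboard
 * sensors can misreport badly, so an out-of-range value is a prompt
 * to verify with a multimeter, not proof the PSU is bad. */
int main(void)
{
    struct { const char *rail; double nominal, measured; } r[] = {
        { "+3.3V",  3.3,  3.26 },
        { "+5V",    5.0,  5.36 },
        { "+12V",  12.0, 14.55 },
    };

    for (int i = 0; i < 3; i++) {
        double dev = 100.0 * (r[i].measured - r[i].nominal) / r[i].nominal;
        printf("%-6s %6.2fV  %+5.1f%%  %s\n",
               r[i].rail, r[i].measured, dev,
               fabs(dev) <= 5.0 ? "ok" : "OUT OF TOLERANCE");
    }
    return 0;
}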

I was into overclocking in the ’90s, but is it really worth it to try now? Instability can show up at any time, and you can physically destroy chips this way. I managed to burn out a 1200MHz Duron chip a couple of years back.

Overclocking your own computer can be a fun hobby, but that’s it. Years ago I purchased a 386-25 machine for my wife that actually contained an overclocked 386-20. We argued about whether that was OK. She didn’t really believe me until she called the manager in charge of Intel’s 386 production line, and he set her straight. Subsequently she had more respect for my technical opinions.

Here is my bottom line: when you buy a machine that is specified to run at a certain speed, it must have parts actually rated for that speed. Anything less is fraud. We argued with the vendor until they sent us a new machine with a “real” 386-25.

I am a software engineer, and I have worked with hardware engineers who built microprocessor motherboards. They use logic analysers to characterize the signals between parts, to ensure the leading and trailing edges are clean and comply with the timing specifications of the parts involved. If a computer has marginal timing, the output stages of various parts begin to suffer driver-transistor failures over time from bus conflicts. Real engineers know this. Just because a computer can boot DOS or Windows or “seems to work ok” absolutely doesn’t mean that things are OK.

Reliability is the most important aspect of computing to me. When you pay for a specific speed of machine, you deserve to get it. Don’t you think?

Douglas,
I think you are forgetting one thing. Since at least the start of the Pentium product line, and perhaps even before that, Intel and AMD have only manufactured a handful of different CPUs for each socket type.

After testing to determine each chip’s full performance range, they are binned into different groups based on marketing: some are placed in the value range, some in the mainstream, and some at the high end. These are the exact same chips; the only difference is how many of each the manufacturer expects to sell in a given market segment.

Once Intel or AMD knows how many are destined for a group, they limit the chip through the multiplier settings, or in extreme cases even physically destroy part of it. AMD has laser-cut traces to disable half the on-chip cache on some of their processors in order to sell them in the value market.

The reason for this is simple economics: it is much cheaper to make one processor in the factory and disable features to meet market demands, even if that requires laser etching, than it is to produce multiple processors custom-tailored for each market.

Overclocking, then, is simply getting past these limits and letting the chip perform at its full, originally specified potential.

Having an SLI board, why not use NVidia’s NTune to adjust settings instead of using the bios?

"I think you are forgetting one thing. Since at least the start of the pentium product line, and perhaps even before that, Intel / AMD only manufactur a couple of different CPU’s based on the socket type.

After testing to determine the full performance range these are then binned into different groups based on marketing. So that some are placed the value range, some in the mainstream, and some are for high end. These are the exact same chip with the only difference determined by how many of each the manufacturer can expect to sell in a given marketing segment.

Once Intel/AMD knows how many are destined for a group, they modify the chip through the multiplier settings, or in some extreme cases even physically destroy a part of the chip in order to limit it. AMD laser cut some traces in order to disable use of half the on chip cache for some of their processors in order to sell them in a value market.

The reason for this is simple economics. It is much cheaper to make one processor in the factory and disable features to meet market demands, even if it requires laser etching, than it is to produce multiple processors custom tailored for that market.

Overclocking then is simply getting past these issues and allowing the chip to perform at it’s full / originally specified potential"

I couldn’t agree more, Douglas.

http://www.overclockyourcpu.co.uk

Hi guys.

I’m rather sad, because I own a Q6600 G0 and an MSI P6N Diamond with Patriot LL memory, plus 8800 GTS cards in SLI, and I can’t get more than 1199MHz on the FSB no matter what. What is the problem? I’ve tried increasing everything, but nothing works above 1199…