Choosing Dual or Quad Core

The newer game engines (e.g. Crysis) are already starting to utilize quad cores, so I would add games to the list.

Real-world applications would be Nikon Capture NX (on both XP and Mac) and the Canon equivalent, IrfanView and Bibble transcoding and tweaking directories of 12 MP files, and something like MS Publisher adding and deleting pages. On a Mac, run Tiger and XP at the same time.

Another stress test would be to have all these apps open, plus Firefox with a dozen tabs; throw in QuickBooks, Windows Explorer, and a couple of large spreadsheets, and burn a DVD.

I’m looking to replace a G4 Mac and a 3-year-old XP box. Should I get a Mac mini, or go for an Xserve or Mac Pro?

Help me, folks.

Compiling a C++ project in Visual Studio 2005 on a quad-core requires using Incredibuild; otherwise 3 CPUs sit idle 80% of the time.

Visual Studio’s C++ compiler doesn’t scale out of the box for projects with a lot of dependencies, because building whole projects in parallel doesn’t fit there. But by adding the undocumented /MP(X) compiler option under “advanced command-line options” there is a multi-CPU performance gain even at the project level:
/MP2 for dual-core, /MP4 for quad-core
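
For what it’s worth, the switch goes in the project’s C/C++ → Command Line → “Additional options” box, or straight on the command line (the file names below are made up):

```
cl /MP4 /c main.cpp parser.cpp renderer.cpp audio.cpp
```

/MP only parallelizes across the source files handed to a single cl.exe invocation, which is exactly why it helps at the project level where project-parallel builds don’t.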

It utilizes 100% of all available CPUs over here, so the statement that the CPUs sit idle is not correct, at least with VC++ 8.0.

As Mark already commented at 12:58 AM: the last 4 entries in the XBit comparison have their sign reversed – Excel is actually 63% SLOWER on 4 cores.

Plus, while I understand the reasoning behind comparing 2.4 GHz to 3.0 GHz, I find it worth noting that in many of the cases where the quad performs worse, it’s still performing better than the clock-speed ratio alone would predict.


Massive parallelism is coming. What will drive it is robotics, and the need for whatever the desktop evolves into to program and perhaps control it.
In ten years, machines with hundreds, perhaps even thousands of CPUs, designed specifically for this application, will be either on the drawing boards or in production. The massively parallel revolution is coming, and it’s coming in a big way.
Massively parallel machines are probably the only way to simulate cognitive functions.
Even at the level of an insect’s cognitive abilities, today’s supercomputers still fall far short in comparative processing power. To build robotic applications that are truly useful, they will have to be at least as smart as your typical insect, and massively parallel machines at the micro level are the only way to achieve this goal.

I run lots of apps at the same time; not having them all on the same core is very helpful to me and improves latency. Writing for a single core isn’t that bad.

One of the tests I’m personally interested in (and which is always missing) is using a sequencer (Cubase, Ableton, Logic, etc.) and running VST plugins. For instance, something like Native Instruments’ Massive totally bogs down my old (XP2400+) CPU if it’s set to high quality mode, and the number of instances playing simultaneously or doing neat tricks like convolution reverb would be a good workout. With the software studio, lots more people are making music.

Today I’ll get a quad system, so it will be interesting to see how the workload is divided; simultaneous audio streams should be parallelized rather easily.
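
A minimal sketch of that idea in C++ (the stream count, buffer size, and gain “DSP” are stand-ins for real plugin work): give each active stream its own thread and let the OS spread them across the cores.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Hypothetical per-stream DSP: apply a gain to one stream's buffer.
static void processStream(std::vector<float>& buffer, float gain) {
    for (float& sample : buffer)
        sample *= gain;
}

int main() {
    const int numStreams = 4;     // e.g. one stream per core on a quad
    const int bufferSize = 4096;  // samples per processing block
    std::vector<std::vector<float>> streams(
        numStreams, std::vector<float>(bufferSize, 0.5f));

    // One worker per stream; the buffers are independent, so no locking.
    std::vector<std::thread> workers;
    for (auto& stream : streams)
        workers.emplace_back(processStream, std::ref(stream), 0.8f);

    for (auto& w : workers)
        w.join();
}
```

Since the streams share nothing until the final mix, this is close to the best case for multicore scaling.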

What about increasing the cache size (as mentioned without detail by others above), increasing the register set, or adding special-purpose instructions such as matrix manipulations?

On a related note, has anyone ever done a post-mortem on the RISC vs. CISC wars? Lessons learned?

“Thanks for that! What about servers? web applications, database… Will quad cores systems add benefit there?”

Pardon if someone’s already covered this, but applications that can handle more simultaneous threads of execution will benefit; otherwise not. A database I use in my day job, Progress, can start up multiple server processes. The last several places I’ve worked have had quad-CPU machines, and the database will cheerfully spawn multiple servers to handle user requests, typically using 3 or 4 of those CPUs. A multithreaded web server would probably see benefits, for the same reason.
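
A rough sketch of why that scales (everything here is illustrative; it is not Progress’s or any real server’s code): a pool of worker threads draining a shared request queue keeps all four cores busy whenever at least four requests are pending.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Illustrative request-handler pool: N workers drain a shared queue,
// so on a quad-core box up to four requests are serviced at once.
class WorkerPool {
public:
    explicit WorkerPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    void submit(std::function<void()> request) {
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push(std::move(request));
        }
        cv_.notify_one();
    }
    ~WorkerPool() {  // drains the queue, then joins the workers
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_)
            w.join();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                if (done_ && queue_.empty())
                    return;
                job = std::move(queue_.front());
                queue_.pop();
            }
            job();  // handle one request outside the lock
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};

int main() {
    WorkerPool pool(4);  // one worker per core on a quad
    for (int i = 0; i < 100; ++i)
        pool.submit([] { /* handle one simulated request */ });
}
```

A single-threaded server gets none of this for free, which is the whole dual-vs-quad question in miniature.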

It won’t make that much of a difference on current systems, but that’s because games specifically are normally not very multithreaded. This is changing.

Every engine that is designed around the PS3 or Xbox 360 will most likely feature a very multithreaded design that will benefit significantly from a quad-core PC.

So I wouldn’t buy a quad core now, but I’d expect those utilization numbers to change over the next year or two.

I can’t really see a “negative” to getting a quad-core processor beyond price, and I think that’s a poor argument. The price difference between a higher clock speed and more cores is negligible, as is the performance difference.

Having recently had to purchase an “emergency” replacement for the home computer, I had “no choice” but to spring for a quad-core (the Q6600 mentioned above) from Gateway, because the gross price of the system was phenomenally lower than anything I could have assembled myself ($1000 flat for a complete multimedia PC with decent RAM, video, and hard drive components!)

I’m counting on the fact that this PC will be in use in three years, at which time more mainstream software should leverage multi-core.

As is the case for so many other things with me, it all comes down to actual performance. I love reading benchmarks and watching the wars that start between fanboys/girls over the accuracy of the results and the ensuing “My hardware is better than your hardware” mud-slinging.

The way I see it, your average Joe (at this point) doesn’t need a quad-core system. We have a situation where the hardware in question is actually ahead of the software being developed to use that hardware. At this point, I can’t think of a single scenario where an average computer user would need that much power.

Is this -really- a problem, though? If we, as developers, know that these types of processors are available, then why not write to take advantage of that power? Let’s face it…single-core CPUs are on the way out, unless clock speeds start improving significantly with the next generations of CPUs.

Back to my starting comment, though. I’ve owned my share of machines in my relatively short lifetime, and I’ve done development on every one of them. As it stands now, I wouldn’t dream of having anything less than a dual-core, and the next machine I plan on building will feature a quad-core. It’s not that I need that power -right now-, but when the time comes that I will need it, it will already be there.

My old machine (which at this point is essentially my test machine) has a 2.0 GHz AMD 2000+. I installed the VS Orcas beta this weekend, and it runs like a pig through molasses. I know if I put it on my brother’s dual-core 2.0 it would run better. And if I were to install it on a quad-core, it would run better still, along with the 40,000 other things I’m doing while hammering on my keyboard. Moral of the story? There -are- people who can take advantage of that technology, so why not harness it?

Benchmarks are helpful, but not what I base my hardware purchases on. Now, gimmie one of them quad-cores. :)

Core Unaffinity:

Some multicore processors have a shared L2 cache, while some keep a separate one for each core. Skizz’s idea of a “system based on small chunks of work which are given to whichever processor is free” (Skizz on September 4, 2007 01:24 AM) is fine when there is a shared cache. Otherwise, bouncing a thread’s stack and active data among the cores’ caches will waste a lot of cycles.

Running multiple applications:
Who runs just two processes?

Whether with the Windows Task Manager (Ctrl-Alt-Del) or Linux’s ps -ef, see how many processes are running, many of which have multiple threads.
Typically, Windows desktop users are probably running 50-100 processes and 500+ threads.
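On Linux, ps -eLf goes one step further and prints a line per thread rather than per process, which makes those totals easy to verify for yourself.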

Affinity:

Unless you force core affinity, I find that Windows does not do a good job of keeping a CPU-intensive process on one core (even without any Windows calls in the hot loop).
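
For anyone who wants to try it, here is a minimal Win32 sketch (pinning to core 0 is an arbitrary choice):

```cpp
#include <windows.h>

int main() {
    // Pin the current thread to core 0 so the scheduler stops
    // migrating it (and its warm cache) between cores.
    SetThreadAffinityMask(GetCurrentThread(), 1);  // bit 0 = core 0

    // ... CPU-intensive loop goes here ...

    // SetProcessAffinityMask(GetCurrentProcess(), 1) does the same
    // for every thread in the process at once.
    return 0;
}
```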

Power Savings:

In the future, when the HW and OS work together to shut down cores that are not needed that microsecond, multicore will be very helpful in lowering the average power consumption of the processor.

Responsiveness:

I have had a Pentium D machine for two years, and have been very happy with it. It stays very responsive when doing interactive work even while a CPU-intensive application is running, e.g. MP3 encoding, audio filtering, etc.

See more about the multicore topic at http://www.2cpu.com/

David

Maybe we’ll measure performance in cores and not Hz in a not so distant future. “I have a 2 MegaCore computer, what do you have?”

All these benchmarks seem to focus on running a single application, which is hardly ever the case! I usually have at least 4 applications open (web browser, email, IM, download manager etc), not to mention all the OS processes that run in the background.

Surely these different applications can be run on different cores? While the performance of an individual application may not be improved by additional cores, surely the performance of the whole system will be?

As others pointed out, single applications may be slow in handling more than two processors at once (which is what a dual-core chip really is). That will depend upon the software writers taking advantage of multiple cores by using threads. However, that doesn’t mean there isn’t improvement if multiple applications are running.

One of the biggest pains for a developer is building an application in parallel. I know that Xcode and gcc on Linux can take advantage of quad-core systems, and it may simply take Windows a while to catch up. Visual Studio 2008 “Orcas” will be able to handle parallel builds, and probably handle quad cores.
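
On the gcc side, that parallelism usually comes from make rather than the compiler itself, assuming an ordinary Makefile:

```
make -j4    # run up to four compile jobs at once, one per core
```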

As for games, as more and more people start using quad-core systems, games will be rewritten to take advantage of them. Rendering engines could certainly be optimized to render more than a single object at a time (think of the Cell processor). I am not too familiar with the gaming environment, but I expect that most games will start to take advantage of the new hardware within a year.

As one of the department heads once told me, “The Hardware Fairy only comes once every few years, so always overspec what you need, because you may be stuck with that system for five years.” You may be right that today’s Windows software is unable to take advantage of quad-core systems, but what about the next twelve months?

I suspect that Visual Studio 2008 “Orcas” will be able to once it finally comes out, and that most games will quickly put out newer revisions that take advantage of the quad cores. There may even be a service pack for Vista that takes better advantage of quad cores in the next 12 months. I personally would opt for a quad-core system based upon my experience with software and operating systems. I suspect that even if there is no improvement in speed now, there certainly will be within six to twelve months.

Mark posted:
‘Your bottom four comparison percentages seem to have the wrong “polarity”.’

Actually, if you look at the source article, the bottom 4 comparisons are measuring run-time in seconds (lower is better), whereas some of the other comparisons are measuring speed (or some other quantity such as FPS, where higher is better).

So the relative percentages are correct, but the numbers reproduced in this blog post lack important contextual information - the actual quantities being measured, and how to interpret them (i.e. which is better - higher numbers or lower numbers).
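
A made-up example to make the polarity concrete: if the dual-core finishes a run in 100 seconds and the quad-core takes 163 seconds, a run-time table reports the quad at +63%, a positive number that actually means 63% slower.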

You are way off target here.

The reason things like games don’t use the quad core is that most of them were written before quad cores existed. Most of them don’t easily scale to more cores, so you’ll have to wait for the games to catch up.

There also is the very real issue of being able to do more things at once.

Someone asked how you can watch a movie and do other things: quite easily. Nothing says that what else you are doing needs a human. There are plenty of things you could leave running while you’re watching that movie.

In my previous comment, I should’ve clarified that the given relative percentages are correct as long as you interpret a positive value to mean “better performance”, and a negative value to mean “worse performance”. Again, the problem is that the raw numbers, with no units, are meaningless - you have to go back to the original article to see whether “higher” or “lower” means “better performance” for any given comparison.