Dual-core CPUs are effectively standard today, and for good reason -- there are substantial, demonstrable performance improvements to be gained from having a second CPU on standby to fulfill requests that the first CPU is too busy to handle. If nothing else, dual-core CPUs protect you from badly written software; if a crashed program consumes all possible CPU time, all it can get is 50% of your CPU. There's still another CPU available to ensure that the operating system can let you kill CrashyApp 5.80 SP1 Enterprise Edition in a reasonable fashion. It's the buddy system in silicon form.
This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2008/04/should-all-developers-have-manycore-cpus.html
Heretical question: Do more cores result in better software?
Sure, the compile is faster. But a compiler is just one more hurdle to circumvent to make it shut up (I’ve read a code comment to that effect).
And the ever tighter edit-compile-debug loop takes away time to think about what one is doing, as opposed to how one is doing something.
Not necessarily true, I concede that.
“Heretical question: Do more cores result in better software?”
I’d say that’s a reasonable thing to ask. I think the answer is the same as, “does more MHz result in better software”: maybe. On the one hand you get the tirade about developers being able to be “more lazy”, but on the other hand you do get more honest performance, which can be used to do things faster, sooner, or smarter, or even to just make things look nicer. shrug
If nothing else, dual-core CPUs protect you from badly written software; if a crashed program consumes all possible CPU time, all it can get is 50% of your CPU.
Wait, what? On any pre-emptively scheduled operating system, no application should be able to monopolize the system; the scheduler should be allowing every other application some time on the CPU. A multi-core CPU doesn’t give you any “protection” that you wouldn’t have anyway; you should always be able to kill the runaway process.
But what about us managed code developers, with our lack of pointers and explicit memory allocations? … I downloaded the very largest .NET project I could think of off the top of my head, SharpDevelop. The solution is satisfyingly huge; it contains 60 projects. I compiled it a few times in Visual Studio 2008, but task manager wasn’t showing much use of even my measly little two cores:
I did see a few peaks above 50%, but it’s an awfully tepid result compared to the make -j4 one.
Interesting. Surely if it’s compiling multiple projects (and their source code) at once, it should get it done in about half the time—yet it appears to take exactly as long. That’s very strange.
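One plausible explanation for "60 projects but no speedup": projects in a solution form a dependency graph, and only projects whose dependencies are already built can compile in parallel (this is how make -j and MSBuild's project-level parallelism work). A toy sketch -- not how any real build tool schedules, and the graph here is made up -- shows how dependencies serialize a build into waves:

```python
# Hypothetical dependency graph: A and B are independent,
# C needs A, and D needs both B and C.
deps = {"A": [], "B": [], "C": ["A"], "D": ["B", "C"]}

def build_order(deps):
    """Group projects into waves; each wave could build in parallel,
    but the waves themselves must run one after another."""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = [p for p in deps
                if p not in done and all(d in done for d in deps[p])]
        waves.append(wave)
        done.update(wave)
    return waves

waves = build_order(deps)
print(waves)  # only the first wave gets full parallelism
```

If most of those 60 projects sit in a long dependency chain, a second core buys very little.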
Digital audio workstation applications benefit tremendously from multiple cores! So will everybody just keep quiet about wasting electricity. I need everybody to buy quad-core, octa-core, whatever so that the chips are cheaper for ME.
"And the ever tighter edit-compile-debug loop takes away time to think about what one is doing, as opposed to how one is doing something."
I’m not sure this holds water. The one thing you can’t do during a compile is adjust the code. So if you spot another run-time bug, you either cancel the build or wait for it to finish. You can put the fix in while you’re building and not save, but this generally screws up the debugger.
So you tend to leave it. One time I spent so long waiting for the build to finish I forgot what I needed to fix. And what I needed to test. And that I’d done a build at all.
By contrast, if you’re the kind of person to rethink, you’ll probably use the compile time to do that anyway, and if the build’s finishing earlier, you’ll ignore it till you’re done reconsidering. But at least you won’t come out of your codezone to find that it’s still only halfway through the damn build.
It’s a bit like the old User-Friendly quote “But crashes are good! They give the computer time to relax!”
I purposely develop only on slow and outdated computers. Incremental builds make compile times negligible anyway, and it forces me to code as efficiently as possible to get the program to run smoothly.
I’ve seen too many cases of ‘rock star’ developers buying the fastest hardware on the market and ultimately releasing a bloated and inefficient program that only runs fine on their computer. It’s better to design around more modest hardware specs, since that will automatically have it absolutely SCREAM on the latest and greatest.
Yeah, fixing bugs is the how part. But the “what” question is when you think about design and code, instead of business-logic and code. I.e. the grand scheme of things, rather than code minutiae.
And I said that it’s not necessarily true.
I prefer to sidestep the compiler completely anyhow. It doesn’t save me from logical errors, only syntax errors. And an interpreted language that allows me to be more productive than any of the rather verbose compiled languages is more of a boon to me (yes, I trade speed for more productive coding. So?)
I love your articles, but it gets a little old every time you tell me that no one’s writing the apps that I’m writing and using.
Try doing any government contract work (it’s just a little, several-billion-dollar part of the industry) and tell me that few people are doing desktop application development.
What about things like distcc? Most of the time, those four cores will sit utterly idle (typing doesn’t need 4 cores). Distcc (or some other distributed compiler) would make use of all those cores sitting around doing nothing…
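For reference, wiring distcc into an existing build is a small config change -- the host names below are placeholders, and distcc has to be installed on each machine:

```shell
# Hypothetical setup: farm compile jobs out to otherwise-idle machines.
export DISTCC_HOSTS='localhost workstation1 workstation2'
make -j8 CC='distcc gcc'   # run up to 8 jobs spread across the listed hosts
```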
Ruby uses threads. But green threads, not OS threads, and so it doesn’t lend itself to multi-threading the way Haskell or Erlang do.
Currently, only JRuby uses native threads, via the Java virtual machine. I suspect that IronRuby will do something similar, as well as Rubinius, even though the latter two are far from production ready at this point in time.
And let me point out a danger of using multi-core CPUs: It is difficult to find out how the application will perform on older platforms (this is especially important for managed code users). Sure, you could set up a virtualized computer to simulate that, but you can get all kinds of side effects with that.
But I am surprised that Jeff didn’t touch on a very useful use for multi-core computers: Running multiple virtualized computers at the same time, to test all kinds of nice things (need to find out how to deploy remotely on a Linux machine? Fire up a Linux VM. Need to test networking code? Fire up a bunch of machines in a virtual network…).
As far as C++ compiling goes, isn’t that why there are specialized build servers? If you compile large things that take more than a couple of seconds on a regular basis, you’re probably a big enough operation to bother building or buying a build server for your compiling.
"Yeah, fixing bugs is the how part. But the “what” question is when you think about design and code, instead of business-logic and code. I.e. the grand scheme of things, rather than code minutiae."
Yes, but my badly-expressed point was that if you’re the type of person to think about that stuff, you’ll think about it anyway, you won’t need to be artificially prevented from coding by a crippled compiler.
Those that don’t automatically do it would just sit there staring at the code, surf the net, or go get a coffee. Certainly, if your compiler takes an age, the idea of rewriting a load of files to cope with a change in overall design is more of a hurdle.
"And I said that it’s not necessarily true."
I know. I wasn’t saying you were inherently wrong, or anything, I was looking at the counter-arguments.
"I prefer to sidestep the compiler completely anyhow. It doesn’t save me from logical errors, only syntax errors. And an interpreted language that allows me to be more productive than any of the rather verbose compiled languages is more of a boon to me (yes, I trade speed for more productive coding. So?)"
What? I’ve got no problem with that. Zen Kode-an #001: “If a function takes twice as long to return, and no-one ever notices, is there a problem?”
Multi-core is good! Whether 4 cores is better than 2 depends on what you run.
Sadly, I still see magazine articles that say there is no point in multi-core, because the applications aren’t up to it!
So, let’s have a quick snapshot of how many threads a few applications use:
AVG Anti-virus: 8
Hmm, I think that means that applications are very thread-aware, as they have been for several years!
They may well not make the best use of those threads, but the argument is possibly that they use too many, so the app spends its time thread-swapping and missing out on useful work!
Anyone remember IOCompletionPorts?
“I love your articles, but it gets a little old every time you tell me that no one’s writing the apps that I’m writing and using.
Try doing any government contract work (it’s just a little, several-billion-dollar part of the industry) and tell me that few people are doing desktop application development.”
Is/has anyone studying/studied what developers are using/creating? It must be in language designers’ interests, but I can’t seem to find a good study.
Come to think of it, seems like that would be a really good addition to stackoverflow.com
If you’re looking to find the best tool for a job, the tools that other people have used are definitely worth investigating.
I can’t speak beyond Java, but in my experience, javac is more I/O-bound than CPU-bound - it’s the reading and writing of loads of little files that delays things, not the number of compilation threads.
The root need is CPU cycles. I can get them in a bunch of ways.
1 PC - 1 core. Fast as possible. MOST (not all) applications will run fastest on this, since MOST (not all) haven’t the foggiest idea of how to take advantage of multiple cores.
1 PC - multiple cores. Hard to buy a system now without at least 2.
2 (or more?) PC’s - single or multiple cores… whatever. The extra can be a server or a personal system.
I have a “developer” desktop system, relatively untouched by our corporate I/T department, and one “corporate controlled” laptop. Development environments don’t always play nice with the corporate version.
No one has yet mentioned virtual systems. I can be testing, installing, whatever on a virtual system while developing on the “real” system.
Multi-core systems are pretty good for that, although I have not benchmarked it myself beyond checking the task manager…
I’ve noticed that our web server has 4 ‘CPUs’, and three of them were idle when I looked at Task Manager.
I sincerely believe that all developers should indeed have multi-core CPUs. Perhaps not quad-core, but definitely dual-core. With single-core CPUs, most developers are absolutely oblivious to the kinds of bugs that can occur on multi-core systems.
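The classic multi-core-only bug is an unsynchronized read-modify-write on shared state: on a single core the racy window rarely bites, while on multiple cores the bad interleavings show up constantly. A hedged sketch of the shape of the bug and its fix (counter names are made up for illustration):

```python
# Shared counter updated by several threads. Without the lock, the
# read-modify-write in "counter += 1" can interleave and lose updates;
# with the lock, the increment is atomic and the total is always exact.
import threading

counter = 0
lock = threading.Lock()

def add_safe(n):
    global counter
    for _ in range(n):
        with lock:          # makes the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=add_safe, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock, always 200000; drop it and you may see less
```

A developer who only ever runs on one core can ship the lock-free version for years without noticing -- which is exactly the argument for putting multi-core boxes on developers' desks.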