Should All Developers Have Manycore CPUs?

On GCC:
The absolute numbers seem kind of small, but the percentages are incredibly compelling

Well yeah. Take Wine for example. Here are the respective build times on my computer:

-j1 (one thread) took 22 minutes.
-j3 (dual) took 12 minutes.
-j5 (quad) took 7 minutes.

Slicing ~68% off the build time is pretty significant on large projects. If you’re compiling your entire system from source (e.g., Gentoo), you’re saving hours of time.
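Incidentally, those -j values follow the old “cores + 1” rule of thumb. A minimal sketch of automating that choice (assuming GNU make is on the PATH; tune downward for RAM-bound builds):

```python
# Rough sketch: derive make's -j value from the machine's core count
# using the common "cores + 1" rule of thumb.
import os
import subprocess

jobs = (os.cpu_count() or 1) + 1
subprocess.run(["make", f"-j{jobs}"], check=True)
```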

Thank you. A brilliant article and well said. I often do speaking engagements with students and developers alike, and the question of who has the fastest processor always comes up. I also often found developers, myself included, showing off their threaded applications, until someone sat down with me one day and showed me how hard it really is to write for multiple processors.

IT is unfortunately an environment where marketing always tends to twist the truth, and IT specialists and non-IT users alike often get caught up in the hype. I am reminded of my dual-core notebook with 4 GB of RAM outperforming a colleague’s quad-core with 1 GB of RAM when doing video editing and Poser renders. He opted for the faster processor and sacrificed RAM. He quickly realized his mistake; however, it was a costly one.

Most developers aren’t writing desktop applications today. They’re writing web applications. Many of them may be writing in scripting languages that aren’t compiled, but interpreted, like Ruby or Python or PHP. Heck, they’re probably not even threaded.

OK, and what takes 50-99% of a typical web script’s time?
The database, which can generally scale up to around 32 cores.
Scripts also usually run concurrently. Thanks to what? The web server’s dispatcher, which should scale almost linearly. The real execution time of the script itself is barely noticeable! It wouldn’t matter even if a single request could use threads.
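To illustrate that dispatcher model, here is a toy sketch (Python for illustration; handle_request is a made-up stand-in for a single-threaded script). Each handler knows nothing about threads; the concurrency comes entirely from the worker pool:

```python
# Toy dispatcher sketch: single-threaded handlers, parallel workers.
from concurrent.futures import ProcessPoolExecutor

def handle_request(request_id: int) -> str:
    # A plain single-threaded "script"; it never touches threads itself.
    return f"response for request {request_id}"

if __name__ == "__main__":
    # One handler per worker process, spread across the available cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for response in pool.map(handle_request, range(8)):
            print(response)
```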

Most developers aren’t writing desktop applications today.

This is an interesting point - I wonder what the breakdown of development effort is? My own feeling is that there’s still a lot more bespoke/vertical desktop development going on than people might think, and I’d be interested to know if there are any useful metrics anywhere that could be used to measure it.

I’ve always thought that the prevailing job market is a BAD indicator of what’s happening, because by definition you’re analysing the movement of people between roles rather than what people are actually doing in general - as soon as you get a team of developers who are happily working in their field, they disappear from the job market, and so as data points they are never counted. I’ve never bought into the argument that because a tool doesn’t appear much in the job adverts, it must follow that nobody is using it anymore.

What makes it even worse is that there are a lot of developers who simply don’t interact with the online connected world - a small company I was talking to this week just doesn’t read blogs, spend time on the websites, or get involved in any way whatsoever - to the outside world, they just don’t exist. We see these online polls, and the results are misleading because by definition you’re only attracting responses from people who tend to be more online, proactive and ‘outgoing’. :)

@Jon Limjap

Sure, most of the Ruby applications most of us encounter are Rails web applications. But Rails is constrained by Ruby’s thread handling internally (not to mention that Rails blocks server threads).

However, the fact that web applications need multiple instances is more a problem of the web server (and its usage of multiple cores) than of Ruby, PHP, or Python.

Typically, web developers don’t have to care about concurrency, since the application server spawns an interpreter for each instance of the web application, and RDBMS servers should be ACID-compliant (if they aren’t, I have a problem as a web developer).

And Rails’ fun way of blocking server threads leads to BackgroundRb, which works for long-running tasks (scanning for viruses, fetching XML data, what have you) and which, in turn, can again profit from multiple cores.

However, all the Ruby solutions need their own Ruby process to take advantage of threads (since Ruby still doesn’t support native threads :P).

Long story short: Rails wouldn’t benefit if Ruby were natively multi-threaded in any case. But Rails can profit from multi-threaded web servers, which in turn can profit from n > 1 cores. Multicore is new to the desktop workstation, but not to the server.
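A sketch of that one-process-per-job pattern, in Python for illustration (the same green-thread limitation applied to it at the time; fetch_feed is a made-up job):

```python
# Sketch: a BackgroundRb-style long-running job in its own OS process,
# so it can run on another core regardless of green threads.
from multiprocessing import Process

def fetch_feed(url: str) -> None:
    # Stand-in for a long-running task: virus scan, XML fetch, etc.
    print(f"fetching {url} in a separate process")

if __name__ == "__main__":
    worker = Process(target=fetch_feed, args=("http://example.com/feed.xml",))
    worker.start()  # the parent is free to keep serving requests
    worker.join()
```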

“Should All Developers Have Manycore CPUs?”

The answer is: No. It should be:

“Should Everyone Have Manycore CPUs?”

And the answer is: Yes. In my opinion, once you’ve tasted from the fountain of dual-core, going back to single-core is like trading a Pentium II for a 486. Sure, benchmarks show that a dual or quad core is not always faster for a given task, but the whole system feels faster and more responsive. So my video encoder still needs 2 hours, compared to 2:15, because it’s badly optimized? Maybe, but during those 2 hours I can do other stuff, and my system simply feels better.

Even for us engineers, there is still an emotion associated with working on a PC. I don’t care as much about the raw numbers; as long as I have the feeling that I am in control and have a machine that obeys me, I am happy.

And for Quad Core:
Not everyone has the luxury of more than one machine. I only have one PC. Visual Studio is not faster on a quad core compared to a dual core, but the VMware Server on my machine that runs two Windows Server 2003 instances, one with SharePoint and SQL Server, thanks me for those two additional cores. A single 3.16 GHz Core 2 Duo may be faster than a single 2.4 GHz Core 2 Quad? Maybe, but I would not want to run three operating systems, two database servers, and a SharePoint server on a dual core, or, God help me, a single core, anymore.

That also leaves us with the definition of “developer”. I think the term “developer” is like the term “craftsman”: it’s too generic to be useful. If anyone who is still developing COBOL stuff on an AS/400 is reading this, that person may think “Dual core? Quad core? Bah, my 486DX100 terminal is all a developer needs,” whereas a SharePoint developer who more or less has to run everything on one machine (Microsoft recommends that SharePoint developers have a local SharePoint server) just looks at your “quad cores are a waste of electricity” post and asks himself, “Wait, did someone replace Jeff Atwood with an Anti-Jeff clone?”

Anyway, glad to see that this blog is still run by the real Jeff Atwood :)

@someone

The point is that developers don’t have to care about concurrency, shared memory, and thread implementations in their web apps.

The infrastructure takes care of that for them. The number of cores doesn’t matter to a web developer. It does, however, matter to the developer of database and web servers.

It may be hard to write a proper multi-threaded program in, say, C++, but have you ever tried Erlang or Stackless Python? They make it goddamn *easy* to write multi-threaded applications. Stackless can literally scale up to thousands of threadlets with no performance hit.
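Not Stackless itself, but a rough approximation of the threadlet idea using Python’s standard-library coroutines, just to show how cheap thousands of lightweight tasks are:

```python
# Sketch: 10,000 cooperative "threadlets" (asyncio tasks standing in
# for Stackless tasklets); creating and scheduling them costs very little.
import asyncio

async def threadlet(n: int) -> int:
    await asyncio.sleep(0)  # yield to the scheduler, like a tasklet
    return n

async def main() -> None:
    results = await asyncio.gather(*(threadlet(i) for i in range(10_000)))
    print(len(results), "threadlets finished")

asyncio.run(main())
```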

Hi guys.

Dual Core in Server?
It may sound funny, but I just resigned from commercial web hosting and moved my website, FTP server, and mail server to my place, setting up my own server at home. There is a small wardrobe in my room, and what’s inside it?

Pentium III 700 MHz
384 MB RAM
20GB HDD
Network Card

A really old-school PC (if you tried hard enough, you could probably find one like this in a rubbish bag outside some house :)

It works fine, I am happy… and I don’t plan to upgrade it to multicore :)

Greets
Mariusz

I totally agree with the point that developers writing applications for the desktop should have multi-core CPUs.

From my experience, there exists a totally different range of errors and pitfalls that you as a developer can introduce with regard to multiple cores, and they appear only on multi-core machines.

I had a bug in my application that was impossible to trigger on a single-core machine, and at the time the bug was reported I was still developing on a single-core machine. Fortunately, I had access to a quad-core terminal server, so it was possible to find the bug.

So, to get a feeling for this special class of errors and to train yourself to design code that runs safely on multiple cores, every developer writing applications for the desktop should at least have access to a dual-core machine for testing.
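For the curious, a minimal sketch of the kind of bug meant here: a lost-update race. A single core can mask it for a long time because the threads rarely interleave at the worst moment, while true parallelism makes the bad interleaving far more likely:

```python
# Sketch: a lost-update race. The read-modify-write is not atomic, so
# another thread can run between the read and the write, and its
# update gets overwritten.
import threading

counter = 0

def bump(times: int) -> None:
    global counter
    for _ in range(times):
        value = counter       # read
        counter = value + 1   # write; not atomic with the read above

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # usually well below 400000: updates were lost
```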

I’ll be short on this topic:
1~ yes, software needs to catch up to hardware right now, and without software pushing the envelope, hardware will soon slow down its tech-upgrade curve;
2~ dual vs. quad, right now, should be resolved as follows: get a dual-core CPU, but make sure the mobo will support a quad if you want to upgrade;
3~ as far as developers go… it depends on what you’re developing for, of course. Your continuous integration servers could definitely enjoy a boost for large, enterprise-grade apps, but your development desktop’s bottleneck might not be the CPU: for the past 3 years, I’ve been suffering from a disk I/O bottleneck, for instance.

/cheers

I’ve been out of the hardware loop for a while, but a brief search through the comments today and yesterday doesn’t seem to show anybody talking about this:

Has Intel finally implemented their version of DEC/AMD’s “HyperTransport” bus? If not, there’s hardly any point going above a single Core 2 Duo CPU on an Intel platform. With all the naive fighting over cache and RAM going on between the cores, you experience very rapidly diminishing returns. It seems like they’re still focusing on simply bumping up bus clock speeds instead of redesigning their MMUs, which I guess is fairly predictable considering the last 30 years.

Mariusz, re: P3-700

I think the only reason you’d have to upgrade is if you could find an Intel Atom that could reduce your electricity bill :) I don’t remember if the Pentium III had good power management built in, but at ~20 W it doesn’t hold a candle to the ~2 W of the Atom :)

BTW, the ~20W figure is from here: http://en.wikipedia.org/wiki/CPU_power_dissipation#Intel_Pentium_III

I’ve been in corporate environments where an anti-virus program would take an entire core for an hour or more at a time. If you grant that applications such as Excel could benefit from two cores, and you throw in the fact that most people in large companies also run Outlook all day, I think you can make the case for quad-core configurations. Most companies load so much stuff at machine startup that I would think a quad-core machine would make a big difference.

As you said, developers are an extreme edge case, but I think even web developers who are writing single-threaded code will benefit directly from at least a quad-core configuration. Early in development, I often find myself running a database server, a web server, a web browser, and my IDE debugging the server code all at the same time.

Most developers aren’t writing desktop applications today.

I’m working on a web application, and I can use quite a few cores: on my development machine I’m running IIS, Oracle, VS2005/VS2008, IE, and FF (and Cassini when not using IIS for debugging).
Two cores are a minimum; my four-core machine feels more responsive, even if I never max out the CPU.
But I agree that current computers are I/O bound, not CPU bound.

Developers should have multicore machines to find their concurrency bugs more easily, and to compile faster. They should also have a 400 MHz machine around to test application performance.

MSBuild does parallel builds, so you should see some benefit on the developer desktop (depending, of course, on your dependency tree). If you run FxCop a lot (like me), you’ll definitely see an improvement; we’ll make use of as many cores/processors as you have.
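For reference, the switch that enables this is /m (or /m:N to cap the worker count); a trivial sketch of wiring it into a build script, assuming msbuild is on the PATH and MySolution.sln is a placeholder:

```python
# Sketch: kick off a parallel MSBuild; /m with no number uses all
# available cores. The solution name here is hypothetical.
import subprocess

subprocess.run(["msbuild", "MySolution.sln", "/m"], check=True)
```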

Just as a side note: would you consider Java to be interpreted or compiled? If you answer “compiled”, then please correct your statement about Python being interpreted.

I disagree that you can’t use more than two cores even with stuff an average person might do. Here’s how I’ve hit it routinely:

There is non-interactive, compute-intensive stuff running (it’s in a VM, so I can run Task Manager in there and see that it can saturate a core; the VM is simply for isolation, and the same stuff could have been run in the main environment), and I’m playing something like Supreme Commander, which can use two cores by itself. While this is unlikely to saturate four cores, it’s definitely going to run faster than on a dual-core.

Jeff is picking and choosing from a lot of positions to justify his recommendations, so it’s a little confusing.

PCs are I/O bound anyway, unless you have a top-end, overclocked dual-core (upgrading from another high-performance dual-core); but people aren’t really in need of the extra power anyway, because the average person is sitting around waiting for their webmail JavaScript to complete (!), or for their SunSpider benchmarks to finish.

It’s honestly just baffling now.

I wait on my PC very little, and barely notice the difference between any current-generation systems. The places where I do wait on my PC (application builds, video encoding, or when I’m “doing a lot” and processes start impeding each other) just so happen to be the realms that are massively scalable across cores.