Is It Time for 64-bit on the Desktop?

Your link text “nobody will ever need more than 640 kilobytes of memory” points to a page that describes how Bill Gates never said such a thing and how the quote is actually a myth.

even a 32 bit Linux kernel can be configured so that every application can use (almost) 4 GB, the 2/3 GB limits are Windows-specific

The limits aren’t Windows-specific; they’re tied to the x86 architecture. Sounds like you’re referring to PAE mode, which is a hack you can use on the server editions of Windows but not Vista or XP. 64-bit is preferable to PAE hacks, obviously, and especially on Linux: you can just recompile everything from source!

I certainly don’t mean PAE. PAE only allows addressing more than 4 GB of RAM (so it is useless if you have “only” 4 GB of RAM) and adds the NX bit.
In addition, PAE does not (directly) affect applications, only the OS (so whether you can recompile applications or not does not matter for PAE). I also wouldn’t really call PAE a “hack”; it can provide better performance than 64-bit mode if you have many, many applications that each use less than 1 GB of address space.
The fact that the upper 1-2 GB of virtual memory is not freely usable by applications is OS-specific: Windows always reserves at least 1 GB (the default is 2 GB) of each application’s address space for special uses, while 32-bit Linux can be configured to reserve anywhere between 3 GB and 0 GB (reserving less can mean a performance penalty for some system functions).
64-bit Linux will always leave (almost) 4 GB of virtual address space for 32-bit applications to use however they want, since there is no (additional) performance penalty in that case.
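
As a concrete illustration, the user/kernel split on 32-bit Linux is chosen at kernel build time. A minimal sketch of the relevant options as they appear in a typical x86 .config (option names from mainline; availability varies by kernel version):

```
# How the 4 GB of virtual address space is divided between user space and the kernel
CONFIG_VMSPLIT_3G=y               # default: 3 GB user / 1 GB kernel
# CONFIG_VMSPLIT_2G is not set    # 2 GB user / 2 GB kernel
# CONFIG_VMSPLIT_1G is not set    # 1 GB user / 3 GB kernel
CONFIG_PAGE_OFFSET=0xC0000000     # kernel mapping starts at the 3 GB mark
```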

64-bit means 64-bit CPU register width and 64-bit memory addresses.
4 GB of memory will not be enough for much longer, but in my experience big integers are not useful in desktop programming.

Hey Now Jeff,
I’m so shocked to learn the 64-bit transition will be the last of our lifetime. Your reasoning makes sense; I just never heard that before or thought about it. I agree with you on the 4 gigs of RAM, I just wish older notebooks had the space to physically place it.
Coding Horror Fan,
Catto

@ Mark Smith:
Tiger had no problems addressing 32 GB of RAM

It kinda did. Tiger could run 64-bit processes, but apps could not really be 64-bit, because the app libraries were still 32-bit. In other words, you could only access all your RAM by either running more than one memory-intensive 32-bit app, or by writing a console 64-bit app. That’s no longer the case; you can now be 64-bit throughout. And you can compile your app for 32 and 64 bit simultaneously - there’s a setting in Xcode. And most (but not all) of the apps that come with Leopard are 64-bit, which you can see in Activity Monitor as terabytes of VM space.
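
For anyone curious what that Xcode setting boils down to, here is a minimal command-line sketch of building a combined 32-/64-bit (“universal”) binary with Apple’s toolchain of that era; the file names are made up for illustration:

```
# Build one executable containing both a 32-bit and a 64-bit slice
gcc -arch i386 -arch x86_64 -o myapp main.c

# Check which architectures ended up inside it
lipo -info myapp
# -> Architectures in the fat file: myapp are: i386 x86_64
```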

The real bit of kick ass work is that the kernel can use the old drivers. This is something Microsoft failed to figure out how to do.

G’Day from Dublin, Ireland.

You will probably never know how tiring many of those questions seem to me. I recently got a new monitor and was so frightened of it that it remained, unopened, in its box for three days before I dared look in. It took two days’ research to work out which graphics card my computer needed.

Must study your excellent blog more often.

But why is less memory usable when you only have 2 GB of RAM?

If I’m not mistaken, the problem is that if you have 3 GB of RAM, normally you’ll be using the /3GB switch in Windows so that an application can use it. Usually Windows splits the virtual address space into 2 GB for the application and 2 GB for addresses to shared system resources. With /3GB, it’s 3 GB for the application and 1 GB for the system.

So if your video card is now taking most, or all, of that 1 GB of system address space, you’re in big trouble. All of a sudden the OS is being squeezed out. Even though you have more RAM, your PC is actually slower and starving for RAM.
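
For context, the /3GB switch mentioned above is set in boot.ini on 32-bit Windows XP/2003. A sketch (the partition path and OS description will differ on your machine):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```

Keep in mind that a 32-bit application only sees the extra gigabyte if it was linked with the /LARGEADDRESSAWARE flag; otherwise the switch just shrinks the kernel’s share for no benefit.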

The real bit of kick ass work is that the kernel can use the
old drivers. This is something Microsoft failed to figure out how to do.

They didn’t want to. They wanted a pure 64-bit kernel and an opportunity to drop support for older drivers and APIs.

I don’t know how many times I’ve run out of resources in 32-bit
world – literally, context menus would refuse to show,
Internet Explorer wouldn’t render, I’d have to reboot just to function.

This is not a 32-bit vs. 64-bit issue. You were just running out of space in a table called the Desktop Heap, which you can tweak in the registry. It’s a fixed-size table of handles in Windows.

a href="http://www.techarp.com/showarticle.aspx?artno=238pgno=1"http://www.techarp.com/showarticle.aspx?artno=238pgno=1/a

I’m sorry, but I have to ask: for what reason do you think a desktop computer should have more than 2 GB of memory?
I have 2 GB in mine, but I have never seen software that could make good use of it. IMHO, if a program “needs” that much, it is badly written.

I know that in the case of servers it is reasonable, but we are talking about desktop PCs. AFAIK there’s only one PC game that is labelled with a 2 GB recommended memory size (Crysis, and yes, I think it’s badly written).

Any hint?

Peter, what part(s) of Crysis should be optimized to lower the memory requirements?

Honestly, your tone is so arrogant and spoiled as to be horrid.

4 GB is commonplace? No, I don’t think so.
I have it, but I don’t personally know one other person who does. And I work at the tech end of the tech field. So please, nix the overconfidence in your own words.

Secondly, Vista is a pile of dog poo. I tried it and went right back to XP. Of course, that’s just on my VM, because who uses Windows for real anymore anyway? Not since Linux came around.

Oh wait, don’t like my tone? Sound familiar?

What’s all this fuss about a 64-bit OS on the desktop? Solaris has been 64-bit on the desktop since 1998.

I have 2 GB in mine, but I have never seen software that could make good use of it

Virtual machines, of course :)

It’s an interesting notion that 64-bit might be the last transition in our lifetimes. The 64-bit address space sure is vast.

But consider the 32-bit address space. In human terms, a billion addresses is literally unimaginably huge. Yet we’re fast approaching the day when 3 or 4 billion addresses won’t be enough. That’s not because 3 billion is in any sense a small number, though. It’s not (in principle) difficult to do most computing tasks in a lot less space than that.

So why is a few GB no longer enough? Largely because memory is cheap, developer time is expensive, and so we have (quite rationally) built a tool-set, an ecosystem, and an engineering culture that regularly trades boatloads of dirt-cheap storage for human convenience.

That’s a trend that doesn’t seem to be slowing at all. Even if it’s never possible to build a machine that actually contains 2^64 bytes of storage, there may be new approaches to OS/software design that gobble up that address space in order to make something else easier.

We’ve got a similar situation with the IPv4 address space, where an address space that once seemed unimaginably vast was carved up very inefficiently, simply because allocating addresses efficiently is a lot more work.

Linux migrated at least two years ago, Jeff… This post is so biased toward a Windows user’s perspective that it is hilarious…

Secondly, how about worrying about the 16-bit software… when did that really leave Windows?

I cannot believe the “technically literate” rubbish here. It matters nought to me how good Unix/Linux might be, or how bad Windows might be. Every single one of my clients runs Windows. I never get asked to write software for Unix, or Mac either. I could swap to Unix and feel very smug, but I would starve to death.
Back to the issue. I recently updated my desktop to 64-bit and 8 GB of RAM. It was overkill, but memory is cheap. I run a 4-core processor, following a similar spec to the machine you built for Scott Hanselman. It is great for running a few virtual machines, which you need when you have clients still running Win 98 and others on XP or Vista.
I think 64-bit and a lot of RAM is future-proofing yourself, especially out here in Australia, where prices are high compared to the US and Europe. I don’t want this PC to become outdated too soon.

dvk wrote:

“By the way, Linux had full support for x86-64 for a couple of years already.”

A COUPLE of years?! I was using 64-bit Linux back in 1996 on a DEC Alpha! I think it went 64-bit in 1993. That means Linux has been 64-bit for 14 years. And there are no driver problems. Everything that I have ever used in 32-bit works in 64-bit. This is because the Linux world distributes everything as source. For most things, going 64-bit just means a recompile. Occasionally a minor code tweak is necessary, but those were all done 14 years ago. No driver drama for Linux!

@Reimar
Your assertion about true 64-bit chips is false. You are correct when writing about Intel, but not when writing about AMD.

I have been taken off guard by the strange responses to this one. For example (per santana), the fact that someone writes Windows-specific posts does not mean he is “just another writer who thinks he knows what’s up but doesn’t have a clue.” It means he has “a clue” about dealing with Windows. Whether or not you believe Windows should exist is irrelevant: it does. Like Steve S, I have many moments of disdain for the OS, but I choose to learn as much as I can about dealing with it because there is a reason to. Perhaps you do not have this reason, and perhaps that would also mean you have no clue but think you do?

We are now at the breaking point of needing 64-bit for the desktop. The standard memory size needs to double 32 times before we reach the limit of 64-bit addressing. Is 18 months a reasonable time for memory to double? Well then, in 48 years most of us will be either dead or retired. 128 bits, you say? Bah! I remember when…
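
That back-of-the-envelope figure checks out. A quick sketch, assuming memory capacity really does double every 18 months starting from today’s 32-bit/4 GB baseline:

```python
# Doublings needed to go from 2^32 bytes to 2^64 bytes of addressable memory
doublings = 64 - 32

# At one doubling every 18 months, years until 64-bit addressing is exhausted
years = doublings * 18 / 12
print(years)  # 48.0
```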