Due to fallout from a recent computer catastrophe at work, I had the opportunity to salvage 2 GB of memory. I installed the memory in my work box, which brings it up to 4 gigabytes of RAM-- 4,096 megabytes in total. But that's not what I saw in System Information:
Brandon Paddock wrote "To those claiming Linux doesn't have this problem: You're wrong. A 32-bit Linux OS will have the exact same limitations, right down to the kilobyte."
Jeff:
Your math is incorrect. In both calculations, you forget to convert from bits to bytes (8 bits = 1 byte). Also, you're not considering memory paging. Without paging, x86 processors using a 32-bit address word can only access 0.5 GB of memory. See http://spreadsheets.google.com/ccc?key=pYK6MlUiNyheNUOvbGxggNQhl=en_US for the math. NOTE: Clicking the sheet names at the bottom displays different calculation sets.
FWIW, I completely agree with Ian (and others) when he says that the issue isn't quite as facile as Jeff's thesis makes it seem. Look at Mac OS X: it's (now) a 64-bit kernel, but the maximum amount of RAM a MacBook Pro will take is 4GB, while a MacBook will only handle 3GB. The Mac Pros, however, handle up to 16GB of RAM. Different chipset, same OS = different specs. Apple simply didn't include the extra hardware to allow higher memory configurations.
Jae:
x86 is a processor architecture. "N-bit" (where N is 16, 32, 64, etc.) refers to the word size a processor operates on. x86 processors began with 16-bit word sizes, moved to 32, and are moving to 64.
One (very inexact) way to think about it is to say that CPUs are like internal combustion engines. They may differ in how the bits fit together, how many bits there are, or even how quickly they do their jobs. But, basically, they're all pretty much alike: fuel comes in, combustion happens, the piston moves, and the wheels turn.
Word size can be likened to the number of pistons an engine has; more pistons mean more fuel is combusted (work is done) every cycle. Also, relevant to the above discussion, a larger word size means more physical address lines are available (32 more, in this case). See the above link for the math behind this.
If your hardware is physically capable of addressing more than 4GB of RAM (this can be limited in the chipset), then you can access up to 64 gigs of RAM with Linux in 32-bit mode. Using PAE, which is invisible to applications, you just don't have to worry about system memory; if you've got it, you can use it.
However, I think there's still a 2GB limit per process, 3 if you use a special boot parameter to the kernel. You can run a lot of 2GB processes, but can't run a single 32-gig program, as far as I know.
If, of course, you have a 64-bit CPU, you can use as much RAM as you like. Programs can easily use all the RAM too, but they do have to be recompiled for 64-bit. 32-bit compiles are still limited to 2 or 3 gigs.
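To make the PAE-versus-pointer-size distinction concrete, here's a minimal C sketch (the figures in the comments are the usual x86 numbers, not measured on any particular box):

    #include <stdio.h>

    int main(void) {
        /* PAE widens the *physical* address to 36 bits (2^36 bytes = 64 GB),
           but a 32-bit build still uses 4-byte pointers, so a single process
           can only see 2^32 bytes (4 GB) of *virtual* space -- minus whatever
           the kernel keeps for itself, hence the 2-3 GB per-process figure.
           A 64-bit rebuild gets 8-byte pointers and that ceiling goes away. */
        printf("sizeof(void *) = %zu bytes -> %zu-bit virtual addresses\n",
               sizeof(void *), sizeof(void *) * 8);
        return 0;
    }

Compiled as a 32-bit binary it prints 4 bytes / 32-bit addresses; compiled 64-bit (e.g. gcc -m64) it prints 8 / 64.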
Linux is a pretty damn good solution, overall, but the 32- to 64-bit transition is painful on all OSes. Linux runs 32-bit code quite nicely when the kernel is 64-bit, but you can't use a 32-bit plugin, like Flash, with a 64-bit Firefox. You have to be aware of the differences and work around occasional problems.
It'll probably be least visible on the Mac. They go to great pains to hide the 32- versus 64-bit problems.
Shows that XP is limited to 4GB whereas Datacentre is limited to 128GB! Once you start dealing with kit in this range ($80K+) you want to have all your RAM available. (Why do you want 16GB? Well, if you have a 20GB database and it gets hammered, you want it in RAM.)
The desktop heap is set to 48MB on 32-bit systems! That's all, and on servers where processes can run as users, it's a problem. The limit is about 90 users (each starting out with 512KB used) on a box, as more than this (48/.5 = 96) starts to hit the usage by system/service accounts. See http://support.microsoft.com/kb/169321 on how Windows can start a new process in a new Windows desktop.
Very good article. I work on BIOS for a living, and I have to give this explanation to testers all the time.
Which platforms are more likely to support remapping over 4GB? Servers. Most desktops and notebooks won't support it.
If the memory controller will support remapping memory above 4GB, then you're fine. The "memory hole" will get pushed up over 4GB. If not, then you lose that memory to all of the PCI devices on the system.
On AMD Athlon/Turion/Opteron platforms, the memory controller is in the CPU. On Intel platforms, this is the MCH/GMCH (also called a "northbridge").
"In any 32-bit operating system, the physical address space is limited, by definition, to the size of a 32-bit value."
No... the /virtual/ address space is limited to 32 bits. It's possible to have more /physical/ memory than this, and it's even useful if you want to run more than one large process at a time. PDP-11s did this, AFAIR.
I don't know enough about Windows or Intel architecture processors to know whether this is possible in that environment, though.
Do you need 4 gigs of RAM on a desktop machine, though?
Well, a few years ago I would have laughed at you, but seeing how 2 gigs actually makes a difference in Vista due to the improved memory management, I'd have considered going for it... if it didn't mean I had to scrap my 4 x 512 sticks. But I guess even that's off the table until people stop faffing around with 32-bit desktop applications and decide they're bored of it.
And the 36-bit PAE Intel extensions are what you're thinking of, but they're still a nasty hack. Far better to lace up your boots and go 64-bit now that AMD blazed the trail.
I'm getting confused with all these comments, eheh. Isn't there a difference between physical memory and virtual memory? Can't the x86 architecture handle enormous amounts of virtual memory?
What I'm being taught in college is that certain memory addresses are reserved for ROM and I/O devices. And since the physical limit is 4GB, if you have addresses that are fixed to devices other than what we know as RAM, then you won't get to use all of those 4GB, because a portion of them isn't even addressable as RAM.
That's what I understand, anyway. But then I heard that there is virtual memory, and that's where those pagefiles come into play, I guess. I still don't understand it very well.
When I built up my current PC I chose "Windows XP x64 Pro" for this very reason (plus I didn't see the point in restricting my shiny new 64-bit CPU).
Sadly support for XP x64 has been mediocre at best. Some major hardware players (e.g. Netgear) have outright refused to produce drivers for it. Others have produced cut-down software that functions, but misses important features when compared to the 32-bit version.
Even Microsoft, who marketed x64 as meeting the "increasingly sophisticated demands for making home movies, working with digital photographs, using digital media", have not issued x64-compatible versions of their "Color Control Applet" or the "RAW Thumbnail Viewer" - both of which would be very useful add-ons for people working in exactly those fields.
Given this experience I am watching Vista 64-bit with quiet interest. A quick trawl around various hardware websites shows that most drivers offered are labelled as Vista 32-bit only. Is history set to repeat itself?
"It's the final solution, at least for the lifetime of everyone reading this blog post today."
You really want to say something like that? I remember getting a 16k expansion for my Apple II+ (cost $200 and brought me to 64k). I would bet that 64-bit isn't enough to last the next 40 years, probably not the next 20.
Graham, I expect the driver situation to be much better with Vista x64. The 64-bit version of XP was a weird, out-of-band release that never had proper levels of support. XP was released in 2001, and x64 XP was released in, what, 2005? That's weird.
But with Vista, x64 is a first-class citizen, released at the same time alongside all its x86 brethren.
Of course it'll still take time. Some companies still don't have plain old x86 drivers for Vista that are any good. But within the next 2 years, I expect x64 driver support to be at rough parity with x86.
I'm a little disappointed with my recent install of Windows Vista (32-bit). I have three gigs of RAM (2x1GB + 2x512MB) but the BIOS and Vista only recognize 2GB (the 2x1GB sticks). When I run applications like CPU-Z in Windows, it recognizes all four sticks. This isn't the same issue, but it's kind of related. Wondering if anyone has any idea what's going on? I was thinking the sticks were just not compatible, but I'm not positive.
"Suffice it to say that we won't be running out of physical or virtual address space on 64-bit operating systems for the foreseeable future. It's the final solution, at least for the lifetime of everyone reading this blog post today."
Eh? I remember when the Mac II came out, the article in MacWorld being amazed at the vast 4 gigabyte address range addressable by its 32-bit processor and proclaiming that we'd never hit that limit. Not to mention the old (if apocryphal) 640k-should-be-enough-for-anyone chestnut. You're falling into the same trap.
Starting with 4GB, and assuming Moore's Law holds constant, we hit 8192GB in 16 and a half years. I'm 27 now, and I damn sure intend to still be alive when I'm 44.
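The back-of-the-envelope math, assuming memory sizes double every 18 months (the usual Moore's Law shorthand), as a tiny C sketch:

    #include <stdio.h>

    int main(void) {
        /* Count how many 18-month doublings it takes to go from 4 GB of
           installed RAM past the 8192 GB mark. */
        double gb = 4.0;
        int doublings = 0;
        while (gb < 8192.0) {
            gb *= 2.0;
            doublings++;
        }
        printf("%d doublings x 1.5 years = %.1f years\n",
               doublings, doublings * 1.5);   /* 11 doublings -> 16.5 years */
        return 0;
    }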
Julius:
"Isn't there a difference between physical memory and virtual memory? Can't the x86 architecture handle enormous amounts of virtual memory?"
Yes and yes (depending on your definition of enormous, I suppose).
Physical memory is just that: the "sticks" you physically plug into slots inside your computer. Virtual memory, in a nutshell, refers to the method of an OS presenting non-contiguous memory as if it were contiguous. It's an abstraction layer. Often this means that some of the non-contiguous memory is actually a file on the hard disk, "swapped" into and out of RAM as it's needed.
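If you want to see that abstraction in action, here's a minimal POSIX sketch (Linux/Mac style; Windows would use MapViewOfFile instead) that maps a file into the process's virtual address space and reads it through an ordinary pointer:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        /* Map a file into this process's virtual address space, then read it
           through a plain pointer. The kernel pages the bytes in from disk
           on demand -- the same machinery behind the pagefile/swap. */
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return 1; }
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
        printf("first byte of %s, read as memory: 0x%02x\n",
               argv[1], (unsigned char)p[0]);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }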
"What I'm being taught in college..."
Sounds like they're telling you the truth. At boot time, some peripherals are assigned memory locations that the CPU reads from and writes to. This happens dynamically now (Plug and Play), but in the past folks had to set the memory addresses themselves via DIP switches. That sucked. Reassigning these memory locations at runtime isn't normally done (never, as far as I know).
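If you're curious and on Linux, you can actually look at which physical address ranges the devices have claimed; a small sketch that just dumps /proc/iomem (on newer kernels you may need root to see the real addresses):

    #include <stdio.h>

    int main(void) {
        /* /proc/iomem is the kernel's map of physical address space: RAM,
           plus the ranges claimed by PCI devices, the video aperture, ACPI
           tables and friends -- the stuff that eats into the 4GB window. */
        FILE *f = fopen("/proc/iomem", "r");
        if (!f) { perror("/proc/iomem"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }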
"But then I heard that there is virtual memory, but that's where those pagefiles come into play, I guess."
Pages (pagefiles) may be files on disk or regions of memory or even something else (Vista lets users plug in a USB thumb drive to increase system memory). They don't have to be files on disk.
You say [quote]"To be perfectly clear, this isn't a Windows problem-- it's an x86 hardware problem. The memory hole is quite literally invisible to the CPU, no matter what 32-bit operating system you choose. The following diagram from Intel illustrates just where the memory hole is: [image]"[/quote]
and then show an image of a DOS-compatible memory layout. As far as I'm aware, the only OS to actually use this kind of layout would be from a certain company in Redmond. I know none of my machines have a DOS compatibility region in place, so I would say this is in fact a Windows issue, not an x86 one.
Jeff, I do hope things will improve, but I won't be convinced that 64-bit has hit mainstream until I see major OEMs shipping PCs with Vista 64 installed as standard.
Personally, I wish Microsoft had only released a 64-bit version. After all, it is supposed to be the OS for the next generation of PCs, isn't it? And I doubt many of them will be 32-bit.