Running XP with the pagefile disabled

If you have 2 gigabytes (or more) of memory in your PC, have you considered turning off your pagefile? Here's how to do it: open System Properties, go to Advanced, click Settings under Performance, go to Advanced again, click Change under Virtual memory, select "No paging file", click Set, and reboot.


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2005/10/running-xp-with-the-pagefile-disabled.html

The classic problem with not having a page file is that the commit kills you.

For many kinds of memory allocations, Windows commits to providing the memory asked for, meaning that if the program actually touches every page it asked for, the system will be able to provide the memory.

With a page file, this is easy - the system merely has to make sure it never commits more than the sum of the physical memory and the page file size.

Without a page file, it can never commit more memory than you have physical memory.
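
To make "commit" concrete, here is a minimal Win32 sketch (my own illustration, not from the thread; the 512MB figure is arbitrary). The commit charge is consumed by the MEM_COMMIT call itself, before a single page is touched:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 512 * 1024 * 1024;  /* 512MB: an arbitrary test size */
    void *reserved;

    /* Reserving address space costs no commit charge at all. */
    reserved = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_READWRITE);
    if (!reserved) return 1;
    printf("reserved %lu MB at %p\n", (unsigned long)(size >> 20), reserved);

    /* Committing is what counts against the limit - immediately, even
       though no page has been touched yet. With no pagefile, the limit
       is roughly physical RAM, so this is the call that fails first. */
    if (!VirtualAlloc(reserved, size, MEM_COMMIT, PAGE_READWRITE))
        printf("commit failed: error %lu\n", GetLastError());
    else
        printf("committed (still no physical pages in use)\n");

    VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}
```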

Why's that a problem? Well, I was told by a guy I know who used to work for SCO years back (remember when that was a brand of UNIX rather than a lawsuit factory?) that the OS typically needs to commit a whole lot more than is actually used.

So the net result is that turning off the page file means that 10-20% of your memory goes unused because the OS had to commit it just in case the application used all the space it asked for.

However…

First, I'm not sure if the commit:actual usage ratio in Windows happens to match the circa-1990 ratio in SCO Unix.

But more importantly, Windows is able to use memory which is technically committed but not actually in use. It uses it in at least two ways. First, when DLLs get unloaded, the memory they used goes onto the free list, but the OS tracks the fact that the memory still contains the DLL image. (Unless the memory ends up being used, in which case it gets zeroed and then moved out of the free list…)

Second, this committed but unused memory can be used for the filesystem cache. SCO UNIX couldn’t do that. (Or at least it couldn’t back in 1990.) File system caching was done in a fixed-size preallocated chunk of memory.

The file system cache is a pretty significant performance feature, and in practice, I would expect it usually uses a much higher proportion of your memory than the committed-but-unused memory. (Right now, my filesystem cache has about 1GB in it!)
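
If you want to check those numbers on your own box, here is a small sketch of mine using the psapi GetPerformanceInfo call (available on XP and later); it prints the system cache size alongside the commit figures:

```c
#include <windows.h>
#include <psapi.h>   /* link with psapi.lib */
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    double mb;

    if (!GetPerformanceInfo(&pi, sizeof(pi))) return 1;

    /* All the counters are in pages; PageSize converts them to bytes. */
    mb = pi.PageSize / (1024.0 * 1024.0);
    printf("system cache : %.0f MB\n", pi.SystemCache   * mb);
    printf("commit charge: %.0f MB\n", pi.CommitTotal   * mb);
    printf("commit limit : %.0f MB\n", pi.CommitLimit   * mb);
    printf("physical RAM : %.0f MB\n", pi.PhysicalTotal * mb);
    return 0;
}
```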

So in practice, the classic concerns about why you need a pagefile to get the most out of your physical memory probably no longer apply.

The possible exceptions you describe would correspond to an abnormally high commit:used ratio.

I don't see any benefit - only risks. Sure, the stuff that gets paged out because I never use it might load faster, if I need it. But wait, I never need it. Oh well.

What this says to me is that by forcing the OS not to page anything out, I'm dedicating part of my physical RAM ($200/GB) to it instead of using disk ($1/GB). Not to mention the hard limit of 4GB of RAM, while I have 1.3TB of disk. And the free bonus offer with this is that it might make my system less stable, and some programs might crash more often.

In my case I actually use the 2GB I have - I run 7z with the 128MB dictionary (uses 1.3GB of RAM when compressing, 128MB decompressing), as well as Photoshop (which would like more than 2GB if I had it) and a couple of DB apps (which again would like more than 2GB). Bring on the AMD64 motherboards and OSes that take 8GB or more.

So in practice, the classic concerns about why you need a pagefile to get the most out of your physical memory probably no longer apply.

This is what I am thinking too… although not having a pagefile may end up being a negligible perf benefit. I have yet to see anyone show any hard data on this, other than “look ma, no pagefile!”

It is cool though. :wink:

I don’t see any benefit - only risks.

Yes, but some of these risks are getting fairly theoretical. I say try it and see what happens… breaking stuff is FUN!

I run 7z with the 128MB dictionary (uses 1.3GB of RAM when compressing, 128MB decompressing)

And you get, what, maybe 0.005 better compression from this? I’ve experimented with the 7zip “ultra” setting and it’s always a tremendous disappointment. Way, way more time spent for a tiny, tiny fractional improvement in compression over “normal”:

http://www.codinghorror.com/blog/archives/000313.html

I routinely disable the pagefile for all of my computers and fresh installs, even for machines with as little as 512MB of memory.

Theoretically, Windows should move pages to the swapfile only when needed, but the apparent reality is that Windows will move pages from applications that have been unused for a while, even if memory isn't scarce.

You can try this yourself: open up a bunch of applications, then walk away from your computer. Return after a few minutes and then click each taskbar icon in turn. Watch Windows thrash.
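
You can also force that same trim on demand instead of walking away. A quick sketch of mine, using the documented SetProcessWorkingSetSize(-1, -1) trick, which empties the calling process's working set and then times the refaults:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const DWORD size = 64u << 20;  /* 64MB test buffer */
    char *buf;
    DWORD i, t0;

    buf = (char *)VirtualAlloc(NULL, size, MEM_COMMIT, PAGE_READWRITE);
    if (!buf) return 1;
    for (i = 0; i < size; i += 4096) buf[i] = 1;  /* fault every page in */

    /* -1/-1 means "remove as many pages as possible": they leave the
       working set for the standby/modified lists, and dirty ones may be
       written to the pagefile - just like an idle, minimised app. */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    t0 = GetTickCount();
    for (i = 0; i < size; i += 4096) buf[i] = 2;  /* fault them back */
    printf("refaulting took %lu ms\n", GetTickCount() - t0);

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```

Refaults straight from the standby list are cheap soft faults; it's only under memory pressure, when pages have actually gone to disk, that clicking each taskbar icon makes the machine thrash.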

On the other hand, disable the page file, and your computer will remain responsive at all times. And, being a power user, I'd rather get an "Out of memory" error telling me that I'm overloading my system than have the pagefile silently let me get away with it. Not that I ever got that message - I haven't had a single problem since running without a page file, and that's been about a year now.

Bottom line: I’d rather have a more responsive computer, so no pagefile for me.

I run 7z with the 128MB dictionary (uses 1.3GB of RAM when compressing, 128MB decompressing)

OK, so I tried this on the VMware "Browser Appliance" image files (817MB):

http://www.vmware.com/vmtn/vm/browserapp.html

WinZip max portable – 206.6MB

This was EXTREMELY fast. Around 3x faster than 7zip normal.

7zip normal – 145.9MB
7zip ultra – 135.2MB (-7% size, 2x time)
7zip "hyper" – 131.3MB (-10% size, 5x time)

"hyper" used the settings you listed and took 1.2GB of memory while compressing, and it actually did cause me to get an out-of-memory error in other apps (!)

Who cares if it takes 5 hours to do it? I have spare machines at work for that

Then you shouldn’t care if the paging file is disabled on your box, right? :wink:

re: 7zip

Yeah, I did a pile of testing with various tools. I use a 2GB dual-core box, so multithreaded compression helps me. WinRK was about 10x slower than 7zip in my early tests, and SBC can't do files over 2GB, so I didn't test it. Sample below:

21,474MB 20GB-XP-VS-Of-D7.vhd
4,868MB 20GB-XP-VS-Of-D7.vhd.gz
4,583MB 20GB-XP-VS-Of-D7.vhd.bz2 (5% smaller than .gz)
4,155MB 20GB-XP-VS-Of-D7.rar (10% smaller than .bz2)
3,830MB 20GB-XP-VS-Of-D7.7z (8% smaller than .rar)

I use this to put 10GB+ VPC/VMware images onto a single DVD. To do that I clean up temp files, defrag, set the system to zero page file on shutdown, zero the free space, then compress.

My standard WinXP+ VS2003+ MSDN+ Delphi2005+ code goes on one DVD that way (it’s a 20GB virtual disk). So two or three copies of the single DVD make it unlikely that I’ll have a failure. With two DVDs per set, I’d need 8 copies to get the same security as 3 single disk copies.

The backup/live storage issue is also pretty significant - I have about 5 live images, and another 20-odd archived on the hard disk. Getting those rarely-used but important images down to the smallest possible size is quite useful. Who cares if it takes 5 hours to do it? I have spare machines at work for that :wink:

I’m a big fan of archiving release points, or any other significant thing (just before we upgrade a 3rd party component or tool, for instance). Smaller images = more points archived.

Your OS will use your pagefile before you're anywhere near close to using up all of your memory. If it waited for you to run out of RAM before using it, then the things in your pagefile would be whatever you'd just loaded, which is ridiculous: that's the stuff you need to run fast. What Windows does is move things that don't look like they're being used into the pagefile, to free up physical memory. What this means, though, is that you can easily find yourself using the pagefile when you actually have plenty of spare RAM.

My home PC is switched on all the time, and I was sick of all my minimised windows (which is sometimes all of them) moving into the page file. I shouldn't have to sit around waiting for my hard disk to spin up just to restore something that I've already loaded. I've stopped this problem now by switching off my pagefile (I have 1.5GB of RAM and leave things like Flash open while I'm playing F.E.A.R.), but if you're worried about doing that, I suggest you at least set your pagefile to a 2MB default size with a much larger maximum, so it normally doesn't get used.

Weirdly enough, XP still uses virtual memory even if you DISABLE swapfiles completely. This is easily noticed when running dxdiag: on my system it shows 236MB used of 1560MB, although I have shut the swap files off 100%.

It's sort of annoying having 2GB of RAM, telling my OS NOT to use any disk paging, and watching it obviously still do what it wants.

Is there another, better way of switching the pagefile off?

Dopeshow:
Are you sure? Is there a pagefile.sys on one of your drives? It seems to me that once you’ve disabled your pagefile, any statistics about it refer to your actual RAM instead. So a reported PF usage of 250MB actually means a RAM usage of 250MB. I find it quite convenient ;).

It has happened to me once, though, that Windows spontaneously started generating a pagefile (pagefile.sys and all), even though I had disabled it.

Still don’t know what caused that…

Update: I have reverted to a standard "system managed" pagefile after running for about a month without a pagefile, both at home and at work. Although I have 2GB of RAM on both systems, I would occasionally encounter some rare, bizarre behavior that I couldn't explain.

Since nobody can quantitatively define what, exactly, we're gaining by running with the pagefile off (benchmarks, anyone?), I no longer think running with the pagefile disabled is worth the risk.

So, interesting experiment, but ultimately I think people are better off with the default pagefile. Until someone presents compelling perf benchmarks that show benefit, it’s too risky.

Hi,

I know this post is a bit late, but I thought you might want to read this post by Larry Osterman, about the risks of running without a paging file:

"Mea Culpa (it’s corrections time)"
http://blogs.msdn.com/larryosterman/archive/2004/05/05/126532.aspx

I’m unconvinced by the points regarding the way Windows pages out applications that are idle.

The reason I'm unconvinced is not because I don't think this happens. On the contrary, I know that it does happen. The key point is that I also happen to know that a lot of this 'paging out' doesn't actually use the swap file.

When Windows decides to trim the working set of a program, it doesn’t necessarily evict pages to the swap file. For some pages it has another option that has thus far been ignored in this discussion: it may be able to simply free the page without writing it anywhere!

Typically the working set of a process contains a lot of read-only pages: the code that makes up the application. The OS can evict these pages without touching the page file. ‘Swapping out’ such a page consists of moving it to the free list. It doesn’t write the contents to disk.

After all, why would it write the page anywhere? It has a copy of the original page sitting right there on disk, in the DLL it came from. If that page needs to be swapped back in, it'll just reload the original from disk.

So the only pages that get swapped out to the swap file are (1) dynamically allocated memory such as the heap or stack, (2) writable data segments that have actually been modified (Windows supports a copy-on-write scheme), and (3) code pages containing relocations in DLLs that had to be rebased when loaded. (I guess they could technically redo the relocations, but my understanding is that this doesn't happen.)
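
Here is a sketch of that distinction (my illustration; "big.dat" is a placeholder file name): pages from the read-only file mapping are backed by the file itself, while the private committed page, once dirtied, can only be evicted through the pagefile. DLL code pages work like the mapped view, backed by the DLL image:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE f, m;
    const char *view;
    char *priv;

    /* File-backed, read-only: these pages can be dropped at no cost and
       reread from big.dat itself - the pagefile is never involved. */
    f = CreateFileA("big.dat", GENERIC_READ, FILE_SHARE_READ,
                    NULL, OPEN_EXISTING, 0, NULL);
    if (f == INVALID_HANDLE_VALUE) return 1;
    m = CreateFileMappingA(f, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!m) return 1;
    view = (const char *)MapViewOfFile(m, FILE_MAP_READ, 0, 0, 0);
    if (!view) return 1;
    printf("file-backed view at %p, first byte %d\n",
           (const void *)view, view[0]);

    /* Private committed memory: once written, these pages can only be
       evicted by writing them to the pagefile. With the pagefile off,
       they stay pinned in RAM until freed. */
    priv = (char *)VirtualAlloc(NULL, 1 << 20, MEM_COMMIT, PAGE_READWRITE);
    if (!priv) return 1;
    priv[0] = 42;  /* dirty the page */
    printf("private dirty page at %p\n", (void *)priv);

    VirtualFree(priv, 0, MEM_RELEASE);
    UnmapViewOfFile((const void *)view);
    CloseHandle(m);
    CloseHandle(f);
    return 0;
}
```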

So I wouldn’t expect switching the page file off to stop idle processes from having pages evicted. It would just change which pages got evicted.

In fact it might make matters worse. The OS would be unable to evict dynamically allocated pages like the heap or stacks, meaning it would have to get more aggressive about evicting other pages.

In the 'I have a pagefile' situation, the OS can choose to evict any page it likes. If it needs to trim 10 pages out of my application due to memory pressure elsewhere, and it detects that I don't seem to be using 5 pages in the heap and 5 more pages from read-only sections, then it can evict those 5 and 5 respectively. But if I don't have a pagefile, it has to take all 10 from the read-only sections even though there were only 5 I wasn't using…

That’s not going to improve matters.

It's really UGLY to see guides just for Windows XP; there are still stability fans who use Windows 2000. Can anyone please explain how to disable the paging file through the Registry Editor?
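
For what it's worth, the XP dialog just edits a registry value that exists on Windows 2000 as well. A rough sketch of doing it programmatically (my own; export the key first, run with admin rights, and reboot for it to take effect):

```c
#include <windows.h>
#include <stdio.h>   /* link with advapi32.lib */

int main(void)
{
    const char *path = "SYSTEM\\CurrentControlSet\\Control\\"
                       "Session Manager\\Memory Management";
    const char empty[2] = { 0, 0 };  /* an empty REG_MULTI_SZ */
    HKEY key;
    LONG rc;

    rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE, path, 0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) return 1;

    /* PagingFiles is a REG_MULTI_SZ; a normal entry looks like
       "C:\pagefile.sys 1536 3072" (min and max in MB). An empty
       multi-string means "no pagefile anywhere". */
    rc = RegSetValueExA(key, "PagingFiles", 0, REG_MULTI_SZ,
                        (const BYTE *)empty, sizeof(empty));
    RegCloseKey(key);

    printf(rc == ERROR_SUCCESS ? "done - reboot to apply\n" : "failed\n");
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```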

The way I see it, good or bad as things seem, WindowZ requires a pagefile (PF), and depending on WHAT applications are used, its size should be relative to the amount of RAM and PF combined. I don't mean 1.5x or 2x RAM; I mean, e.g., if you have 2GB of RAM, not much paging goes on, so you could have a smaller PF of, say, 512MB! Once your RAM use goes over the top, you still have that extra 512MB of PF. It's the way I've been using systems for the last 3+ years without any problems. There is the odd case where a larger RAM-gobbling program like, e.g., Premiere would want a little more RAM (but still wouldn't go over the 2GB of RAM), but having just the extra 512MB for 'WindowZ's sake' is not a problem.

The set-up:

  1. Partition a disk with slightly over 512MB, because WindowZ needs a part of it for "internal affairs" (say 533MB, and label it "swap" ;-).
  2. Disable the pagefile on your system drive and enable it only on that newly created partition. Min and max? I say 128 min, 512 max - or try other values and see the difference.
  3. Start using this solid and stable WindowZ setup.

Now, this configuration should work for most PCs with 1GB to 2GB of RAM, always depending on WHAT applications are being used, but in some cases it might not be the best choice! Why? Background programs, ANTI-virus and all other sorts of resource-demanding software could make this the wrong config - none of which I have ;P.

The system:
WindowZ XP SP2
AMD FX-55, 2GB RAM, ASUS AI-SLI M/board,
256MB GeForce 6800 Ultra
80GB Raptor 10000rpm
SB Live 5.1
HP1210 PSC
Camera

The applications (actively used):
Firefox, Thunderbird, OpenOffice, The GIMP, Nero 7.2, Premiere Pro 7.0, Alias Maya 7.0, a bunch of 3D games like Doom 3, FarCry, HL2, CS, POPWW, SCCT; multimedia: M$N Msgr, MediaPlayerClassic and other minor apps used in between (no antivirus or anything of the kind - I utilise all user-type securities, no Admin rights!)

Tip: Try to use applications that don't need installation or that use very little system resources!

I think most of the posts I read come from people who are assuming the page file works correctly. My problem is that I have a heavily RAIDed system (not by my choice; the server I am assigned is a 4-disk, completely RAID system). Using any of this for swap space seems to kill performance. I have noticed on ALL Windows systems I have viewed a rather large page file usage WITH PLENTY OF RAM AVAILABLE.

The page file is not just used when the system runs out of RAM. So maybe this memory is just allocated and not used, but if I have 1GB of RAM available and 512MB paged, wouldn't it be more efficient to use the RAM and still have 512MB available? This is not what I have seen.

OK, so my main problem is on a Windows 2003 server running MS SQL 2005. I have turned off the page file completely and have 4GB of physical RAM, yet the reported page file usage almost always matches whatever physical RAM is in use, even with the page file supposedly gone and disabled and physical RAM available. If the server is using 1GB of RAM, the PF usage is (about) 1GB. This behaviour seems inconsistent with the majority of tech posts I read, which assume PF usage will only be up if you are low on physical RAM. Of the dozens of tech posts I read, it seems like only one or two people recognize this behaviour. Is this "OS not listening to instructions" specific to Server 2003?

On a high-RAM XP Pro PC I checked, I see 156MB of PF usage even though there is 1.5GB of physical RAM still available.

Hmmmm. Obviously foot in mouth. To be honest, this is the first post I have read where the majority of people recognize this PF usage being reported when there is plenty of RAM.
Is there any reference article by MS that says the PF usage reported when there is no PF is actually RAM, or are we just assuming? I am just curious, because I still don't believe it is not using a PF, based on the performance I see. The I/O still seems to be hit hard.

“In the Task Manager utility under Windows XP and Windows Server 2003, the graphical displays labeled “PF usage” and “Page File Usage History” are actually reflecting not the pagefile contents but the total (or current) commit charge. The height of the graph area corresponds to the commit limit. Despite the label, these do not show how much has actually been written to the pagefile, but only the maximum potential pagefile usage at the moment. In Windows 2000 and Windows NT 4.0, these same displays are labeled “Mem usage” but again are actually showing the commit charge.”

http://en.wikipedia.org/wiki/Commit_charge
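
A small sketch of my own that shows the same mislabeling at the API level: the "page file" fields of GlobalMemoryStatusEx actually report the commit limit and commit charge, which is why they stay populated even with the pagefile disabled.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    if (!GlobalMemoryStatusEx(&ms)) return 1;

    /* Despite the names, these report the commit limit and the commit
       charge, not bytes actually written to pagefile.sys. */
    printf("commit limit : %llu MB\n",
           (unsigned long long)(ms.ullTotalPageFile >> 20));
    printf("commit charge: %llu MB\n",
           (unsigned long long)((ms.ullTotalPageFile - ms.ullAvailPageFile) >> 20));
    printf("physical RAM : %llu MB\n",
           (unsigned long long)(ms.ullTotalPhys >> 20));
    return 0;
}
```

With no pagefile, the reported commit limit collapses to roughly the physical RAM size, and the "PF usage" graph in Task Manager is just tracking committed memory.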