Why Does Vista Use All My Memory?

Windows Vista has a radically different approach to memory management. Check out the "Physical Memory, Free" column in my Task Manager:


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2006/09/why-does-vista-use-all-my-memory.html

I’m all for more performance. (Who wouldn’t be?) But super-aggressive brainiac schemes like this don’t appeal to me, at least, not in theory.

Responsiveness is important in a UI, sure. But another thing that’s important is predictability, which makes a system more learnable, something a user can become familiar with and adapt to.

If Vista is guessing what apps I’ll want to load at a particular time of day or on a particular day of the week, its responsiveness will change when my routine is broken.

I think that’s bad. A major app that always takes a few seconds to start is OK. It would be hugely irritating to have major apps usually start up almost instantly but sometimes take many seconds.

With hybrid flash-cached hard disks and pure-flash hard disks supposedly on the horizon, the benefits of such a complicated scheme may be short-lived anyway.

This performance dip will go away as people programming games learn to deal with Vista. This is only a problem for the first generation of things; it will go away, probably quickly. If Linux is suddenly the best option for running XP and older games, M$ is going to be in for a big fat surprise.

All it really would take for them to lose their home market dominance in a few years would be WoW / Starcraft / Warcraft III not running as smoothly on Vista as it does on Linux.

So the question becomes: why do they bother having the graph if it’s going to read 100% all of the time?

The second question becomes: how do you know how much memory is really being used, and how much is a cache that the OS will throw away if needed?

Linux systems have a similar philosophy - any file read goes into the cache until memory is full (though Linux lacks the SuperFetch approach). But at least you can easily tell what’s really used and what’s cache.
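
On the Windows side you can pull a similar breakdown programmatically. Here’s a minimal sketch using the Win32 GetPerformanceInfo call from psapi (my own example, nothing from the post); it reports physical memory, the system cache and the commit charge, all counted in pages:

```cpp
// Minimal sketch: system-wide memory figures on Windows via GetPerformanceInfo.
// Build as C++ and link with psapi.lib.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        fprintf(stderr, "GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }
    const SIZE_T pageKB = pi.PageSize / 1024;  // the counters are in pages
    printf("Physical total:     %zu MB\n", pi.PhysicalTotal     * pageKB / 1024);
    printf("Physical available: %zu MB\n", pi.PhysicalAvailable * pageKB / 1024);
    printf("System cache:       %zu MB\n", pi.SystemCache       * pageKB / 1024);
    printf("Commit total/limit: %zu / %zu MB\n",
           pi.CommitTotal * pageKB / 1024, pi.CommitLimit * pageKB / 1024);
    return 0;
}
```

Note that the ‘available’ figure already includes the standby (cache) pages that can be dropped on demand, so a big cache number by itself doesn’t mean you’re running out of memory.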

Caching… hm, well…
Maybe Vista is better at this, but I’m currently using XP with 2GB, and for me XP sucks up most of the memory for its system cache. Especially when doing lots of file activity it uses every bit of memory, and after a while the computer just gets sluggish. It looks like it can’t handle the size of the system cache and doesn’t throw things out… so I reboot the system, and it acts normally again…
I really hope this will work better in Vista…

As long as there’s some sort of manual control over what to prefetch (similar to how the system tray will guess your commonly used tasks, but still give you manual control over what shows), then it could be kinda neat.

Fast startup times since it asynchronously loads after boot, combined with fast app launch times for Mozilla and WoW… that’s about all I’d want anyway.

Oh come on you’re clearly covering up the fact that memory dealers are in bed with Microsoft to sell unnecessary RAM to poor old grandma.

But seriously, I’ve noticed this happening more with OS X with each release as well (though others may call it bloat, I swear it’s just caching things in memory more aggressively).

That said, I’ve often wondered: since there is a miss penalty for caches, is preloading main memory in a cache-like manner only going to increase thrashing for the many end users who may still somehow be surviving on less than a gig or two of RAM under XP?

e.g. if you have 4GB of RAM you can pretty safely start throwing stuff in there and still have room to spare. With a more paltry (normal?) 512MB it seems like you’d have to have a pretty smart and judicious algorithm for managing what goes in there. Though I suppose we do have decades of cache population/eviction theory to rely on. It’s just that the penalty of swapping to disk is so tangible, and even audible.
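
To make the ‘eviction theory’ part concrete, the textbook baseline is plain least-recently-used. A toy sketch, purely illustrative (SuperFetch’s actual policy is surely far more elaborate than this):

```cpp
#include <list>
#include <string>
#include <unordered_map>

// Toy LRU cache: the most recently touched items sit at the front,
// and the least recently used item is evicted once capacity is exceeded.
class LruCache {
public:
    explicit LruCache(size_t capacity) : capacity_(capacity) {}

    void touch(const std::string& key) {
        auto it = index_.find(key);
        if (it != index_.end())
            order_.erase(it->second);      // forget the old position
        order_.push_front(key);            // newest at the front
        index_[key] = order_.begin();
        if (order_.size() > capacity_) {   // evict the coldest entry
            index_.erase(order_.back());
            order_.pop_back();
        }
    }

    bool contains(const std::string& key) const {
        return index_.count(key) != 0;
    }

private:
    size_t capacity_;
    std::list<std::string> order_;  // ordered from MRU to LRU
    std::unordered_map<std::string, std::list<std::string>::iterator> index_;
};
```

The bookkeeping is the easy part; the hard part is deciding what to populate the cache with in the first place, which is where all the usage-prediction cleverness comes in.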

I think any perceived dip in performance, per Jeff’s anecdotal game performance story, is unlikely to have anything to do with SuperFetch. There is no way to know that SuperFetch itself is causing this difference - it could be any number of complex side effects that cause this to be the observed behavior. After all, there are thousands of differences in Vista, any of which could be impacting the behavior directly or indirectly.

The issue SuperFetch looks to address is this:

In a demand-paged virtual memory system, things get flushed out to disk automatically based on needing to make more physical memory available. That is, less recently used pages get pushed to the page file to make more room available. A user is affected by this when they go to lunch and something low priority (or periodic) runs, causing all the desktop apps and data to page to disk. However, when that operation finishes, nothing proactively swaps anything back INTO physical memory. Therefore you have a bunch of free physical memory after returning from lunch, because everything is now idle, including the background processes that ran… It takes the user returning from lunch and trying to click on the MS-Word window or something to swap stuff back in… again on demand.

SuperFetch looks to address this proactively for the user by attempting to bring things back into physical memory automatically, using some clever usage statistics to predict what you are going to do when you return to the computer.

This is strictly done on an opportunistic basis, as a low priority operation itself, and should never affect normal priority behaviors. Anything proactively restored to physical memory can easily be discarded since it is already backed by the pagefile… no differently than the case where you were already using MS-Word and simply switched to a new application.

The only cases where SuperFetch should be able to affect anything negatively are the following pathological ones:

-It predicts wrong at the exact instant you were going to do something else and it starts to chew up some physical memory you all of a sudden need for something else. This impact should be trivial as SuperFetch’s actions will cease as soon as you do something else more important.

-The added size and complexity of the operating system needed to implement this feature and track statistics and such. Again, this should be fairly trivial in the grand scheme of things.
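
As an aside on the ‘low priority’ point: Vista exposes the same idea to applications. This is just a sketch of that application-visible mechanism, not a claim about SuperFetch’s internals:

```cpp
// Sketch: marking a thread's work as "background" on Vista or later.
// THREAD_MODE_BACKGROUND_BEGIN lowers the thread's CPU, I/O and memory
// priority so it yields to whatever the user is actively doing.
#include <windows.h>

void DoOpportunisticWork() {
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

    // ... warm caches, scan files, gather statistics, etc. ...

    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
}
```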

On my RC1 build, it initially appears that you can start/stop SuperFetch just like any other service.

As far as having manual control over what’s cached and what’s not, I personally don’t care. The L1 and L2 caches have served us pretty well through the years without us nerds micro-managing what is and isn’t cached. I’m not approaching SuperFetch any differently.
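
For anyone who does want to experiment, stopping it programmatically is just the usual Service Control Manager dance. A sketch, assuming the service is registered under the name "SysMain" with the display name "Superfetch" (run it elevated, or just use services.msc instead):

```cpp
// Sketch: stop the Superfetch service via the Service Control Manager.
// Assumes the service name is "SysMain" (display name "Superfetch").
// Requires administrative rights; link with advapi32.lib.
#include <windows.h>
#include <cstdio>

int main() {
    SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
    if (!scm) {
        fprintf(stderr, "OpenSCManager failed: %lu\n", GetLastError());
        return 1;
    }
    SC_HANDLE svc = OpenServiceW(scm, L"SysMain", SERVICE_STOP | SERVICE_QUERY_STATUS);
    if (!svc) {
        fprintf(stderr, "OpenService failed: %lu\n", GetLastError());
        CloseServiceHandle(scm);
        return 1;
    }
    SERVICE_STATUS status = {};
    if (ControlService(svc, SERVICE_CONTROL_STOP, &status))
        printf("Stop requested; current state: %lu\n", status.dwCurrentState);
    else
        fprintf(stderr, "ControlService failed: %lu\n", GetLastError());

    CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return 0;
}
```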

So it is a service that you can go and shut down?

Also, if you look under Processes, does it show how much memory it is taking up?

Did gameplay speed up after the first few frags? How about windowing out of the game, hitting IE7, and then going back to the game? Are there bad performance hits for that?

Caches buy you less and less the bigger they are. I just can’t imagine what it could even populate 2GB of RAM with. I don’t think I even have 2GB of applications on my computer. Maybe if it cached data files too, but even then I’d be hard pressed to come up with 2GB of data I use on a regular basis.

This would be a tool I’d definitely want to play around with before enabling. Loss of game performance to keep grandma’s cookie recipe in memory at all times doesn’t seem like a very good trade-off to me.

Jeff, you need to read “Windows Internals, Fourth Edition”. Task Manager in XP is a big fat liar. Windows Vista’s Task Manager is not comparable.

The figure quoted as ‘available’ in XP is the sum of the zero, free and standby lists. The ‘system cache’ figure is the sum of the system cache working set (the physical memory used by the file system cache, plus the physical memory used by pageable code and data in drivers, plus the kernel’s paged pool) and the standby and modified lists. Your screenshot shows the double-counting: Available + System Cache is 1.5 times physical memory!

When a page is taken out of (trimmed from) a working set, it isn’t immediately reused. Instead it is put on either the modified (if modified since last written to the page file or memory-mapped file it belongs to) or standby list. Links are kept from the process to the page. If the process then references the page again before it’s paged out, it causes a page fault to occur but Windows can satisfy it simply by fixing up the page table entry - this is termed a ‘soft fault’.

So how do pages get onto the other lists? Modified pages become standby pages by being written out to disk - a couple of background threads do this, mapped file pages are written after a maximum of five minutes on the list, and all pages start to be written after the modified list reaches 800 pages (about 3MB). Standby pages become free pages as the balance set manager (a thread that wakes up every second) runs, if the free list is too small. Finally, free pages become zero pages as the system zero page thread (runs at priority 0 and therefore only runs when at least one processor is idle) writes zeros to free pages.

When allocating physical memory Windows prefers zero pages for private user-mode page allocations, and free pages for kernel mode or mapped-file page allocations. (The use of zero pages is a security requirement to prevent processes being able to read other processes’ data). If the zero page list is exhausted, Windows will use the free list and zero the page on demand; if the free list is exhausted for a free page allocation, it will then use the zero page list. If both lists are exhausted it will then try the standby list - it then needs to unlink the page from its original process at this point - and in the pathological case that that is empty, it has to take a modified page, write it out, unlink it, zero it, and then give it to a process.

This is why these lists exist - Windows has done work when idle to ensure that there is physical memory available on demand.

Speaking of on-demand: when VirtualAlloc is called to allocate memory, no physical memory is actually allocated. The ‘commit charge’ is simply incremented. It reserves space in the page file to make sure it can’t overcommit on memory, but this reservation is a logical one - nothing is written to the page file. Only when a page is touched, and a page fault ensues, does a physical memory page get allocated (and the data loaded from disk if touching a memory-mapped file).

This commit charge is the value actually shown on the ‘PF Usage’ meter in XP’s Task Manager, and in ‘Commit Charge’ total. ‘Limit’ is the sum of all page files plus physical memory minus whatever Windows can’t page out.
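
You can watch the reserve/commit/touch distinction for yourself with a little test program (a sketch, error checks omitted; keep Task Manager open and compare the commit charge against the process working set at each pause):

```cpp
// Sketch: committing memory raises the commit charge but allocates no
// physical pages; only touching the pages faults them into the working set.
#include <windows.h>
#include <cstdio>

int main() {
    const SIZE_T size = 256 * 1024 * 1024;  // 256 MB

    // Reserve address space only: no commit charge, no physical memory.
    char* p = static_cast<char*>(VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS));

    // Commit: the commit charge (PF Usage meter) goes up, but the
    // process working set barely moves.
    VirtualAlloc(p, size, MEM_COMMIT, PAGE_READWRITE);
    printf("Committed, not touched - check Task Manager, then press Enter\n");
    getchar();

    // Touch one byte per page: now page faults allocate physical memory
    // and the working set grows.
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    for (SIZE_T i = 0; i < size; i += si.dwPageSize)
        p[i] = 1;
    printf("Touched - check Task Manager again, then press Enter\n");
    getchar();

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```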

There’s no limit to how large the standby list can grow. The file system cache has a limit on its virtual address size of approx 300MB IIRC, but since this is implemented as a working set, pages discarded from the cache go on the standby or modified list! However, if files are opened as sequential scan, they go on the front of those lists, not the end, so are likely to be reused more quickly.

While on the subject of Task Manager, the Process tab’s ‘Mem Usage’ column is actually the process’s working set. However, this includes a lot of shared pages from system DLLs, so the sum of ‘Mem Usage’ normally is significantly larger than physical memory. It’s impossible to tell if your program has a memory leak from this column. Switch on the ‘VM Size’ column (actually process private bytes) to monitor this, although this won’t necessarily drop when you free from a heap.
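
If you’d rather read those two numbers from code than from Task Manager, psapi exposes both. A sketch for the current process (PrivateUsage is roughly what the ‘VM Size’ column shows):

```cpp
// Sketch: working set vs. private bytes for the current process.
// The working set includes shared pages (system DLLs etc.); private bytes
// (PrivateUsage) is the better number to watch for leaks.
#include <windows.h>
#include <psapi.h>   // link with psapi.lib
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS_EX pmc = { sizeof(pmc) };
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc))) {
        printf("Working set:   %zu KB\n", pmc.WorkingSetSize / 1024);
        printf("Private bytes: %zu KB\n", pmc.PrivateUsage   / 1024);
    }
    return 0;
}
```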

So XP normally has a lot of moderately-recently-referenced data sitting in memory (but not recently enough to keep it in the working set) - well, once the system has been running for a while. The difference is that Vista is actively preloading data that it thinks you might use soon.

It is legitimate to have a blog on BlgSpt, you know! (couldn’t post originally due to URL)

A few questions.

First, even though it says only 6MB free in the text of Task Manager, the graph shows 905MB in use vs 334MB back in XP. Is the OS truly using that much more memory?

Second, why is your page file 4GB?

I wonder what effect Vista will have on the life expectancy of RAM…

hurts realtime game performance badly

I wouldn’t say “badly”. It’s just annoying for the first minute or two of gameplay. The game is plenty playable.

I should also note that I’m preternaturally sensitive to hitches in framerate.

2GB a mainstream memory configuration? Maybe among us techies, but not for 95% of the PC-buying population.

I’ve no evidence to back that up other than what I see non-developer friends buying, and they generally get machines with 512MB in them.

I also share some of Aaron G’s concerns. Memory intensive applications such as video editing and image editing are optimised around the OS’s current memory management strategies. Until they’re updated there may well be some performance drops.

Time will tell I guess.

This is very interesting, but can you tell me why my system doesn’t FEEL any faster?

I just can’t imagine what it could even populate 2GB of RAM with.

Just wait for Office 2007, or Visual Studio 2007 :)

I wonder if this has any effect on power consumption (higher RAM usage), especially on notebooks?