If you want Vista to perform, you need the 64-bit version. Trying to run with the 32-bit version just blows.
Am I gonna be the only bloke on the block to think (and admit to thinking) that Vista, even before SP1, feels a whole lot faster than XP on the same hardware? Vista doesn’t make me wait a few seconds before allowing me to move windows or showing me the context menu I asked for. Redrawing the screen after opening the start menu doesn’t take minutes even if I am stressing my system with a whole lot of background processing. Opening help from Visual Studio doesn’t lock up the system for two to three minutes, and the built-in task switcher (not Flip 3D) is about a zillion times more accessible. In my humble opinion, Vista knocks the socks off of XP. Some stuff might take a bit longer to finish, but almost nothing locks up the entire system anymore. Vista has its flaws, but from my Mac OS X user viewpoint, it sure is better than XP on too many levels to stick to XP.
Just my two cents.
At a complete tangent to this, Vista virtual memory management is one area which is terrifically improved in performance/behavior compared to XP, but AFAICT almost nobody has noticed.
I’ve written some trivial test programs which try to allocate huge amounts of memory, with and without writing to it, then free the memory and repeat. Playing with the options in this arena and running multiple instances of this program makes it clear that the Vista VM has stepped up nearly to the quality of traditional Unix or Linux virtual memory management. Memory required for the OS is prioritized, memory for the active application is secondary, and any memory not allocated to either is apparently used for file buffer cache (but is instantly discardable if needed for application space). This is the way it should have been working for the last 8-10 years. Big win.
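For anyone who wants to try something similar, the test is roughly the following sketch (the block size, iteration count, and the decision to touch every byte are arbitrary; the real programs just vary these knobs):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rough sketch of the allocation test: grab a big block, optionally touch
   every byte so it is actually committed, free it, and repeat. */
int main(void)
{
    const size_t block = 512u * 1024u * 1024u;   /* 512 MB per pass, arbitrary */
    for (int pass = 0; pass < 10; pass++) {
        char *p = malloc(block);
        if (p == NULL) {
            printf("pass %d: allocation failed\n", pass);
            break;
        }
        memset(p, 0xAA, block);   /* skip this line for the "no write" variant */
        printf("pass %d: allocated and touched %zu bytes\n", pass, block);
        free(p);
    }
    return 0;
}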
Unfortunately… this is all completely invisible to the end-user, except in so far as they may notice, “Hey, Task Manager says there’s almost no free RAM! Vista is a memory pig!”
Great article, Jeff.
I’m not a professional developer, but I’m often involved in projects as an analyst or manager, and your point of view is really useful for avoiding a lot of “dead ends” during analysis and testing.
Thank you.
Great article… could you also explain why Vista takes forever to unzip large (10 MB) zip files compared to XP? Vista’s built-in unzip function takes almost two to three times as long as WinRAR.
The irony is that years ago, Microsoft put a lot of emphasis on perceived vs actual performance in their MCSD courses.
Re: copying buffers (Franz Bomura)
In that case (cd /path/to/sourcedir && tar -cf - *) | (cd /path/to/targetdir && tar -xf -) should be the fastest tree copy in the world.
No, I don’t use that much anymore on local disk, but I use a variant of it for network copying, as latency doesn’t matter anymore when doing it that way.
It’s exactly how I thought it would be.
Seeing those progress bars go fast and then hang at the end is just frustrating.
Doing it the other way around would surely be better. A bit like accelerating in a car: that’s where you feel you “go fast”.
Well, the next service pack will make the dialog box disappear two seconds after it appears and VOILA, problem solved.
Every winders user will be impressed as heck about the new super-fast-copy feature.
File copy sucks in XP AND Vista. The main reason is that Microsoft doesn’t utilize memory; it uses a copy buffer of 4KB. Try copying a file of 200MB: it is tremendously slow, because the OS has no clue it can stream the file into memory and then write it back to disk.
The main problem with file copies is that disk head stepping is the true delay you’re running into, so the No. 1 thing a copy algorithm should do is MINIMIZE the stepping of the disk head. If you don’t read an entire file into memory before writing it, you’ll waste time stepping the head back and forth between source and destination. This is extremely expensive compared to the write speed.
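Roughly, here is a minimal sketch of the kind of copy loop I mean, reading a large chunk of the source before writing any of it, so reads and writes each happen in long sequential runs (the 64 MB chunk size is just an example):

#include <stdio.h>
#include <stdlib.h>

/* Sketch: copy through one large buffer so the head isn't stepping between
   source and destination every few kilobytes. Chunk size is arbitrary. */
int copy_file(const char *src, const char *dst)
{
    const size_t chunk = 64u * 1024u * 1024u;   /* 64 MB */
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    char *buf = malloc(chunk);
    int result = -1;

    if (in != NULL && out != NULL && buf != NULL) {
        size_t n;
        result = 0;
        while ((n = fread(buf, 1, chunk, in)) > 0) {
            if (fwrite(buf, 1, n, out) != n) { result = -1; break; }
        }
    }

    free(buf);
    if (out != NULL) fclose(out);
    if (in != NULL) fclose(in);
    return result;
}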
That said, file copies being slow on Vista is also caused by DRM check code, at least on some types of files.
So, Jeff, when I copy files on XP or Vista on the command line, you’re sure the copy action on Vista is done faster than on XP? Or exactly the same (except with MP3s)?
This copy-speed thing strikes me as a rather unbalanced trade-off, but it’s something only Microsoft can blame themselves for.
At some point in their dev cycle, they said, “Hey, we can make this seem faster by letting stuff finish in the background.” Sure, there were plenty of cases where someone might see the dialog disappear and unceremoniously cut the power rather than wait for a proper shutdown (though in my experience people are already walking away at that point and don’t think much about shutdown speed, except in the case of a lengthy reboot, when they go back to thinking about dropping the ceremony again).
Unfortunately, by making this choice back then, they set the bar for this, and possibly other, file I/O operations. Go find a RAID controller that can do both write-through and write-back modes and run some benchmarks. Pretty painful, right?
It strikes me that a lot of the performance penalties people see in Vista can be boiled down to “fixing the performance hacks done before.” Accumulate enough of those with feature bloat and some new/redefined APIs that people haven’t optimized for, and there you go.
Then there’s the fact that they’re also a victim of loading up more and more services to account for every possible configuration out of the box. (Oh, I dunno, Distributed Links? Never used 'em, even back in XP. The DHCP client running even if all my network interfaces have static addresses. And what does the Browser service do again?)
My Vista box is slow to boot and usually throws a ton of crap at me. I’m especially fond of the “This NVIDIA driver you installed doesn’t seem to be compatible, even though everything’s been fine for weeks. Let us replace it with the default NVIDIA driver we have” dialog. (Hint: “Later” gets rid of the dialog; “Cancel” throws me into a fail-safe driver at VGA resolution.) This is not a more pleasurable experience for me. I love coming home to a computer I thought I’d put into hibernation, only to find it has rebooted and is giving me a bunch of dialog boxes telling me it had bluescreened. (Yes, really. If it happens to you, look at the report they want to send home. The error code really is “bluescreen”.)
I’m giving SP1 one month to convince me there was some reason for throwing a huge chunk of pointless interface changes and pandering at me. After that, I’m wiping the box for something else, probably Slackware.
"it uses a copy buffer of 4KB"
That should have been 64KB, my bad.
Additionally: with a lot of small files, it also doesn’t understand that you can read them sequentially into a buffer, say 64MB, and write them sequentially after the buffer is full.
What’s the point of having a single copy engine which has a truckload of tradeoffs for all possible scenarios? Why not have a couple of parameter sets, one per scenario which needs attention, which makes the copy operation truly faster? Local partition copies, or same-disk-different-partition copies, are horribly slow under Windows. If this is because the same engine is used for network copies as well, the user doesn’t care; the user wants the fastest copy action ever for that scenario. If local-disk copies are slow because the copy engine is designed to work in all scenarios, it’s a bad engine: it should know (it can know, so why not use the info) if the copy action will be slowed down by a small buffer size. So we have these 2-4 GB of memory in our system but aren’t using it and are using 64KB buffers for file copies… clever…
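To make the small-file point above concrete, a rough sketch of batching: read many small files back to back into one big staging buffer, then write them all out in one sequential pass (names, the 64 MB size, and error handling are all simplified, and each file is assumed to fit in the remaining buffer space):

#include <stdio.h>
#include <stdlib.h>

#define BATCH (64u * 1024u * 1024u)   /* 64 MB staging buffer, arbitrary */

typedef struct { const char *name; size_t off; size_t len; } entry;

/* Read up to 'count' small source files sequentially into one big buffer. */
static size_t load_batch(const char **names, int count, char *buf, entry *out)
{
    size_t used = 0;
    for (int i = 0; i < count; i++) {
        FILE *f = fopen(names[i], "rb");
        out[i].name = names[i];
        out[i].off = used;
        out[i].len = (f != NULL) ? fread(buf + used, 1, BATCH - used, f) : 0;
        used += out[i].len;
        if (f != NULL) fclose(f);
    }
    return used;
}

/* Write the batched files out one after another, in one sequential pass. */
static void flush_batch(const char *dstdir, const entry *e, int count, const char *buf)
{
    char path[1024];
    for (int i = 0; i < count; i++) {
        snprintf(path, sizeof path, "%s/%s", dstdir, e[i].name);
        FILE *f = fopen(path, "wb");
        if (f == NULL) continue;
        fwrite(buf + e[i].off, 1, e[i].len, f);
        fclose(f);
    }
}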
This is only true when using Explorer to copy files. I never use Explorer to copy files, as it always was bad and unreliable. So all these UI tricks are not important to me. But performance of the CopyFile OS call is important, because it shows what we can achieve with this kernel, if we use the right shell.
Very interesting reading! But when I read the PDF paper, something caught my eye. Now, I’ll be the first to admit that I’m no math wizard, so please correct me if what I’m about to say is wrong.
The progress function for Inverse Power is f(x) = 1 + (1 - x)^1.5 * -1. As far as I can tell, this function will fail when x > 1. This is due to the part (1-x)^1.5. This is the same as saying (1-x)^(3/2) or Sqrt((1-x)^3). When x > 1, then (1-x)^3 will always be negative, and a negative number doesn’t have a square root.
Am I way off track or what?
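For what it’s worth, a quick check of that same formula at a few points agrees: within the intended 0-1 progress range it behaves fine, and it only breaks once x exceeds 1, where pow() returns NaN because the base 1 - x has gone negative:

#include <stdio.h>
#include <math.h>

/* Evaluate the "inverse power" curve f(x) = 1 + (1 - x)^1.5 * -1 at a few
   points; pow() returns NaN as soon as 1 - x is negative. */
int main(void)
{
    const double xs[] = { 0.0, 0.5, 1.0, 1.1 };
    for (int i = 0; i < 4; i++) {
        double x = xs[i];
        printf("f(%.1f) = %f\n", x, 1.0 + pow(1.0 - x, 1.5) * -1.0);
    }
    return 0;
}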
I have noticed that the speed of continuous animations also affects the user’s perception.
Take the continuous animations on www.Ajaxload.info: depending on which one you use, users seem to think the process was quicker.
I think the Vista way of showing the copy dialog until the file is flushed to disk is the wrong approach. I would like the copy to be instantaneous. The system can do some filesystem magic to make the copy appear at the right place immediately so I can continue work.
If I need the file to be flushed, I can use fsync or umount or something similar.
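(On a Unix-y system that would look roughly like the sketch below; I assume the Windows-side equivalent would be something like FlushFileBuffers, but the point is only that durability can be an explicit request rather than something every copy waits for.)

#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Write a buffer to a file and only then force it to stable storage. */
int write_durably(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
    if (fsync(fd) != 0) { close(fd); return -1; }   /* now it is really on disk */
    return close(fd);
}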
“Doing it in the background” is just a magical statement. The question is how? It can be done poorly or well, just like a foreground algorithm.
Why don’t they use a system similar to the Linux slab allocator for committing chunks of files to the file system?
I mean a cached-copy daemon that would preallocate files of different sizes, and thus drastically reduce the cost of the write-to-file step caused by filesystem cluster allocation.
If I were to implement it in the most basic way, I would ask the program for the size to download and, in return, give it the address of a shared memory zone backed by a preallocated slab of a good size. I’d split it into two contiguous zones, and each time one half is full I’d commit that half to the file (like a ping-pong buffer in video), so with the beauty of the % operator the program would seem to write continuously. If writing is far easier than committing, I’d use a ring buffer instead, and I’d use semaphores to prevent problems.
Ideally, the program would not copy downloaded data into the memory zone, but use the memory zone as the place to download the data into, avoiding a useless memcpy.
If my file is, let’s say, bigger than my biggest slab, I’d merge the chunks in the background.
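A bare-bones, single-threaded sketch of that ping-pong idea (the half size, the callers, and the synchronous fwrite are all simplifications; a real daemon would commit halves on another thread behind the semaphores mentioned above):

#include <stdio.h>

#define HALF (4u * 1024u * 1024u)   /* 4 MB per half, arbitrary */

/* One slab split into two halves: the producer fills one half while the
   other is committed to the file; the % operator makes the two halves look
   like one continuous stream. The final partial half still needs a flush
   at the end of the transfer. */
static char slab[2 * HALF];
static size_t written = 0;          /* total bytes produced so far */

static void commit_half(FILE *out, int half)
{
    fwrite(slab + (size_t)half * HALF, 1, HALF, out);
}

void produce(FILE *out, const char *data, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        slab[written % (2 * HALF)] = data[i];
        written++;
        if (written % HALF == 0)    /* one half just filled up */
            commit_half(out, (int)((written / HALF - 1) % 2));
    }
}

Ideally, as said above, the downloader would land its data straight in the slab instead of produce() copying it byte by byte.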
Is it dumb? I have the feeling that most current Windows buffer algorithms are dumb because they fail to understand that “one size doesn’t fit all.” A good algorithm for small files (i.e. the web pages and images that are 80% of today’s file traffic) fails for bigger files; the Linux slab preallocator is smarter since it implements various strategies depending on size and use (okay, I’m taking it outside of its real usage context).
I have done C# systems programming, and I am pretty confident that it could be done in a matter of weeks, since I have coded a multi-threaded agent in four weeks.
As a matter of fact, the idea being patent-free (as far as I know), why don’t they steal interesting free software ideas instead of making war on them?
Well, I think I have the solution to all of this…
In XP, file copies used to be fast for one simple reason: after the file had finished copying, the user thought “Wow, that 34781125 days seemed to pass in no time at all!”. Whereas in Vista, it tells you that the copy will take 4 hours and it actually does.
In all seriousness though, no amount of twiddling with user perception is going to change the fact that copying a file on XP takes 30 seconds whilst copying the same file on Vista took 36 minutes. That’s your real, common-or-garden variety of time rather than perceived time, unless my stopwatch was also impressed by how fast the progress bar updated.
Carl
Great article! Perception is key for your users, not necessarily performance. Funny, users pay for software - which is all that really matters in business, right? So if you build a shitty algorithm that appears to work very well (or operating system, or widget), you’ll make boatloads of cash?
Worked for Bill.
WTF? I don’t see how Jeff is supporting Microsoft. What, just because he doesn’t spend half the article slamming them?
This particular post is based on Windows Vista, why should the word Linux come up at all?
Oh, I see, you’re confused, you must have thought you were reading /.