If you've used Windows Vista, you've probably noticed that Vista's file copy performance is noticeably worse than Windows XP's. I know it's one of the first things I noticed. Here's the irony: Vista's file copy is based on an improved algorithm and actually performs better than XP's in most cases. So how come it seems so darn slow?
This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2008/03/actual-performance-perceived-performance.html
Knowing all of this, I definitely prefer Vista's version, because once the dialog is dismissed, I -know- it's finished copying completely, so I can kill the power or something similar.
Why is it that you always seem to support Microsoft? I've never heard of you using Linux. Have you?
Brian: 7-Zip's Explorer integration loves you, for zip files. I don't know what MS did for a zip engine, but it's not efficient in comparison.
Like Leo, I’d also like to know why people think DRM is making Vista copying slow.
I'm not aware of any supported DRM scheme that prevents multiple copies of a *file*, or copying a file around on a disk. DRM schemes are about licensing for *viewing* (i.e., using the contents of the file), after all, in every scheme I've ever encountered.
Maybe because the guy is a .NET dev and makes his money from the Windows ecosystem.
Fascinating stuff, and definitely something to think about when writing my own applications!
I have a pet peeve with all copy dialogs. They're all too terse for my liking, and you would think they could provide a few different views and more than just a cancel button. They shouldn't be tied to an Explorer window either; why stop that window from accepting input?
Copy dialogs are all pretty finicky as well, blowing up and disappearing at the first sign of trouble, leaving the user wondering what happened and what to do next. For a big or important copy job, I never rely on Explorer; instead I use RoboCopy, because I want a guarantee the copy happened, and if there's a problem it handles it and the job can be restarted.
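For what it's worth, a restartable RoboCopy invocation along these lines is the sort of thing I mean (the paths are made up; the switches are standard RoboCopy options, tune the retry counts to taste):

```bat
:: Copy a tree including empty subdirectories (/E), in restartable mode (/Z),
:: with 3 retries (/R:3) waiting 5 seconds between them (/W:5),
:: and a log file you can inspect afterwards.
robocopy C:\source D:\backup /E /Z /R:3 /W:5 /LOG:copy.log
```

/Z (restartable mode) is the key switch here: an interrupted copy can resume mid-file instead of starting over, which is exactly the guarantee Explorer's dialog doesn't give you.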
I don't recall there ever being a problem with XP's copying speed in the first place, certainly not one that warranted a new algorithm. Typical: engineering a fix for a problem that didn't exist.
It all comes down to how good the experience was, and that's what needed improving. No OS gets this right, as far as I can tell. (I haven't used Linux in a while, so feel free to correct me.)
So Vista engineers/managers missed that boat by thinking the problem lay with the algorithm, and instead it lay with the user experience. MS usually has good usability labs, but then they also produce some doozies.
Win some - lose some. Upgrade. Repeat.
Better usability means learning more psychology, not improving performance.
Case in point: the psychologist who is in the top five for the Netflix Prize.
Well, I haven’t got Vista yet, but I’m looking forward to a copy which actually works.
On XP recently I tried to copy lots of files, and the copy failed with some pathetic error like "path too deep" (or was it "couldn't be bothered"?).
Can you tell me if the miserable “I’m trying to move 100 files, but can’t do one of them, so I’ve just given up altogether” problem has been fixed too?
You know, perception isn't everything; I actually want something that works properly.
The main point of the blog post is that one can have a slow process, but if the UI is designed properly, the user may still perceive the process positively. The problem is that it uses Vista's file copy as the example. Vista is not just perceived to be slower at copying files because of a bad UI; it is actually slower, because of an implementation that achieves data safety at the expense of performance.

I frequently download multi-megabyte files from the internet using IE and save them to a network file share. Say I download a couple of 10 MB files simultaneously; that may take about 15 minutes at my data rate to land in my local temp folder. Then, when that portion of the download is complete, the files are copied from my local folder to the network share over a 100 Mb network. This stage of the copy (a total of 20 MB) will peg my local processor at 100% and lock up my computer for over 30 minutes, at least double the time of the initial internet download. I have to just walk away and let it do its thing. This is just not acceptable performance from a modern operating system, and I have no confidence that SP1 will resolve this problem.
To test whether Vista DRM is crippling file copy speeds for media files, simply archive a password-protected copy without compression. Compare copy speeds for the original file and the archived copy (very close in file size) and you'll know the answer. I no longer have Vista, so I cannot test.
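Here's a rough Python sketch of that comparison. Python's standard library can't create password-protected zips, so as a stand-in this compares copy times for a media-sized file against an equal-sized byte-scrambled twin (same size, unrecognizable content); the filenames and the 4 MB size are made up for illustration:

```python
import os
import shutil
import tempfile
import time

def timed_copy(src, dst):
    """Copy src to dst, returning elapsed wall-clock seconds."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "clip.wmv")    # hypothetical media file
obfuscated = os.path.join(workdir, "clip.bin")  # stand-in for the archive

payload = os.urandom(4 * 1024 * 1024)  # 4 MB of pseudo-media data
with open(original, "wb") as f:
    f.write(payload)
with open(obfuscated, "wb") as f:
    f.write(bytes(b ^ 0xFF for b in payload))  # same size, unrecognizable

t_orig = timed_copy(original, original + ".copy")
t_obfs = timed_copy(obfuscated, obfuscated + ".copy")
print(f"original: {t_orig:.3f}s, obfuscated: {t_obfs:.3f}s")
```

If DRM inspection were slowing copies, the recognizable file should consistently take longer than its unrecognizable twin; on a DRM-free system the two times should be indistinguishable (average over several runs, since disk caches add noise).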
The Windows stupidity and breakage continues unabated. And thank God, because our support jobs are safe for another service pack.
perception is everything.
If you decide something was successful, then it was successful. It doesn't matter what the errors on the screen said.
Same the other way around.
Someone can change your perception, but in the end your perception is your (semi)final conclusion
To me it seems funny that this PC Computing magazine needed to do the usability testing on behalf of the companies that deliver the software. Even more hilarious is that the biggest software company in the world failed to do that testing itself. Is it the market position?
The copy dialog is not dismissed until the write-behind thread has committed the data to disk, which means the copy is slowest at the end.
You completely misunderstood Mark's article here. There's NO write-behind thread for file copying in Vista, because it doesn't use the cached I/O infrastructure for copying (where written data is copied to a buffer and the write operation ends as soon as the data is in the kernel's buffer; the buffer is then flushed to disk by the OS kernel without blocking the process any further). It uses direct I/O, which bypasses all caching mechanisms and writes all data directly to the disk, so the process cannot finish a write operation until the data is physically written to the disk.
So it's not true at all that Vista's progress bar is slowest at the end. The problem is simply that XP hides its progress bar BEFORE copying finishes: as far as the Explorer process is aware, copying has finished, except that the kernel really finishes it later behind its back.
One of the most difficult lessons for programmers to learn is that as soon as you start lying/faking things to improve your benchmarks, the benchmarks NO LONGER MEAN WHAT YOU THINK THEY MEAN.
The fact that XP would dismiss the copy dialog before the copy was really done (whether deliberately or accidentally, it doesn’t matter) was both good and bad. Good because it drastically improved perceived copy time. Bad because it confused the issue by dismissing the “in progress” UI before the bits were at the destination. Oops … so now the naive benchmarks don’t actually measure how fast the file copy is, but instead they measure how long until the window disappears. That’s a different benchmark. Still useful, arguably, but it’s measuring a completely different thing.
And once you’ve changed the meaning of a benchmark, you’re now constrained by the new meaning. As Microsoft discovered, you can’t then switch to a faster algorithm which needs the dialog to stay up a little longer… because it will be perceived as slower.
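The difference between those two benchmarks can be sketched in a few lines of Python. This is a hedged illustration of the general cached-vs-durable distinction, not Windows' actual copy engine: a cached copy returns as soon as the OS has buffered the data, while a durable copy waits for `fsync` before reporting success.

```python
import os
import tempfile
import time

def copy_cached(src, dst):
    # XP-style: return as soon as the data is handed to the OS page
    # cache; the "dialog" can vanish before the bits reach the disk.
    with open(src, "rb") as s, open(dst, "wb") as d:
        d.write(s.read())

def copy_durable(src, dst):
    # Vista-style: flush and fsync, so the call returns only once the
    # data is physically committed; slower to "finish", but really done.
    with open(src, "rb") as s, open(dst, "wb") as d:
        d.write(s.read())
        d.flush()
        os.fsync(d.fileno())

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "src.bin")
with open(src, "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))  # 8 MB test file

for copier in (copy_cached, copy_durable):
    dst = os.path.join(workdir, copier.__name__ + ".bin")
    start = time.perf_counter()
    copier(src, dst)
    print(f"{copier.__name__}: {time.perf_counter() - start:.3f}s")
```

Timing `copy_cached` measures "time until the window disappears"; timing `copy_durable` measures "time until the data is safe". Both are legitimate benchmarks, but they are benchmarks of different things, which is exactly the trap the naive Vista-vs-XP comparisons fell into.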
Explorer/Finder is unfortunately one of the most important bits of an OS from the user's point of view. Everyone has to navigate their way around, and even good desktop searching (Google, etc.) doesn't get around that completely.
IMO Vista's Explorer especially misses the boat when compared to Leopard's new Finder on OS X. File copy aside, in Leopard you can view most types of file in the new Finder without launching external programs, and it's very fast (Quick Look and Cover Flow). This is such a boon because often I'm wondering exactly what's in that Word doc, and there's always a disconnect in opening a few files to compare (say they're similar) and then remembering which is which; it's a painful process. (http://www.apple.com/findouthow/guidedtours/leopard.html)
Then again, I'm always pissed at my Mac with Tiger (or whatever comes before Leopard) because it treats images just like any other file and gives me a small icon. Double-click and it opens in Preview (the equivalent of Adobe Reader), which doesn't let you easily look at other images. Even MS with XP had this done right for a long time (filmstrip view and built-in slideshow), and it's taken Apple quite some time to fix the issue. Sure, there's iPhoto (mine is iLife '07), but I don't like it, and I don't want to have to put all my photos into iPhoto just to page through them. But Apple fixed this, so that's good; all I need to do is upgrade.
I'm no Mac fanboy, but the reason Apple's getting more popular is that they work pretty hard to make things seem easy. I like Vista too, and it'll get over these speed bumps. I think we need both OSes, because competition keeps them both on their toes.
You completely misunderstood Mark’s article here.
Nope, I didn’t. The Vista copy is slower at the end than XP, exactly as I stated.
The problem is simply that XP hides its progress bar BEFORE copying finishes
If this is a “problem” then where is all the data corruption and data loss in XP? Can you provide any evidence whatsoever that this is a “problem”?
At any rate, the point is moot, because Vista SP1 switched back to the XP style cached copy.
Actually, I think the whole actual performance / perceived performance issue is prevalent throughout Vista. For example, take SuperFetch and ReadyBoost, which increase your system performance over time based on usage patterns. This means that for a good benchmark you'd need to use the system for, say, a week of day-to-day activities and then compare it to a regular XP benchmark, which is nearly impossible. Many reviewers didn't seem to realise this, especially when Vista first arrived, so many articles compared an unoptimized Vista experience to an optimized XP experience. In my experience (and RAM seems to help here), Vista speeds up considerably over time.
Of course, I know that you know this, but in server-land it's a different issue: when there's no user, it's the actual performance that matters more.