TraumaPony
Knowing all of this, I definitely prefer Vista’s version, because once the dialog is dismissed, I -know- it’s finished copying completely, so I can kill the power or something similar.
Why is it that you always seem to support Microsoft? Never heard of you using Linux, have you?
Brian: 7zip’s explorer integration loves you, for zip files. I don’t know what MS did for a zip engine, but it’s not efficient in comparison.
Like Leo, I’d also like to know why people think DRM is making Vista copying slow.
I’m not aware of any supported DRM scheme that prevents multiple copies of a file, or copying a file around on a disk. DRM schemes are about licensing for viewing (i.e., using the CONTENTS of the file), after all, in every scheme I’ve ever encountered.
Maybe because the guy is a .NET dev and makes his money from the Windows ecosystem.
Fascinating stuff, and definitely something to think about when writing my own applications!
I have a pet peeve with all copy dialogs. They’re all too terse for my liking, and you would think they could provide a few different views and more than just a cancel button. They shouldn’t be linked to an Explorer window either. Why stop that window from accepting input?
Copy dialogs are all pretty finicky as well, blowing up and disappearing at the first sign of trouble, leaving the user wondering what happened and what they should do next. For a big or important copy job, I never rely on explorer, but instead use RoboCopy because I want a guarantee the copy happened, and if there’s a problem it handles it and can be restarted.
I don’t recall there ever being a problem with XP copying speed in the first place, certainly not one that warranted a new algorithm. Typical: engineering a fix to a problem that didn’t exist.
It all has to do with how good the experience was, and that’s what needed improving. No OS provides this, that I can tell. (I haven’t used Linux in a while, so feel free to correct me.)
Cancel
Progress - 12 seconds is an eternity for computers. How many clock cycles do you need before you can start estimating? With all the metadata stats already stored for files and folders, this should be easy and quick enough.
So Vista engineers/managers missed that boat by thinking the problem lay with the algorithm, when instead it lay with the user experience. MS usually has good usability labs, but then they also produce some doozies.
Win some - lose some. Upgrade. Repeat.
Better usability means learning more psychology, not improving performance.
Case in point: that psychologist who is in the top 5 for the Netflix Prize.
Well, I haven’t got Vista yet, but I’m looking forward to a copy which actually works.
On XP recently I tried to copy lots of files, and the copy failed with some pathetic error like “path too deep”, or was it “couldn’t be bothered”?
Can you tell me if the miserable “I’m trying to move 100 files, but can’t do one of them, so I’ve just given up altogether” problem has been fixed too?
You know perception isn’t everything; I actually want something which works properly.
The main point of your blog post is that one can have a slow process but if one develops the UI properly, the user may perceive the process in a positive way. The problem is that he uses Vista file copy as an example to make his point. Vista is not just perceived to be slower in copying files because of bad UI, it is slower because of an implementation that achieves data security at the expense of performance. I frequently download multi-megabyte files from the internet using IE and save them to a network file share. Let’s say I download a couple of 10 MB files simultaneously. This may take about 15 minutes at my data rate to copy to my local temp folder. Then when that portion of the download is complete, the file is copied from my local folder to the network fileshare over a 100Mb network. This stage of the file copy (a total of 20 MB) will peg my local processor at 100% and lock up my computer for over 30 minutes, at least double the time for the initial internet download. I have to just walk away and let it do its thing. This is just not acceptable performance from a modern operating system, and I have no confidence that SP1 will resolve this problem.
To test whether Vista DRM is crippling file copy speeds of media files, simply archive a password-protected copy without compression. Compare copying the original file and the archived copy (very close in file size) and you’ll know the answer. I no longer have Vista so I cannot test.
The Windows stupidity and breakage continues unabated. And thank God, because our support jobs are safe for another service pack.
To me it seems funny that this PC Computing magazine needed to do the usability testing on behalf of the companies that deliver the software. Even more hilarious is that the biggest software company in the world fails to do its own testing. Is it the market position?
The copy dialog is not dismissed until the write-behind thread has
committed the data to disk, which means the copy is slowest at the
end.
You completely misunderstood Mark’s article here. There’s NO write-behind thread for file copying in Vista, because it doesn’t use the cached I/O infrastructure for copying (where written data is copied to a buffer and the write operation ends as soon as the data is in the kernel’s buffer; the buffer is then flushed to disk by the OS kernel without blocking the process any further). It uses direct I/O (which bypasses all caching mechanisms and writes all data directly to the disk, so the process cannot finish a write operation until the data is physically written to the disk).
So it’s not true at all that Vista’s progress bar is slowest at the end. The problem is simply that XP hides its progress bar BEFORE copying finishes (because as far as Explorer process is aware, copying finished – except that the kernel really finishes it later behind its back).
One of the most difficult lessons for programmers to learn is that as soon as you start lying/faking things to improve your benchmarks, the benchmarks NO LONGER MEAN WHAT YOU THINK THEY MEAN.
The fact that XP would dismiss the copy dialog before the copy was really done (whether deliberately or accidentally, it doesn’t matter) was both good and bad. Good because it drastically improved perceived copy time. Bad because it confused the issue by dismissing the “in progress” UI before the bits were at the destination. Oops … so now the naive benchmarks don’t actually measure how fast the file copy is, but instead they measure how long until the window disappears. That’s a different benchmark. Still useful, arguably, but it’s measuring a completely different thing.
And once you’ve changed the meaning of a benchmark, you’re now constrained by the new meaning. As Microsoft discovered, you can’t then switch to a faster algorithm which needs the dialog to stay up a little longer… because it will be perceived as slower.
Explorer/Finder is unfortunately one of the most important bits of an OS, from the user’s point of view. Everyone has to navigate their way around, and even good desktop searching (Google, etc.) doesn’t get around that completely.
IMO Vista’s Explorer especially misses the boat when compared to Leopard’s new Finder on OS X. File copy aside, in Leopard you can view most types of file in the new Finder without launching external programs, and it’s very fast (Quick Look, Cover Flow). This is such a boon, because often I’m wondering exactly what’s in that Word doc, etc., and there’s always a disconnect in opening a few files to compare (say they’re similar) and then remembering which is which; it’s a painful process. (http://www.apple.com/findouthow/guidedtours/leopard.html)
Then again I’m always pissed at my Mac w/ Tiger (or whatever comes before Leopard) because it treats images just like any other file and gives me a small icon. Double-click and it opens in Preview (the equivalent of Adobe Reader), which doesn’t allow you to easily look at other images. Even MS w/ XP had this done right for a long time (filmstrip view and built-in slideshow), and it’s taken Apple quite some time to fix this issue. Sure there’s iPhoto (mine is iLife '07), but I don’t like it, and I don’t want to have to put all my photos into iPhoto just so I can page through them. But Apple fixed this, so that’s good; all I need to do is upgrade.
I’m no Mac fanboy, but the reason Apple’s getting more popular is because they work pretty hard to make it seem easy. I like Vista too, and it’ll get over these speed bumps. I think we need both OSes because competition keeps them both on their toes.
You completely misunderstood Mark’s article here.
Nope, I didn’t. The Vista copy is slower at the end than XP, exactly as I stated.
The problem is simply that XP hides its progress bar BEFORE copying finishes
If this is a “problem” then where is all the data corruption and data loss in XP? Can you provide any evidence whatsoever that this is a “problem”?
At any rate, the point is moot, because Vista SP1 switched back to the XP style cached copy.
Actually I think the whole actual performance / perceived performance issue is prevalent throughout Vista. For example, take SuperFetch and ReadyBoost, which increase your system performance over time based on usage patterns. This means that for a good benchmark you’d need to use the system for, say, a week for your day-to-day activities and then compare it to a regular XP benchmark - nearly impossible, I think. Many reviewers didn’t seem to realise this, especially when Vista first arrived. So many articles were written comparing an unoptimized Vista experience to an optimized XP experience; however, in my experience (and RAM seems to help here) I find Vista speeds up considerably over time.
Of course, I know that you know this, but in server-land it’d be a different issue: when there’s no user, it’s the actual performance that matters more.
-Max
If you want Vista to perform, you need the 64-bit version. Trying to run with the 32-bit version just blows.
Am I gonna be the only bloke on the block to think (and admit to thinking) that Vista, even before SP1, feels a whole lot faster than XP on the same hardware? Vista doesn’t make me wait a few seconds before allowing me to move windows or showing me the context menu I asked for. Redrawing the screen after opening the start menu doesn’t take minutes, even if I am stressing my system with a whole lot of background processing. Opening help from Visual Studio doesn’t lock up the system for two to three minutes, and the built-in task switcher (not Flip 3D) is about a zillion times more accessible. In my humble opinion, Vista knocks the socks off of XP. Some stuff might take a bit longer to finish, but almost nothing locks up the entire system anymore. Vista has its flaws, but from my Mac OS X user viewpoint, it sure is better than XP on too many levels to stick to XP.
Just my two cents.
At a complete tangent to this, Vista virtual memory management is one area which is terrifically improved in performance/behavior compared to XP, but AFAICT almost nobody has noticed.
I’ve written some trivial test programs which try to allocate huge amounts of memory, with and without writing to it, then free the memory and repeat. Playing with the options in this arena and running multiple instances of this program makes it clear that the Vista VM has stepped up nearly to the quality of traditional Unix or Linux virtual memory management. Memory required for the OS is prioritized, memory for the active application is secondary, and any memory not allocated to either is apparently used for file buffer cache (but is instantly discardable if needed for application space). This is the way it should have been working for the last 8-10 years. Big win.
Unfortunately… this is all completely invisible to the end-user, except in so far as they may notice, “Hey, Task Manager says there’s almost no free RAM! Vista is a memory pig!”
Great article, Jeff.
I’m not a professional developer, but I’m often involved in projects as an analyst or manager, and your point of view is really useful for avoiding a lot of dead ends during analysis and testing.
Thank you.
Great article… could you also explain why Vista takes forever to unzip large (10 MB) zip files compared to XP? Vista’s built-in unzip function takes almost twice to three times as long as WinRAR.
The irony is that years ago, Microsoft put a lot of emphasis on perceived vs actual performance in their MCSD courses.
Re: copying buffers (Frans Bouma)
In that case, (cd /path/to/sourcedir && tar -cf - *) | (cd /path/to/targetdir && tar -xf -) should be the fastest tree copy in the world.
No, I don’t use that much anymore on local disks, but I use a variant of it for network copying, as latency doesn’t matter anymore when doing it that way.
It’s exactly how I thought it would be.
Seeing those progress bars go fast and then hang at the end is just frustrating.
Doing it the other way around would surely be better. A bit like accelerating a car, that’s where you feel you “go fast”.
Well, the next service pack will make the dialog box disappear two seconds after it appears and VOILA, problem solved.
Every winders user will be impressed as heck about the new super-fast-copy feature.
File copy sucks in XP AND Vista. The main reason is that Microsoft doesn’t utilize memory: it uses a copy buffer of 4KB. Try copying a file of 200MB; it is tremendously slow, because the OS has no clue it can stream the file into memory and then write it back to disk.
The main problem with file copies is that the disk head stepping is the true delay you’re running into, so the no. 1 thing a copy algorithm should do is MINIMIZE the stepping of the disk head. If you don’t read an entire file into memory before writing it, you’ll waste time by stepping the head back/forth between source and destination. This is extremely expensive compared to the write speed.
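Roughly what I mean, as a throwaway Python sketch (the 64 MB chunk size is just an arbitrary illustration, not what any OS actually uses):

    import shutil

    CHUNK = 64 * 1024 * 1024  # read/write in 64 MB chunks instead of 64 KB

    def copy_with_big_buffer(src, dst, chunk=CHUNK):
        # Reading a big chunk from the source before writing it to the
        # destination means the disk head alternates between source and
        # destination far less often than with tiny buffers.
        with open(src, 'rb') as fin, open(dst, 'wb') as fout:
            shutil.copyfileobj(fin, fout, length=chunk)

Whether you read the whole file or just very large chunks, the point is to keep the head in one place as long as possible.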
That said, file copies being slow on Vista is also caused by DRM check code, at least on some types of files.
So, Jeff, when I copy files on XP or Vista on the command line, are you sure the copy action on Vista is done faster than on XP? Or is it about the same (except with mp3s)?
This copy-speed thing strikes me as a rather unbalanced trade-off, but it’s something only Microsoft can blame themselves for.
At some point in their dev cycle, they said “Hey, we can make this seem faster by letting stuff finish in the background.” Sure, there were plenty of cases where someone might see the dialog disappear and unceremoniously cut the power rather than doing a proper shutdown (which in my experience people are already walking away from anyway; they don’t think much about shutdown speed, except in the case of a lengthy reboot… then people get back to thinking about dropping the ceremony again).
Unfortunately, by making this choice before, they set the bar on this, and possibly other, file I/O operations. Go find a RAID controller that can do both write-through and write-back modes and do some benchmarks. Pretty painful, right?
It strikes me that a lot of the performance penalties people see in vista can be boiled down to “fixing the performance hacks done before.” Accumulate enough of those with feature bloat and some new/redefined APIs that people haven’t optimized for, and there you go.
They’re also a victim of loading up more and more services to account for every possible configuration out of the box (oh, I dunno, Distributed Links? Never used ’em, even back in XP. The DHCP client running even if all my network interfaces have static addresses. And what does the Browser service do again?).
My Vista box is slow to boot and usually throws a ton of crap at me. I’m especially fond of the “This nvidia driver you installed doesn’t seem to be compatible, even though everything’s been fine for weeks. Let us replace it with the default nvidia driver we have.” (Hint: “Later” gets rid of the dialog. “Cancel” throws me into a fail-safe driver at VGA resolution.) This is not a more pleasurable experience for me. I love to come home to a computer I thought I’d put into hibernate, finding it has been rebooted and is giving me a bunch of dialog boxes telling me it had bluescreened. (Yes, really. If it happens to you, look at the report they want to send home. The error code really is “bluescreen”.)
I’m giving SP1 one month to convince me they threw a huge chunk of pointless interface changes and pandering at me for some reason. After that, I’m wiping the box for something else, probably slack.
"it uses a copy buffer of 4KB"
That should have been 64KB, my bad.
Additionally: with a lot of small files, it also doesn’t understand that you can read them sequentially into a buffer, say 64MB, and write them sequentially after the buffer is full.
What’s the point of having a single copy engine which has a truckload of tradeoffs for all possible scenarios? Why not have a couple of parameter sets, one per scenario which needs attention, which make the copy operation truly faster? Local partition copies, or same-disk-different-partition copies, are horribly slow under Windows. If this is because the same engine is used for network copies as well, the user doesn’t care; the user wants the fastest copy action ever for that scenario. If local disk copies are slow because the copy engine is designed to work in all scenarios, it’s a bad engine: it should know (it can know, so why not use the info) if the copy action will be slowed down by a low buffer size. So we have these 2-4GB of memory in our systems but aren’t using it, and are using 64KB buffers for file copies… clever…
This is only true when using Explorer to copy files. I never use Explorer to copy files, as it always was bad and unreliable. So all these UI tricks are not important to me. But the performance of the CopyFile OS call is important, because it shows what we can achieve with this kernel if we use the right shell.
Very interesting reading!! But when I read the PDF paper, something caught my eye. Now, I’ll be the first to admit that I’m no math wizard, so please correct me if what I’m about to say is wrong.
The progress function for Inverse Power is f(x) = 1 + (1-x)^1.5 * -1. As far as I can tell, this function will fail when x > 1. This is due to the part (1-x)^1.5, which is the same as saying (1-x)^(3/2), or Sqrt((1-x)^3). When x > 1, (1-x)^3 will always be negative, and a negative number doesn’t have a (real) square root.
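A quick check (my own little Python sketch, treating x as the fraction complete) seems to bear that out: the curve behaves fine on 0 to 1 and only breaks down past 1.

    import math

    def inverse_power_progress(x):
        # f(x) = 1 + (1 - x)^1.5 * -1, i.e. 1 - (1 - x)^1.5
        return 1 - math.pow(1 - x, 1.5)

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(x, round(inverse_power_progress(x), 3))  # climbs smoothly from 0.0 to 1.0

    # inverse_power_progress(1.1)  # math.pow raises ValueError here: (1 - x) is negative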
Am I way off track or what?
I have noticed that the animation speed of continuous animations also affects the user’s perception.
Take the continuous animations on www.Ajaxload.info: depending on which one you use, users seem to think the process was quicker.
I think the Vista way of showing the copy dialog until the file is flushed to disk is the wrong approach. I would like the copy to be instantaneous. The system can do some filesystem magic to make the copy appear at the right place immediately, so I can continue working.
If I need the file to be flushed, I can use fsync or umount or something similar.
“Doing it in the background” is just a magical statement. The question is how? It can be done poorly or done well, just like a foreground algorithm.
Why don’t they use a system similar to the Linux slab allocator for committing chunks of files to the file system?
I mean a cache-copy daemon that would preallocate files of different sizes, and thus drastically diminish the cost of the write-to-file step due to filesystem cluster allocation.
If I were to implement it in the most basic way, I would ask the program for the size to download; in return, I’d give it the address of a shared memory zone backed by a preallocated slab of a good size. I’d split it into two contiguous zones, and each time one half is full I’d commit that half to the file (like a ping-pong buffer for video), and with the beauty of the % operator, the program would seem to write continuously. If writing is far easier than committing, I’d use a ring buffer instead, and I’d use semaphores to prevent problems.
Ideally, the program would not copy downloaded data into the memory zone, but use the memory zone as the place to download the data into, avoiding a useless memcpy.
If my file is, let’s say, bigger than my biggest slab, I’d merge the chunks in the background.
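Very roughly, the ping-pong part above might look like this (my own Python sketch, with a two-slot queue and a thread standing in for the shared-memory slab and the daemon; purely an illustration, nothing to do with how Windows actually buffers):

    import queue
    import threading

    CHUNK = 4 * 1024 * 1024  # arbitrary slab-half size for the sketch

    def background_copy(src_path, dst_path, chunk=CHUNK):
        # Two slots play the role of the two halves: the producer fills one
        # chunk while the committer thread writes the other one to disk.
        handoff = queue.Queue(maxsize=2)

        def committer():
            with open(dst_path, 'wb') as out:
                while True:
                    block = handoff.get()
                    if block is None:      # sentinel: nothing left to commit
                        break
                    out.write(block)

        t = threading.Thread(target=committer)
        t.start()

        with open(src_path, 'rb') as src:
            while True:
                block = src.read(chunk)
                if not block:
                    break
                handoff.put(block)         # blocks only when both slots are full

        handoff.put(None)
        t.join()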
Is it dumb? I have the feeling that most current Windows buffer algorithms are dumb because they fail to understand that “one size doesn’t fit all”. A good algorithm for small files (i.e. the web pages and images that are 80% of actual file traffic) fails for bigger files; therefore the Linux slab preallocator is smarter, since it implements various strategies depending on size and use (okay, I’m taking it outside of its real usage context).
I have done C# systems programming, and I am pretty confident that it could be done in a matter of weeks, since I have coded a multi-threaded agent in 4 weeks.
As a matter of fact, the idea being patent-free (as far as I know), why don’t they steal interesting free software ideas instead of making war on them?
Well, I think I have the solution to all of this…
In XP, file copies used to be fast for one simple reason: after the file had finished copying, the user thought “Wow, that 34781125 days seemed to pass in no time at all!”. Whereas in Vista, it tells you that the copy will take 4 hours and it actually does.
In seriousness though, no amount of twiddling with user perception is going to change the fact that copying a file on XP takes 30 seconds whilst copying the same file on Vista took 36 minutes - that’s your real, common-or-garden variety of time rather than perceived time, unless my stopwatch was also impressed by how fast the progress bar updated.
Carl
Great article! Perception is key for your users, not necessarily performance. Funny, users pay for software - which is all that really matters in business, right? So if you build a shitty algorithm (or operating system, or widget) that appears to work very well, you’ll make boatloads of cash?
Worked for Bill.
WTF? I don’t see how Jeff is supporting Microsoft. What, just because he doesn’t spend half the article slamming them?
This particular post is based on Windows Vista, why should the word Linux come up at all?
Oh, I see, you’re confused, you must have thought you were reading /.
There does seem to be a problem with perception of speed (I know I have seen dialogs stuck at 100% on many systems (not just Vista) and found it frustrating…)
But the file copy problem on Vista is not totally down to perception; it also has to do with reality.
There are many complaints (some here as well) that Vista under some circumstances actually takes much longer to copy files (for no apparent reason), on the order of: XP takes 30 seconds, Vista takes tens of minutes.
And Vista does seem to take much longer to open folders with large numbers of files, so copying the contents of a folder as a complete procedure (open folder, select files, copy…) can take much longer.
These are both things that should have been shaken out in the betas. I think we can learn something about software testing…
I think that some of the perceived frustration with Windows - the Explorer “shell”, Outlook, etc., is the way that there seems to be no multitasking at the UI end. If you ask one of them to do something (look for a printer, create a new contact, check for new emails) it holds you hostage until it either completes the task or times out.
Or am I the only person who mutters “give me back my computer, Bill” several times a day…
And yes, “I can’t copy one of your files because it was in use by another application a little while ago so I’m going to give up on the entire operation” is insanely stupid and frustrating. Why in G_d’s name can’t Windows copy a file that’s in use, anyway? Or why, after I’ve saved an Excel file fifteen times and decide it’s time to close it, does Excel ask me “Do you want to save?”
Arggh!
I learnt the lesson of perceived versus actual time a long time ago, in the late 80’s. I was developing a little embedded module that processed the data from a CCD line scanner, and the feedback I got said that the process took a long time. So I put a little whirligig (cycling through the characters / - \ |) on the little 2-line LCD display and spun it. Immediately the feedback changed to “wow, you’ve made some major improvement to the code!”. I hadn’t changed the code, except to slow it down by putting in the whirligig!
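These days the same whirligig is a handful of lines; a throwaway Python sketch, just for fun:

    import itertools
    import sys
    import time

    # Cycle through the classic / - \ | characters while the "work" happens.
    for _, ch in zip(range(50), itertools.cycle('/-\\|')):
        sys.stdout.write('\rworking ' + ch)
        sys.stdout.flush()
        time.sleep(0.1)   # in real code, stop spinning when the job completes
    print()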
For a company of that size, the programmers don’t earn their pay if they don’t take the people using the software into account.
Perception is reality here.
XP dismissing the dialog before it’s really done could indeed be a problem. If you’re backing up to a removable drive at the end of the day (like I tend to), then you might be unplugging your USB drive while data is still copying.
In my programs, I tend to use a progress bar with a pessimistic time estimate. If the progress is 90%, multiply the calculated remaining time by 1.05 and show that to the user. If the progress is 80%, multiply by 1.075 and show that to the user.
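In code it’s barely anything; a tiny Python sketch of that rule (the padding used below 80% is my own guess, just extending the same pattern):

    def pessimistic_remaining(seconds_left, fraction_done):
        # Pad the raw estimate so the bar finishes "early" instead of stalling.
        if fraction_done >= 0.9:
            return seconds_left * 1.05
        if fraction_done >= 0.8:
            return seconds_left * 1.075
        return seconds_left * 1.10   # assumed: pad a bit more when further out

    print(pessimistic_remaining(60, 0.85))  # shows 64.5 s instead of a raw 60 s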
File copy speed also differs if you’re copying to the same physical hard drive, or a different one.
This really does not need to become another Win vs. Linux battle.
Jeff is providing a really good point for anyone designing and/or implementing interfaces to consider.
Even if you’re writing something for Linux. :)
So please step back and try to realize this is just a contributing thought, there’s really nothing negative to it.
Peace!
I already googled this when I first got Vista, since it seemed slow, and lots of people were reporting the same thing. One of the things that slows Vista’s copy down, I believe, is that it tries to implement DRM copy protection.
But that’s not what really annoys me. The thing that annoys me is that the time estimate is always way off; sometimes I get reports of hours on large copies when I know it will take around 20 minutes or so. The report becomes only a little more accurate until the process is complete. I find this ridiculous; I can do an estimate just from the amount transferred so far and the time taken, compared to the total file size. I know fragmented and small files take longer to copy, but seriously, can’t we just have a simple estimate? I find that much more accurate for the general case.
Nchantim,
Actually, you can specify in the device settings for your USB drive (right-click - Properties - Hardware - Properties - Policies) whether to enable write-caching or not on that device. If you enable it, then you need to use the “Safely remove hardware” command in the system tray. If you disable write-caching, then you can remove the drive no problem.
This is a great example of how difficult it is to get something just right. So many factors are involved in producing really good software. So many things can go wrong. You try and try to think of all the possible scenarios. You test and you test and you test, but once again, some things you just can’t test for. You’ve got to set it loose in the wild and hope it thrives (or at least survives!). This is one of those cases. The important thing is to learn from your mistakes and apply that knowledge to future development.
“Xp dismissing the dialog before it’s really doen could indeed be a problem. If your’re backing up to a removable drive at the end of the day (like I tend to), the you might be unplugging your USB drive while data is still copying.”
Amen. That has bitten me more times than I care to mention. I always use the “safely remove drive” function but it often tells me that it is still in use. Now I know why!
Thanks!
Why spend money on a new OS when the old one works much better than the new one? And of course, people are actually complaining that it’s bad, so it has to be a half-baked thing. Well, sorry to all the people who bought copies. The prices have been slashed, etc., since no one wants it, and even if you bought it, why go through all the pain when everything works with XP? So it has a fancy UI, and bloat. Oh yes, we are all consumers and we need to consume; throw it all at us, we will lap it up and empty our wallets for Microsoft.
No more!!!
BOYCOTT VISTA !!!
I strongly suspect you are over-thinking this. The actual problem was not some subtle perception of performance, but actual real-world disk-thrashing 30-minute copies that used to take seconds. Subtle perceptual issues can make 5 seconds seem like 10; they can’t make 10 seconds seem like 10 minutes.
To quote a comment to the referenced post:
‘On larger codebases (50.000+ small files) this created a situation here. We’ve actually measured the time that it takes to boot Windows XP to make a backup copy before booting back into Vista to continue development and that’s actually FASTER than waiting for Vista to get the job done.’
The relevant point in the article is ‘with Vista’s non-cached implementation, Explorer is forced to wait for each write operation to complete before issuing more’ - the new algorithm was synchronous per file. If it has to copy 10,000 files, it does the first one completely, including close and sync, then moves on to the second, then the third. This means the minimum file copy time was of the order of (number of files to copy * disk seek + write time) - at, say, 10 ms per seek, 100,000 files is well over a quarter of an hour of seeking alone.
The take-away lesson from this should be that the benchmark they optimised to should have contained an extra case: a few hundred thousand small files. In general, any time you are optimising to the point of making trade-offs, instead of just removing slow code, you need a full representative set of benchmarks. If you don’t have, and can’t create, such a thing, best leave the code alone.
To be fair, I have not run Vista myself, but I was at a friend’s house and he wanted to open a folder with about 300 pics in it and it literally took over 10 minutes to display (this was pre SP1)! Holy cow!
Dude on March 3, 2008 05:01 AM
Dude, you are right, that happens, especially when the folder hasn’t been opened before (no thumbs.db). But it doesn’t prevent me from using my computer in another application, or even another Explorer window, in my experience on Vista, whereas the same sort of thing completely prevents my XP machine from handling any user input until it’s done. As with everything, your mileage may vary.
Kris
At any rate, the point is moot, because Vista SP1 switched back to the XP style cached copy.
Exactly one more reason not to switch to Vista.
Sorry for the double post (hit Enter too soon).
What I find most annoying: “Estimated time to completion: 5 seconds” and it sits there for almost a minute. Does my software “lie” to me?
poor windows…still working on that file copy algorithm. They can’t get anything write…hehehe
cp
If you’re moving enough files in either XP or Vista to warrant arguing about speed and efficiency, then neither of the two GUIs wins, as you should be using the command line. But that’s not what Jeff’s post is about; it’s about stepping outside of the programmer mindset and putting yourself in the user’s shoes.
I don’t quite get it… files in the disk cache are readable, right? Once XP put the files in disk cache, I could read and write them. Except for the rare case when I was powering down, it was DONE with the operation as far as I was concerned. Vista made me wait longer.
Plus, installing software is damn slow. Maybe the installer is waiting for everything to get written to disk before proceeding to the next file? What the hell is Vista’s disk cache for if you don’t get to use it?
At least two posts have mentioned DRM slowing down Vista file copies. Could you provide some evidence for this?
It may be true but I never saw any actual evidence for it so I have assumed it to be FUD that resulted from that awful, speculative and largely bogus Peter Gutmann article.
Hey Now Jeff,
Should the Vista team have listened to their users? Would UAT (user acceptance testing) have helped the file copy performance perception? I like the post since it’s so true.
Coding Horror Fan,
Catto
That is funny. I have XP and I have Vista.
Moving a media file (from my digital video camera) used to take seconds and now takes 15 minutes.
From seconds to minutes.
You can’t tell me it was sitting there in the background copying, because it wasn’t.
Part of the reason why I’m still on W2K SP4!
I found that, except for large video files, Vista copied faster than XP every time - the more files, the better the performance.
SP1 did indeed improve the prediction delay and that definitely improved the experience for me, but it did not make it copy any faster.
Maybe it is because I open the dialog and watch the data flow rate more than anything else. I find that bit of information very comforting.
Oh, and as I mentioned on Mark R’s blog, Vista still has one VERY annoying behaviour from XP. If you move a file on drive X from folder A to folder B, it moves instantly. If folder B already contains a file with the same name, instead of deleting the target and performing the same instant move (just relinking directory structures), it COPIES the file over the target in a pedantic byte-by-byte method and then deletes the original. Suddenly moving a GB-size file goes from instant to inexcusably slow (slower than if you moved it to another HD).
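For what it’s worth, the “relink instead of copy” behaviour is a one-liner at the API level; a Python sketch (my own illustration of same-volume replace versus the slow copy-and-delete fallback, not what Explorer actually does):

    import os
    import shutil

    def fast_move(src, dst):
        try:
            # On the same volume this is a directory-entry operation: dst is
            # atomically replaced by src without copying any file data.
            os.replace(src, dst)
        except OSError:
            # Different volumes (or other failure): fall back to the slow
            # byte-by-byte copy followed by deleting the original.
            shutil.move(src, dst)

(On Windows, os.replace maps to MoveFileEx with the replace-existing flag, so the same-name case stays instant on one volume.)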
From a Hanselminutes show on x64 Vista:
Franklin: How many developers does it take to copy a file on a Mac?
Hanselman: I don’t know.
Franklin: What’s a file?
Hanselman: How many developers does it take to copy a file on Vista?
Franklin: Don’t know.
Hanselman: I don’t know either, still calculating…
Just last night I was unzipping a 2 GB file. There was no notification that the process was happening. The “Calculating” message box showed 7 bytes/sec. 7 bytes per second! I thought there was a problem, killed and restarted Explorer, tried again, same thing. I went back and forth trying to figure out the problem. Turns out there was no problem except for the lack of notification.
After letting it sit, looking like the process was hung up, it eventually finished in about 2 minutes, what I would expect time wise from an xp box. Unfortunately I wasted ten times that thinking there was a problem.
Personally I hate progress bars. They are never right. Ones that move fast at the beginning trick you into sitting and watching it, because you think they will be done soon. But they usually hang at the end, making me mad, and I think “Thanks for tricking me into sitting here, I could have gotten up and done something else.” I would prefer a blanket “This will take 5-10 minutes depending on your machine.” That way I can multi-task. That is the true efficiency lost.
If you don’t read an entire file into memory before writing it, you’ll waste time by stepping the head back/forth between source and destination. This is extremely expensive compared to the write speed.
@Frans Bouma: I have Windows XP SP2. If I connect my camera via USB, and copy files to the disk, it loads the whole file in memory. One of the worst bugs I have seen… No problem with 2MB pictures, but took many minutes of swap thrashing when I copied a video file larger than my physical memory.
The idea is loading bigger chunks, not the whole file.
I agree with Brian above:
progress bars that lie are the worst
123232534 hours is unhelpful
calculating time remaining is useless
10 minutes when it will take 5 is unhelpful
5 minutes when it will take 10 is useless
The best ones I have seen are the ones that just cycle (rather like the spinner mentioned above); it says “OK, OK, I’m doing it as fast as I can”.
But beware of lying spinners: the file copy animation is one of these. It is unconnected with the actual file copy operation, so it can be showing the pretty animation of a file flying across when in reality it is doing nothing, and eventually shows a timeout error…
Dunno if you guys thought of this either, but the iPhone seems like it’s the fastest performing smartphone on the market, when in reality it’s just the most clever at hiding performance clogs. Rather than put the presentation layer somewhere beneath other “more important” layers, the presentation layer is seen as something UBER necessary, and as such it is prioritized higher than other OS level dependencies.
That’s why the iPhone seems so much better than windows mobile, just because it seems faster, regardless of actual specs.
Who cares about Windows Vista!? There’s a bigger point here, and it’s one that I found independently in the past.
We had an intranet app that blocked until a process was completed. The duration of this scaled with the number of items selected to process. In some cases, it could take 20 minutes to a half hour.
People used to tell me that they were sitting at their desk watching their IE progress bar, and it had “stopped at about 80%” even though it was still running. No matter how many times I told them that this progress bar meant nothing in this context, they still went by it.
So we decided to build a background processing subsystem where these requests would launch a background process, returning a confirmation screen immediately. We’re using PHP on Linux, so this was easy. We then gave them a page to review all of their running background processes, view results, etc.
After more than a year, I still get comments about how much faster it is now, even though I know it’s slower due to increased load on the server from other projects. In this case, they now have an accurate progress bar (even if it requires them refreshing their status page.)
Additionally, they could have multi-tasked just as well if they had opened up a second IE window, but this concept seemed to defy reality for them.
I must add something about Explorer. And before I even say anything, I’d like to say that I never did use Vista. I went from XP to OSX and never turned back.
OS X, I noticed early on, has some really awesome icon management. Icons are big, and not only are they big, they are actually very articulate. I mean by this that actual thumbnails of the files are displayed as large as 128 pixels wide. Thumbnails of anything… images obviously, but even web pages or text files, or Excel documents.
This kinda befuddled me, because I remember working on a shell extension on Windows that would do this sort of thing, and remember just how fricking slow it was. Same thing when you (in XP) set the folder type to contain music files, for example.
Then it hit me. It’s because Finder does it in the background. And very lazily. Say you have a folder full of 1000 files: if Finder doesn’t know what the second page looks like, it does not stop you from scrolling forward. Not that Explorer prevents you, but it does slow the system down to a halt and you hear the grinding of the folder being processed.
I often have a big reorganization of my files, as I am sure a lot of you do. This usually means multiple copy operations running. Why is there no option to pause copy operations, so I can run them one after another as they complete, to avoid the disk head jumping around all over the place and slowing all the operations?
Better still, let’s have an intelligent copy queue, so copy operations using the same discs are queued for speed, but, for instance, a network or internet download would run concurrently, as it requires fewer disc resources and you generally want maximum performance from a network resource. Also, if a second disc-to-disc transfer was requested and it used different discs from the first, it could run at the same time.
I hope I have explained that ok.
Mike, moving to OS X just because Explorer sucks seems a bit drastic, if that’s what you are suggesting.
Judging by the number of people who beg for an OS X port of Directory Opus (the Windows file manager I help answer questions on) I can only imagine that Finder isn’t great, either.
Explorer does suck very much on Vista, though, I can’t argue with that. The UI is an ergonomic nightmare with it taking multiple clicks to toggle the preview panel, them removing the parent button (breadcrumbs are useful, but the parent button is usually far more convenient and it’s always in the same place), and forced full-row selection making deselection and folder-context menus difficult to reach… Not to mention the crappy file copy performance prior to SP1 (which only affects Explorer and a few of the lesser file managers which use CopyFileEx instead of their own file copying routines).
I suspect one of the reasons I like (or at least do not dislike) Vista so much compared to other people is that I don’t use Explorer. The rest of the OS is actually pretty good, or at least no worse than XP, IMO.
When I want to copy/move lots of files in XP or Vista, it is much faster to open cmd and issue a command: copy or xcopy or move…
Hi, all. I am not a programmer, but I have a question relating to Vista file copy speed.
If it has problems copying due to its I/O buffer or whatever it is, surely this will have an effect on the read times and thus loading times. I see it was mentioned that they went back to the XP algorithm, but I can see no real evidence of that. Yes, the speed has improved, but it is still slow.
I back up my 1 TB of data from my two internal drives to a FireWire 800 external HDD. Under XP this takes about 3.5 hours. Under Vista SP1 it takes… over 16 hours, which personally I don’t want to wait for.
Anyway, I shall be sticking with XP until someone mods Vista into working as a reasonable OS. However, I do feel it will be another WinMe experience…
What is amazing is that with faster computers and hardware, things haven’t gotten much faster than the old 33 MHz workhorses of old. Files have gotten bigger, and the pleasure of the operating system handling these files has gotten better. I, personally, hated the folder actions in Mac OS X when it wanted to open every file in that preview window. 10.5 made it a lot better, but it takes heaps more time to open the preview (when I actually want it). I suspect there is just a convenient delay. Ahh, the glory of PEBKAC.
I laughed when I saw that different kinds of progress bars seemed faster. I myself find it very difficult to write progress bars in programming. Sometimes it is just trial and error to see what takes the most time. I then just fudge the numbers and have it look like it is smoothly progressing to 99% when in reality there are 17 things to complete and they all take different amounts of time.
Please, people, if you’re doing anything more than 50 files or 100 MB, use a dedicated file copy program from MS (they should have had this built in):
http://www.downloadsquad.com/2007/10/25/synctoy-2-0/
Yes, it’s sad that it uses a micro MSSQL db, but it’s .NET.
It copies well and fast. Basically a copy of Unix rsync, which is good, though probably not aware of the media types to adjust algorithms.
Or use rsync for Win32, or vice versa.
I do think MS should force their developers to use OSX daily to see what to aspire to.
While we’re on the subject of files, can anyone tell me why XP/Vista takes so long to delete the bloody things? Why does XP have to “calculate” before deleting? Grrrrr.
Sorry KG, that (estimating copy time by just adding up the bytes and dividing by bytes/sec) turns out to be wildly inaccurate (I’ve tried it). The starting and stopping takes far longer than copying bytes onto the end of the same file.
Give me a break. Forgive me for providing a perceived troll, but the performance is still crap, perceived or no. Alternate file systems like ext3 and reiserfs have already proven that the Microsoft method of file systems is inefficient at best, and outright grotesque at worst. But I wonder how much of this article was a payout from MS. Hmmm…
It’s pretty sad when your OS needs third-party software to copy files correctly.
User perception in simpler terms: static bar == slow, dynamic bar == fast. So why don’t you push that bar gradually, to take away the static perception? It’s all relative.
LOL. I was backing up my “downloads” directory last night onto my second hard drive. Mostly small files, but a few CD/DVD images from MSDN:AA. The “estimated time remaining” kept fluctuating between 7 and 30 minutes (WinXP).
I understand some algorithms are difficult to estimate accurately, but file copying? Shouldn’t it just add up the size of all selected files and divide by # bytes/sec copied?
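Something like this is all I mean (a rough Python sketch):

    import time

    def naive_eta(total_bytes, copied_bytes, started_at):
        # Remaining time = bytes left / average throughput observed so far.
        elapsed = time.monotonic() - started_at
        rate = copied_bytes / elapsed if elapsed > 0 else 0
        return (total_bytes - copied_bytes) / rate if rate else float('inf')

    # e.g. 700 MB copied out of 4700 MB after 60 seconds -> roughly 343 s left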
Copy time depends to varying degrees on OS cluster sizes, hard disk (or whatever media) cluster sizes, internal file fragmentation, hard disk fragmentation (some of which could be very natural fragmentation simply from moving disparate files that have no reason to be unfragmented with respect to each other), memory fragmentation, amount of memory, other hard disk characteristics, other I/O that competes with file transfers for bus bandwidth, thrashing from other processes, what other software is consuming resources, what is sitting in memory at the present time, the size of the swap file, the transfer error rate, the HDD cache, and (I kid you not) the ambient temperature, amongst many, many other things. The bytes/second copied for any given operation cannot be known beforehand. We can only try to improve our guesses.
If any computer operation was as obvious as the one you mentioned, everybody would be doing it that way and progress bars would never be inaccurate.
[OS X Tiger] treats images just like any other file and gives me a small icon
Make sure “Icon view” is on, then right-click in the Finder, click View Options and tick Show Icon Preview. Fixed!
Of course it’s all much better in Leopard.
Man, I never thought of that! I’ll make a small piece of software with those varying progress bar styles and ask my friends to share their thoughts… Will I get the same results as the study?
Brazilian Cheers.
Just a couple of things:
The perception part is so important. Andrew Moyer’s point is exactly right: even increasing the load (and time) by providing feedback can result in better perception and user happiness. Been there, done that.
jaster’s point about spinny-ness is also well taken. You have to be careful with just cycling something without providing actual feedback. Sometimes the Mac “shows” progress by using a spinning multi-colored ball. But they use the same thing when an app hangs, so some call it “the beach-ball of death”, and it causes the expectation of a hang. Quite the opposite of the intention, I think!
Thanks for the article. It provides an interesting aspect for determining the speed of a process, and I think is something that all software should take into account.
I think that the write-behind can be bad when the write fails.
Unfortunately MS won’t get any love from increasing reliability at the cost of performance.
People only value reliability after they’ve been hurt by the lack of it.
“I tend to use a pessimistic time progress, where if the task is 90% I report the time left multiplied by 1.05, and if 80%, multiplied by 1.075.”
Anyone else find this comment troubling?
As an electrical engineer I swore a professional oath, and I don’t recall any place in the oath where it said “lie to the user to make them feel better”. I’m sure software engineering’s oath also lacks that “lie to the user” phrase.
When I’m given a progress bar or time-left estimate, I expect the programmer to TELL ME THE TRUTH, not lie about it. Otherwise, if it’s a lie (estimated time multiplied by 1.075), then there’s no reason to believe what you’re telling me. I don’t know if “5 minutes left” is the actual truth, or if it’s a fudge-factored number and the actual time is 4 minutes???
Be HONEST to your users.
Remember the ethical oath that you swore to uphold. I may not like hearing “5 minutes left”, but I’d rather hear the truth than wonder if the programmer is lying to me.
Try launching simultaneous copies from a CD/DVD, that is also fun.
Somehow Perceived and Actual performance match: iffy.
SuperCopier is another tool that improves Windows’ file copy.
( http://supercopier.sfxteam.org/modules/mydownloads/ )
The existence of such tools speaks loudly for the quality of the OS…
I remember in earlier versions of IE, whenever you downloaded a web page (in the 90s, with dial-up), the progress bar always sped towards the middle of the bar no matter what (trying to deceive us). When the first 20% of the page had REALLY downloaded, the progress bar would MOVE BACKWARD to the 20% position.
It seems Microsoft staff already knew this human trick long ago, but it is hard to understand why the Vista team failed to learn from history.
This is totally bullshit.
I remember a number of times in Vista where copying large video files simply didn’t work, and cancelling them caused the whole OS to shit itself while it slowly tried to stop doing something it never fucking started.
Get yourself one of those fancy Scuzzy/RAID setups they had in the 80’s and all your file copy blues will be solved. Did I say that out loud?
Just to return to this old (and quite interesting) post, it’s of note that the latest Safari version (4.0) no longer gives any progress bar whatsoever – it’s just a spinning wheel. And indeed, I perceive that nothing is happening!
I’m still waiting for a skinnable file copy so we can have creative people devise clever animations to replace the boring flying folders.
Perceived performance is the cornerstone of multi-tasking on computers, i.e. doing more things at once than you have processors available.
two general points though:
To me it doesn’t matter if the new copy is 2% faster, I just want to know what is actually happening. 10% would start to change things, but at that point, the problem is probably not progress reporting.
This reminds me of the story of people complaining that a new fancy elevator was slow. The designers proved it was fast, but complaints continued. The designers did research, and put a big mirror in the lobby.
Complaints stopped.
People could now preen in the few seconds before the elevator arrived, just like in most elevator lobbies.
(Sorry, I can’t remember who to attribute the story to - probably a ted.com presentation.)
This concept is almost a century old… I’m sure you all remember Einstein’s hot stove, hot girl explanation.
Actual time is crap, as you said. Nobody gives a damn. Perception is everything.