The Computer Performance Shell Game

The performance of any computer is akin to a shell game.


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2009/03/the-computer-performance-shell-game.html

A cool tool for performance monitoring is Spotlight by Quest. Spotlight for Windows is now a freebie download. It has helped me out several times… It has also wasted hours of my life :frowning:

Screen I/O is a huge bottleneck! Especially writing to a textbox.

I once nearly doubled the processing speed of a file parser (read a line from a file, insert it into a database) by changing the onscreen log/status so that instead of scrolling, it filled the current view, cleared it, and repeated. Performance doubled again when I disabled the onscreen log altogether.

Console I/O is a bottleneck…on Windows. It is blisteringly fast and an absolutely trivial part of program execution time on other operating systems.
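If you want to measure this yourself, here’s a minimal sketch (POSIX timing assumed; on Windows you’d use QueryPerformanceCounter and redirect to NUL instead of /dev/null). Run it once writing to the console and once redirected, and compare:

```c
/* Times a loop that logs every "processed line" to stdout.
   Run ./a.out (console-bound) vs. ./a.out > /dev/null to see how much
   of the runtime is pure console I/O. */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 100000; i++)
        printf("processed line %d\n", i);   /* the onscreen log */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    /* report on stderr so redirecting stdout doesn't hide the result */
    fprintf(stderr, "elapsed: %.3fs\n",
            (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}
```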

In Vista, it’s under a big button on the Task Manager Performance tab labelled Resource Monitor…

And yes, it is truly addictive, especially the Disk activity list… which process (PID) is pounding which disk file… ooooh look there goes Norton Ghost! Disk and Network! :slight_smile:

The performance monitor completely misses the other memory bottleneck: memory latency. On modern systems, cache-efficient data structures can be a lot faster than ones that require fewer CPU instructions but more accesses to main memory. This overhead shows up as CPU load even though the CPU is actually sitting idle, stalled waiting on memory. With multicore, where many cores compete for the memory bus, things get even more interesting.
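A minimal sketch of the effect (the buffer size is an arbitrary assumption, picked to overflow typical caches): both loops below execute roughly the same instructions, but the random walk defeats the cache and prefetcher and is typically several times slower:

```c
/* Sequential vs. random traversal of the same 64 MB buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16M ints = 64 MB, far larger than typical caches */

/* xorshift64: tiny PRNG so the shuffle isn't limited by RAND_MAX */
static unsigned long long rng = 88172645463325252ULL;
static unsigned long long next_rand(void) {
    rng ^= rng << 13; rng ^= rng >> 7; rng ^= rng << 17;
    return rng;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    size_t *idx = malloc(N * sizeof *idx);
    if (!data || !idx) return 1;

    for (size_t i = 0; i < N; i++) { data[i] = (int)i; idx[i] = i; }

    /* Fisher-Yates shuffle to build a random access order */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = next_rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    long long sum = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) sum += data[i];        /* sequential */
    clock_t t1 = clock();
    for (size_t i = 0; i < N; i++) sum += data[idx[i]];   /* random */
    clock_t t2 = clock();

    printf("sequential: %.3fs  random: %.3fs  (sum=%lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(data); free(idx);
    return 0;
}
```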

Now Jeff, you didn’t put those IP addresses out there on purpose, subtly hinting at the ban as an effort to make the offender famous, now did you?

@Charles - agreed. Lack of morning coffee has the biggest impact on performance.

I’d just like to point out the obvious factor left out here: performance monitoring should be part of development, done before and after each change in order to determine that change’s performance impact.

Sure, you can try to monitor and deal with individual applications, but when developing you should be aware of your program’s impact, both on resources and on other applications.

Let’s not forget the other key monitoring variable, and that’s sampling frequency. On the *nix side of the world I’m amazed how many people run sar with the default interval of 10 minutes per sample, thinking anything more frequent is going to hurt them, and don’t realize [or care] that the resultant data is mush. While it may be nice to summarize the day with 144 data samples, it’s next to worthless for any real analysis.

When I wrote collectl (thanks for the earlier plug), not only did I make the default monitoring interval for the daemon 10 seconds (that’s 8,640 samples/day), the default interactive sample is 1 second, and you can even go sub-second when you need to, and yes, there are times you need to. And as other posters noted, like many other tools it DOES show CPU, disk, network, memory, and more. How about InfiniBand? While it IS a network of sorts, most tools don’t show it. See http://collectl.sourceforge.net/ to read more.

That also raises another point: the infatuation people have with graphics. While graphs certainly DO have their place in giving you a high-level view, you still need to occasionally drill into the raw data. Collectl gives you both: the ability to look at text, and the ability to save the data in a format suitable for loading into a spreadsheet or pumping through gnuplot.

At the very least, if you really do insist on running sar, please lower your interval to 10 seconds! I promise, it won’t hurt. Or at least drop it to a minute.
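To put a number on the mush, here’s a minimal sketch (a synthetic CPU trace, not sar itself) showing how a 30-second burst at 100% CPU, which 10-second samples catch cleanly, averages away to just 5% over a single 10-minute sample:

```c
/* Average a synthetic CPU trace at two sampling windows to show how
   coarse intervals flatten bursts. */
#include <stdio.h>

#define SECONDS 600   /* one 10-minute sar interval */

int main(void) {
    double cpu[SECONDS];
    /* synthetic load: idle except for a 30-second burst at 100% */
    for (int i = 0; i < SECONDS; i++)
        cpu[i] = (i >= 100 && i < 130) ? 100.0 : 0.0;

    /* 10-second windows: the burst stands out at 100% */
    double peak10 = 0;
    for (int w = 0; w < SECONDS; w += 10) {
        double sum = 0;
        for (int i = w; i < w + 10; i++) sum += cpu[i];
        if (sum / 10 > peak10) peak10 = sum / 10;
    }

    /* one 600-second window: the same burst averages to 5% */
    double total = 0;
    for (int i = 0; i < SECONDS; i++) total += cpu[i];

    printf("peak 10s sample: %.0f%%   600s average: %.0f%%\n",
           peak10, total / SECONDS);
    return 0;
}
```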

-mark

Or you could use the Windows Performance Toolkit:

http://blogs.msdn.com/ntdebugging/archive/2008/04/03/windows-performance-toolkit-xperf.aspx

First!

btw, nice post Jeff. I’d like it if you wrote more about performance and reliability :slight_smile:

Actually, this utility will never show you a thread exhaustion problem. CPU, disk I/O, and working set can all look fantastic, but if IIS is just plain out of threads to handle the requests, you’re hosed, and your response time will show it. To diagnose that, look at the HTTP request queue length.
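If you’d rather watch that from code than from the Perfmon UI, here’s a minimal Windows-only sketch using the PDH API. The counter path shown (the ASP.NET Requests Queued counter) is an assumption; the right path varies with your IIS/ASP.NET version, so adjust it to match what Perfmon shows on your box. Link with pdh.lib.

```c
/* Polls a queue-length performance counter once a second via PDH. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void) {
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) return 1;
    /* counter path is an assumption; check Perfmon for the exact name */
    if (PdhAddCounter(query, TEXT("\\ASP.NET\\Requests Queued"), 0,
                      &counter) != ERROR_SUCCESS) return 1;

    for (int i = 0; i < 10; i++) {
        PdhCollectQueryData(query);
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_LONG, NULL,
                                        &value) == ERROR_SUCCESS)
            printf("requests queued: %ld\n", value.longValue);
        Sleep(1000);
    }
    PdhCloseQuery(query);
    return 0;
}
```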

Perfmon has been the utility of choice for a long while in diagnosing performance problems on Windows. It has WAY more information than the utility mentioned here. Netstat also helps.

To be fair, Process Explorer shows disk performance as well, although I’m not sure it presents it as well as the Reliability and Performance Monitor does.

Cool tool! (-:
I got a nice rare item because of you.
Thank you, Jeff.
It is a very useful tool for a young Jedi knight like me. (-:

Aha, thank you for pointing this out!

For those of you curious how to access it, run perfmon /s.

I love how MS decided to include this in Home Premium, rather than have users shell out money for the Ultimate edition to get it. (I’m looking at you, lusrmgr.)

You forgot the 5th bottleneck: bad code. Processes that loop excessively, mismanage memory, read/write the disk badly, or use the network poorly will make one of the other four look like the bottleneck, when the real problem is a poorly written program.

But it’s not available for XP, is it? Oh well, Windows 7 coming along shortly…

Now this looks damn good. Much better than Task Manager and Process Explorer. I wish they would make a version for Windows XP users, though. I haven’t migrated to Vista, concerned about the heavy DRM and the performance of the operating system.

Someone had to say this:

But in *nix land we’ve had text-based tools like truss (which attaches to running processes and lets you see what they’re up to) for about 20 years, no joke, plus top and so on. The GUI tools are there now too, of course.

The tool has been renamed, at least in the beta, on Win7. It’s called Resource Monitor there.