Everything Is Fast For Small n

Let's say you're about to deploy an application. Said app has been heavily tested by your development team, who have all been infected by unit testing fever. It's also been vetted by your QA group, who spent months spelunking into every crevice of the app. You even had a beta test period with real live users, who dutifully filed bugs and put the application through its paces before finally signing off on the thing.


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2007/09/everything-is-fast-for-small-n.html

I’m happy to say that I’ve always had an eye for optimization.

However, we bought a 3rd party weather app a few years ago. Tested it out for a few weeks internally… and launched it. Literally 5 minutes later the webserver went down under the load.

It never occurred to me that a vendor would sell software that would fall over so easily.

Like Bill Cook said at INFORMS this year, the speed improvement in linear programming from 1996-2004 is a factor of about 5.3 million (3600 due to algorithms and 1300 due to computing).

The improvement from better algorithms was almost three times the gain from raw computing power!

You can bump into a similar problem with SQL servers.

Your app runs great on test data, even on medium-sized data, but when confronted with a large number of rows, the database query optimizer decides to change its strategy, and suddenly you’re wondering why a query that runs near-instantly on one database takes half a minute on another.

Always fun to discover after release…
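
A quick way to catch that before release (rough sketch in Python with sqlite3; the table, columns, and sizes here are invented for illustration) is to time the same query against progressively larger synthetic tables and look at the query plan:

```python
# Illustrative only: time one query at several table sizes and print the plan.
import sqlite3
import time

def time_query(row_count):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (id INTEGER, ssn TEXT)")
    con.executemany(
        "INSERT INTO users VALUES (?, ?)",
        ((i, f"{i:09d}") for i in range(row_count)),
    )
    # Adding con.execute("CREATE INDEX idx_ssn ON users(ssn)") here would
    # change the plan from a full scan to an index search.
    start = time.perf_counter()
    con.execute("SELECT * FROM users WHERE ssn = ?", ("000000042",)).fetchall()
    elapsed = time.perf_counter() - start
    # EXPLAIN QUERY PLAN shows whether the engine scans or uses an index.
    plan = con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM users WHERE ssn = ?", ("000000042",)
    ).fetchall()
    con.close()
    return elapsed, plan

for n in (1_000, 100_000, 1_000_000):
    elapsed, plan = time_query(n)
    print(f"n={n:>9}: {elapsed * 1000:8.2f} ms  plan={plan[-1][-1]}")
```

On a real server you’d do the same with that engine’s EXPLAIN/ANALYZE equivalent against production-sized data, not a two-week-old test set.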

No worries, Jeff. I think this time around you have many more readers anyway, since you’ve grown :slight_smile:

By the way – Stuart: by indexing, don’t your searches (and inserts) become O(log n) rather than O(1)?

Greg

On the other hand, if you know your n will be small, because you control the creation of the data structure completely, there is not much use in implementing super-duper optimization schemes (especially when they involve caching).
I have seen perfectly simple code, with n = 100, transformed into a brittle mess because of unnecessary cleverness.

But I have also been guilty of not using a suitable algo when n was growing beyond expectations.

Jeff,

Quicksort’s worst case is O(n^2), not O(n logn). However, the expected average is O(n logn) and practice shows that the worst case is almost never reached when using Quicksort (and it can be made to disappear, but then we’re no longer talking about Quicksort).
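
For anyone who wants to see that worst case bite, here’s a rough sketch (naive first-element-pivot quicksort, not any production implementation) that counts the work done on random versus already-sorted input:

```python
import random
import sys

def quicksort(items, counter):
    # Naive quicksort, first element as pivot; counter[0] tracks how many
    # elements get partitioned in total (proportional to comparisons).
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    counter[0] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

sys.setrecursionlimit(10_000)   # sorted input makes the recursion n deep

n = 2_000
for label, data in (("random", random.sample(range(n), n)),
                    ("already sorted", list(range(n)))):
    counter = [0]
    assert quicksort(data, counter) == sorted(data)
    print(f"{label:>15}: {counter[0]:>9} elements partitioned")
# Random input does on the order of n log n work; already-sorted input does
# n*(n-1)/2 = 1,999,000. A random or median-of-three pivot choice makes
# sorted input behave like random input again.
```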

It’s difficult, though, to draw the line between necessary and unnecessary optimisation. In these cases I always remember Knuth’s words, “Premature optimization is the root of all evil (or at least most of it) in programming”. Maybe I just use them to excuse my laziness? :slight_smile:

If you’re treating SSNs as character strings for sort/search, you can create fake sensitive data that cannot be mistaken for the real thing by using letters A through J instead of digits, e.g. SSN = ABC-DE-FGHI. Those should perform identically to real SSNs.

If you’re treating SSNs as numerics, you’ve got bigger problems.
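
If it helps, here’s a tiny sketch of that digit-to-letter trick (Python, purely illustrative; the sample SSN is made up):

```python
# Replace each SSN digit 0-9 with the letters A-J; hyphens pass through.
DIGIT_TO_LETTER = str.maketrans("0123456789", "ABCDEFGHIJ")

def fake_ssn(real_ssn: str) -> str:
    """Turn e.g. '123-45-6789' into 'BCD-EF-GHIJ'."""
    return real_ssn.translate(DIGIT_TO_LETTER)

print(fake_ssn("123-45-6789"))  # BCD-EF-GHIJ
```

Because ‘0’–‘9’ and ‘A’–‘J’ are both contiguous ascending runs in ASCII, the fake strings sort and compare exactly like the originals.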

I couldn’t agree more with the post. A colleague of mine recently implemented a tag cloud, using a data set of 6500 entries for testing. The tricky part: the whole building and DOM injection of the cloud was done with JavaScript, which holds the array containing the test data in memory…

Performance was terrible: ~10 sec page load time on localhost. But this was considered acceptable, as long as it stayed under 10 sec. Now, one may think 6500 data entries sounds pretty good. Well, it isn’t. Real live data would have circa 30,000 entries, and I can only imagine what the performance would be.

The story ends with a nice shot of me rewriting the whole damn thing from scratch to use AJAX and lazy load the data. Performance (XHR loading and injecting) is held under 1 second, even under 400 ms as measured by Firebug.

The moral of the story? The algorithm is always fast with a small n. :slight_smile:

I always thought bubble sort is O(n) in the best case. When the data is already sorted there is no need to bubble items upwards, so the loop ends after one pass. Selection sort, by contrast, is always O(n^2): each pass the biggest/smallest remaining element is determined and put in the right position, and n passes of O(n) work each make O(n^2) altogether.
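
A quick sketch of that best case (textbook bubble sort with the usual early-exit flag, not tied to any particular library):

```python
def bubble_sort(items):
    # Bubble sort that stops as soon as a full pass makes no swaps.
    items = list(items)
    passes = 0
    while True:
        passes += 1
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:          # nothing moved: the list is sorted
            return items, passes

_, passes = bubble_sort(list(range(1000)))
print(passes)  # 1 pass over already-sorted input -> the O(n) best case
```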

As others have said, it’s not simply a matter of knowing your big O. You also have to know how your algorithm behaves for all inputs, and what inputs it will be getting in operation.

Of course if you don’t know big O, then you’ll never get that far.

As KristofU says, there is a flip side to this.

I’ve seen simple code that does a linear search looking for a value (e.g. simple for-loop through an array) being replaced with considerably more complicated dictionary-based approaches.

A good thing when n is large, but rather pointless when n is ten.
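
Rough illustration of the point (Python, with made-up numbers): at n = 10 a linear scan is already effectively free, so the dictionary buys nothing but extra moving parts:

```python
import timeit

values = list(range(10))              # the "array" with n = 10
lookup = {v: True for v in values}    # the clever dictionary replacement

# At this size both finish in a fraction of a microsecond per lookup, so any
# win from the dict is negligible next to the added complexity.
print(timeit.timeit(lambda: 7 in values, number=1_000_000))  # linear scan
print(timeit.timeit(lambda: 7 in lookup, number=1_000_000))  # hash lookup
```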

I find it incredible (literally, I just can’t bring myself to believe it) that there are people WORKING PROFESSIONALLY as programmers without knowing this extremely basic, high-school-level fact. Surely every programmer in the world knows that there are different kinds of sorts with different big-O performance characteristics.

[run-on-sentence]The fact that people are deploying actual products to actual users, without thinking even for a minute about how their choice of algorithm and data structures will perform in large but fully EXPECTED loads is a very sad reflection on the state of the software engineering profession.[/run-on-sentence]
Why is it that people are deploying (or even sending to QA) solutions that they KNOW won’t perform well in real loads? How come no one did an even basic stress test of the application? Does anyone ever THINK anymore, or do they just copy and paste javascript sorting code they found on some website?

Data structures and algorithms are the ABC of software engineering, of software programming. Thinking a bit ahead is the ABC of any engineering or construction practice. Imagine if the first floor ceiling could easily carry itself, but would collapse if more than 2 people stand on it.

On a different note, the tables you presented are probably easier to grok as graphs. You might even want to show one graph in a linear scale, and one graph in logarithmic scale.
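
Something like this would do it (throwaway matplotlib sketch; the particular growth curves and range are my own picks, not the post’s exact table):

```python
# Plot the usual growth curves once on a linear axis and once on a log axis.
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(2, 1001)
curves = {
    "O(log n)": np.log2(n),
    "O(n)": n.astype(float),
    "O(n log n)": n * np.log2(n),
    "O(n^2)": n.astype(float) ** 2,
}

fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(10, 4))
for label, y in curves.items():
    linear_ax.plot(n, y, label=label)
    log_ax.plot(n, y, label=label)
log_ax.set_yscale("log")
linear_ax.set_title("linear scale")
log_ax.set_title("log scale")
linear_ax.legend()
plt.show()
```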

“Heck, the Movable Type installation that powers this blog is a perfect example of failing to test with large datasets. Why does it take 15-20 seconds to enter a comment?”

Because Movable Type regenerates thousands of static HTML pages with every minor change. It was pretty obvious that this was a bad way to do things back when we first looked at using it 7 years ago.

“I’m happy to say that I’ve always had an eye for optimization. However, we bought a 3rd party weather app a few years ago. Tested it out for a few weeks internally… and launched it. Literally 5 minutes later the webserver went down under the load. It never occurred to me that a vendor would sell software that would fall over so easily.”

Well, if you tested it for a few weeks without any problems and then it falls over five minutes after launch, then your own tests were not, ehh, very good either, I guess.

Hey! I thought Don Knuth wrote almost everything I wanted to know about the subject. Your article is very nice, although I recommend the book “Sorting and Searching Algorithms” for all those curious readers that happen to be out there in the cold, using whatever sort routine is provided at their office.
A pretty old book, for sure. But it covers the problem IN DEPTH.

I agree that testing is necessary. Big O notation only works if you’re comparing two algorithms with different orders of complexity.

Look at how textpattern works versus MT.

When Jeff posts a new blog entry, MT creates a new html file and updates the section indexes. This is pretty much O(n). Adding new sections increases runtime linearly.

When someone posts a new blog entry on textpattern, textpattern inserts a new row in a database and inserts new rows in the categories table and sections table. This is also pretty much O(n).

The problem is that inserting a row in a database table is not the same as regenerating an entire html page. Big O doesn’t address that. O(1000n) is the same as O(n) in Big O notation.
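
A toy sketch of that constant-factor gap (Python; the “sections” and the ~10 KB page size are invented numbers, not MT’s or textpattern’s real behaviour): both functions are O(n) in the number of sections, but the hidden constant differs by orders of magnitude:

```python
import timeit

sections = [f"section-{i}" for i in range(50)]   # pretend 50 sections/indexes

def db_inserts(sections):
    # textpattern-style: one small row per section -> tiny constant per item
    return [("new-post", s) for s in sections]

def regenerate_pages(sections):
    # MT-style: rebuild a whole ~10 KB HTML page per section -> huge constant
    return ["<html>" + ("x" * 10_000) + s + "</html>" for s in sections]

print(timeit.timeit(lambda: db_inserts(sections), number=1_000))
print(timeit.timeit(lambda: regenerate_pages(sections), number=1_000))
# Same big O, wildly different wall-clock time.
```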

I disagree that the pathological O(n^2) case of quicksort rarely happens. With generic implementations it will happen when you try to sort data that is already sorted - and that is a pretty likely occurrence.

So just because it has average n lg n performance, you don’t get to say Quicksort is O(n lg n). That is making a very specific claim, and it is false. (Of course the article didn’t say that!)

I think it should be noted that the TRS-80 only overtakes the Alpha at an execution time of close to an hour - a point where the advantage is likely to be irrelevant (unless you are talking about some special application doing serious calculation that’s expected to take more than a second).