Your point is funny, but from personal experience there's quite a small limit to how often you can just "add a server" and see a real improvement.
I recently upgraded our ERP server from a quad-core with 3 gigs of RAM and 3 disks to a dual quad-core with 32 gigs of RAM and 17 hard drives, all of them 50% faster, with separate physical RAID 10 arrays for the OS, applications, database, log files, and temporary database files.
The net result? A 5% increase in performance. Now I know this ERP system like the back of my hand (though I didn't write it), and I spent a long time making sure my hardware configuration exactly follows the vendor's so-called "best practices".
The problem is simply that the system uses a poorly designed and very leaky database abstraction, one that makes poor assumptions about its physical environment (you know, like one of those academic papers: "assume the network has infinite bandwidth and hard drive access is instantaneous").
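To make that concrete (a made-up sketch, not the vendor's actual code, and db.query is a hypothetical helper), a leaky data-access layer that treats round trips and disk seeks as free tends to look something like this:

    # Hypothetical sketch of a leaky data-access layer, not the actual vendor code.
    # It assumes round trips and disk seeks cost nothing, so the real bottleneck is
    # the sheer number of tiny queries, not the raw I/O throughput new hardware buys.
    def load_orders(db, order_ids):
        orders = []
        for order_id in order_ids:              # one round trip per order...
            order = db.query("SELECT * FROM orders WHERE id = ?", order_id)
            lines = []
            for line_id in order["line_ids"]:   # ...plus one more per line item
                lines.append(db.query("SELECT * FROM order_lines WHERE id = ?", line_id))
            order["lines"] = lines
            orders.append(order)
        return orders

A set-based version (one or two joined queries) would remove most of that latency, which is exactly the kind of fix faster hardware can't buy you.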
Bottom line: we paid 15,000 euros for a 5% performance increase. How many more times do you think any given company could afford to do that? Plus, even if I wanted to, there's a physical limit on how fast a server I can get.
The only way to fix the problem is to fix the abstraction, i.e. rewrite the shitty code.