NO, too many things to think about! My head hurts so bad!
No offense Jeff, but this is exactly why I’m so happy that I’m not a Microsoft guy
Very interesting to see the comparison though, thanks!
I’m not sure if you also have considered the cost that scaling out brings to your software side.
It’s not easy to write a reliable piece of software that will scale so well that it will run on n machines without hassle.
Throwing a lot more processing power at a single installation, on the other hand, usually works without much pain on the application side.
And when that one monolithic machine dies due to any number of reasons, which one is better then?
Spend is not a noun. Did you mean “budget”?
Hey Jeff, don’t forget about the cooling costs too. It’s a hell of a lot more difficult to cool 83 servers than one
You left out the most expensive component.
Comparing 1 DL785 against 83 RS110s is not quite fair, I think, because you don't keep buying servers until your budget is spent, but only until the site runs smoothly. Any idea how many RS110s you would need to replace one DL785? It might be interesting to do the calculations again with that value.
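A back-of-the-envelope sketch of that comparison. All of the prices and the performance ratio below are made up for illustration (I don't have real benchmark numbers for either box):

```python
# Hypothetical numbers -- swap in real benchmarks and real quotes.
dl785_price = 100_000   # one big scale-up box (assumed)
rs110_price = 1_200     # one commodity 1U box (assumed)
perf_ratio = 12         # assume 1 DL785 ~ 12 RS110s of useful throughput

# How many RS110s to match one DL785, and what that hardware costs:
rs110_needed = perf_ratio
scale_out_hw_cost = rs110_needed * rs110_price
scale_up_hw_cost = dl785_price

print(f"{rs110_needed} RS110s ~ 1 DL785")
print(f"scale-out hardware: ${scale_out_hw_cost:,}")
print(f"scale-up hardware:  ${scale_up_hw_cost:,}")
```

With those assumed numbers the scale-out hardware is an order of magnitude cheaper, but the whole argument hangs on `perf_ratio`, which only a real benchmark can supply.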
(Though I assume the result will lean toward scaling out, which is more similar to Google's strategy.)
One thing that you didn’t touch on was redundancy. It seems mighty short-sighted to have one beast of a system that runs an entire database without any sort of backup system. Especially since this db machine IS such a beast, I doubt they would have one just sitting in the corner waiting to be utilized if the primary goes down.
I could see a compromise approach working well. Have 3-4 'semi-beast' machines run the db, so that if one goes down you still have the other 2-3 running the site. You may have slower performance, but you still have performance, and you don't have the large licensing/power/rack costs associated with many running machines.
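The degraded-but-alive math for that compromise is simple. A toy sketch, with the node count and per-node share assumed for illustration:

```python
# Toy capacity math for the 'semi-beast' compromise (numbers assumed).
nodes = 4
capacity_per_node = 0.25   # each node handles 25% of peak load

def surviving_capacity(failed: int) -> float:
    """Fraction of peak capacity left after `failed` nodes go down."""
    return (nodes - failed) * capacity_per_node

print(surviving_capacity(1))   # one node down -> 0.75 of peak capacity
```

Lose one of four and you keep 75% of capacity; lose the single monolith and you keep 0%.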
This is a very simplistic comparison. Too simplistic.
What about redundancy? Even a $100k server can crash! A couple of clustered machines would give you much better odds of staying up.
Computational power? Does the application's workload lend itself well to batch or clustered processing?
Datacenter fees? 7U isn't cheap, but it's a lot cheaper than 83U! Colocation fees per hertz are strongly weighted toward the 7U uberserver. Where does the price point cross over?
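A quick sketch of where that crossover might land. Every number here is an assumption (plug in your colo's actual rate card and your real hardware quotes):

```python
# All numbers hypothetical -- plug in your colo's actual rate card.
uberserver_u = 7
cluster_u = 83
rack_fee_per_u_month = 50      # $/U/month colocation (assumed)

monthly_delta = (cluster_u - uberserver_u) * rack_fee_per_u_month
hw_saving = 60_000             # assume the cluster is $60k cheaper up front

# Months until the extra colo fees eat the up-front hardware saving:
crossover_months = hw_saving / monthly_delta
print(f"extra colo cost: ${monthly_delta}/month")
print(f"crossover after ~{crossover_months:.0f} months")
```

With those assumed figures the big box starts winning on total cost somewhere around the 16-month mark, which is well inside a typical server's service life.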
That’s impressive food for thought. Thanks for another enjoyable and thought provoking read, Jeff.
I just want to 2nd what artsrc says … just because the RDBMS doesn’t scale-out, doesn’t mean that databases don’t. BigTable is an excellent example of this. I run HBase, the open source version of BigTable.
As my article explains, RDBMSs have abstracted away the storage of your data at the cost of much else: scalability, performance, and domain solvability (SQL or nothing – try doing PageRank in THAT).
To all the people advocating scaling out for redundancy: isn't the promise of cloud computing that your install/deployment is as simple as with a single server, but the stability and load handling are as robust as scaling out? I haven't looked much into cloud-hosted apps, so I don't know if that's the idea or not.
Another fun Microsoft license note: if you are a non-profit, the licenses come at roughly a 95% discount. So for a non-profit, scaling out would be cheap.
What about the “blade” type servers that were all the rage a while ago?
It sounds like you get all the benefits of “scaling out” while ostensibly reducing power/cooling costs.
You're paying a tax for proprietary hardware, for sure, but the economies of scale have to work better than a rack full of 1U servers.
I think the article would be more interesting if you specced them out with similar CPUs, RAM, etc. That way you could see how many U apart they are, along with power consumption and software cost.
Also, I'm assuming M$ charges per install of SQL. Oracle charges per CPU core for most processors.
Building and then maintaining 83 servers is a serious time drain. You also need all sorts of new systems in place to handle updates, maintenance, log checking in any reasonable way.
You seem to value your time quite low (you like to do all these menial tasks yourself, based on your statements on the SO podcast and this blog), but for many people that time would be a huge piece of the puzzle.
With 83 servers, you probably need to hire a full-time sysadmin, which should be factored into your figures.
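The sysadmin point changes the yearly math noticeably. A sketch with invented figures (salary and per-server power cost are assumptions, not quotes):

```python
# Hypothetical figures; the point is that admin overhead adds up fast.
sysadmin_salary = 80_000    # $/year, fully loaded (assumed)
servers = 83
power_per_server = 500      # $/year each (assumed)

scale_out_yearly = sysadmin_salary + servers * power_per_server
print(f"scale-out yearly overhead: ${scale_out_yearly:,}")
```

Even with a modest salary assumption, the recurring human cost dwarfs the power bill, and it never shows up on the hardware invoice.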
"scaling out is only frictionless when you use open source software"
AND when your time is free.
captcha: “BOSTON expound”
Your comparison relies on your 83 1U servers being equal in performance to the single piece of big iron, so of course the licensing/power/rackspace costs will be far higher.
In reality, filling the same 7U space with 7 1U servers is likely to provide quite a large saving in cost (especially on the initial hardware) for only a small increase in complexity. This would also provide redundancy, as already pointed out.
This particular example has more specific considerations than the more general “scale up/scale out” question. Here we’re dealing specifically with Microsoft SQL Servers. (I don’t think my SQL Server knowledge is that out-of-date, but please forgive me if it is.)
SQL Server doesn't "just cluster" in a way that lets you add another server to some magical configuration and have everything work fine. You have failover clustering, in which a second server is primed and ready for traffic if the first server fails. You also have federated databases, in which each server stores part of the data and knows how to access the rest when necessary.
The point is, a SQL Server cluster requires a lot of custom code, either on the application side or at least the SQL DDL side. You need to take the cost of writing and maintaining all of that code into account when considering this particular question.
(Looks like SQL Server 2008 also has "Peer-to-Peer Replication". I still don't think it's magic, but it may help.)
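To make the "custom code on the application side" point concrete, here's a minimal sketch (Python, purely illustrative, not SQL Server-specific; all names invented) of the routing logic a federated setup pushes into your application:

```python
# Minimal sketch of application-side federation: the app, not the
# database, decides which server holds a given row. Names are invented.
from zlib import crc32

SHARDS = ["db-server-1", "db-server-2", "db-server-3"]

def shard_for(customer_id: str) -> str:
    """Stable hash -> shard mapping; every query path must go through this."""
    return SHARDS[crc32(customer_id.encode()) % len(SHARDS)]

# Every data-access call now needs the routing step first:
print(shard_for("customer-42"))
```

Note that the mapping has to stay stable across deployments, and repartitioning (adding a shard) means migrating data: exactly the kind of code you write and maintain forever once you federate.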