Building Servers for Fun and Prof... OK, Maybe Just for Fun

In 1998 I briefly worked for FiringSquad, a gaming website founded by Doom and Quake champion Thresh aka Dennis Fong and his brother Kyle. I can trace my long-standing interest in chairs and keyboards to some of the early, groundbreaking articles they wrote. Dennis and Kyle were great guys to work with, and we'd occasionally chat on the phone about geeky hardware hotrodding stuff, like the one time they got so embroiled in PC build one-upmanship that they were actually building rack-mount PCs … for their home.

This is a companion discussion topic for the original blog entry at:

I take it if you are building servers to have cohosted, you are working on a new project? What are you working on now?

I’m not sure the AWS numbers you are quoting match up with the four servers you are building. Using the AWS Web Application “common customer sample”, it quotes 6 servers, 4 storage volumes, IPs, bandwidth, and load balancing.

Backing out everything but the 2 Web and 2 Database servers drops the bill to $670 / month.

I’ve been building my own home PCs since I was in elementary school, and I’ve built servers for work before, so I almost entirely agree with you. The prices of cloud hosting seem outrageous (even plain cloud storage: Google Drive, which is the cheapest option, costs about 6x per year what just buying drives does, comparing the 1TB option).

However, while I’ve always been able to find the time for maintenance, and always had fun doing the labour, I’ve always ended up with issues with network bandwidth. In the case of data storage, there is also an argument for better accessibility, since those services usually come with mobile access, OS/browser plugins, all sorts of web APIs, etc.

Anyway, it’s really hard to evaluate options sometimes, but your point that building locally isn’t impossible is spot on. Advice I’ve been given is to run servers locally for the everyday workload, but make sure you have a scalable cloud backup plan in case usage jumps through the roof or disaster strikes, and pay cloud prices only for those cases.

Well, the calculator shows $1,413.70 / month for me when I select the “Web Application” common customer sample on the right hand side. Screenshots:

Oh, I see – you mean each server has two instances. So it’s not 3 EC2 servers, it is 3 x 2 = 6 EC2 servers. Fair enough, but the financial math would work even for six of them, although it’d go from just “massive overkill” to “gratuitous massive overkill with extreme prejudice” from a performance perspective.

I recently built a single server for our company and we found that the Mac Mini colo services offered the best price/performance for a server or two.

Compared to hosting with AWS, over several years it’s cheaper to buy a Mac Mini and colo it somewhere, with the Mac Mini simultaneously offering better performance. (Note: we’re upgrading the Mini to 16GB ram and SSDs with 3rd-party gear)

Compared with colo’ing a 1U server… well, actually most companies aren’t interested in selling 1U of colo space. I found most companies want to lease you at least a quarter of a cabinet. Obviously, a build-your-own 1U server is going to offer more compute power, but finding hosting for it may be difficult. A quad-core 2.2 GHz i7, 16GB RAM, and a 256GB SSD is actually overkill for a lot of server needs.

I think the economics work because the Mac Mini colos basically rent colo space by the cabinet, and can fit 96+ Minis in a single cabinet, thus driving down the cost for end customers.
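The density argument above is easy to sanity-check with a quick sketch. The cabinet lease figure below is a purely illustrative assumption (not a quote from any actual colo provider); the 96-Minis-per-cabinet density is the figure from the comment:

```python
# Rough sketch of the per-customer economics of Mac Mini colo.
# cabinet_lease is an assumed figure for illustration only.
cabinet_lease = 1000.00   # assumed monthly lease for a full cabinet, USD
minis_per_cabinet = 96    # density figure mentioned above

cost_per_mini = cabinet_lease / minis_per_cabinet
print(f"space cost per Mini: ${cost_per_mini:.2f}/month")
# → space cost per Mini: $10.42/month
```

Even before power and bandwidth, the per-unit rack cost ends up tiny, which is presumably how these providers can undercut quarter-cabinet leases.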

Just a thought that seems to be working for us.

I agree. Even after you throw in a rack, cabling, a switch, a backup power supply, and assorted paraphernalia, and include your monthly electricity bill, I’m sure you will still be ahead, and especially so if you consider it on relative performance. I run my own servers, and every time I look at the relative value proposition of switching to EC2 I can’t justify it on cost alone.

That said, the time spent building and maintaining them along with an inability to easily and quickly scale or to deal with server outages etc. means I probably wouldn’t make the same decision again. But it was a lot of fun.

To be clear,

  1. I’m sure the economics start tipping in the favor of standard rackmount servers fairly quickly once you’re talking about more than a few servers, or if raw CPU performance is your limiting factor.

  2. Apple only officially supports 8GB of RAM on the Minis; third-party sites sell 16GB upgrade kits that have been pretty extensively validated. If that makes you nervous when talking about server hardware, I don’t blame you because it should, but according to our own experiences and those of others it’s not an issue.

Yes, BYO puts a lot of labor and risk-management on your shoulders in exchange for the reduced ongoing cost.

OTOH, this past year, I’ve been busy migrating an app to The Cloud, so it no longer shares a non-redundant 8 Mbps connection with the rest of The Office. We get better bandwidth, network latency, system software (2012 is long past Fedora 9’s sell-by date), MTTR for hardware problems, and scalability (up and/or out is trivial vs. having someone physically plugging things locally). On-box performance is similar because our aging server fleet was already 1 ECU and the memory footprint is laughably small.

Even with some waste due to cutting some corners in the rush to get it out there, our bill is around $250/month for 3-4 servers and the rest of the AWS cloud they live on.

The other question to factor into your cloud calculation is whether you reserve any of those resources. That allows you to convert part of the per-hour usage charge into up-front payment, which we lean on heavily.
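The reserve-or-not question comes down to a simple break-even calculation. Both hourly rates and the up-front fee below are illustrative assumptions, not current AWS pricing:

```python
# Hedged sketch: when does an up-front reservation beat pure on-demand?
# All three figures are assumptions for illustration, not real AWS rates.
on_demand = 0.32          # assumed on-demand $/hour
reserved_hourly = 0.14    # assumed reserved $/hour
upfront = 3100.00         # assumed up-front reservation fee, USD

# Hours of usage at which the two pricing models cost the same.
breakeven_hours = upfront / (on_demand - reserved_hourly)
print(f"break-even after {breakeven_hours:.0f} hours "
      f"(~{breakeven_hours / 24 / 365:.1f} years of 24/7 use)")
# → break-even after 17222 hours (~2.0 years of 24/7 use)
```

If the machine runs 24/7 for the full reservation term, the reservation wins; for bursty or short-lived workloads, on-demand can still be cheaper.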

Forgot to mention. Our DB also gained from the transition because now it’s a multi-AZ RDS instance. There would have been a lot of manual fiddling and loading to recover the DB if that server went down in flames when it was local.

Also a clarification, the local servers were 1 ECU each, not total.

You are not just comparing apples to oranges; with that default aws setup, you are comparing apples to donkeys.

The correct comparison is your server vs. a single EC2 High-Memory Double Extra Large instance with a 3-year heavy-utilization reservation. This instance costs $3100 upfront plus $0.14/hour. The total 3-year cost for this server on AWS would be 3100 + (0.14 × 24 × 365 × 3) = $6,779.20, or about $188.31 per month.
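The 3-year cost math above checks out; here it is spelled out (using the prices quoted in the comment, not current AWS rates):

```python
# Verify the 3-year reserved-instance cost from the comment above.
# Prices are as quoted there, not current AWS pricing.
upfront = 3100.00        # 3-year heavy-utilization reservation, up front
hourly = 0.14            # per-hour rate on the reserved instance
hours = 24 * 365 * 3     # three years of continuous operation

total = upfront + hourly * hours
monthly = total / 36

print(f"total: ${total:.2f}, monthly: ${monthly:.2f}")
# → total: $6779.20, monthly: $188.31
```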

Sure, it’s more expensive. But AWS provides an insane amount of value on top of the server, like instantly being able to provision additional capacity. I wouldn’t be at all surprised if, on a fully-loaded cost basis, it is extremely competitive with building your own server. Heck, the employee salary expense of building your own server will easily drive its cost well beyond the $3100 up-front Amazon fee.

I love building hardware too (I’ve never had a computer I didn’t build, except for laptops). But my mind boggles at AWS’s value proposition.

First off, Jeff thanks for the link :slight_smile:

Second, the site has grown from a micro instance a year ago, to a small instance and now requires a medium instance just for the main site.

Still looking to colo a machine in the near future. The big draws are much higher storage I/O and significantly more memory available. The other main factor is that the EC2 compute units get destroyed in raw performance.

Just by way of comparison, the AMD Opteron 6128 is a low cost (~$100) server CPU that has 8 physical cores. An Amazon EC2 micro instance is about as powerful as a single Opteron 6128 core under ESXi.

The one thing that you do get with AWS is a suite of tools that makes management easy. Also, scaling is made much easier with AWS and you get things such as the ability to quickly add components such as load balancers to the system.

Still, if you need things like SSD performance, high speed CPUs and lots of memory, it is usually less expensive to build a 1U similar to yours. A really good application example is a Minecraft server where you need all three. AWS cannot cost-effectively handle that workload.


Please include your hourly rate and the number of hours you spent researching, building, installing the OS, etc. Please also include an estimate for the monthly hours you’ll spend on maintenance and updates.

Without this data there is no comparison to the cloud services.

OK Sam Thomas, but only if you factor in the collective learning value of this article to the world, in hours, by each person’s hourly rate. :slight_smile:

Backblaze designed and built their own servers that are specialized for their business model. Everything except the case is commodity.

This allows them to offer a service at a price point that would not be anywhere near possible with AWS or other enterprisy solutions.

Of course, enterprisy solutions would not give you the option of asking your friends and family to mail you hard drives so you can stay in business:

(No financial interest. I am a customer, Backblaze fits well in my multi-tiered backup strategy.)


Nice dodge, but I’m being serious. I work for a small software company that does its own hosting, and server maintenance is killing us. I don’t think your comparison is useful without some estimation of these costs. It’s easy to assume these things are free while burdening your developers with support tasks and wondering why all the software projects are late.

$1500 a month? That’s a lot.

You can rent a Linode from $20 to $150, I didn’t think Amazon was that much more expensive!

I want my servers up 100% of the time on a fast link; I’m not going to risk that by building my own kit, even though I do enjoy it. I’ve been running a VM, now a cloud VM, for 6 years and have scaled it up over time as the system became popular. It currently serves upwards of 13,000 unique users per day with over 40K transactions globally. And still, it costs only $300/year. PER YEAR. I’d never pay as much as what is being discussed here. On the other hand, it’s all running in Java. Had I used PHP, Ruby, or some other slow junk, then I might need that $500/month cloud VM.

I love this article! For my hobby/entrepreneurial websites, I self-host at a local colocation facility. It has been a fun learning experience for me to build enterprise apps without big iron. I don’t like having the upfront cost of the server and network hardware, but I do know I’m spending less per year than I would if I went to the cloud. My colo was nice enough to give me 4U at a flat monthly rate, and I don’t have to worry about blowing through any bandwidth limits and having costs go haywire if I have a spike in visitors. Having that dependable monthly expense enables me to pay for it out of my mad money and not anger the CFO (my wife :wink:)

I’m surprised the cost of energy hasn’t been mentioned yet. I built datacenter monitoring tools for a few years and I remember energy working out to 2/3 of the total cost of ownership.
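For a self-hosted box, the raw electricity math is quick to sketch. The wattage and electricity rate below are assumed figures for illustration; in a real datacenter, cooling and power-distribution overhead (a PUE well above 1) multiplies the raw draw, which is part of how energy reaches that 2/3-of-TCO figure:

```python
# Back-of-the-envelope energy cost for one server, with assumed figures:
# a 1U box drawing 150 W continuously, at $0.10 per kWh.
watts = 150
price_per_kwh = 0.10
hours_per_month = 24 * 30

kwh_per_month = watts * hours_per_month / 1000
monthly_energy = kwh_per_month * price_per_kwh
print(f"{kwh_per_month:.0f} kWh/month ≈ ${monthly_energy:.2f}/month")
# → 108 kWh/month ≈ $10.80/month
```

Per box that looks cheap, but multiplied across a fleet, plus cooling at datacenter scale, it dominates.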