The Cloud is Just Someone Else's Computer

What was your reasoning behind the Intel processor and not a competitor like AMD, or even an ARM-based chip?

When I see things like “The cloud is just someone else’s computer” I cringe a little bit, because it isn’t. It’s someone else’s computer run by a very good operations team, with services run by thousands of developers learning edge cases, and building an ecosystem of distributed applications that handle tons of automatic scaling.

Pretty powerful stuff… if you can pay for it and don’t mind sharecropping off someone else’s land.

The risk of the cloud is the One Vendor To Rule Them All. It’s just a new version of commercial software, but an even more evil one, because these new commercial vendors embrace and extend open source projects. I’m sure Redis and MongoDB just love AWS right now, with ElastiCache and DocumentDB. This is the thing that drives me nuts about the cloud. It isn’t that you’re somewhat at risk of AWS “crushing you” by cranking up costs randomly. It’s that AWS might be killing software freedom. So your ability to even colocate using open source products is in doubt, because Amazon may be hurting the open source business ecosystem’s ability to thrive.

This is why I’d like to see fewer flippant comments about the cloud being simply “VMs on steroids”, because it is not. Cloud services on AWS, Azure, or GCP are very easy to use, powerful, and, depending on the use case, generally aren’t much more expensive. But your approach of colocating a server just has so much less institutional risk. I’d really like to see it thrive as an option, because I want the option in the future to negotiate support contracts with places like Redis, Elasticsearch, or MongoDB, instead of all my money going to a massive company like Amazon or Google.

1 Like

This is stated in the first lines of the blog post: 1GB RAM, fast dual core CPU, solid state drive with 20GB+. That’s an absolute minimum, though. Depending on the size of the community you’ll need substantially more. Database size is a significant factor; ideally the whole DB should fit in RAM, or at least the indexes plus a big chunk of cache, so if you have a 16GB DB…

It is fair to note that if you only need a $20/month VPS – this gets you 4GB RAM, 2 vCPUs, 80GB disk at current pricing – it’d take 50 months (over 4 years!) to recoup that initial $1000 investment in the hardware.
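That break-even arithmetic generalizes into a one-liner; here is a minimal sketch, using the illustrative figures from this thread ($1000 of hardware, a $20/month VPS):

```python
def breakeven_months(hardware_cost, colo_monthly, vps_monthly):
    """First whole month where owning hardware beats renting a VPS.

    Only meaningful when the monthly colo fee is below the VPS price.
    """
    monthly_savings = vps_monthly - colo_monthly
    if monthly_savings <= 0:
        raise ValueError("renting stays cheaper forever at these prices")
    # ceiling division: first month where cumulative savings cover the hardware
    return -(-hardware_cost // monthly_savings)

# $1000 of hardware vs a $20/month VPS (ignoring the colo fee, as above):
# breakeven_months(1000, 0, 20) -> 50 months
```

Note that once you subtract a $29/month colo fee from a $20/month VPS, the savings go negative and the hardware never pays for itself, which is exactly the point being made here.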

It would be very cool to have Ryzen in this NUC form factor! Does anyone offer it to your knowledge?

Obviously I agree, though I completely understand why people use cloud services, because of the incredible flexibility (literally push a button on your screen and it happens), plus cloud prices have declined somewhat over time. What people tend to overestimate is the difficulty of colocating, and the risk of hardware failure. Today’s hardware is extremely reliable, largely due to advances in power supplies and SSDs. And 90% of reliability concerns can be mitigated by simply dumping more super cheap, super reliable hardware in the rack – which gives you even more capacity per dollar while reducing your overall risk.
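The “just add more cheap boxes” point can be made concrete with a little probability: assuming node failures are independent, a fleet of n nodes, each up with probability p, has at least one survivor with probability 1 - (1 - p)^n. A quick sketch (the 99% figure is illustrative, not from the post):

```python
def fleet_availability(node_availability: float, nodes: int) -> float:
    """Probability that at least one of `nodes` independent machines is up."""
    return 1 - (1 - node_availability) ** nodes

# A mediocre 99%-available node becomes a very reliable trio:
# fleet_availability(0.99, 1) ≈ 0.99     (roughly 3.7 days of downtime a year)
# fleet_availability(0.99, 3) ≈ 0.999999 (roughly half a minute a year)
```

The independence assumption is the catch, of course: a shared rack, switch, or power feed fails all three at once.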

I think it is dangerous for colocation options to slowly die off over time, so that all server hardware eventually belongs to “the big five clouds” :frowning:

1 Like

In 2011 I was in charge of testing out new Dell R820 servers that cost about $35k each. One of the proof of concepts we ran, at my suggestion, was taking our 4000 QPS postgres database and forcing it to run in 640KB of RAM. The machines could do it, as long as you were able to first sync the entire database into tmpfs. I believe the database was around 180GB at the time, and this was a top 500 site on Alexa back then.

The main thing I learned is that QPS has a lot more to do with filesystem bandwidth than anything else. The second thing is that while my artificial constraints on postgres were cute, they increased the number of slow queries by at least an order of magnitude.

The Partaker B18 would absolutely run circles around those Dell R820s for 95% of all use cases. I have a system that’s nearly identical to those old R820s (40 Xeon HT cores, 192GB of RAM, 8x10gbit NIC, SSDs, etc.) and I really only use it for workloads that are either embarrassingly parallel, or that would require more than about 15 minutes to run on my desktop but are too large to practically ship out to external servers like the cloud. My desktop is vastly weaker in core count (2) than the Partaker B18, although my single threaded speeds are probably faster. All of this is to say, the “cloud” isn’t a one-size-fits-all prospect. Certainly newer hardware beats older hardware, especially since 7th generation Intel processors. And I just found out I can colocate for about 33% less than codinghorror, so now I am seriously considering just putting a few of these B18s together and ditching AWS altogether.

1 Like

That’s $2,044 for three years of hosting

That’s a typo, right? Should be $1,044?

It is an all in cost.

$1000 for the computer plus $29 x 12 x 3 = $1,044.

$1,044 + $1,000 = $2,044

Ahh, I was only looking at raw hosting costs e.g. $29 × 12 × 3 = $1,044. Totally makes sense now, thanks!

Forgive me if this is asked and answered, but what software are you running to handle failover? You’ve got your 3 colocated nodes… are you using an ELB? Nginx on one of the 3 nodes? What does that part look like? If one node dies… what happens? Are you just running 3 identical nodes with db + application on each?

I assume you are not kubering all the netes with these puppies.

1 Like

Automatic backups are sent to Amazon S3 regularly, so it is more of a “hot spare” kind of situation. Recovery means downloading the S3 backup and restoring it to the hot spare. Probably an hour of downtime.
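A recovery like that is scriptable; here is a minimal sketch, assuming boto3 and a hypothetical `discourse-backups` bucket name (the actual restore step inside Discourse is left out). Discourse backup filenames embed a timestamp, so for same-prefix names picking the newest object can be as simple as a lexicographic max:

```python
def latest_backup_key(keys):
    """Pick the newest backup from a list of same-prefix, timestamped names."""
    if not keys:
        raise ValueError("no backups found in the bucket")
    return max(keys)

def fetch_latest_backup(bucket="discourse-backups", dest="/tmp/latest-backup.tar.gz"):
    """Download the most recent backup object from S3 to the hot spare."""
    import boto3  # deferred import; only this function needs AWS access
    s3 = boto3.client("s3")
    # list_objects_v2 returns up to 1000 keys; paginate if you keep more backups
    objects = s3.list_objects_v2(Bucket=bucket).get("Contents", [])
    key = latest_backup_key([o["Key"] for o in objects])
    s3.download_file(bucket, key, dest)
    return key
```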

Maybe the cost of S3 backup should be included too :wink:

1 Like

It is tiny, on the order of $1 per month, if that.

How about a P2P cloud? We’re stable with Subutai (and plan to create a Blueprint to deploy Discourse to P2P cloud environments soon).

While doing these tests on various DO droplets and comparing with a local NUC, I managed to boot up a 4 vCPU CPU-Optimized droplet that was slower than a 4 vCPU Standard (starter) droplet:

sysbench cpu --cpu-max-prime=20000 run
3801 vs 3947

sysbench cpu --cpu-max-prime=40000 --num-threads=4 run
5671 vs 6053

Xeon® CPU E5-2697A v4 @ 2.60GHz (Q1’16) vs Xeon® Gold 6140 CPU @ 2.30GHz (Q3’17)

$80/mo vs $40/mo

The result was very surprising, especially given that the CPU-Optimized droplet is twice as expensive.

After some digging and inquiring with DO tech support, I found out that with CPU-Optimized droplets you’re not actually paying for better “optimized” CPUs, but for having the vCPU threads dedicated to your droplet instead of shared with other droplets on the same server. This was confirmed by DO support.

So… when comparing the cost of a scooter computer with virtual droplets from DigitalOcean, one should use the CPU-Optimized prices.
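Comparisons like this are easy to automate; a sketch that shells out to sysbench and parses its score (assumes sysbench 1.0+, whose CPU test prints an “events per second” line; older builds use `--num-threads` as in the commands above):

```python
import re
import subprocess

def parse_events_per_sec(output: str) -> float:
    """Extract the score from sysbench 1.0+ CPU test output."""
    match = re.search(r"events per second:\s*([\d.]+)", output)
    if not match:
        raise ValueError("no 'events per second' line found")
    return float(match.group(1))

def cpu_benchmark(max_prime: int = 20000, threads: int = 1) -> float:
    """Run the sysbench CPU test and return its events-per-second score."""
    result = subprocess.run(
        ["sysbench", "cpu", f"--cpu-max-prime={max_prime}",
         f"--threads={threads}", "run"],
        capture_output=True, text=True, check=True,
    )
    return parse_events_per_sec(result.stdout)
```

Run the same call on each droplet and on the NUC, and the numbers line up directly with the 3801 vs 3947 style comparison above.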

1 Like

I had it in my head that you were using these as dev machines that you remote into somehow, and I was wondering how you do the client end… but upon closer inspection you mean hosting server things, like people typically use the cloud for. That’d be cool though!

It does give me an idea, though: my NAS, which I hate running at home, but don’t move to the cloud because storage there is expensive and the data isn’t important.

Is there anything like EndOffice on the west coast?

1 Like

So I’m not sure if I’m missing something here, but wouldn’t it be $29 per mini PC to colocate?

So instead it’s:
$29 per month * 3 PCs * 12 months * 3 years = $3132

Then tack on the machine cost plus taxes/shipping/misc, coming out to $4132.

Not sure what the discount for multiple machines is…

You forgot the internet price: in my country, ADSL with 20Mb down / 1Mb up costs $10/month (and fibre infrastructure isn’t ready yet).

1 Like

I forgot to mention there is a reasonably(ish) priced aftermarket KVM-over-IP, the Lantronix Spider SLS200.


It is USB powered, which is nice, and everything runs in the browser with no software required, though there is a Java runtime dependency for the browser console.

I found these used on eBay for $200 - $300 and they work fine to retrofit, but it’s a big price jump over the base box. You still won’t have power control, but EndOffice (for example) provides a managed power rail as part of their package already.

To be honest, if you really need a remote admin backdoor, that implies your setup and needs are sufficiently complex that you should be looking at entry level “real” servers instead which tend to have this built in.

Thanks for the pointers Jeff. I recently deployed a scooter and have been very happy with the service at EndOffice and the scooter box itself.

Also, I recently found this box, which looks very interesting if you want to go containers all the way.

1 Like

I’ve been really happy with the ASRock DeskMini 310 as a powerful desktop replacement.

With an i7-9700 at 65W TDP, it’s an absolute beast compared to much noisier laptops (I’ve put a Noctua NH-L9i CPU cooler in there). Its volume is just under 2L, and I’d imagine it would be amazing as a colo server.

A beefier i9-9900 (the non-K version, also 65W TDP) would offer 4 more megs of L3 cache plus hyperthreading, compared to the 9700’s 12 MB of L3 and no HT.

1 Like

I totally agree with you. Most sysadmins today don’t care to study hardware, and they run in herds to the cloud without a proper cost analysis. That is where CFOs with an engineering background should step in and say: no, learn to do an actual cost projection, and then we’ll see.

Especially today, when hardware stays “modern” longer than ever, now that Moore’s curve is long gone.

Anyone who says you need the cloud for mundane tasks (almost everything you do today) should ask themselves why Google doesn’t sell its data centers to Amazon or Azure and contract cloud computing services back. (They must be dumb.)

1 Like