Building Servers for Fun and Prof... OK, Maybe Just for Fun

Apologies in advance - I am that lowest of orders: an end-user and RentaServer renter.

I have been getting screwed by Hosts ever since the Web was invented. (literally)

Numerous SERIOUS marketing efforts have been started over the years and all have ended with crashed Sites.
To explain: I am a serious marketer and learned to drive traffic the hard way in the hard world with pay-in-advance ads.
The Web should have been a paradise for me as a small operator with minuscule costs compared to the "real" world.
Almost EVERY promotion I've run - and they are still expensive even out in the Cyberbog - has worked, resulting in Server crashes from even minor peak traffic.

We arenā€™t talking Markus Frind figures here, just a few thousand hits.

After all these years, I have never got a straight answer and never had a Server service stay up for one month without downtime.

I don't need PeerOne. I keep hearing of guys running Servers from home, like Plentyoffish.com did. All these years later, with traffic that would make ME a billionaire on 10% of it, he still runs everything as almost a one-man band on literally 1/300th of the number of Servers his competition uses.

I had high hopes for the "Cloud" with its lies of distributed loading, and from 1&1, who promoted it heavily, to the others, they all fall over, so where is this redundancy?
My three current "trials" have all fallen over in the last 3 months - all "cloud-based"…

  1. Run a few Forums.
  2. No high-demand music/flash/video downloads, mainly text.
  3. Have 5000 concurrent users.
  4. Handle spikes of 10,000 visitors per hour - NOT per second or minute, per hour.

plentyoffish.com was handling 100 times that with the colossal demands of a dating service system, on a home PC. Running Windows as the final insult! :-}
I don't even need big data pipes. No videos, no music.
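For perspective on how small that load actually is, here's a back-of-the-envelope conversion of the spike numbers above (the requests-per-visit figure is a hypothetical assumption, not a number from this thread):

```python
# Convert "10,000 visitors per hour" into requests per second.
visitors_per_hour = 10_000
requests_per_visit = 10  # assumed average page loads per visit

visitors_per_sec = visitors_per_hour / 3600
requests_per_sec = visitors_per_sec * requests_per_visit

print(f"{visitors_per_sec:.1f} visitors/sec")  # ~2.8 visitors/sec
print(f"{requests_per_sec:.0f} requests/sec")  # ~28 requests/sec
```

Even at ~28 requests/second of mostly text, a single modest box should barely notice - which is the point being made.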

With all the tech expertise I've seen on this Board here, someone must be able to tell me the secret.
Or, at least how Markus did it.
Why do others need 600 Servers and 500 staff and he needs a couple of renta-boxes and his girlfriend?

Thanks a lot for this blog. I bookmarked it last year knowing that this year I would be building a new server. Now I am ready to build it.

Would you still use the same specs? Or would you move to an Intel Xeon E5-2620 (6-core) platform?

My server will be mainly file storage, but I would like to leave the opportunity to expand into Virtualization in the future.

Here's what we built in 2013

  • Intel Xeon E3-1280 V2 Ivy Bridge 3.6 GHz / 4.0 GHz turbo quad-core ($640)
  • SuperMicro X9SCM-F-O mobo ($190)
  • 32 GB DDR3-1600 ($292)
  • SuperMicro SC111LT-330CB 1U rackmount chassis ($200)
  • Two Samsung 830 512GB SSD ($1080)
  • 1U Heatsink ($25)

$2,427

vs. what we're building in 2016

  • Intel i7-6700K Skylake 4.0 GHz / 4.2 GHz turbo quad-core ($370)
  • Supermicro X11SSZ-QF-O mobo ($230)
  • 64 GB DDR4-2133 ($680)
  • Supermicro CSE-111LT-330CB 1U rackmount chassis ($215)
  • Two Samsung 850 Pro 1TB SSD ($886)
  • 1U Heatsink ($20)

$2,401

About the same price, but twice as much memory, twice as much (and probably 50-100% faster) storage, and ~33% faster CPU.

Some load numbers:

  • 2015 Skylake build – 14w (!) at idle, 81w full CPU load
  • 2012 Ivy Bridge build – 31w at idle, 87w full CPU load
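Those idle numbers also translate into real money over a year in a rack. A quick sketch, assuming a hypothetical $0.12/kWh power rate and 24/7 uptime (rates vary widely by facility):

```python
# Yearly energy cost at idle for each build, under an assumed
# $0.12/kWh rate and 24/7 operation.
RATE_USD_PER_KWH = 0.12   # assumption - check your facility's rate
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    """Dollars per year to run a constant load of `watts`."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_USD_PER_KWH

for name, idle_w in [("2015 Skylake", 14), ("2012 Ivy Bridge", 31)]:
    print(f"{name}: ${yearly_cost(idle_w):.2f}/year at idle")
# 2015 Skylake: $14.72/year at idle
# 2012 Ivy Bridge: $32.59/year at idle
```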

No concerns about not using ECC memory in the new build?

I like racking servers and the price savings myself, but the reason AWS is killing it in the market is instant provisioning. Getting quotes back and forth with data center people was the most annoying part of the whole thing for me. The hardware folks are great at getting you hardware quickly, but getting it into a rack under a new contract was always a huge PITA with lots of waiting around.

New blog post related to the ECC issue going up today! Keep :eyeglasses: out for it.


Over the past few years, to leverage the instant-provisioning benefits that AWS provides, I have implemented VM environments with Salt or Puppet to empower the development teams to provision their own servers. As previous replies mention, it is far cheaper to co-lo than to use AWS - it's up to the imagination of your SysAdmin or DevOps guys to turn wish lists into reality.

@codinghorror: Thanks for the write-up but ESPECIALLY thanks for continuing to post newer builds long after the original article. This is very helpful. Thank you.


Been following this post for a few years now. And actually looking to build a server. Did you update the server config in the 2020s?


Proudly updated our servers to AMD Epyc, as they are pushing the boundaries of cores (way more!) and ECC for everyone, not just "server branded" stuff. The big wins were:

  • 2× amount of memory
  • 8× speed of storage (SATA SSD → M.2 PCI NVMe)
  • 4× number of cores

This is Zen 2, so it isn't necessarily a ton faster per-thread, but so many more cores, so much more bandwidth, and double the memory.


Curious if you saw any issues with non-ECC memory (following from To ECC or Not To ECC) in production?

Also, what's your take on the Ryzen 16C/32T chips – wouldn't they be faster for running Ruby and JS compared to EPYC?


Once you get a lot of servers with a lot of memory, the "insurance policy" of ECC starts to make a whole lot of sense. We've definitely had failed memory incidents, both with ECC (!) and without, but the without is a bit more common. At scale ECC is a must-have, but I still remain confused as to why the mass market of Dell and Apple and HP are mostly selling machines with regular plain old memory…

ā€œat scaleā€ means, to Dell and HP and Apple, the cost of dealing with RMAs for failed ram is not outweighed by the cost of adding ECC to every system they ship. :thinking:

I guess the logic is

  • servers tend to deal with "important" data
  • there tend to be lots of servers, therefore failure rates as low as 1% start to become painful
  • servers tend to carry heavier loads

so the extra ECC cost is worth it? I dunno. More data here:

[image: hardware component failure rates]

:point_up_2: think about this… CPU FAILURE IS ABOUT AS LIKELY AS RAM FAILURE! :exploding_head:

Latest data

TL;DR if you have more than 10 machines, I'd get ECC RAM. In the big picture all RAM should be ECC, but isn't… for… reasons…
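The "more than 10 machines" rule of thumb is easy to sanity-check. Assuming a hypothetical 1% annual per-machine RAM failure rate (illustrative, not a measured figure from the data above), the chance of seeing at least one failure grows fast with fleet size:

```python
# Probability of at least one RAM failure per year across a fleet,
# assuming an illustrative 1% annual per-machine failure rate.
p = 0.01  # assumed annual failure probability per machine

for n in (1, 10, 100, 1000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>4} machines: {at_least_one:.1%} chance of >=1 failure")
# ~1.0% at 1 machine, ~9.6% at 10, ~63.4% at 100, ~100% at 1000
```

At 10 machines you're already near a 1-in-10 chance per year, which squares with the "insurance policy" framing above.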
