Apologies in advance - I am that lowest of orders: an end-user and RentaServer renter.
I have been getting screwed by Hosts ever since the Web was invented. (literally)
Numerous SERIOUS marketing efforts have been started over the years and all have ended with crashed Sites.
To explain: I am a serious marketer and learned to drive traffic the hard way in the hard world with pay-in-advance ads.
The Web should have been a paradise for me as a small operator with minuscule costs compared to the "real" world.
Almost EVERY promotion I've run (and they are still expensive, even out in the Cyberbog) has worked, thus resulting in Server crashes from even minor peak traffic.
We arenāt talking Markus Frind figures here, just a few thousand hits.
After all these years, I have never got a straight answer and never had a Server service stay up for one month without downtime.
I don't need PeerOne. I keep hearing of guys running Servers from home, like Plentyoffish.com did, and all these years later, with traffic that would make ME a billionaire on 10% of it, he still runs everything as almost a one-man band on literally 1/300th of the number of Servers his competition uses.
I had high hopes for the "Cloud" with its lies of distributed loading, but from 1&1, who promoted it heavily, to others, they all fall over. So where is this redundancy?
My three current "trials" have all fallen over in the last 3 months, all "cloud-based"…
Run a few Forums.
No high-demand music/flash/video downloads, mainly text.
Have 5000 concurrent users.
Handle spikes of 10,000 visitors per hour (NOT per second or per minute; per hour).
plentyoffish.com was handling 100 times that with the colossal demands of a dating service system, on a home PC. Running Windows as the final insult! :-}
I don't even need big data pipes. No videos, no music.
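To put those peak numbers in perspective, here is a rough back-of-envelope conversion (the figures are the ones quoted above, nothing more):

```python
# Convert the quoted peak of 10,000 visitors/hour into an average rate.
peak_visitors_per_hour = 10_000
seconds_per_hour = 3600

avg_per_second = peak_visitors_per_hour / seconds_per_hour
print(f"{avg_per_second:.1f} visitors/second")  # roughly 2.8/second
```

Under 3 requests per second on average, which is why the repeated crashes seem so puzzling.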
With all the tech expertise Iāve seen on this Board here, someone must be able to tell me the secret.
Or, at least how Markus did it.
Why do others need 600 Servers and 500 staff and he needs a couple of renta-boxes and his girlfriend?
No concerns about not using ECC memory in the new build?
I like racking servers and the price savings myself, but the reason AWS is killing it in the market is instant provisioning. Getting quotes back and forth with data center people was the most annoying part of the whole thing to me. The hardware folks are great at getting you hardware quickly, but getting it into a rack under a new contract was always a huge PITA and a lot of waiting around.
Over the past few years, to get the kind of instant provisioning that AWS provides, I have implemented VM environments with Salt or Puppet to empower the development teams to provision their own servers. As previous updates mention, it is far cheaper to colo than to use AWS; it's up to the imagination of your SysAdmin or DevOps guys to turn wish lists into reality.
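As a minimal sketch of what that self-service provisioning can look like with masterless Salt (the "webserver" state name and the nginx package are illustrative choices, not anything from the post):

```shell
# Write a tiny Salt state file (illustrative: installs and runs nginx).
mkdir -p srv/salt
cat > srv/salt/webserver.sls <<'EOF'
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
EOF
# Apply it locally with no Salt master involved; skip politely if
# salt-call isn't installed on this machine.
if command -v salt-call >/dev/null; then
  sudo salt-call --local --file-root="$PWD/srv/salt" state.apply webserver
else
  echo "salt-call not installed; state file written but not applied"
fi
```

With a handful of states like this checked into a repo, a dev can bring up their own box without a ticket to ops.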
@codinghorror: Thanks for the write-up but ESPECIALLY thanks for continuing to post newer builds long after the original article. This is very helpful. Thank you.
Proudly updated our servers to AMD Epyc, as they are pushing the boundaries of cores (way more!) and ECC for everyone, not just "server branded" stuff. The big wins were:
2× the amount of memory
8× the storage speed (SATA SSD → M.2 PCIe NVMe)
4× the number of cores
This is Zen 2, so it isn't necessarily a ton faster per-thread, but so many more cores, so much more bandwidth, and double the memory.
Once you get a lot of servers with a lot of memory, the "insurance policy" of ECC starts to make a whole lot of sense. We've definitely had failed memory incidents, both with ECC (!) and without, but the without is a bit more common. At scale ECC is a must-have, but I still remain confused as to why the mass market of Dell and Apple and HP is mostly selling machines with regular plain old memory.
"At scale" means, to Dell and HP and Apple, that the cost of dealing with RMAs for failed RAM doesn't outweigh the cost of adding ECC to every system they ship.
I guess the logic is:
servers tend to deal with āimportantā data
there tend to be lots of servers, therefore failure rates as low as 1% start to become painful
servers tend to carry heavier loads
so the extra ECC cost is worth it? I dunno. More data here:
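For what it's worth, the "failure rates as low as 1% start to become painful" point is easy to make concrete with a quick expected-value sketch (the fleet size and DIMM count here are made-up illustrative numbers, not from the thread):

```python
# Expected RAM failures per year across a fleet, at a 1% annual
# per-DIMM failure rate (the rate mentioned above; fleet numbers invented).
servers = 1000
dimms_per_server = 8
annual_failure_rate = 0.01

expected_failures = servers * dimms_per_server * annual_failure_rate
print(f"{expected_failures:.0f} failed DIMMs per year")  # 80 per year
```

Eighty silent-or-not memory failures a year is exactly the kind of pain that makes ECC, with its detection and correction, look cheap by comparison.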