In the last twenty years, I've probably built around a hundred computers. It's not very difficult, and in fact, it's gotten a whole lot easier over the years as computers have become more highly integrated. Consider what it would take to build the Scooter Computer:
This is a companion discussion topic for the original entry at http://blog.codinghorror.com/is-your-computer-stable/
There is a version of Furmark available for Linux in the form of GPUTest. For maximum torture, I suggest running both MPrime and Furmark, at the same time. This uncovered an issue I recently experienced.
I recently upgraded the GPU in my “Steam machine” to an Nvidia 980Ti. It dual-boots between Linux and Windows, because you know, games… I noticed that some games would now crash in Windows, but not in Linux (with my old 660Ti GPU, they were both stable). At first I thought “that’s just Windows”, but it didn’t seem consistent with the kind of flakiness I have come to expect in MS-land. There were no bluescreens, the computer just rebooted. And the fact that it was really stable with the old GPU made me suspicious.
MPrime was stable, but running Furmark almost instantly triggered a reboot. Digging into the system logs, everything pointed to the Power Supply, a measly 600W, being just too weak to power the system under full load. But Furmark was stable in Linux, which was weird. Turns out, I had to run both MPrime and Furmark simultaneously to reach the same power consumption under Linux. And sure enough, the system instantly rebooted.
Upgrading to a 1200W PSU solved that. (It’s not overkill, it’s a safety margin!)
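The combined-load trick above is easy to script. A rough sketch, assuming `mprime` (the Linux Prime95 build) and Geeks3D's `GpuTest` binary are both on the PATH; the binary names and flags are illustrative, so adjust them for your install:

```shell
#!/bin/sh
# Run a CPU torture test and a GPU stress test simultaneously, so the
# PSU sees something close to its worst-case combined draw.
# Assumes `mprime` and `GpuTest` are installed; if either is missing,
# nothing is run and a message is printed instead.

DURATION=600  # seconds of combined load; raise for a longer soak

if command -v mprime >/dev/null 2>&1 && command -v GpuTest >/dev/null 2>&1; then
    mprime -t &                 # torture test on all cores
    CPU_PID=$!
    GpuTest /test=fur /benchmark /benchmark_duration_ms=$((DURATION * 1000)) &
    GPU_PID=$!
    sleep "$DURATION"
    kill "$CPU_PID" "$GPU_PID" 2>/dev/null
    RESULT="ran combined load for ${DURATION}s"
else
    RESULT="mprime and/or GpuTest not installed; nothing run"
fi
echo "$RESULT"
```

If the machine survives that without a reboot, throttling, or artifacts, the PSU is probably adequate for the build.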
Well, my anecdotal experience is thus:
- I’ve never seen or heard of a broken CPU. Not unless you fried it by overclocking or improper cooling or something.
- Failing RAM is a fairly commonplace thing, although the high-end brands are usually more reliable. Anyway, RAM chips are either good or bad from the factory; I’ve never seen one go bad after a while of use.
- HDDs/SSDs, of course, eventually all die. But they’re fairly reliable and I’ve never witnessed the infamous “multiplying bad blocks”. I’ve had plenty of HDDs with bad blocks, and they’ve never gotten worse, even after years of use. Still, unless you’re on a tight budget, it’s prudent to replace such a drive. (On the other hand, I’ve had HDDs with faulty electronics which really were unstable)
- I’ve had several PSUs dying. That’s the worst case, because the symptoms can manifest as anything, and nothing will point to the PSU as the culprit. Also, I’ve heard (but never witnessed myself) a PSU that fries other components. For that reason, don’t skimp on a quality PSU. (Then again, one of the dead ones with crazy symptoms was a HEC, which I’ve always thought of as a good brand).
- The component which fails most often in my limited experience is… the GPU. I’ve no idea why. Perhaps it’s got to do with the high temperatures they get subjected to. And, of course, it’s a total non-issue for servers.
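On the drive point above: the infamous “multiplying bad blocks” failure mode shows up in SMART as a rising reallocated-sector count, which you can watch without a full surface scan. A minimal sketch, assuming `smartctl` from smartmontools is installed; `/dev/sda` is a placeholder device (you may need root for real output):

```shell
#!/bin/sh
# Check the SMART attributes most associated with a dying drive.
# Assumes smartmontools; /dev/sda is a placeholder -- substitute your device.
DEV=/dev/sda

if command -v smartctl >/dev/null 2>&1; then
    # Attribute 5 = Reallocated_Sector_Ct, 197 = Current_Pending_Sector.
    # Nonzero, and especially *rising*, raw values mean: replace the drive.
    smartctl -A "$DEV" 2>/dev/null | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'
    STATUS="checked"
else
    STATUS="smartctl not installed"
fi
echo "$STATUS"
```

Checking those two counters every few months is a cheap way to tell a stable drive with a few factory-mapped defects from one that is actively deteriorating.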
I haven’t built my own computer in about a decade. Don’t think it was ever difficult. I’m considering building another, but I don’t want a big ugly trashcan computer. What’s the state of small book-size computers? Something that weights 2-3lbs and can be hidden in a corner.
Why are you running badblocks in read-only mode on a new hard drive? It’s less likely to find issues in that mode. If there’s no data, or you’re OK with erasing data, you should use read/write mode. Note: I’m not 100% sure this advice stands up for SSDs.
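For reference, the difference between the two badblocks modes looks like this. The device is deliberately left unset in the sketch below, because `-w` overwrites every block; fill it in only on a drive you are happy to erase. (And as noted, a write test on an SSD mostly just burns write cycles, so stick to read-only or SMART there.)

```shell
#!/bin/sh
# Read-only vs. destructive read/write badblocks scans.
# DEVICE is intentionally empty: -w OVERWRITES EVERY BLOCK on the device.
DEVICE=""

if [ -z "$DEVICE" ]; then
    MODE="no device set; showing usage only"
    echo "read-only scan:   badblocks -sv /dev/sdX"
    echo "destructive scan: badblocks -wsv /dev/sdX   # erases ALL data"
else
    # -w writes four patterns (0xaa, 0x55, 0xff, 0x00) and reads each back,
    # which exercises the drive far harder than a read-only pass.
    badblocks -wsv "$DEVICE"
    MODE="scanned $DEVICE"
fi
echo "$MODE"
```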
For laptops you have to do triple the tests, especially cooling/HDD/memory.
For testing on Windows I use MSI Afterburner for monitoring; with it you can notice if the CPU/GPU is throttling. For GPU stress testing I use MSI Kombustor, and Prime95 for the CPU. Read the SMART data for HDDs. Intel also recently released an overclocking/stress-testing tool that gives you power-draw data too (Intel Extreme Tuning Utility, or something like that).
The idea is to build the next unit of computing:
It’s taken over a decade for the PC to see what Apple realized. No one wants or needs a trashcan. I want a desktop CPU and GPU, plus a little bit of expansion.
Hi, you provided tests for every component of a computer except the one everything plugs into (and the power supply).
What about motherboard testing? I had one fail once, and it was only by guessing that I traced my inconsistent BSODs to it. Does anyone know of a test, benchmark, or status check for motherboards?
So? Get a gaming-grade laptop and you get the same thing. Squeezing all that power into mini-ATX is going to cost you the same anyway. I don’t see any advantage of the NUC over a similarly outfitted laptop.
Ok. I haven’t priced it out. Thanks.
I’ve found the Phoronix Test Suite is pretty good at shaking loose a few problems:
Many of the tests are much closer to realistic than synthetic in terms of the spectrum, at least for someone who plays video games, runs databases and compiles kernels.
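If you want to try it, the whole suite is driven from one command. A sketch, assuming `phoronix-test-suite` is installed; the profile names below are examples from its public catalog, matching the workloads mentioned above:

```shell
#!/bin/sh
# Run a couple of realistic workloads via the Phoronix Test Suite.
# Profile names like pts/compress-7zip are examples; see what is available
# locally with `phoronix-test-suite list-available-tests`.

if command -v phoronix-test-suite >/dev/null 2>&1; then
    phoronix-test-suite benchmark pts/compress-7zip pts/build-linux-kernel
    PTS_STATUS="ran"
else
    PTS_STATUS="phoronix-test-suite not installed"
fi
echo "$PTS_STATUS"
```

Because these are real applications rather than synthetic loops, a crash here tends to predict a crash in day-to-day use.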
Well, here I must shamefully admit that neither have I. It’s just that the last time I looked at a mini-ATX PC, it cost about the same as a laptop. However, looking at that NUC link you gave previously, that could be a sign of the times changing. Perhaps there is something to be had there, but I need to investigate more.
After all, in the end I would expect that the same things which drive up the price of a laptop (squeezing things but still cooling them), also drive up the price of a mini-ATX computer.
Even as someone who rarely builds his own computers any more, this is a lovely summary of the process of stress-testing one if you have to! Thank you!
Don’t forget power management in general, but wake-up from standby in particular. That’s where I’ve seen some devices cause major dissatisfaction: graphics adapters mostly, but network adapters and external drives as well.
memtest86 forked simply because the original was no longer being updated and had stopped working on newer hardware.
I’m not a fan of memtest86. I’ve had more than one occasion where memtest didn’t find a fault but Prime95 did. Memtest just doesn’t put enough load on your memory subsystem. Prime95 on Blend will get the job done every time.
*I’ve even had arguments with hardware vendors about said issues
Recommendation for GPU stability testing
Unigine Heaven / Valley Benchmark.
Make sure Vsync is turned off for these tests.
If anything weird happens during your Ubuntu Server install that prevents it from finalizing the install and booting into Ubuntu Server… your computer is not stable.
Hmm, I’m not certain I’ve ever gone through the full Ubuntu install process without anything weird happening; my default assumption is always that it’s going to be so bug-ridden as to be borderline unusable, if not actually unusable.
In fact, I’ve tried this in the last few weeks on my desktop. I can’t remember why, but there was something I wanted to check in Linux before upgrading to Windows 10, and my first attempt was with the Ubuntu live CD: it got through GRUB okay, but after trying to load the OS the screen just went blank and stayed that way. I ended up using Debian instead.
This is pretty much par for the course - I’ve tried Ubuntu literally dozens of times on a variety of hardware, and not once in the last decade have I seen it get all the way to the live desktop without at least something crashing on startup.
So suggesting that it should be considered a benchmark for stability seems like pretty terrible advice.