The Golden Age of x86 Gaming

I've been happy with my 2016 HTPC, but things have been changing fast, largely because of something I mentioned in passing back in November:


This is a companion discussion topic for the original entry at https://blog.codinghorror.com/the-golden-age-of-x86-gaming/
1 Like

I’m quite excited, but reserved, about all this. The cost is the biggest factor here: $500 just for an external GPU case. And if I upgrade the GPU in a few years, will the new one be CPU constrained? Would I just be better off with a console-sized PC (and is there a setup that could accommodate a 1070)?

Intel mentioned they aren’t pushing an external GPU themselves, so it seems there’s very little push to expand this market. How do other people feel about this?

I’d personally love this for laptops. I could truly get rid of my desktop if this were to become mainstream.

Surely putting the Samsung PRO SSD in that thing is completely overkill. You could save a lot by putting the regular Samsung EVO in it and you probably wouldn’t notice the difference.

@theotherlinh

Yes, but assuming the GPU chassis doesn’t go bad, you just keep upgrading the GPU. Eventually (5+ years?) you will want to upgrade the chassis because you may start being bandwidth limited, but you’ll probably have done 1 or 2 GPU upgrades by then and gotten your money’s worth. You certainly could use the eGPU case with a laptop. You’ll want one with the newest Thunderbolt, a good SSD inside, and at least a decent i5 CPU, though. And you may or may not be able to use the laptop screen; you might need an external one. Otherwise, go for it!

I think the eGPU is great for people who like smallish (12–14") laptops, who like to game but want to be able to take their system with them on the go and still be reasonably portable. Generally, most i3s and i5/i7s can easily handle nearly any game these days.

2 Likes

For those worried that external Thunderbolt 3 connected GPUs won’t have enough bandwidth, take a look at

GeForce GTX 980 PCI-Express Scaling - Conclusion | TechPowerUp

For the majority of games, there is no significant performance difference between x16 3.0 and x8 3.0 (and x16 2.0, which offers the same bandwidth). The average difference is only 1%, which you’d never notice. Even such bandwidth-restricted scenarios as x16 1.1 or x8 2.0, offered by seriously old motherboards, only saw a small difference of around 5%.

Contrary to intuition, the driving factor for PCI-Express bus width and speed in most games is the framerate, not the resolution, and our benchmarks conclusively show that the performance difference between PCIe configurations shrinks at higher resolutions. This is because the bus transfers a fairly constant amount of scene and texture data for each frame. The final rendered image never moves across the bus, except in render engines that do post-processing on the CPU, which has become much more common since we last looked at PCIe scaling. Even then, the reduction in FPS at higher resolutions is still bigger than the increase in pixel data moving across the bus.

Thunderbolt 3 = 5 GB/s, PCIe v2 x16 = 8 GB/s, PCIe v3 x16 = 16 GB/s. AnandTech noted:

With only one-quarter the bandwidth of a full PCIe 3.0 x16 slot Thunderbolt 3 will be constraining at times, but various benchmarks have put the average performance hit for a high-end card at under 10%. That of course will vary from game to game, as games that push a lot of data to the GPU or games that frequently interact with the GPU (e.g. retrieving physics simulation results) will be more greatly impacted by the reduced bandwidth and higher latency than simpler, more straightforward games.

We’ll see but I would expect it to be well under 10% in practice in most games.
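If you want to sanity-check those link numbers, here’s a rough back-of-the-envelope sketch (my own approximations: PCIe 2.0 at ~0.5 GB/s per lane after 8b/10b encoding, PCIe 3.0 at ~0.985 GB/s per lane after 128b/130b, and Thunderbolt 3’s PCIe tunnel assumed to land closer to ~4 GB/s of its raw 5 GB/s):

```python
# Back-of-the-envelope link bandwidth, GB/s per direction (decimal GB, rough
# figures that ignore the remaining protocol overhead).
per_lane = {"PCIe 2.0": 0.5, "PCIe 3.0": 0.985}  # GB/s per lane after encoding

links = {
    "Thunderbolt 3 (raw 40 Gb/s)": 40 / 8,        # 5 GB/s on the wire; the PCIe
                                                  # tunnel inside is closer to ~4 GB/s
    "PCIe 2.0 x16": 16 * per_lane["PCIe 2.0"],    # 8 GB/s
    "PCIe 3.0 x8": 8 * per_lane["PCIe 3.0"],      # ~7.9 GB/s
    "PCIe 3.0 x16": 16 * per_lane["PCIe 3.0"],    # ~15.8 GB/s
}

for name, gb_s in sorted(links.items(), key=lambda kv: kv[1]):
    print(f"{name:<28} ~{gb_s:.1f} GB/s")
```

Even the pessimistic ~4 GB/s figure is in the same ballpark as x8 2.0, which is where TechPowerUp measured only about a 5% hit.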

As for SSD prices, you’re right @DavidGA that the 850 EVO is also a perfectly good choice; the price difference is $180 versus $317. However, if you do this you are giving up full NVMe bandwidth for a SATA 6 Gbps interface, which is a significant difference on fast SSDs.
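For a rough sense of how big that interface gap is (approximate ceilings imposed by the link, not benchmark results for any particular drive):

```python
# Approximate sequential-throughput ceilings set by the interface, not the flash.
# SATA 3 runs at 6 Gb/s with 8b/10b encoding (~600 MB/s usable); an NVMe drive
# on PCIe 3.0 x4 has roughly 3.9 GB/s of link bandwidth to work with.
sata3_mb_s = 6_000 * 8 / 10 / 8   # ~600 MB/s
nvme_x4_mb_s = 4 * 985            # ~3,940 MB/s

print(f"SATA 6 Gb/s ceiling : ~{sata3_mb_s:.0f} MB/s")
print(f"NVMe (PCIe 3.0 x4)  : ~{nvme_x4_mb_s:.0f} MB/s")
print(f"Interface headroom  : ~{nvme_x4_mb_s / sata3_mb_s:.1f}x")
```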

Unlikely. For example, even in multiplayer Battlefield 4, where CPU requirements would be highest, this data showed:

… it doesn’t really matter which CPU you have. The game is optimized to fully utilize only CPUs with 4 cores or threads, and that workload is being replicated to other, more powerful CPUs.

Remember that the PS4 and Xbox One have really anemic x86 cores (but a lot of them, eight in total), so developers won’t be loading up on anything CPU intensive for the foreseeable future. Measured differences in games by CPU are almost always tiny. GPU, on the other hand… huge, huge differences.

1 Like

It’s tempting, but I wonder if, for upgradeability reasons, it’s not better to just stick with a “standard” PC?

1 Like

I suppose my fear is more that this won’t get accepted widely enough to drive prices down. Doing all that seems quite costly. I have a personal goal to have as small a computer as possible, but I’ll probably be waiting a little longer before jumping in.

In fact, Intel Iris Pro graphics have been decent for gaming for quite a few years!

Regarding integrated graphics, I’m concerned that what looks like great graphics performance now might not look so great several years down the line. Things might be different this time around because of ubiquitous x86 and the arrival of Vulkan/DX12/Metal, but during the past console generation, I noticed that my PC would run newer games worse and worse (even on the lowest settings at 640x480!) despite the fact that the very same consoles — with far poorer hardware — could run them just fine. Presumably, this is because game developers would focus their PC port optimization on the hobbyist market, not mid- or low-range hardware. (Ever try asking for help regarding integrated graphics in a gaming forum? It’s not pretty!) This trend is particularly unfortunate for those of us who prefer low-profile PCs, since graphics performance might “degrade” over the years without us being able to do anything about it.

eGPUs really excite me, though. As an exclusive laptop user, I’m really hoping somebody will release a slim eGPU case once the new rMBPs come out so that I can get the best of both worlds! It doesn’t have to be the biggest or the fastest: just portable enough so that I could shove it in my backpack and take it on trips, and at least 2x more powerful than the Iris chip inside my laptop. The WD “My Passport” of GPUs!

1 Like

It’s also noisy under load.

Boy, will I want to see your face when you start playing on your PS4.5. :smiley:

1 Like

Or bang the latest and greatest GTX1080 in your existing PC, and use an Nvidia Shield TV to stream all your games to the big TVs all round the house. I think that’s where we’re headed; a stupidly powerful (and probably noisy) PC sitting somewhere by a network port, and low-cost devices dotted around the house used to interface with it. With the latest Powerline networking, getting the bandwidth for HD streaming is cheap and easy.

Speaking of streaming games… Does anyone know if you can use a rig like this, with Thunderbolt 3 out to an external GPU, and still stream using that GPU? Anyone have experience with this?

1 Like

I am completely with you on the Xbox One. I enjoy gaming on it, and I love the Games with Gold promotion, so I have lots to play. Achievements are still fun. However, interacting with the thing itself is really frustrating. It’s slow, as you have pointed out. Really slow. Sharing game clips and screenshots is just a horrible experience.

Little tip though, for both your PC gaming and Xbox gaming: try the Xbox One Elite controller if you get a chance. It’s excellent. I would love to have a setup like that NUC under the telly with the Elite controllers.

1 Like

This article doesn’t make sense. It can’t be x86 and use more than 4 GB of RAM. An unsigned 32-bit int is roughly 4 billion.

If this shill post was written by anyone in our industry other than someone associated with SO, I might be shocked or offended. Stay classy Jeff.

I think you need to go back to Computer School, or, if more convenient, just use Google :wink:

1 Like

You may want to look up the AMD64 and EM64T instruction set extensions (often known as x86_64 or x64 in common parlance) that add full 64-bit capability to x86.

Oh, and PAE is also a thing (it lets 32-bit systems address more physical memory than a 32-bit address space would otherwise allow), so you may want to look that up as well.

In general talk, though, everyone just says x86 instead of anything more precise, because x64 has been the de facto x86 baseline for the past 10 years or so, much like how MMX+SSE1 was the de facto standard for a while.
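If you want to see where the 4 GB figure comes from, and what your own machine reports, here’s a quick sketch:

```python
import platform
import struct
import sys

# A 32-bit pointer can address 2**32 bytes = 4 GiB; a 64-bit pointer, vastly more.
print(f"32-bit address space: {2**32 / 2**30:.0f} GiB")
print(f"64-bit address space: {2**64 / 2**30:,.0f} GiB")

# What this interpreter/machine reports ("x86_64"/"AMD64" means the 64-bit extension).
print("machine           :", platform.machine())
print("pointer size      :", struct.calcsize("P") * 8, "bits")
print("64-bit interpreter:", sys.maxsize > 2**32)
```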

1 Like

Over the years, this blog has evolved into a satire of itself.

In 2008, “never upgrade, especially Linux!” (Let That Be a Lesson To You, Son: Never Upgrade., 31 Mar 2008). Then, earlier this year, “wouldn’t it be easier to upgrade, especially Linux?” (The Scooter Computer, 03 Feb 2016). The “everyone should buy this” recommendations have become more outlandish, as well - in 2015, we were treated to “4k monitors are only $700 - might as well buy three of them! Everyone join in!” (Our Brave New World of 4K Displays, 18 Aug 2015). Allow me to provide a gentle reminder: most people don’t spend two grand on their entire build, much less on the monitors alone, not to mention the monitor arms you recommend, which are $600 for a set of three.

And now, after barely six months with your newest HTPC, it’s already time to upgrade? This article borders on the absurd. You set out to build a PC that can compete with consoles, and it’s so “simple” that all you have to do is spend three times as much money on it as a console would cost. And then the kicker: you tell readers to “invest” in one of these machines, plus an external graphics card dock. For $500. Plus a video card to use with it, I suppose, and it had better outperform the base machine’s stock performance, so let’s call that another $200. So, congratulations, you have built a PC that outperforms a $400 console for two grand. Many people, given the option, would rather spend $400 on the console and then $1600 on games. I bet most of those people didn’t found Stack Overflow, though, so there’s that.

This external video card dock isn’t even sensible. It’s like getting a more convenient computer setup by switching from a laptop to a tablet, then deciding that you want a real keyboard, and adding a keyboard to it. (“Congratulations, you have invented the laptop. You are a genius.”) People slim down from a full tower to a NUC, then they realize they want a real graphics card, so they spend an extra $500 connecting a graphics card. What have they accomplished? Their tower PC cost an extra $500. Upgrading the card? You still have to disconnect the old one and plug in the new one, just like with a traditional setup. Viewing the card? Many cases these days have transparent panels so you can see the guts (if not, you can always just leave the case open, which is basically what this NUC+external setup does). Starting from a barely-upgradeable setup and then forcing upgrades onto it has zero benefit over starting with an easily-upgradeable platform in the first place. It’s more expensive and less powerful. This slimmed-down HTPC, with your minimal and necessary monitor setup (triple 4k with designer monitor arms), would come to somewhere in the neighborhood of five thousand dollars.

But, even so, congratulations are in order: you have just invented the personal computer. You are a genius.

1 Like

“Zero games are meaningfully CPU limited today” - that’s not the case for games I’ve worked on recently, and a lot of hard work goes into having them use multiple cores. Consoles have been good for PC gaming because their constraints have improved the industry-wide knowledge of how to use multi-core processors effectively in games.

1 Like

Right, but who’s asking them to? Start with one monitor, upgrade and add more later as needed. Start with a regular stand, add arms later.

It is unfortunate, but I have really grown to loathe my Xbox One. Like… a lot. A lot a lot. The experience is so damn bad I find myself wanting to actively bet against it. And yes, the PS4 Neo is definitely one of those bets, but so is a better, more naturally gaming compatible x86 box in the living room.

(As @TheMasterPrawn pointed out, the controllers are amazing though. Big fan of the XBox One controllers and the Elite controllers in particular. Very happy I can use them on Windows with the USB wireless adapter kit.)

It wasn’t a priority before since I thought playing older and budget titles would be fine, and anything fancier I could play on a modern console, or stream from my gaming PC. Streaming is OK, if sometimes inconvenient @markrendle, but the modern console part of that has been a flat out disaster.

OK, that’s fair, we are spending 3× as much. But this box is:

  • 4Ă— faster at x86 CPU work than consoles
  • way more than 4× faster at disk operations than consoles

But, GPU wise, yeah, it’s merely on par with today’s consoles. That may not sound impressive, but it’s a huge, important milestone for Intel’s on chip GPUs — and they won’t exactly be getting any slower from this point on, will they?

It’s kinda like spending 3× as much on a sports car to get one that’s 4× faster. The money isn’t just for show. The performance is there.

Whoa there, cowboy, I didn’t say you had to get the external graphics dock. Skull Canyon’s built-in GPU gives you an experience basically equivalent to an Xbox One or PS4. The graphics dock lets you upgrade that in the future to something that can do cutting-edge games at 4K at a constant 30fps or better, maybe even a constant 60fps. Consoles won’t be able to do that for a loooong time.

But if you’re cool with 1080p gameplay in modern games, this box will do fine all by itself. No external GPU box required.

Speaking of the price of games, they are far, far cheaper on Steam than on the Xbox One Marketplace or the PS4 game store. Unless you like buying scratched-up, used physical discs (and I freakin’ hate dealing with physical discs), the kinds of “digital deals” on the Sony and Microsoft stores are nowhere near the discounts on offer with Steam.

That’s a good point, since streaming (probably?) requires the whole screen to be sent over the wire at x frames per second. However, I suspect they compress the framebuffer using the GPU itself, so the actual data sent is not a dumb, ginormous bitmap of the whole screen? Or at worst, a high-quality JPG?

Yeah, that does seem to be the case: the video compression is done on the GPU, so the data sent over the wire is modest in size, probably on the order of megabytes per second, not gigabytes.
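Rough numbers, using my own assumed figures (1080p at 60fps, 4 bytes per pixel raw, and something like 20 Mb/s for the encoded H.264 stream):

```python
# Raw framebuffer traffic vs. a GPU-encoded video stream (assumed, illustrative figures).
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60

raw_mb_s = width * height * bytes_per_pixel * fps / 1_000_000  # ~498 MB/s uncompressed
encoded_mbit_s = 20                                            # assume ~20 Mb/s H.264
encoded_mb_s = encoded_mbit_s / 8                              # 2.5 MB/s

print(f"Raw 1080p60 framebuffer: ~{raw_mb_s:.0f} MB/s")
print(f"Encoded stream         : ~{encoded_mb_s:.1f} MB/s")
print(f"Compression ratio      : ~{raw_mb_s / encoded_mb_s:.0f}:1")
```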

It is true that the eight cores in the XBone and PS4 have definitely encouraged game devs to aggressively multi-thread their code and that’s a good thing. But at the same time, it’s much harder to be CPU limited when your CPU is literally four times faster than what’s in consoles.

I can’t emphasize enough how anemic these console x86 Jaguar cores are (look at the A4-5000 results, that’s a Jaguar core CPU):

So a Pentium 2020M, an ultra-low-end budget Ivy Bridge part, is beating that by 2.2×. Skylake is three generations ahead, and a Core i5 or Core i7 version is absolutely going to be 4× faster at general x86 CPU work than that.

As Apple has shown us time and time again with iOS devices, two very fast cores beat the crap out of eight slow cores all day long.
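One way to see why, as a toy Amdahl’s-law-style sketch with made-up numbers (assume each fast core is 4× a slow core, and only a fraction p of a frame’s work parallelizes perfectly):

```python
# Toy comparison: 2 fast cores vs. 8 slow cores, Amdahl's-law style.
# Assumptions (mine, purely illustrative): a fast core is 4x a slow core, and
# p is the fraction of per-frame work that parallelizes perfectly.
def throughput(cores, per_core_speed, p):
    # Time for one unit of work, relative to a single slow core:
    # a serial part plus a perfectly parallel part.
    time = (1 - p) / per_core_speed + p / (cores * per_core_speed)
    return 1 / time

for p in (0.5, 0.8, 0.95):
    fast = throughput(cores=2, per_core_speed=4.0, p=p)
    slow = throughput(cores=8, per_core_speed=1.0, p=p)
    print(f"parallel fraction {p:.0%}: 2 fast cores = {fast:.1f}x, "
          f"8 slow cores = {slow:.1f}x (vs. one slow core)")
```

Unless the parallel fraction gets very close to 100%, the two fast cores come out ahead under these assumptions.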