The Golden Age of x86 Gaming

Golden Age of x86 gaming? Seriously? With all those console ports and all that DRM-contaminated, DLC-infested bullshit?

Sorry, but please take me back to the real Golden Age of x86 gaming, back with DOS, Win9x and Voodoo graphics, hooray 3dfx. I’d rather fiddle with AUTOEXEC.BAT and CONFIG.SYS than endure that stupid clusterdung.

Why do you use the phrase ‘x86 gaming’? Any particular set of reasons?

Because these are games that run on any standard x86 CPU? Granted, the OS is a bit of a complication, but “emulating” old x86 consoles on future x86 CPUs is gonna be rather trivial, isn’t it?

This mini-ITX case is promising – the world’s smallest that can accommodate a full-size, traditional dual-slot video card


Having had the Skull Canyon NUC for a while now what do you think of it? I’m considering getting one, and I’d like to do some gaming on it, but nothing too demanding: Guild Wars 2 and Minecraft at 1080p. (Before anyone says anything about MC needing a beefy card, you can play it heavily modded with an R7 250 with decent if not awesome frame rates just by turning a few settings down, as I recently discovered via Logical Increments.)


Big thumbs up. Loving the box. Disk and CPU perf are best in class. Unless you know for a fact this GPU is way out of the league of the games you want to run, I recommend it.

Well, those games work passably on a Dell laptop with an i5-3230M with the graphics turned down some, so they should be fine with this.

Fry’s is now stocking them, too, for under $600.


One more question if you don’t mind–did you look for and/or find out anything to indicate whether faster RAM visibly improves GPU performance? I couldn’t find anything one way or another: benchmarking of the NUC suggests no overall performance benefits, although benchmarking of desktop Skylakes with Z170 chipsets shows fairly significant improvements on general benchmarks.

Nobody seems to have tested the GPU with faster memory, though. (One site that had AIDA64 results showing 20%+ general benchmark boosts then said “but nobody’s going to drop $1000 on a fast CPU and RAM and then use the integrated GPU, right?” Uh, maybe so!) I’d probably pay extra for, say, 2666 or maybe even 2800 RAM over 2133 if I was reasonably sure it’d get me another 5-8 FPS – in the context of this device that’d probably be noticeable.

Unlikely, since latency usually goes up with faster memory. The on-die GPU is definitely affected more strongly by memory speed, but it is still a modest effect. One of my previous HTPC articles had some links to on-die GPU benchmarks with memory of various speeds.
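As a rough sanity check (not a benchmark), here’s the peak theoretical bandwidth difference between the RAM speeds mentioned above, assuming dual-channel DDR4 with a 64-bit bus per channel:

```python
def peak_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    # DDR4 moves bus_bytes (64 bits) per transfer, per channel
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

for speed in (2133, 2666, 2800):
    print(f"DDR4-{speed}: {peak_bandwidth_gbs(speed):.1f} GB/s")
```

DDR4-2800 has roughly 31% more peak bandwidth than DDR4-2133 on paper; whether the on-die GPU can actually turn that into frames, given the latency tradeoff, is exactly the open question here.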

Interesting article @codinghorror. That Intel NUC looks like a beast for such a small form factor!

I noticed that you specifically mention the PS 4.5, but didn’t mention the Xbox equivalent (Project Scorpio).

Currently the PS4 has marginally more horsepower than the Xbox One, but the new Xbox will have a considerable performance advantage over the PS 4.5 (if the rumoured specs are to be believed).

I can’t find a decent spec comparison right now; the best I could find was here:

The Xbox Scorpio will be capable of 6TFLOPs while the Neo is rumoured to be capable of 4.14TFLOPs. For the tech-savvy among us, that’s roughly the difference between a Nvidia GeForce GTX 1070 and a previous-generation GTX 970.

and here:

As you’ve stated it’s pretty unprecedented to have an iteration mid-generation. Sony is currently ahead in this console generation, but it’ll be interesting to see if this manages to turn things around for Microsoft.

I wonder if the proposed upgrade to the PS4 is going to be too modest in light of the upgrade to the Xbox news. It’ll all depend on the price points and release dates of both units.

I’m personally very interested in the PS VR. If the PS4.5 improves the VR experience then I’ll be interested in getting one. I’ve never owned an Xbox, but I might find the new Project Scorpio interesting due to the beefier specs. That’s if I haven’t bought a little Intel Skull box before then! (Seriously, your blog is one of the main things that drives my tech purchases.)


Your blog is very cool, love the technical detail you go into.

A lot of people would argue that the golden age of x86 gaming was in the early to mid 90s when the hardware, demo scene, games, creativity, shareware and commercial sides all collided at once.

I did have to update my video drivers to the latest version, as I experienced significant weirdness when I upgraded to a 4K OLED TV. As a reminder, I linked it above, but the download area for the Skull Canyon drivers from Intel is:

(Now that I look again, there are also some “interesting” recent HDMI and Thunderbolt firmware updates there, too.)


This is old hat for PCs, but to release a new, faster model that is perfectly backwards compatible is almost unprecedented in the console world.

What about the Wii U? It has actual IBM POWER based videogame hardware, rather than being an x86 PC in disguise, and it has perfect backwards compatibility with the classic Wii.

I hate x86, for all purposes :confounded:

I love the small form factor of the Skull NUC. One thing I wish it had, however, is the ability to add drives. As you stated, you put your drives into an enclosure and connected them via USB 3. I think it would be awesome if there was a way, or an add-on of some sort, where you could pop off the top (or bottom) of the case and add two 2.5" or even 3.5" drives to this unit. I know that M.2 SSDs are getting larger capacities all the time, but they are also very expensive. It would be great to add a couple of drives for storage and have it all be in one unit. Maybe some aftermarket company would be able to do this. With those drives, this could become the ultimate HTPC/NAS/gaming console unit on the market.


Thinking about turning one of these into a streaming box. From what I’ve read about the Steam streaming ecosystem, the cheap $80 box requires Steam to take over the whole screen on the host machine, whereas with an Intel NUC or Steam box to stream to, you can still use your computer while gaming.

I have switched away from S/PDIF to single-cable HDMI because it turns out S/PDIF doesn’t have enough bandwidth for modern multi-channel sound formats, like the TrueHD ones!

Definitely get that summer 2016 HDMI firmware update for Skull Canyon; there were certainly bugs in audio over HDMI for newer formats.

I am routing everything through this new receiver I got, with 4K HDMI support

Only 5.1 speaker support, but a great UI, all the modern amenities like Bluetooth and WiFi, and very compact!

I LOL’ed because… yep, I upgraded to Hades Canyon in May 2018 :wink:

I set up Hades Canyon standalone – I swapped out the drive and RAM into the new box, let Windows 10 work its detect changes magic + installed the special Intel branded Radeon driver – and it is plenty close.

GRID 2 at 1080p (high detail)

Skull Canyon: 59fps avg, 38fps min, 71fps max
Hades Canyon: 161fps avg, 123fps min, 220fps max :scream:

I tested at 4k just for thrills and

Hades Canyon: 87fps avg, 75fps min, 99fps max

Uh… wow? While Skull Canyon with the external Thunderbolt-attached GTX 1060 is definitely a bit faster, this is really close in performance to that… with no expensive, bulky, power-hungry Thunderbolt 3 box required. I think the problem is that a lot of reviewers have such a freakish obsession with running “ultra” settings, which are just plain dumb most of the time; you couldn’t even pick out the differences in still screenshots, yet the perf cost for that “ultra” moniker is severe.

Granted, GRID 2 is not a new game (it’s from 2013), but it still looks amazing and the relative performance difference is staggering – Hades is nearly 3x faster in practice, and Skull was not exactly chopped liver in the GPU department, approaching 1080p on, say, medium-to-low settings for most games that weren’t released in the last 18 months.
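For what it’s worth, the “nearly 3x” figure falls straight out of the GRID 2 averages above:

```python
skull_avg = 59    # GRID 2, 1080p high, fps avg on Skull Canyon
hades_avg = 161   # same test on Hades Canyon

speedup = hades_avg / skull_avg
print(f"{speedup:.2f}x")  # ~2.73x
```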

I tested using GRID Autosport, which is one year newer, on high settings (naturally), and got:

1080p – 147fps avg, 111fps min, 192fps max
4k – 74fps avg, 47fps min, 98fps max

The system itself is not that much bigger, it’s basically a double-stuf version of Skull that’s twice as tall, but the power brick is comically larger, easily quadruple the volume. I guess that makes sense as the power draw is quite high under full load?


Power-wise, it looks like about 60w playing a video, and 140w+ playing a game. I checked on my little backup battery box, and the LCD watt readout says between 25-30 watts at idle, with the display asleep. That’s not a great result, considering I’ve seen 10w and 15w idle on previous HTPC configs (albeit with way, way less GPU and CPU power). But surely better than the Skull Canyon + Thunderbolt 3 external GPU idle numbers, I’d imagine.

Also, you can control the LED colors of the skull in software, including making the eyes pulse. Yep.

Outside of hardcore gaming at 4K, or a 1080p 120Hz “ultra” addiction, it’s difficult to see where the Hades Canyon box isn’t a clear win, if you want something extremely compact. The only real ding on this box is the disappointing lack of 4K / HDR codec support, which is (surprisingly!) AMD’s fault:

Intel’s decision to route all six display outputs to the vastly faster and generally more capable Radeon RX Vega M GPU makes perfect sense for a desktop. But the one area where AMD’s latest GPU still trails Intel is in the media decode block. The Vega GPU can’t decode VP9 Profile 2 - so no YouTube HDR support - and more importantly it doesn’t support the Protected Audio Video Path technology required for UHD Blu-ray playback.

Also here’s the blower difference, the doubling of height has a big effect on the cooling.

It’s down to $799 on Amazon and I recommend it.

Next up in this series is Ghost Canyon, which is intriguing! It’s modular with an eye on upgrading, not quite like a mini-itx build, but closer.

And much much larger, but the GPU tradeoff might be worth it, if you need a lot of GPU power?

More details at:

The CPU differences are mostly more cores. Note that only mobile-class CPUs are allowed in this form factor at all.

It looks to me like it only fits mini-ITX-length GPUs (not quite as physically long as full-size cards), per this ASUS announcement. But that’s still up to an RTX 2070. Think of it as only two 80mm fans fit…


instead of three…

It also has a proprietary 500w power supply, but it’s internal (!) this time rather than being an external power brick as in the previous two iterations.


I bought and built up a Ghost Canyon and I have to say I’m more impressed than I thought I would be. It’s definitely a bit expensive for what it is, but it’s a super clean build, so cleverly designed internally, and perhaps the smallest possible size for a PC with a real video card. Good review here:

I recorded some power numbers as well, this is for the i7 version with the Asus RTX 2070 video card, I personally didn’t see the point of stuffing an i9 in there:

  • screen off, idle Windows 10: 27w
  • screen on, idle Windows 10: 41w
  • full cpu load (prime95): ~140w
  • full gpu load (furmark): ~233w
  • full cpu + gpu load: ~314w
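Those numbers came from a watt meter at the wall. If you’d rather sample CPU package power in software, Linux exposes Intel’s RAPL energy counters under sysfs; a minimal sketch (this measures package power only, not whole-system draw like the figures above, and the sysfs path can vary by machine):

```python
import time

# Cumulative package-0 energy counter, in microjoules (path may differ per system)
RAPL = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj"

def watts(e0_uj, e1_uj, seconds):
    # Power = delta energy / delta time; the counter is in microjoules
    return (e1_uj - e0_uj) / 1e6 / seconds

def sample_package_power(interval=1.0):
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(interval)
    with open(RAPL) as f:
        e1 = int(f.read())
    return watts(e0, e1, interval)
```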

Those two 80mm fans on the top are surprisingly well behaved, even under absurd full cpu + gpu load it’s not that loud. The whole system is much quieter than it has any right to be given the power on tap.


I have to say Ghost Canyon is very cool – but it sounds like Phantom Canyon may be more of a pure sequel to Hades Canyon:

The Intel Phantom Canyon NUC 11 Extreme features a Tiger Lake-U 28W Core i7 or Core i5 processor. Both processors will be equipped with the latest generation of integrated Intel GPU Xe graphics processors.

NUC 11 Extreme has a CPU with 4 cores and 8 threads, with a base clock of 2.3 GHz and a maximum boost of 4.4 GHz. When tested with 3DMark, this CPU scores about 4,590 points, which is slightly lower than the Core i7-1165G7 CPU.

The Intel Phantom Canyon NUC 11 Extreme will use a discrete GPU in addition to the Intel Xe GPU. The American company plans to embed an NVIDIA GTX 1660 Ti 80W GPU with 1,536 CUDA cores and a GPU clock frequency of around 1600 MHz.
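As a sanity check on those quoted numbers: peak FP32 throughput for a GPU is roughly cores × clock × 2 (one fused multiply-add per core per clock), so the figures above work out to:

```python
cuda_cores = 1536   # per the quote above
clock_ghz = 1.6     # ~1600 MHz, per the quote above

tflops = cuda_cores * clock_ghz * 2 / 1000
print(f"~{tflops:.1f} TFLOPS")  # ~4.9 TFLOPS
```

That’s a bit under the desktop GTX 1660 Ti’s figure, which is what you’d expect from an 80W mobile-clocked part.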

Obviously I’ve heard of the GTX 1660 Ti, but this Intel Xe thing is new to me:

In terms of raw performance, the Xe DG1 GPU based graphics card scored 5960 points which put it on par with the GeForce GTX 950 or if you want an even more interesting comparison, just as fast as the once flagship GeForce GTX 680. The GeForce GTX 1050 is still a faster chip than the Xe DG1 GPU since it scores around 500-800 points more in the same benchmark.


In the Geekbench 4 OpenCL benchmark, the Tiger Lake Xe GPU scores 59845 points while NVIDIA’s GeForce MX350 scores 59828 points. There’s no difference in terms of performance but for Intel’s Xe GPU, this is a huge feat considering it is offering the same performance of a discrete tier GPU while being constrained in terms of bandwidth and power.

That’s… not bad!