I Happen to Like Heroic Coding

I'd benefit from that. I estimate I would use a GPU maybe 5% of the time. It's far, far more cost-effective for me to buy dual CPUs.

It’s worth noting that (IIRC) Abrash started that work because the incompatibilities between different cards’ nonstandard OpenGL extensions were such a nightmare. It’s telling that he found it worthwhile to give up a generation or two of performance in order not to have to write code that is half #ifdefs (I don’t know if cross-card OpenGL programming has improved in the ~3 years since I did it, but this was typical then).
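
For anyone who didn't live through that era: the usual coping strategy was runtime checks on the driver's extension string, on top of the compile-time #ifdefs. Here's a minimal sketch of the kind of per-vendor branching involved (the extension names are real, but which path an engine actually took here is purely illustrative, not Abrash's code):

```cpp
// Assumed example: query the driver's extension string at runtime and pick a
// rendering path per vendor. Requires a current OpenGL context.
#include <GL/gl.h>
#include <cstring>

bool has_extension(const char* name) {
    // Naive substring match on the space-separated extension list.
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}

void pick_fragment_path() {
    if (has_extension("GL_NV_register_combiners")) {
        // NVIDIA-only fragment path goes here.
    } else if (has_extension("GL_ATI_fragment_shader")) {
        // ATI-only fragment path goes here.
    } else {
        // Multi-pass, fixed-function fallback for everything else.
    }
}
```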

Also, as a couple of people already pointed out, there is plenty of software that isn’t performance-bound but still uses state-of-the-art graphics calls for the visual effects (Spore comes to mind, as well as certain data visualization software, and lots of kids’ games). Something like Pixomatic is perfect for these (back in the day, I used Mesa for the same purpose).

Fun fact: the Intel open-source drivers on Linux are nerfed. They don't support OpenGL 2 because of the patent-encumbered S3TC texture compression scheme required by the spec.

Thus, most games which require S3TC freak out and crash. UT2004 detects its absence and falls back on a slower (~30 FPS) texture compression method. Performance-wise, software rendering is on par with UT's aforementioned fallback mode when judging by FPS. Software rendering is better than the fallback mode at higher resolutions, because the fallback mode will pause for a millisecond every second or two, while software mode does not.
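
To make the failure mode concrete, here's a minimal sketch (my illustration, not UT2004's actual code) of the check a game can do: upload DXT-compressed textures only when GL_EXT_texture_compression_s3tc is advertised, and otherwise fall back to plain RGBA uploads.

```cpp
// Assumed example: branch on S3TC support instead of crashing when it's
// missing. Requires a current GL context; the DXT5 constant comes from glext.h.
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>

bool supports_s3tc() {
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts && std::strstr(exts, "GL_EXT_texture_compression_s3tc");
}

void upload_texture(int w, int h,
                    const void* dxt5_data, int dxt5_size,  // precompressed DXT5 blocks
                    const void* rgba_data) {               // same image, uncompressed
    if (supports_s3tc()) {
        glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                               w, h, 0, dxt5_size, dxt5_data);
    } else {
        // Fallback: uncompressed upload, more VRAM and bandwidth, but it works.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba_data);
    }
}
```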

I’m trying to grok your point, Jeff.

Are you saying that Michael Abrash is extolling the virtues of Larrabee because it plays to the strategy behind Pixomatic? And if so, are you saying it doesn't matter, that it's a failed strategy at the top end because video cards do it way better?

Jeff, I don’t understand how Pixomatic or Larrabee is utterly pointless.

Abrash’s articles imply that the Pixomatic effort produced deep insight into how to parallelize and super-optimize a pure software 3D rasterizer. By working with the Pixomatic team, Intel was able to distill these results into an instruction set and architecture that remains pretty general purpose but powerful enough to match GPUs.
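
Not Pixomatic's actual code, just a generic sketch of one common way to parallelize a software rasterizer: carve the framebuffer into horizontal bands, give each hardware thread its own band, and no pixel write ever needs a lock.

```cpp
// Band-parallel software rasterization sketch (assumed example, not Pixomatic):
// each thread shades a disjoint horizontal band of the framebuffer.
#include <algorithm>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

void shade_band(std::vector<uint32_t>& fb, int width, int y_begin, int y_end) {
    for (int y = y_begin; y < y_end; ++y)
        for (int x = 0; x < width; ++x)
            fb[y * width + x] = 0xFF000000u;  // the real pixel pipeline would run here
}

void render_parallel(std::vector<uint32_t>& fb, int width, int height) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    int band = (height + static_cast<int>(n) - 1) / static_cast<int>(n);
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        int y0 = static_cast<int>(i) * band;
        int y1 = std::min(height, y0 + band);
        if (y0 < y1)
            workers.emplace_back(shade_band, std::ref(fb), width, y0, y1);
    }
    for (auto& t : workers) t.join();
}
```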

Why is this pointless? It will only take the next hot game to include a feature that’s not feasible on a GPU or typical CPU and suddenly the Intel team will be vindicated.

Hmm… maybe I missed your point.

I think the best way to introduce a machine of this power has got to be through video games written specifically for the system at hand.

There are some commenters above who don't quite realise that running 'normal' software on it won't work. Parallel processing is a style of coding in its own right; using a HAL is all very well, but to take best advantage of the system, code needs to be written specifically for it. One can't just take code written for an x86 and expect it to run super quick on a parallel processor machine.
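
As a toy illustration (mine, not from any real engine) of what "written specifically" means: a serial sum carries one running accumulator, so to use many cores it has to be restructured into independent partial sums that are combined at the end.

```cpp
// Assumed example: the same reduction written serially and restructured for
// parallel hardware. Only the restructured version uses more than one core.
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Serial: one core, one accumulator, a loop-carried dependency.
double sum_serial(const std::vector<double>& v) {
    double total = 0.0;
    for (double x : v) total += x;
    return total;
}

// Restructured: each thread owns a disjoint slice and its own partial sum;
// only the tiny final combine is serial.
double sum_parallel(const std::vector<double>& v) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(n, 0.0);
    std::vector<std::thread> workers;
    std::size_t chunk = (v.size() + n - 1) / n;
    for (unsigned i = 0; i < n; ++i) {
        std::size_t begin = i * chunk;
        std::size_t end = std::min(v.size(), begin + chunk);
        workers.emplace_back([&, i, begin, end] {
            for (std::size_t j = begin; j < end; ++j) partial[i] += v[j];
        });
    }
    for (auto& t : workers) t.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```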

I do suspect that if Nvidia worked with the open-source community, they could come up with a new console that would make today's game consoles look like ZX Spectrums.

:)

The JavaScript Raytracer

On my machine, this took 1.98 seconds in Chrome to render the Original JS RayTracer scene at 320x200.

(Make SURE you turn off the "display image while rendering" option!)

A bit disappointing: today's Chrome, on a much faster machine than in 2009, renders the same scene in 1.046 seconds, "only" 1.88x faster.

Also, Intel's Larrabee was cancelled. Pretty sure GPUs killed it stone cold dead.