I Happen to Like Heroic Coding

It’s all software, at one level or another.

A software implementation of something, written as a proof of concept that a software-only version of some graphics kernel can hold its own on performance, is a terrific way to show – to ourselves, as an industry – what can be done with a given toolset.

Sure, it’s in assembler. (talk amongst yourselves)

But the concept is useful to us. It’s like the person who has a homegrown, fast-fast-fast version of the BLAS (the Basic Linear Algebra Subprograms on which many math packages are built). It may not be for you, but it’s important to know that it can be isolated and improved.

Is Pixomatic the same to DirectX as Mesa to OpenGL?

Joe Harris: about every year someone proclaims the death of Intel, and every time Intel proves them wrong. I’m not saying that they’ll be in business forever, but I think they’ve proved that they can change architectures if needed and then put an x86 layer on top of it.

Words on Larrabee. What does it mean for you, now? Nothing. It may be the next GPU you buy, it may not be.
But if it is AWESOME (big IF) and if people LOVE it, and it is FAST, it is going to drive the standards. What does that mean? It means that before you know it, DX/OGL will have ‘reflection’, ‘shadowing’, ‘auto-transparency layering’, etc. built into their standards. That’s right: a DX shader function called ‘reflect’ or ‘sample_s’ that lets you sample your own scene. Larrabee will be able to support new features with a firmware upgrade, while other GPU vendors will be struggling to make their hardware keep up. (Some do a bit of ray tracing already, but it would still take a lot of work to get features like these – why else aren’t they there already?) Larrabee has the potential to push graphics to new levels.

With things like OCL (OpenCL), it doesn’t matter much whether it lets people do faster computations… they probably won’t be much faster than on a GPU anyway.

But for graphics… If they can make it AS FAST as a GPU… It’s already 1000x more flexible. It’s going to speed up the advancement of 3D graphics.

*Note: I was a fan of LRB when I first heard of it. Even if it fails, it will still teach us something.

Hi Jeff,

With respect, you may want to do a bit more research before you declare something like Pixomatic as pointless in today’s world. To be polite, it’s a very uninformed statement.

Although today’s machines virtually all come with some form of graphics acceleration, the cheaper laptops often lack features that traditional GPUs have had for some time. Support for more than 1.1/2.0 shaders would be one notable example. The support they DO have can often be extremely buggy – laptop OEMs are notoriously bad about ever updating their drivers, and even then many customers will never update unless the driver is pushed out via Windows Update (which it never is, because OEMs are notoriously bad about this too).

So if you’re writing a Theme/Sim/casual game where you expect a large number of people to be using low-end or older machines, you have two choices:

  1. Write lots of fallback code for lower-class machines, including a complete set of your effects/materials expressed using the fixed-function pipeline. Add device-ID-based workarounds for the various driver bugs you encounter during QA. After shipping, field support calls and walk people through either updating their drivers or editing a config file to try to get the game stable. Issue patches that address problems that people later encounter.

  2. Use a software renderer (such as Pixomatic) to add a software mode that is used either for people with GPUs below your minimum spec (shader 2.0) or as a fallback for dodgy drivers. You will need to scale down resolution/effects, but that’s easy to do.

Guess which one is the most cost-effective and leads to the best user experience?
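
For what it’s worth, the decision point in option 2 is small. Here’s a minimal sketch of what it might look like – assuming the D3D9 headers and a hypothetical renderer-choice flag, not anyone’s actual shipping code:

```cpp
// Minimal sketch (not production code): decide between the hardware path and
// a software renderer such as Pixomatic, based on the shader support the
// installed driver actually reports.
#include <d3d9.h>

enum RendererChoice { UseHardware, UseSoftwareFallback };

RendererChoice PickRenderer()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d)
        return UseSoftwareFallback;              // no usable Direct3D at all

    D3DCAPS9 caps = {};
    HRESULT hr = d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);
    d3d->Release();

    // Below shader model 2.0 (or a failed caps query): take the software path.
    if (FAILED(hr) || caps.PixelShaderVersion < D3DPS_VERSION(2, 0))
        return UseSoftwareFallback;

    return UseHardware;
}
```

In practice you’d also fold in the device-ID workaround list from option 1, but the fallback decision itself is cheap.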

You may also be interested to know that in Windows 7, Microsoft has actually ADDED a software renderer for Direct3D (this is not the refrast present in previous versions; this is a properly optimized, fully usable implementation).

Why did they do this? For all the reasons above. With Aero/WPF, Windows is now extremely reliant on the features and performance of GPUs and is running into exactly the same problems that many game developers have faced for years – you can’t always rely on the hardware in someone’s machine.
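
That renderer is WARP, the Direct3D 11 software rasterizer. A minimal sketch of falling back to it – assuming the Direct3D 11 headers, with error handling trimmed – looks something like this:

```cpp
// Sketch only: try the hardware driver first, then fall back to the
// Direct3D 11 software rasterizer (WARP) that ships with Windows 7.
#include <d3d11.h>

HRESULT CreateDeviceWithFallback(ID3D11Device** device,
                                 ID3D11DeviceContext** context)
{
    // Hardware first...
    HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                   NULL, 0, D3D11_SDK_VERSION,
                                   device, NULL, context);
    if (SUCCEEDED(hr))
        return hr;

    // ...then the software rasterizer if the GPU or its driver can't cope.
    return D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_WARP, NULL, 0,
                             NULL, 0, D3D11_SDK_VERSION,
                             device, NULL, context);
}
```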

Niniane


Mobile device rendering: they typically don’t have GPUs. Huge application here.

I am also a Larrabee fan. Shaders are too non-standard and they will be obsolete one day…

The JavaScript renderer was surprisingly fast.

I loaded the original JS raytracer scene and rendered it in 4.1 seconds (using Opera). But I guess there are lots of improvements left in the JS engines, because Firefox and IE7 were many times slower.

As time goes on, it’s becoming apparent that two types of hardware are needed on the desktop: the massively parallel world, and the low/single-threaded world. Not everything is easy to run in parallel, and a mix of the two worlds is going to be essential.

At the moment Intel and AMD are pushing forward with more and more cores, but that is rapidly becoming useless for the desktop (the 8 virtual cores of the i7 are already too far). A reversal of that trend, complemented with a massively parallel chip, of which Larrabee is the first version, is going to give us the best of both worlds.

Niniane, are you by any chance the Niniane who worked on Lively? Explanation: it’s a somewhat unique name, in the context of a discussion about 3D, and as I’ve remarked on Twitter, female names are (unfortunately and sadly) rare enough on my blog that they’re frequently a sign of spam anyway.

Rick: For something like World of Warcraft, I bet this would work nicely and save them a lot of money

In theory yes, but in practice does Blizzard do this? No. And they make gazillions of dollars on WOW. That’s a pretty compelling argument, to me at least, that this complex software rendering fallback scenario just isn’t necessary.

Honestly, something like pseudo-3D delivered through the browser via Flash or Silverlight is more likely. Do casual gamers even need 3D?

Anyway, the more I think about this, the more I think that GPUs need to be on the same die as the CPU. I’m not sure radical architectural redesigns of x86 CPUs that can work as de-facto GPUs will be a successful evolutionary path. MHO of course.

On a related note, per that gamedev forum thread, looks like Intel bought Pixomatic…

One of the axioms of systems engineering is that you can optimise a system to do one thing (or a few things) really well but most others poorly; or you can optimise the system to do many things kind of OK, but none of them really well.

General purpose CPUs are fast enough to do very specific things like graphics kind of OK, but they won’t ever be able to match the potential of a limited-purpose processing unit like a GPU.

There are savings in chip production, but because you need to drive a general purpose processor much harder to match a designed-for-purpose processor, the operating costs are higher. A classic example is that my $50 DVD player plays DVDs with nothing but a 1-inch inaudible rear exhaust fan, while my $1k HTPC has a water cooling rig so I could watch a DVD without needing to set the volume to 11 to mask the noise of all of those fans…

I disagree with your conclusion.

The software game has no crystal balls; the things that will change in the future are in the tools farther down the waterfall than assembly. Pixomatic, with extremely fast, well-written assembly code, has a long-term advantage… The things that will change over time are the DirectX libraries that they are trying to get to run properly on-die.

In short, I think your ‘heroic coders’ are extremely capable and did extremely well for themselves. They had their company bought by Intel; and for a good reason. Intel wanted their ability to pull someone else’s core business into their core business’ realm of operation. Pure and simple.

AMD bought ATI; ever think about why? As Moore’s Law takes individual processor power farther and farther beyond what a person could imaginably need, it’s just cheaper to produce chips that have LOTS of power relative to the features people want. You can reduce consumption by slowing things down, you can produce smaller stuff, but eventually you get to a point where a fast-responding laptop outperforms what people expect, and you need it cheaper.

Cheaper means fewer chips, fewer people involved in the manufacture, less shipping of individual parts around, less placement on boards and QA, and less packaging. How do you do all that?

CPUs fast enough, with enough backwards compatibility, that the $100 drop in price means more than the performance hit.

I’m not sure if this changed in Windows 7, but I do believe that the Abrash and Sartain code represents best possible performance. I don’t think you can do better, …

TANSTATFC

If you ever read Abrash, you know what that means.

Have you ever actually tried running a 3D game with the crap integrated 3D graphics hardware included on most laptops? It’s utter and complete shit.

The processors, by comparison, are usually pretty decent. Getting a fast CPU and no real 3D support is easy; getting a laptop with a good processor and good 3D performance is expensive. Bringing back a good software renderer for these machines makes all sorts of sense.

A couple of points:

  1. The plan is for Windows 7 to support the full DirectX 11 standard. If the hardware doesn’t support an operation, then Win7 will do it in software. And on a sufficiently fast multicore machine, it’s already faster than some (all?) Intel integrated graphics devices.

  2. Intel has admitted that when Larrabee ships, it’ll be slower than the current ATI/NVidia cards. It’s better than what they have now, and it’s kinda neat, but it’s not a killer chip.

This comment thread is already very long, but I’d like to add one thing:

In his 2008 QuakeCon keynote, John Carmack said that he knows that id Tech 5 (i.e. the Rage engine) is probably the last polygon engine that id Software is going to develop. He said that the one guy he currently doesn’t want to be is the guy at a big game publisher who has to make the bet on what technology will be relevant in 4 years for a next-gen game, because polygons might be it, or they might not.

Intel is very much going in a direction where they add current-gen GPU technology to their CPUs so that the next generation of game engines can be built on Intel technology.

You can bet that nVidia is currently working on hardware that is optimized for raytracing and Bézier curves, but I think that this comment thread framed the discussion in the wrong way.

That being said, there is another untapped application of these GPU/CPU-hybrids for streaming services like OnLive.

At least Larrabee may provide for fully open source graphics drivers for Linux on day one. Yeah, AMD is starting to move in that direction with their newest cards as well.

What’s going to be really interesting is the nVidia Tegra: ARM core(s) bundled with nVidia graphics. ARM already has excellent open source support, and nVidia is better than AMD in that regard. Hopefully this can lead to a large group of consumers migrating away from Windows and actually using a system that’s secure and safe, yet has the power to do email and web browsing, as well as properly display the high definition videos they’ll want to consume. And of course, with lower power usage.

Pixomatic is HARDLY pointless. I hate to break it to you Jeff, but games are not the only applications that require real-time 3D graphics. For a business application with modest rendering needs, using a software renderer is an excellent way to make sure your app works predictably and reliably on every PC, at least with respect to the 3D graphics part.

Unfortunately, many years ago, I made the mistake of relying on Microsoft’s software renderer that was provided as part of DirectX. This was NOT the reference rasterizer. It was a go-as-fast-as-possible, don’t-get-too-fancy software renderer. It did everything I needed and then some. It was fast enough even on an old P166. It worked 100% reliably while the hardware accelerated renderer failed to start or would BSOD on all the dodgy machines/drivers out there. Once my app got out in the wild, it didn’t take that long for me to forget about the idea of the software renderer as a fallback. The ONLY viable option was to use it exclusively.

So then, what did MS do with this very valuable software renderer? Of course! They killed it off, offering no replacement. Now it’s impossible to run my app on 64-bit Windows without gutting the app to swap out the 3D engine or, more likely, doing a rewrite. (WOW64 doesn’t work on OS components like DirectX DLLs.) How ironic it is that MS is once again providing a software renderer that’s actually meant for real work.

If I had gone with a third-party software renderer like Pixomatic, my app would still be working on every modern computer running Windows. So when you call Pixomatic utterly pointless, I think you’re not seeing the whole picture. It’s about using the right tool for the job, and dependencies and their consequences. Decisions here can easily make or break a company.
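
One concrete way to limit that kind of dependency damage – a minimal sketch only, with hypothetical type names, not my actual code – is to keep the whole 3D engine behind a small interface, so the renderer can be swapped without gutting the app:

```cpp
// Illustrative sketch: hide the renderer behind an interface so a hardware,
// software (Pixomatic-style), or replacement implementation can be swapped
// in later without touching the rest of the application.
struct Mesh;    // hypothetical scene types, stand-ins for whatever the app uses
struct Camera;

class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual bool Init(int width, int height) = 0;
    virtual void DrawMesh(const Mesh& mesh, const Camera& cam) = 0;
    virtual void Present() = 0;
};

class HardwareRenderer : public IRenderer { /* wraps Direct3D */ };
class SoftwareRenderer : public IRenderer { /* wraps a software rasterizer */ };

// The rest of the app only ever sees IRenderer*, so replacing the 3D engine
// later is a matter of writing one more implementation, not a rewrite.
```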

One thought I have not yet read: if Intel runs with this in a large enough (read: market-altering) way, the market will shift to a set of x86 extensions for which AMD does not have a license. Of course, if Intel drags its feet and only utilizes this in a few niches, that will give others the time needed to come up with a similar but possibly better extension set, as we’ve seen before…

You’re using the business perspective to view the effort of writing the software renderer.
I’d wager he did it mostly because someone said it can’t be done.

Also, there are still assembly hacks out there that will beat what the compiler outputs. Writing something in assembly, or at least reading it in my case, maintains the programmer’s awareness of the translation from high-level language to assembly.
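
For instance – an illustrative sketch only, assuming SSE support and an array length that’s a multiple of 4, and not a claim about any particular compiler – hand-vectorizing even a trivial reduction with intrinsics is the kind of thing that has historically beaten plain compiler output:

```cpp
// Plain loop vs. a hand-vectorized SSE version of the same float sum.
#include <xmmintrin.h>

float SumScalar(const float* v, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; ++i)
        s += v[i];
    return s;
}

float SumSse(const float* v, int n)      // n assumed to be a multiple of 4
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(v + i));

    // Add up the four partial sums held in the SSE register.
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}
```

Reading the two side by side (or the assembly the compiler emits for each) is exactly the kind of awareness I mean.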

I’m a bit surprised at how Jeff completely misses this. I always enjoy hearing people say that learning assembly / C / C++ is useless while they’re using tools that are written in them.