Real-Time Raytracing

Raytracing in hardware…not for realtime though
http://www.artvps.com/page/109/raybox.htm

@Ulric

Yes, all scanline renderers cull as much as they can, but they mostly cull the stuff not in view. This can make a huge difference.

Let’s say you have a super complex model with billions of triangles. Even if the entire thing only fills up 1% of the screen, you are still going to process all the triangles with a scanline renderer.

But with a raytracer, you’ll only render the visible surfaces, which is only a handful of triangles. The performance difference would be huge, even on today’s hardware.

The trade-off is that with scanline you can render the triangles serially (low memory cost), but with a ray tracer you pretty much need the whole scene in memory to do hit detection fast.

DOF and motion blur are trivial to implement with Ray Tracing, they are just prohibitive to use, and make the cost grow exponentially. That’s the problem that REYES was designed to address, and what I was replying to.

You misunderstood. It’s not significantly expensive. In big-O terms it’s just a linear term on the end (dependent on the resolution). They do not make the cost grow exponentially unless you implement them badly. What’s wrong with distance-dependent convolution based on the depth buffer contents for DOF? Or rendering consecutive frames and blurring them together for motion blur?

Motion blur adds a multiplicative constant and DOF adds an additive one. The multiplicative one is annoying, but neither actually affects the growth of the run time or the asymptotic run time…
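
For what it’s worth, here is a minimal C++ sketch of the depth-buffer DOF idea from the comment above: a box blur whose radius grows with distance from the focal plane. The names and the linear circle-of-confusion model are my assumptions, not anyone’s shipping code.

    // Sketch: DOF as distance-dependent convolution over the depth buffer.
    // Blur radius grows with distance from the focal plane (a crude
    // circle-of-confusion model). Cost: O(pixels * kernel area), a constant
    // factor on the frame, nothing exponential.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Pixel { float r, g, b; };

    std::vector<Pixel> depthOfField(const std::vector<Pixel>& image,
                                    const std::vector<float>& depth,
                                    int width, int height,
                                    float focalDepth, float blurScale,
                                    int maxRadius)
    {
        std::vector<Pixel> out(image.size());
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // Pixels far from the focal plane get a bigger blur kernel.
                int radius = std::min(maxRadius,
                    (int)(blurScale * std::fabs(depth[y * width + x] - focalDepth)));
                float r = 0, g = 0, b = 0;
                int count = 0;
                for (int dy = -radius; dy <= radius; ++dy) {
                    for (int dx = -radius; dx <= radius; ++dx) {
                        int sx = std::clamp(x + dx, 0, width - 1);
                        int sy = std::clamp(y + dy, 0, height - 1);
                        const Pixel& p = image[sy * width + sx];
                        r += p.r; g += p.g; b += p.b;
                        ++count;
                    }
                }
                out[y * width + x] = { r / count, g / count, b / count };
            }
        }
        return out;
    }

Blending N consecutive frames for motion blur is the multiplicative constant mentioned above: N renders per displayed frame, still linear in resolution.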

Where can I sign up for a job at one of these companies struggling with these “problems”?

/me googles

Re: The Amiga.

Off-topic, but I’m surprised to see that the Amiga is still around to this day. Someone has managed to recreate the Amiga A500 on an FPGA!

http://www.youtube.com/watch?v=HwP0t0kakW0

Hello all, very interesting discussion. Please accept my humble post with a link to my website, The Amusement Machine (http://theamusementmachine.net), which has some videos and screenshots of real-time raytracing on graphics hardware. One of the videos contains normal hardware-accelerated rasterization mixed with raytracing, all in real time. I apologize for the lack of a demo or exe, but I promise one will be up soon.

Hi!

It’s quite funny to read all the comments about raytracing vs. rasterising. You probably shouldn’t pay too much attention to what the big players say.
Intel wants to sell their (future) CPUs, so they are hyping it.
Nvidia wants to sell their GPUs, so they are biased against it.
And John Carmack? He has enough knowledge and deep enough pockets to get all the nice visuals out of a GPU, so it doesn’t really matter to him.
And at the moment it’s a choice between using rasterising with an API or even a complete engine, or writing a raytracer from scratch and probably ignoring those nice graphics cards most gamers spent much more money on than on their CPUs. Anyone who needs to sell a (non-exotic) game in the next few years would be pretty silly to use raytracing.

But we are almost at the point where a raytraced game is feasible. On high-end hardware it may already be.
It won’t look nearly as nice as those rasterised AAA titles like Crysis, probably more like something from 10 years ago. But… if you want shadows, reflections, refractions, or even combinations of those, then with a raytracer you define the material, drop the object into the scene, and it works (see the sketch after this paragraph).
With a rasteriser, getting even one of those effects right in a scene can give you a real headache.
If you can pay a dozen top coders and artists, it probably doesn’t matter, but for a small, independent group on a budget, raytracing may become interesting very soon (2-3 years?).
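
To make the “drop the object in and it works” point concrete, here is a minimal C++ sketch of a recursive tracer with a mirror material. The Scene, Sphere, and Hit types are illustrative, not any particular engine’s API; the point is that reflection falls out of a single recursive call rather than a special render pass.

    #include <cmath>
    #include <vector>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(Vec3 o) const { return { x + o.x, y + o.y, z + o.z }; }
        Vec3 operator-(Vec3 o) const { return { x - o.x, y - o.y, z - o.z }; }
        Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
    };
    inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Hit { bool valid; Vec3 point, normal, baseColor; float reflectivity; };
    struct Sphere { Vec3 center; float radius; Vec3 color; float reflectivity; };

    struct Scene {
        std::vector<Sphere> spheres;
        // Closest-hit query; assumes the ray direction `d` is normalized.
        Hit intersect(Vec3 o, Vec3 d) const {
            Hit best{ false, {}, {}, {}, 0.0f };
            float bestT = 1e30f;
            for (const Sphere& s : spheres) {
                Vec3 oc = o - s.center;
                float b = dot(oc, d);
                float c = dot(oc, oc) - s.radius * s.radius;
                float disc = b * b - c;
                if (disc < 0) continue;            // ray misses the sphere
                float t = -b - std::sqrt(disc);
                if (t > 1e-4f && t < bestT) {
                    bestT = t;
                    Vec3 p = o + d * t;
                    best = { true, p, (p - s.center) * (1.0f / s.radius),
                             s.color, s.reflectivity };
                }
            }
            return best;
        }
    };

    // A mirror material is one recursive call, not a special-cased pass.
    // `depth` caps the recursion for nested reflections.
    Vec3 trace(const Scene& scene, Vec3 origin, Vec3 dir, int depth)
    {
        Hit h = scene.intersect(origin, dir);
        if (!h.valid || depth <= 0)
            return { 0, 0, 0 };                    // background colour
        Vec3 color = h.baseColor;                  // local shading would go here
        if (h.reflectivity > 0) {
            // Mirror the incoming direction about the surface normal.
            Vec3 r = dir - h.normal * (2 * dot(dir, h.normal));
            Vec3 bounce = trace(scene, h.point + r * 1e-4f, r, depth - 1);
            color = color * (1 - h.reflectivity) + bounce * h.reflectivity;
        }
        return color;
    }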

And raytracing will scale better than rasterising in the future.

-rasterising doesn’t scale nearly as well as raytracing across multiple processors.

-non-local effects like shadows, reflections, and some kinds of GI will become more and more important. Rasterising is local by design: only one triangle is accessed at a time. Raytracing will be more efficient for those effects, because you have access to the whole scene at all times and always know exactly what you need to access.

-as triangles get smaller and smaller, rasterisers lose more and more of one reason for their efficiency: projecting a triangle onto the screen and then drawing lots of pixels very fast.

-Yes, you can (and have to) divide scenes for rasterisers. But you cannot divide them as finely as in raytracing, because then you would lose another reason for their efficiency: just throwing lots and lots of triangles at the screen, without any complicated checks and computations. So what, you ask? I’m pretty sure that the more complex the scenes and effects get, the more you will see the benefit of only accessing what you really need.

-memory access in a well-designed raytracer is actually very good. You get into trouble if you have, for example, lots of non-planar reflecting surfaces. But doing something like that with rasterising is much worse.

And please don’t always assume the most naive approach when implementing something for a raytracer.
Adaptive antialiasing costs only a fraction of a naive implementation. Soft shadows can be sped up in a similar way, as the sketch below shows.
And if nothing else helps, you can still use those hacks and tricks you have to use on a rasteriser.
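
As an example of the non-naive approach, here is a rough C++ sketch of adaptive soft shadows: probe the area light with a few rays, and only pay for a dense sample set inside the penumbra, where the probes disagree. occluded() and sampleAreaLight() are assumed helpers, not a real library’s API.

    struct Vec3 { float x, y, z; };

    bool occluded(Vec3 from, Vec3 to);       // any-hit shadow query (assumed)
    Vec3 sampleAreaLight(int i, int n);      // i-th of n light samples (assumed)

    // Returns the fraction of the area light visible from `point`.
    float softShadow(Vec3 point, int probes, int fullSamples)
    {
        int visible = 0;
        for (int i = 0; i < probes; ++i)
            if (!occluded(point, sampleAreaLight(i, probes)))
                ++visible;
        if (visible == 0) return 0.0f;       // fully shadowed: done
        if (visible == probes) return 1.0f;  // fully lit: done
        // Penumbra: the probes disagree, so refine with the full sample set.
        visible = 0;
        for (int i = 0; i < fullSamples; ++i)
            if (!occluded(point, sampleAreaLight(i, fullSamples)))
                ++visible;
        return (float)visible / fullSamples;
    }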

And remember: you are mostly comparing software raytracers from hobby coders with little to no artistic skill against rasterisers running on highly evolved, dedicated hardware, showing the work of the industry’s finest artists.

Combining a raytracer and a rasteriser in an interactive environment, like a game, makes little sense to me. If the raytracer isn’t fast enough to render those secondary effects fullscreen, you can’t allow the player to step close to an object that uses such an effect. And if the raytracer is fast enough, you’re better off with raytracing only, because the worst-case speed difference will be minimal, and it doesn’t really matter if you have 100 or 1000 fps in the best case. And getting a rasteriser and a raytracer to work nicely together is another big headache.

GI is just something on top of rasterising or raytracing. But most GI algorithms will work much better with raytracing than with rasterising, and may even use the same resources (procedures/hardware) for tracing rays and intersecting geometry as the raytracing part does.

So to me it’s pretty clear: raytracing is the future. It’s probably just not as close as Intel wants us to believe.

If an individual ray intersects any “object”, then a vector is calculated from the point of intersection to every light source in the scene (in your diagram this second vector is called the “shadow ray”). The angle between the surface normal and this vector provides the basis for a number of different colour accumulation calculations.
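
In code, that shadow-ray step might look roughly like the C++ sketch below; occluded() and the small vector types are assumed stand-ins, not a real engine’s API.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    inline Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    inline Vec3 normalize(Vec3 v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    struct Light { Vec3 position; float r, g, b; };

    bool occluded(Vec3 from, Vec3 to);  // shadow-ray test against the scene (assumed)

    // Direct lighting at a hit point: one shadow ray per light source,
    // each contribution weighted by the Lambert cosine term between the
    // surface normal and the shadow-ray direction.
    void shade(Vec3 point, Vec3 normal, const Light* lights, int numLights,
               float& r, float& g, float& b)
    {
        r = g = b = 0;
        for (int i = 0; i < numLights; ++i) {
            if (occluded(point, lights[i].position))
                continue;                           // light blocked: in shadow
            Vec3 toLight = normalize(sub(lights[i].position, point));
            float cosTheta = std::max(0.0f, dot(normal, toLight));
            r += lights[i].r * cosTheta;
            g += lights[i].g * cosTheta;
            b += lights[i].b * cosTheta;
        }
    }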

Still not sure if this is for real or not:

http://www.techarp.com/showarticle.aspx?artno=526

especially the comment:
“The first quoted Daniel Pohl’s work in which he was about to modify Quake IV to work with the Intel ray-tracing engine. Just using an 8-core Intel processor, Daniel was able to achieve almost 100 fps at the resolution of 1024x1024.”

Thought this might be an interesting followup to the topic:
http://www.tgdaily.com/html_tmp/content-view-37925-113.html

Seems that Intel got it to scale up to 16 cores!

Great post Jeff. I still have my Amiga 2000. I attended AmigaWorld in DC too. Totally got me hooked on CG.

I still use my Amiga 2000 - mostly for music composition - and I still play with ray tracing from time to time.

What a great retrospective! Those were the days. Geeks, not users, ruled. The idea of being able to construct and visualize a virtual mathematical reality captured all our imaginations back in the Amiga days. I remember one of my favorites was the first Caligari trueSpace renderer for the Amiga. It’s now in its 7th incarnation for Windows at http://www.caligari.com, although there’s lots more competition out there now.

Mercury Systems (Open Inventor, Amira) has a nice real-time rendering system; they use PC clusters and networks to do some good stuff:
http://3dviz.mc.com/products/openRTRT.asp

Pixar found that the REYES algorithm they traditionally used was falling short on Cars. All that chrome, shiny metal, and reflective glass meant that REYES could not pick up on the multitude of reflections and refractions present. Hence, they started considering ray tracing for Cars, which is why the time frame shot up. But keep the arithmetic in mind: a 90-minute movie at 30 frames per second is 162,000 frames, so even at one minute of render time per frame it takes nearly 4 months to render the whole thing on a single machine; 10 hours per frame on average is a bit much. Some shots may have taken that long, but not all of them.

Also, they started work on some ray-tracing speed-ups using adaptive sampling and multi-resolution geometry. You can read the paper at http://graphics.pixar.com/ (there are a number of very cool papers there).

Still love the Amiga and all its Ray Tracing glory.

Re: Deano Calver’s article:

Aliasing could be solved by using an adaptive technique (give rays a size, determine where each ‘corner’ of the ray hits, and shoot additional rays if needed). This also helps with refraction: where a ray is skewed, each corner might hit a surface some distance apart and thus require additional rays.
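
Here is roughly what that corner-ray scheme could look like in C++. traceRay() and the colour-similarity threshold are stand-ins for illustration; a real version would also cache corner samples shared between neighbouring pixels instead of re-tracing them.

    #include <cmath>

    struct Color { float r, g, b; };
    Color traceRay(float x, float y);  // ray through a screen point (assumed)

    inline bool similar(Color a, Color b, float eps = 0.05f) {
        return std::fabs(a.r - b.r) < eps && std::fabs(a.g - b.g) < eps &&
               std::fabs(a.b - b.b) < eps;
    }

    // If the corner colours agree, average them; otherwise split the region
    // into four quadrants and recurse. Smooth areas stay cheap; only edges
    // (and skewed refractions) pay for the extra rays.
    Color samplePixel(float x0, float y0, float x1, float y1, int depth)
    {
        Color c00 = traceRay(x0, y0), c10 = traceRay(x1, y0);
        Color c01 = traceRay(x0, y1), c11 = traceRay(x1, y1);
        bool uniform = similar(c00, c10) && similar(c00, c01) &&
                       similar(c00, c11) && similar(c10, c11);
        if (uniform || depth <= 0)
            return { (c00.r + c10.r + c01.r + c11.r) / 4,
                     (c00.g + c10.g + c01.g + c11.g) / 4,
                     (c00.b + c10.b + c01.b + c11.b) / 4 };
        float mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
        Color a = samplePixel(x0, y0, mx, my, depth - 1);
        Color b = samplePixel(mx, y0, x1, my, depth - 1);
        Color c = samplePixel(x0, my, mx, y1, depth - 1);
        Color d = samplePixel(mx, my, x1, y1, depth - 1);
        return { (a.r + b.r + c.r + d.r) / 4, (a.g + b.g + c.g + d.g) / 4,
                 (a.b + b.b + c.b + d.b) / 4 };
    }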

Moving scenes can fairly easily be handled by splitting geometry into static and non-static. Static geometry (walls and so on) has all its partitions pre-defined, while moving items (characters, vehicles) have moving bounding volumes that are tested per ray, as sketched below. The swaying-grass problem can be solved by combining the two: a static partitioned volume sized to the moving geometry plus its maximum travel distance.
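
A rough C++ sketch of that static/dynamic split, with illustrative types rather than any specific engine: the static world sits in a prebuilt structure, and movers are screened by a cheap bounding-sphere test before the full per-object intersection.

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Hit { bool valid; float t; };

    struct StaticBVH {                        // walls, terrain: built once
        Hit intersect(Vec3 o, Vec3 d) const;  // closest hit (assumed, prebuilt)
    };

    struct MovingObject {                     // characters, vehicles
        Vec3 boundsCenter;                    // bounding sphere, updated per frame
        float boundsRadius;
        Hit intersect(Vec3 o, Vec3 d) const;  // full per-object test (assumed)
    };

    // Conservative ray-vs-sphere rejection test; assumes d is normalized.
    inline bool hitsSphere(Vec3 o, Vec3 d, Vec3 c, float r) {
        Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
        float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
        float cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
        return b * b - cc >= 0;
    }

    Hit intersectScene(const StaticBVH& world,
                       const std::vector<MovingObject>& movers,
                       Vec3 o, Vec3 d)
    {
        Hit best = world.intersect(o, d);     // static partitions, precomputed
        for (const MovingObject& m : movers) {
            if (!hitsSphere(o, d, m.boundsCenter, m.boundsRadius))
                continue;                     // cheap reject: ray misses bounds
            Hit h = m.intersect(o, d);
            if (h.valid && (!best.valid || h.t < best.t))
                best = h;                     // keep the nearest hit
        }
        return best;
    }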

AFAIK ray tracing is not a global illumination technique. Once you determine the point at pixel X,Y, you calculate the light from each light source based on colour, distance, and angle, the same as with other rasterization techniques. Ray tracing gives you the basic facilities to build a better lighting model (e.g. shadow handling, ambient occlusion, and even radiosity), but that is not truly part of the algorithm.

However, he is correct that each solution requires more rays being fired. When you consider that most solutions to visual problems with scanline techniques involve cheating in some way (e.g. adding lights to simulate global illumination, or pixel shaders for refraction and reflection maps), the answer in ray tracing will still always be to fire more rays.

By the time we have enough processors on a single machine to make real-time ray tracing a possibility, such hardware will allow far better results with other scanline methods.

Thought you guys might find this Q&A with John Carmack on raytracing and its current place in game technology interesting.
http://www.pcper.com/article.php?aid=532
Seems he doesn’t think it’s all that either, but I’ll let you have a butcher’s.

I’ve implemented a (very basic) real-time ray tracer in Silverlight. You can check it out at http://kristofverbiest.blogspot.com/2010/11/real-time-ray-tracing-in-silverlight.html