Real-Time Raytracing

You may have seen this already, but there was a story on Digg sometime last year about building a ray-tracing renderer in a weekend.

http://www.superjer.com/pixelmachine/

If you haven’t seen it, Jon Harrop has some comparisons of a ray tracer implemented in various languages.

http://www.ffconsultancy.com/languages/ray_tracer/results.html

OCaml for the win!

And the JoCaml extension to OCaml provides distributed programming and “free” n-core scalability.

So I’m pretty sure we’re closing in on “solved problem”. :slight_smile:

Never tried out the RealStorm benchmarks or the Nature Suxx demos?

http://www.realstorm.com/

You mentioned that you’re surprised that Pixar never got into raytracing before Cars. Actually, Pixar used ray tracing on a few shots as early as A Bug’s Life.

They used a separately implemented RenderMan-compliant system called BMRT (http://en.wikipedia.org/wiki/Blue_Moon_Rendering_Tools) that would serve rays to their PRMan implementation. They hired Larry Gritz, the grad student who wrote BMRT, onto their PRMan team. He has since left and worked on Gelato, NVIDIA’s film-quality hardware-accelerated rendering system.

An interesting story I heard at SIGGRAPH - I believe PRMan 11 was the first version to support raytracing internally. Before it was available, Weta was working on the second film in the LotR trilogy, which features Gollum prominently. His eyes are so large that without proper refraction of his iris through the lens, they look noticeably unnatural and flat. Because they couldn’t easily use raytracing to get proper refraction, they created a displacement shader that computed where rays would land if refraction were actually taking place, and deformed the iris and pupil so that PRMan’s simple scanline rendering would produce the proper result.

@Parveen

AA is only as hard as raytracing itself. To antialias with raytracing you just send more rays per pixel. It costs a lot, but so does raytracing in general right now. Once we get real-time raytracing, just wait a few more years for antialiased real-time raytracing. AA is a non-issue.
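To make “more rays per pixel” concrete, here is a rough sketch of supersampled pixel shading; everything named in it (trace, camera_ray, the little structs) is a made-up placeholder, not any particular renderer’s API:

```cpp
#include <cstdlib>

// Placeholder types and stubs so the sketch stands alone; a real tracer has
// a proper Ray (origin + direction) and a real trace() function.
struct Color { float r, g, b; };
struct Ray   { float px, py; };

Color trace(const Ray&)            { return {0.5f, 0.5f, 0.5f}; }
Ray   camera_ray(float x, float y) { return {x, y}; }

// "More rays per pixel": fire several jittered rays through each pixel and
// average the results, which smooths out the hard edges that cause aliasing.
Color shade_pixel(int x, int y, int samples)
{
    Color sum = {0, 0, 0};
    for (int s = 0; s < samples; ++s) {
        // Jitter the sample position inside the pixel.
        float jx = x + float(std::rand()) / RAND_MAX;
        float jy = y + float(std::rand()) / RAND_MAX;
        Color c = trace(camera_ray(jx, jy));
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= samples; sum.g /= samples; sum.b /= samples;
    return sum;
}
```

The cost is exactly what you’d expect: N samples per pixel means roughly N times the primary-ray work.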

They claim moving scenes don’t work with raytracing because of the work required to get the scene into the necessary acceleration structure, but this is a non-issue. Just toss another core or two at updating the structure in parallel. Simple.
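If it helps, here is a rough sketch of that idea: double-buffer the acceleration structure and rebuild it on a spare core while the current frame renders. Scene, BVH, build_accel and render_frame are all placeholders I made up, not any engine’s actual API:

```cpp
#include <thread>
#include <utility>

// Placeholders: a real engine has an actual scene and acceleration structure.
struct Scene {};
struct BVH   {};

BVH  build_accel(const Scene&)              { return {}; }
void render_frame(const Scene&, const BVH&) {}

// Double-buffering: one core rebuilds the structure for the next frame while
// the other cores trace the current frame against the old one.
void run(Scene& scene, int frames)
{
    BVH current = build_accel(scene);
    for (int f = 0; f < frames; ++f) {
        BVH next;
        std::thread builder([&] { next = build_accel(scene); }); // the spare core
        render_frame(scene, current);                            // everyone else
        builder.join();
        current = std::move(next);
    }
}
```

A real engine would of course apply the next frame’s animation to the scene before kicking off the rebuild, and would take care not to mutate the scene mid-build.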

I don’t know your understanding of raytracing, but you can pretty much toss any existing 3D model into an appropriate raytracer… I don’t see how artists would not be able to affect the scene.

The author of that article goes on to talk about how GI doesn’t look good. Double U Tee F! GI scenes are by far the most realistic/prettiest CG scenes I have ever seen/created.

Raytracing IS the holy grail of rendering. (Of course, we currently use mostly backwards raytracing; forward raytracing is the holy grail of raytracing.)

http://www.idfun.de/q3rt/downloads.html

The above page notes that playable frame rates were only achieved using a virtual CPU of 36GHz (cluster of 20x AMD AthlonXP 1800+). Even if we assume that improvements in desktop CPU power and parallelism over the past few years have reduced the requirement to a 30GHz CPU, one would still need 5x 3GHz dual-core CPUs to make ray tracing practical on a standard PC. (And that’s not even taking anti-aliasing into account.)

Given that the processor industry seems to have hit a ceiling at 3GHz, I’m betting it will be a while before we see games that use ray tracing renderers with acceptable performance. Even with the might of Intel behind ray tracing, rasterization-based rendering has been around for more than a decade and the major players (NVIDIA and ATI) have become very good at what they do. Ray tracing has a lot of catching up to do if it’s going to compete with (never mind supersede) rasterization.

I’m certain that ray tracing is the future of computer graphics, but I’m equally sure it will be a number of years before it becomes a viable alternative to rasterization.

I agree with some of the other comments. Ray-tracing is NOT the ultimate holy grail of rendering. It’s just one of many techniques developed over time, and although it has some nice intrinsic properties, it also has disadvantages. Hybrid schemes will always produce better results than pure ray-tracing.

Ray-tracing would be all you’d need IF modelling the environment could be done at essentially molecular level AND you could ray-trace at photonic resolution instead of screen resolution. The latter is ultimately a matter of computational resources, but the former is incredibly hard, if not impossible.

Hybrid rendering schemes employ algorithms such as radiosity to model the micro-structure of surfaces without having to do this explicitly, bump by bump.

Ah Jeff, that’s a topic I am so interested in. I am still waiting for the day when the companies stop creating yet another “graphics feature” that is just one more cheat to make the graphics look more realistic, and instead invest their time and money into something for the future, i.e. real-time raytracing. Imagine future video cards with multiple GPUs made solely for real-time raytracing: still highly specialized, but you won’t need ten new features every year. Performance will also scale much better with price.

And last but not least, possibly one day we won’t need dedicated GPUs at all; instead, a few of your 1024 (or however many we will have by then) CPU cores will be assigned the task of raytracing, and the video card will be little more than a high-speed output port.

That might seem unrealistic? I suppose what we have as standard hardware nowadays would have been thought unrealistic back in 1987.

Raytracing will become THE killer app in video graphics as soon as multi-core-ization hits a certain level (i.e. number of cores) and the first programs, and probably video games, using those cores for real-time raytracing pop up. Especially in video gaming there are two bottlenecks: hard disk access speed, which will improve dramatically as flash memory matures and replaces mechanical magnetic hard drives, and video data processing speed, which at that point will scale DIRECTLY with progress in CPU development.

“In fact, Eric recalled recently, the Commodore legal department initially “thought it was a hoax, and that I’d done the animation on a mainframe.” He sent them his renderer so that they could generate and compile the frames themselves.”

That just about sums up Commodore management’s view of the Amiga. They had absolutely no idea how brilliant the machine the Amiga engineers had made was. IBM and Apple were genuinely scared when the Amiga (with its amazing custom chips and pre-emptive multitasking OS) came out in 1985, but could rest easy once they saw Commodore’s pathetic attempts at marketing the thing.

I was really into graphics at one point, and wrote a pretty fast raytracer with some pretty interesting features in under 24 hours while I was in school.

A few points:

The classical raytracing algorithm is O(log n) per ray in the number of scene primitives, given an acceleration structure (albeit with a very large constant), while rasterization is O(n). This means that eventually (for complex enough scenes) raytracing may actually be faster than rasterization. This can already be shown for simple (eye-rays only) raytracing.

There is significant friction between the elegance of the raytracing algorithm and modern computing architectures. Raytracing relies heavily upon recursion, which modern CPUs don’t like very much from a low-level perspective. Recursion is elegant, but recursive data structures often lead to cache thrashing, which in turn drives a lot of low-level optimization. Some of these optimizations are as clever as storing all the nodes of your KD-tree contiguously in memory, and some are as hackish and evil as using the lower 2 bits of a floating point number to store extra data.
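For the curious, here is roughly what that kind of node packing looks like. This mirrors the layout described in pbrt, but the field and function names are made up, and this particular variant tucks the axis bits into an integer word rather than the float itself:

```cpp
#include <cstdint>
#include <vector>

// An 8-byte kd-tree node. All nodes live in one flat array and children are
// referenced by index rather than pointer, so traversal stays cache-friendly.
struct KdNode {
    union {
        float    split;       // interior node: split-plane position
        uint32_t prim_offset; // leaf node: offset into the primitive list
    };
    uint32_t packed;          // low 2 bits: split axis 0/1/2, or 3 for a leaf;
                              // high 30 bits: index of the far child, or the
                              // primitive count for a leaf

    bool     is_leaf()    const { return (packed & 3u) == 3u; }
    uint32_t split_axis() const { return packed & 3u; }
    uint32_t far_child()  const { return packed >> 2; }
    uint32_t prim_count() const { return packed >> 2; }
};

static_assert(sizeof(KdNode) == 8, "keep nodes small so more of the tree fits in cache");

// The whole tree is one contiguous allocation rather than a web of pointers.
std::vector<KdNode> nodes;
```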

There is fascinating research going on in the field of raytracing hardware (see http://graphics.cs.uni-sb.de/SaarCOR/). Honestly, the only reason we have awesome 3D rasterization nowadays is because of dedicated hardware. Modern CPUs still can’t rasterize a modern scene at playable framerates with decent resolution, even with lots of low-level tweaking. Raytracing hardware shows some SERIOUS potential. If I remember correctly, you can match a high-end dual-core Pentium’s performance with an FPGA running at around 100 megahertz (don’t quote me on that though). Now, think of what could be possible with a dedicated ray-tracing card with specs equivalent to a modern high-end graphics card :smiley:

For those living in Zürich, Switzerland: Pixar’s Rob Cook is at ETH today (http://graphics.ethz.ch/main.php?Menu=6Submenu=1#lunch_90).

Regards,
tamberg

Is it common to use multiple light sources in ray tracing? How does ray-trace rendering scale with the number of light sources?

It is worth noting that real-time ray tracing does not scale near-perfectly on real-world hardware, because it tends to jump around in memory a lot on complex scenes.
The resulting cache misses mean it only scales up to the limit of the memory subsystem. (By parallelizing rays that are close to each other you can mitigate this, but as you add more cores the probability of nearby rendering steps taking similar paths drops.)
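A common way to get that “rays close to each other” behaviour is plain screen-space tiling. A rough sketch, with trace_pixel standing in for the real per-pixel work (a real renderer would feed tiles to a fixed-size thread pool rather than spawning a thread per tile):

```cpp
#include <algorithm>
#include <thread>
#include <vector>

void trace_pixel(int /*x*/, int /*y*/) {} // placeholder for the real per-pixel work

// Hand out small screen tiles instead of scattered pixels, so each core
// traces a spatially coherent bundle of rays that tends to touch the same
// parts of the scene (and therefore the same cache lines).
void render_tiled(int width, int height, int tile = 32)
{
    std::vector<std::thread> workers;
    for (int ty = 0; ty < height; ty += tile)
        for (int tx = 0; tx < width; tx += tile)
            workers.emplace_back([=] {
                for (int y = ty; y < std::min(ty + tile, height); ++y)
                    for (int x = tx; x < std::min(tx + tile, width); ++x)
                        trace_pixel(x, y);
            });
    for (auto& w : workers) w.join();
}
```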

Bobby,

With classical ray tracing, the number of light rays you have to test at each intersection is in general linearly proportional to the number of light sources in the scene. Simple ray tracers will test every light source at each intersection in order to determine which lights the point of intersection is illuminated by. If you have area light sources and want better shadows (with soft edges, etc) you need to fire multiple rays at each light source for each intersection.

Of course, there are always clever ways to reduce the number of tests you have to make, but conceptually that’s what you have to do.
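In code, the conceptual version is just a loop over the lights at each hit point. A minimal sketch (all of the types and the occluded() test are placeholders, not a real renderer’s API):

```cpp
#include <vector>

// Placeholders only; a real tracer has proper vectors, materials, and an
// actual shadow-ray intersection test.
struct Vec3  { float x, y, z; };
struct Light { Vec3 position; Vec3 intensity; };

bool occluded(const Vec3& /*from*/, const Vec3& /*to*/) { return false; }
Vec3 shade_from(const Vec3& /*point*/, const Light& l)  { return l.intensity; }

// One shadow ray per light at each intersection, so cost grows linearly with
// the number of lights. Area lights would add an inner loop that fires
// several rays at sample points on the light to get soft shadows.
Vec3 direct_lighting(const Vec3& hit_point, const std::vector<Light>& lights)
{
    Vec3 result = {0, 0, 0};
    for (const Light& light : lights) {
        if (occluded(hit_point, light.position))
            continue; // this light is blocked; no contribution
        Vec3 c = shade_from(hit_point, light);
        result.x += c.x; result.y += c.y; result.z += c.z;
    }
    return result;
}
```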

For anyone interested in building a ray tracer while still learning and understanding the theory behind it, I would recommend the book “Realistic Ray Tracing” by Shirley and Morley. It’s very hands-on, but it also presents things from a theoretical perspective.

Raytracing easily produces excellent results for the relatively few things in the world which are highly reflective or refractive. For everything else, there’s rasterisation, which is nearly always faster and often more flexible.

Raytracing is a non-issue: graphics programmers are pragmatic, and will use whatever gets them the best bang-for-buck. So far, rasterisation has won hands down.

MaxL,

I was more referring to classical raytracing, not a hybrid approach. There are advantages to having a “pure” approach.

Actually tracing the light rays still yields the best shadows. The very point of using raytracing instead of rasterization is that raytracing generates images of better quality than rasterization when solving certain problems (namely shadows, reflection, and refraction).

Furthermore, whatever method you use to calculate shading will still scale linearly with the number of light sources. It might be a “smaller” linear, but it will still be linear.

I actually worked at Commodore in the very early 90’s… While not an electrical engineer, I did have wonderful discussions with them about how they made the Amiga do all those amazing things.

Before that, I wrote ray-tracers all through college… everyone loved the eye-candy. Now my day job is creating tools to build near real-time plots and graphs for monitoring.

With all my tendencies towards visualization, you’d think I’d agree that ray-tracing is the future of computer graphics. But I doubt it.

I’d so much rather see better physical models for animation and innovative/immersive controls for human/computer interaction. Perhaps these aren’t really computer graphics exactly, but they closely relate. Spend the cycles here and nobody will notice whether it’s raytraced or not.

Considering the 15-hours-per-frame statistic, doing the math (24 fps × 3,600 s ≈ 86,400 frames per hour of film, × 15 hours ≈ 1.3 million machine-hours) yields that an hour’s worth of the movie would take about 148 years to render (if rendered one frame at a time on one machine). One must wonder just how many machines they used, and what wondrous horsepower they must have had in order to render the entire movie in a practical timeframe.

Raytracing WILL eventually displace rasterization, but it won’t happen tomorrow. Most of the objections given to raytracing by other commenters are moot - antialiasing, for example, is generally viewed as a STRENGTH of raytracing in terms of ease of implementation and cost relative to the complexity of the scene. The issue of dynamically updating acceleration structures for ray tracing will be solved; there are already data structures, such as bounding interval hierarchies, that can be updated quickly and yet deliver most of the performance of traditional acceleration structures like kd-trees. Raytracing is also inherently able to take advantage of precalculated global illumination data (such as photon maps) in a way that rasterization is not; dynamic global illumination is a hard problem, but it’s hard for both ray tracing and rasterization, and the solution is likely to involve ray tracing. Photon maps are calculated using “forwards” ray tracing - that is, with the rays originating from the light source - and so once realtime raytracing is possible, it’s only a matter of time until photon mapping or a similar algorithm can be performed in realtime as well.
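To illustrate the “forwards” direction, here is a bare-bones sketch of the photon emission pass; everything in it is a made-up placeholder, and a real photon mapper also handles bounces, proper sampling, and a kd-tree over the stored photons:

```cpp
#include <cstdlib>
#include <vector>

struct Vec3   { float x, y, z; };
struct Photon { Vec3 position; Vec3 power; Vec3 incoming; };
struct Hit    { bool found; Vec3 point; };

Hit  intersect_scene(const Vec3& /*origin*/, const Vec3& /*dir*/) { return {false, {0, 0, 0}}; }
Vec3 random_direction()
{
    auto r = [] { return float(std::rand()) / RAND_MAX * 2.0f - 1.0f; };
    return {r(), r(), r()}; // crude and unnormalized, but fine for a sketch
}

// Rays start at the light; every surface hit deposits a photon that shading
// can look up later. Each photon carries an equal share of the light's power.
std::vector<Photon> build_photon_map(const Vec3& light_pos, Vec3 light_power, int count)
{
    light_power.x /= count; light_power.y /= count; light_power.z /= count;
    std::vector<Photon> map;
    for (int i = 0; i < count; ++i) {
        Vec3 dir = random_direction();
        Hit hit = intersect_scene(light_pos, dir);
        if (hit.found)
            map.push_back({hit.point, light_power, dir});
    }
    return map;
}
```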

All that’s really holding us back is having a large number of cores to work with and a great deal of memory bandwidth; it’ll take a while before we reach the point at which realtime raytracing makes MORE sense than rasterization, but it’s not so far off as some seem to think, and the time complexity of raytracing as compared with rasterization makes the switchover a mathematical certainty as scenes get more complex.

Splendid. Excellent post, really helpful. Thank you!