Real-Time Raytracing

The nice thing about POVRay is that you can download the source code and see what it is doing. Great for learning about coding for raytracing.

LOL. I wrote a ray-tracing app in VB.NET before I even knew what ray tracing was. At univ in 3rd-semester calculus, we learned a lot about 3D math (vectors, surfaces, etc…) and it just dawned on me that I could write a program that rendered 3D objects by plotting a line emanating from a “camera point” for every pixel on the screen, then testing for an intersection between that line and every object in the scene. Later, I wrote my own texture mapping algorithm as well (easy, since all the surfaces I had were defined using parametric equations). The lighting equations came from a physics lecture. My friends were impressed that I could write something so awesome.

It wasn’t until a year later when I took a computer graphics course that I realized the algorithm I’d “invented” already had a name.
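The core of that “invention” really does fit on a page. Here’s a minimal C++ sketch of the idea, with an ASCII framebuffer and a one-sphere scene invented purely for illustration: one ray per pixel, tested against every object.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
    double dot(const Vec3& b) const { return x * b.x + y * b.y + z * b.z; }
};

struct Sphere { Vec3 center; double radius; };

// Nearest positive hit distance along origin + t*dir, or -1 on a miss.
double hit(const Sphere& s, Vec3 origin, Vec3 dir) {
    Vec3 oc = origin - s.center;
    double a = dir.dot(dir);
    double b = 2.0 * oc.dot(dir);
    double c = oc.dot(oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0) return -1;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return t >= 0 ? t : -1;
}

int main() {
    const int W = 40, H = 20;
    Vec3 camera{0, 0, -3};
    std::vector<Sphere> scene = {{{0, 0, 2}, 1.5}};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // One ray per pixel, through an image plane in front of the camera.
            Vec3 dir{(x - W / 2.0) / W, (H / 2.0 - y) / H, 1.0};
            bool any = false;
            for (const Sphere& s : scene)      // test the line against every object
                if (hit(s, camera, dir) >= 0) any = true;
            std::putchar(any ? '#' : '.');     // '#' where the ray hits something
        }
        std::putchar('\n');
    }
}
```

Shading and texture mapping then hang off the hit point, which is exactly where the parametric surfaces and the lighting equations from the physics lecture come in.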

People seem to group all kinds of new rendering techniques under the rasterization label. I don’t feel that raytracing in itself is a goal to strive for. Raytracing covers only a small part of the rendering calculation, namely visibility, just as the z-buffer rasterization method does. One is pixel-order rendering, the other object-order. In terms of efficiency, rasterization beats raytracing hands down for calculating the correspondence between pixels and locations on objects.

Because all 3D graphics is about making images that look correct, a good developer would use rasterization everywhere it is feasible. Where rasterization falls down is reflection and refraction off curved surfaces (like car fenders), and people are actually surprisingly bad at judging defects in such cases. There will certainly be more raytracing than there is now; recent games already do simple raytracing for surface details. But rasterization is not going anywhere soon. A hybrid approach is much more likely: rasterization for most cases, ray tracing where rasterization doesn’t work or is too difficult.
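To make the pixel-order vs. object-order distinction concrete, here’s a deliberately toy C++ illustration (the 1-D “objects” and all names are made up, not any real renderer): both loops compute the same visible depths, they just visit the work in opposite order.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// A 1-D toy: an "object" covers a pixel range at a fixed depth.
struct Object { int firstPixel, lastPixel; double depth; };

int main() {
    const int W = 10;
    const double FAR = 1e30;                   // "nothing here yet"
    std::vector<Object> scene = {{2, 7, 5.0}, {4, 9, 3.0}};

    // Pixel order (how a ray tracer visits the work): for each pixel,
    // ask every object how far away it is along that pixel's ray.
    std::vector<double> nearest(W, FAR);
    for (int p = 0; p < W; ++p)
        for (const Object& o : scene)
            if (p >= o.firstPixel && p <= o.lastPixel)
                nearest[p] = std::min(nearest[p], o.depth);

    // Object order (how a z-buffer rasterizer visits the work): for
    // each object, touch only the pixels it covers and depth-test.
    std::vector<double> zbuf(W, FAR);
    for (const Object& o : scene)
        for (int p = o.firstPixel; p <= o.lastPixel; ++p)
            zbuf[p] = std::min(zbuf[p], o.depth);

    // Both arrive at the same visible depths, just in a different order.
    for (int p = 0; p < W; ++p)
        if (nearest[p] < FAR)
            std::printf("pixel %d: depth %.0f\n", p, nearest[p]);
}
```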

@Parveen:

"No! Please don’t perpetuate this myth! Raytracing is NOT the holy grail.

For a reasoned argument, check out this article by Deano Calver. He worked at Ninja Theory on Heavenly Sword.
http://www.beyond3d.com/content/articles/94

Executive Summary:

  • anti-aliasing is hard
  • moving scenes are VERY expensive
  • it is almost fully procedural, so the artist has near zero stylistic control over the final look of a shot"

Read the linked article. I have a few problems with it.

  1. “Exponentially” is used to mean “multiplied”. I know it’s common slang to say something twice as good (or even 1.5x as good) is “exponentially better”, but when you’re discussing the arcane aspects of different visual modeling approaches you should use precise language. Does antialiasing require “exponentially more” rays? No, it requires up to an order of magnitude more, a number that does not keep increasing. In practice it requires at least 4x and could use 10x (see the sketch after this list for the arithmetic). Dealing with “soft” light sources doesn’t require “exponentially more” rays either; it requires a large multiplier more (on the lighting traces).

  2. While he seems convinced that he’s talking in general terms about ray tracing, he is actually describing the compromises being made today to make ray tracing more efficient, and concluding that ray tracing (a term that covers the compromise-free variety as well as the pragmatic one) is not really “the holy grail”. I’m sorry, but every “problem” with the compromises made from the ideal of ray tracing is countered by several endemic to rasterization. As a bit of a layperson here, when I hear “real-time ray tracing is the holy grail”, I read that as something we are striving for, not a description of something which has been achieved. Perhaps he is reacting to some marketing type who has proclaimed their implementation of real-time ray tracing to be “the holy grail”, but that isn’t clear in his writeup; instead, he condemns ray tracing as an approach across the board.

  3. Somewhat along the same lines as above, it is a completely different question to ask, “which approach will yield good enough results with acceptable performance?” Perhaps rasterization could improve to the point where it is “as good as” compute-intensive ray tracing. Perhaps “the right” compromises could likewise be made to ray tracing to improve its performance while not hitting the failings he cites. The question of which tool is right for the job at hand is quite different from which is most likely to produce the most lifelike results in a “pure” implementation on infinite future hardware, etc.
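To put rough numbers on the antialiasing point from (1), here’s a trivial C++ sketch (the resolution and sample counts are just plausible examples):

```cpp
#include <cstdio>

int main() {
    const long width = 1920, height = 1080;
    const long primary = width * height;       // one eye ray per pixel
    for (long samples : {1L, 4L, 10L})         // typical AA sample counts
        std::printf("%2ldx AA: %ld rays\n", samples, primary * samples);
    // 4x supersampling means 4x the rays and 10x means 10x: a constant
    // multiplier per frame, not a number that keeps growing exponentially.
}
```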

You can definitely render the juggler in realtime today.

Heaven 7 is the most recent Realtime Raytracing demo I can find, and it’s almost 8 years old now!

You can view it on youtube here: http://youtube.com/watch?v=scSsxrMVXh8

Or you can download the demo itself and run it on your computer. It’s only 64k. http://www.demoscene.hu/~picard/h7/h7-final.zip

Realtime raytracing is old news for the demoscene. It’s been done for many years, almost exclusively on the CPU (without any help from the GPU).
Some more old and new real time raytracing demos:

“Chrome” was a 4KB intro that did realtime and precalculated raytracing back in 1995 on a 100MHz Pentium. It was not the first to do raytracing.
http://www.pouet.net/prod.php?which=15089

“Federation Against Nature” specialized in raytracing and made several demos between 2000 and 2003.
http://www.pouet.net/prod.php?which=9461

The Realstorm Global Illumination Benchmark is a real time raytracer that also adds many other nice effects, and is from 2006.
http://www.realstorm.com/

Heaven Seven from 2000 has already been mentioned.
http://www.pouet.net/prod.php?which=5

However, it seems the demoscene has moved away from the idea for now, for the same reasons that other people mention:

  1. You get more “bang for buck” with rastering, especially with 3D HW.
    For anything other than shiny quadratic solids, raytracing is terribly expensive and inefficient.
  2. More creative control with rastering.

The classical raytracing algorithm is O(log n) (albeit a very big O(log n)), while rasterization is O(n). This means that eventually (for complex enough scenes) raytracing may actually be faster than rasterization. This can already be shown for simple (eye-rays only) raytracing.
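For anyone wondering where the log comes from, here’s a minimal sketch assuming a bounding volume hierarchy over the scene (the types are invented for illustration, not from any particular renderer). A ray that misses a node’s bounding box skips that entire subtree:

```cpp
#include <algorithm>
#include <memory>

struct Ray  { double o[3], d[3]; };            // origin, direction (d[i] != 0)

struct AABB {
    double lo[3], hi[3];
    bool hit(const Ray& r) const {             // standard slab test
        double tmin = 0.0, tmax = 1e30;
        for (int i = 0; i < 3; ++i) {
            double t0 = (lo[i] - r.o[i]) / r.d[i];
            double t1 = (hi[i] - r.o[i]) / r.d[i];
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
        }
        return tmin <= tmax;
    }
};

struct Primitive {
    virtual bool intersect(const Ray&) const = 0;
    virtual ~Primitive() = default;
};

struct BVHNode {
    AABB bounds;
    std::unique_ptr<BVHNode> left, right;      // both null on a leaf
    const Primitive* prim = nullptr;           // set only on leaves

    bool intersect(const Ray& ray) const {
        if (!bounds.hit(ray)) return false;    // prune this whole subtree
        if (prim) return prim->intersect(ray); // leaf: one primitive test
        bool hitL = left  && left->intersect(ray);
        bool hitR = right && right->intersect(ray);
        return hitL || hitR;
    }
};
```

With a reasonably balanced tree, a typical ray prunes its way down to a handful of leaves instead of testing every primitive; the constant per node is large, which is the “very big O(log n)” above.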

I’m not sure if this is what you mean, but many people on the web are arguing the large-scene performance point by assuming that scanline renderers render everything, using the painter’s algorithm.

It isn’t true.
As we know, renderers of all types, including real-time ones, cull the scene and use space-partitioning techniques.

In fact, performance is the reason why we use RenderMan and not a ray tracer. It deals extremely well with extremely large scenes. Ray tracing often requires the entire scene in memory for hit-testing, and requires tessellating all objects. The essence of PRMan, as mentioned above, is the minimal amount of work and memory required for scenes of any size.

Pure scanline rendering can’t do really complex reflections and refractions; that’s when we turn on ray tracing, and we only do so for those pixels.

When you’ve got proper ray tracing with good anti-aliasing, you must then have depth of field and motion blur. So yes, the processing cost compounds in production: each of these distributed effects multiplies the sample count again. Ray tracing is terrible at these, but they’re practically free with PRMan. That’s why we use it.

What a wonderful collection of gems! I had no idea this history existed. Thank you for sharing this with me. Thank you to Jeff and all the commenters. Thank you, all ray tracers!

Interesting!

I had the juggler on my Amiga 2000 too in the late '80s. I bought a book about 3D computer graphics on the Amiga (in German…) and I’ve been interested in ray tracing ever since.

The Amiga juggler in real-time raytracing, in 4KB with sound, circa 2001:

http://www.pouet.net/prod.php?which=1914

Great article. I really enjoyed reading it and following the links from the comments.

I’m an Amigan from way back; I fondly remember my Amiga 500 and, like others here, was amazed by the Juggler.

Guys, get up to date…

www.pouet.net/prod.php?which=32194 is beautiful real-time raytracing on the GPU WITH music all crammed into 4 kb.

lakseper

Pyrolistical: if you have a millions-of-triangles model only taking up 1% of screen space, you’re not going to render all those triangles even with a scanline renderer - you’ll be using LOD and rendering a less complex model.

Which you’d also do when raytracing, I bet… even if only 1% of the fired rays have a chance to hit the model, you’d still need to hit-test those rays against those millions of polys unless you use LOD?

Check out Winamp’s AVS (Advanced Visualisation Studio): we have been doing real-time raytracing with that /in software/, using a /scripting language/, since 2000.

It’s a very wasteful way to render, but it is actually /more/ efficient if you have enough limitations on your resources and some suitable way to read a texture using texture coordinates (Dynamic Movement in AVS, bitmap loaders on other platforms).

I never said we did a good job of it, though… it’s just the only efficient way to texture a 3D object using AVS. Although the last preset I released doing this used bump and specular mapping on a procedurally generated corridor in very low res…

Easy peasy… :stuck_out_tongue:

Raytracing triangles is silly: you can use a fast rasterisation method, then reverse-engineer the rays if you need them for lighting effects etc… I’d imagine any modern implementation would do that. As the comment above suggests, doing bounds testing on millions of triangles is a complete waste of CPU time, and when we have hardware that can render millions of triangles a second… it makes sense to use it.

This is why the juggler there is made of spheres: spheres are cheaper to raytrace than triangles (unless you do as suggested above), and you can combine them all efficiently with clever use of min and max on the raytrace parameter.
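A sketch of that min/max trick in C++, with invented helper names (and a simplification: a true union of disjoint spans really needs a span list, but for overlapping blobby shapes a single interval works):

```cpp
#include <algorithm>
#include <cmath>

struct Interval { double tEnter, tExit; bool hit; };

// Entry/exit parameters of a ray (origin o + t*d, with d normalized)
// against a sphere at center c with radius r.
Interval sphereSpan(const double o[3], const double d[3],
                    const double c[3], double r) {
    double oc[3] = {o[0] - c[0], o[1] - c[1], o[2] - c[2]};
    double b  = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];
    double cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    double disc = b*b - cc;
    if (disc < 0) return {0, 0, false};        // ray misses the sphere
    double s = std::sqrt(disc);
    return {-b - s, -b + s, true};
}

// Union keeps the earliest entry and latest exit; intersection keeps
// only the overlap of the two spans.
Interval unite(Interval a, Interval b) {
    if (!a.hit) return b;
    if (!b.hit) return a;
    return {std::min(a.tEnter, b.tEnter), std::max(a.tExit, b.tExit), true};
}
Interval overlap(Interval a, Interval b) {
    double enter = std::max(a.tEnter, b.tEnter);
    double exit  = std::min(a.tExit,  b.tExit);
    return {enter, exit, a.hit && b.hit && enter <= exit};
}
```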

Just to add as well…

A lot of idiot comments here…

The complexity estimates mentioned above are crap, especially given that the meaning of n is not mentioned.

Depth of field and motion blur are not hard to implement on top of raytracing if you use a depth buffer and the classic “3 or 4 frames at a time” motion blur method. If your company is struggling with this, point them my way, I’d love to work for them.
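For what it’s worth, the “3 or 4 frames at a time” method is just frame accumulation. A minimal C++ sketch, assuming a renderFrame callback that traces one full frame at a given time (the callback and all names here are hypothetical, not any engine’s API):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct Pixel { float r, g, b; };
using Frame = std::vector<Pixel>;

Frame motionBlurred(double t0, double shutter, std::size_t pixels,
                    const std::function<Frame(double)>& renderFrame,
                    int subFrames = 4) {
    Frame acc(pixels, Pixel{0, 0, 0});
    for (int i = 0; i < subFrames; ++i) {
        // Sample the middle of each slice of the shutter interval.
        double t = t0 + shutter * (i + 0.5) / subFrames;
        Frame f = renderFrame(t);
        for (std::size_t p = 0; p < pixels; ++p) {
            acc[p].r += f[p].r / subFrames;   // average the sub-frames
            acc[p].g += f[p].g / subFrames;
            acc[p].b += f[p].b / subFrames;
        }
    }
    return acc;
}
// The catch is the cost: subFrames complete renders per displayed
// frame, which is exactly the multiplier the REYES camp objects to.
```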

As mentioned in some smarter posts above, the idea of a hard differentiation between ray-tracing and scanline methods is one born of ignorance. All modern ray-tracers and scanline renderers use methods from each other’s field… if they don’t, then they lose efficiency and/or prettiness.

Oh and a quick note: parallax mapping implementations are normally doing raytracing… so real-time raytracing is nothing big, new or clever.
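For example, steep parallax mapping marches a ray through a heightfield texture until it dips below the surface. A rough C++ sketch, with sampleHeight standing in for a texture fetch (all names invented for illustration):

```cpp
#include <functional>

struct Vec2 { float x, y; };

// uv is the starting texture coordinate; viewDirXY/viewDirZ is the
// tangent-space view direction. sampleHeight returns depth in [0, 1].
Vec2 parallaxUV(Vec2 uv, Vec2 viewDirXY, float viewDirZ,
                const std::function<float(Vec2)>& sampleHeight,
                int steps = 32) {
    float layerDepth = 1.0f / steps;
    Vec2 deltaUV = { viewDirXY.x / viewDirZ * layerDepth,
                     viewDirXY.y / viewDirZ * layerDepth };
    float depth = 0.0f;
    // March the "ray" layer by layer until it goes under the heightfield.
    while (depth < 1.0f && sampleHeight(uv) > depth) {
        uv.x -= deltaUV.x;
        uv.y -= deltaUV.y;
        depth += layerDepth;
    }
    return uv;  // texture coordinate where the ray hit the surface
}
```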

I still find it amazing it took 38 hours to render a single frame for the Transformers film.

I was curious about that, Brent, and you’re right-- the Wikipedia page confirms it with two citations:

http://en.wikipedia.org/wiki/Transformers_(film)

Such detail needed 38 hours to render each frame of animation,[2] which meant ILM had to increase their processing facilities[31]

[2] http://enewsi.com/news.php?catid=190&itemid=11213

[31] http://www.vfxworld.com/?atype=articles&id=3337&page=2

@Jheriko
Depth of field and motion blur are not hard to implement on top of
raytracing if you use a depth buffer and the classic “3 or 4 frames
at a time” motion blur method. If your company is struggling
with this, point them my way, I’d love to work for them.

DOF and motion blur are trivial to implement with ray tracing; they are just prohibitive to use, because every added effect multiplies the cost again. That’s the problem REYES was designed to address, and what I was replying to.

Mental Images has been struggling with DOF and motion blur performance and quality for more than 20 years.

These are the main reasons why film studios choose RenderMan instead of Mental Ray. Others that use ray tracing for rendering do 2D motion blur or 2D depth of field in post, rather than casting extra rays.

And what has Mental Ray been working on these last 10 years? The scanline mode of their ray tracer. They’re in the business of making ray tracing for production.

The people who don’t understand the current state of the technology argue the virtues of ray tracing based on 1980s ray tracing, and on 1980s quality expectations.

@Pyrolistical

Let’s say you have a super complex model with billions of triangles,
if the entire thing only filled up 1% of the screen, you are
still going to render all the triangles using scanline.

But with a raytracer, you’ll only render the visible surface of
the model, which is only a handful of triangles.
The performance difference would be huge, even on today’s hardware.

And yet PRMan or a modern scanline renderer (like Mental Ray in scanline mode) renders the same scenes that a ray tracer does, faster.

You’re probably thinking of old A-buffer renderers that don’t do any BSP or anything else. That’s OK, but no one uses those. In any case, this debate crops up around the domain of games and the CPU vs. GPU rasterizer.

In games, a huge city is partitioned. Whether it is drawn on the GPU or ray traced, not the entire scene would be rendered.

Then, the GPU scales much better than hit-testing every ray, even if it does redundant overdraw. It’s better to draw too much than to do a lot of work to avoid drawing a triangle; that parallelizes better right now and for the foreseeable future.

Take a look at the John Carmack article linked above.

I think that the holy grail for rendering isn’t raytracing. It’s physically correct renderers like Maxwell Render.
http://www.maxwellrender.com/
When you compare the output of a raytracer with that of a physically correct renderer, the results are an order of magnitude more realistic. The problem is that they’re as slow today as a raytracer was 20 years ago :frowning: