Raytracing “Quake” is one thing, but raytracing “Juggler” is much simpler: it’s a simple scene containing raytracing-friendly spheres and a checkered plane.
I’m sure it can be done in real time on a regular modern computer nowadays!
I’ve been a POV-Ray user for years; I even built my own render farm. Now I’m reading up on unbiased raytracers: forward raytracers trace rays from the light, and they’re slow as hell (say, as slow as backward raytracers were many years ago), but the results are impressively realistic. I was browsing the LuxRender website when I noticed this post in the RSS feed…
By the way, why on earth does the “POV-Ray, which [produces some impressive results] as well” link point to the IMDb entry for the movie Cars?? I’m pretty sure Cars didn’t involve POV-Ray…
In 2000 I wrote a realtime raytracer that could render a scene similar to the Amiga Juggler at 320x240 and 30 fps on a PII-450, using various scene optimisation techniques, so I’m not sure how true that last paragraph is…
While Pixar’s PhotoRealistic RenderMan (prman) is the industry benchmark in rendering, it has ray tracing only as an add-on. Blue Sky Studios (http://blueskystudios.com/) has been doing ray tracing for over 20 years. The people who started Blue Sky got their start at MAGI/Synthavision and worked on TRON.
Blue Sky has ray traced all of their movies, shorts, and commercials for many years. They have a little more on the renderer here: http://blueskystudios.com/content/process-tools.php. They even won the 1998 Oscar for Best Animated Short for Bunny (http://blueskystudios.com/content/shorts-bunny.php), where the only things not rendered with radiosity were Bunny and the moth. All the environments were radiosity renders.
[It’s essentially calculating the result of every individual ray of light in a scene.]
Well, that’s not exactly true. Your diagram shows it more accurately: a ray is cast from the “eye” or “camera” point through every pixel that makes up the viewport. The goal for each pixel is to calculate its colour, plain and simple; the more complex the scene and the more complex the effects desired, the more colour calculations there are to perform.

If an individual ray intersects an “object”, a vector is calculated from the point of intersection to every light source in the scene (in your diagram this second vector is called the “shadow ray”). The angle between the surface normal and this vector provides the basis for a number of different colour accumulation calculations. When the shadow ray intersects yet another object rather than reaching the light source directly, you simply do not add any colour from that light source; the point is in shadow. (The original ray can also spawn further rays for effects such as reflection, if you so choose, at the cost of yet more calculations.) This can get more complex if you want soft shadows with proper penumbra, etc., but in the basic case you simply “do not add colour”.
The diagram itself is somewhat misleading in that it shows the shadow ray “passing through” the sphere and still hitting the light source, but fails to explain that the result is simply “do not add colour”. It would be more accurate to at least explain that the original ray had to intersect with something, in this case the plane; otherwise the colour added would simply be the background colour. Then, because the “shadow ray” intersects the sphere, the light cannot reach that point on the plane, and hence you do not add any new colour to that point.
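The “do not add colour” rule is easy to see in code. Here is a toy Python sketch (the helper names `intersect_sphere` and `shade` are my own, not from any real renderer): fire the shadow ray at every sphere in the scene, and accumulate the Lambertian term only if nothing blocks the path to the light.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive t where the ray origin + t*direction hits the sphere.
    Assumes direction is normalised; ignores the far root for brevity."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c              # a == 1 since direction is unit length
    if disc < 0:
        return None                   # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def shade(hit_point, normal, light_pos, spheres):
    """Lambertian shading with a shadow-ray test: if anything blocks the
    path to the light, the contribution from that light is simply zero."""
    to_light = tuple(l - p for l, p in zip(light_pos, hit_point))
    dist = math.sqrt(sum(v * v for v in to_light))
    shadow_dir = tuple(v / dist for v in to_light)
    for center, radius in spheres:
        t = intersect_sphere(hit_point, shadow_dir, center, radius)
        if t is not None and t < dist:
            return 0.0                # occluded: "do not add colour"
    return max(0.0, sum(n * d for n, d in zip(normal, shadow_dir)))
```

For a point on the ground plane with a sphere sitting between it and the light, `shade` returns 0.0; remove the sphere and you get the usual cosine term back.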
For any programmers out there, it is actually quite easy to write your own ray tracer in any language you choose. The old stand-by book “Computer Graphics in C” has a pretty good description of what’s involved. And as was stated, the result of a ray-traced scene has nothing to do with your graphics hardware; it’s all CPU crunching (unless you’re bold enough to try to hijack the GPU to do some of the crunching for you).
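To back up the “it’s all CPU crunching” point, here is roughly the smallest one-primary-ray-per-pixel loop I can write in plain Python. The viewport size and the hard-coded sphere at (0, 0, 4) are my own toy choices, not anything from the book:

```python
import math

WIDTH, HEIGHT = 64, 48   # toy viewport: one primary ray per pixel

def trace(x, y):
    """Cast a ray from the eye at the origin through pixel (x, y) on a
    screen at z = 1; return 255 if it hits the sphere, else 0."""
    dx = (2 * x / WIDTH - 1) * (WIDTH / HEIGHT)    # aspect-corrected coords
    dy = 1 - 2 * y / HEIGHT
    length = math.sqrt(dx * dx + dy * dy + 1)
    dz = 1 / length                                # unit z-component of the ray
    # Sphere at (0, 0, 4), radius 1: solve |t*d - c|^2 = r^2 for the eye at 0.
    b = 2 * dz * -4.0                              # 2 * d . (o - c)
    c = 16.0 - 1.0                                 # |o - c|^2 - r^2
    return 255 if b * b - 4 * c >= 0 else 0

# The whole image is just this nested loop; no GPU anywhere in sight.
pixels = [[trace(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
```

Rays through the centre hit the sphere and come back white (255); the corner rays miss and stay at the background value. Bolting on the shadow-ray step described above turns this silhouette into a shaded image.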
I don’t see much point in moving exclusively to ray tracing, considering how nice the results you can get with rasterization + shaders are. If anything, a hybrid approach (like Pixar’s RenderMan…) is the best solution, imho.
The quiz made me grin. I’ve worked for years with a 3D artist who was doing 3D CG decades ago using custom software written for NASA and running on VAXen. Since he was doing TV work, his frames were all 720x486x24, and in 1980 they took about 20-30 minutes each to render on the VAX and a custom accelerator.

Over the years he moved to Softimage on Irix, then to PCs, then added a render farm that grew to 60+ processors, using both scanline and Mental Ray renderers. In 2007 he finished a large project with frames that took… 20-30 minutes each to complete. He has some sort of internal yardstick when designing that keeps him in that range no matter what resources he has at his disposal.
It does seem the hybrid rendering approaches work best, and that’s what Pixar’s RenderMan does. I’m really surprised they never got into ray tracing until Cars, though. Do check out that presentation I linked; it’s outstanding.
Actually, it’s the other way around: the NVIDIA interview is the response to Intel’s article (check the dates). Intel obviously wants to push raytracing since they’re mainly a CPU company, and considering their experiments with 80+ core CPUs, they want something that can take advantage of them; most problems in computing are notoriously hard to parallelize, but raytracing, where every ray can be traced independently, is almost the exception to the rule…
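To illustrate why raytracing parallelizes so well: each pixel’s colour is a pure function of its own ray, with no shared state between pixels, so the render loop can be handed to a process pool unchanged. A minimal Python sketch (the `trace_pixel` name and the hard-coded sphere are my own toy setup):

```python
import math
from multiprocessing import Pool

def trace_pixel(coords):
    """Hit-test one primary ray (64x48 viewport) against a sphere at
    (0, 0, 4), radius 1. Depends only on its own (x, y): no shared state."""
    x, y = coords
    dx, dy = 2 * x / 64 - 1, 1 - 2 * y / 48
    dz = 1 / math.sqrt(dx * dx + dy * dy + 1)    # unit z-component of the ray
    b = -8 * dz                                  # 2 * d . (o - c)
    return 255 if b * b - 60 >= 0 else 0         # disc = b^2 - 4 * (|o-c|^2 - r^2)

if __name__ == "__main__":
    coords = [(x, y) for y in range(48) for x in range(64)]
    with Pool(4) as pool:                        # scatter the rays across cores
        parallel = pool.map(trace_pixel, coords)
    # Same image regardless of core count: the pixels never talk to each other.
    assert parallel == [trace_pixel(c) for c in coords]
```

That independence is exactly what an 80+ core CPU wants; rasterization pipelines, with their shared framebuffer and ordering constraints, are much harder to carve up this way.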
I’d actually be surprised if Pixar did Cars fully raytraced, considering how nice the results their hybrid RenderMan has produced in the past, and, I assume, how much less CPU time it has taken.