Tuesday, January 31, 2012

VoxLOD: Interactive ray tracing of massive models with indirect lighting using voxels

Just encountered an impressive video of a technology named VoxLOD on YouTube today:


As the name aptly implies, VoxLOD uses a voxel-based LOD scheme to smoothly stream in and visualize the geometry of massive models. The first part of the video shows direct lighting only (primary and shadow rays), while the second half is much more interesting and demonstrates real-time one-bounce diffuse indirect lighting (filtered Monte Carlo GI). From the paper "Interactive ray tracing of large models using voxel hierarchies":
"We cast one shadow ray per primary or diffuse ray, and two random diffuse rays per primary ray. The diffuse rays are used to compute both one bounce of indirect irradiance and environment irradiance, which are processed with a bilateral filter [TM98] to eliminate noise."

"There are two light sources: a point light (the Sun) and a hemispherical one (the sky). I use Monte Carlo integration to compute the GI with one bounce of indirect lighting. Nothing is precomputed (except the massive model data structure of course).

I trace only two GI rays per pixel, and therefore, the resulting image must be heavily filtered in order to eliminate the extreme noise. While all the ray tracing is done on the CPU, the noise filter runs on the GPU and is implemented in CUDA. Since diffuse indirect lighting is quite low frequency, it is adequate to use low LODs for the GI rays."
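As an aside, a bilateral filter weights each neighboring pixel by both its screen-space distance and its difference in intensity, so the GI noise gets smoothed without blurring across geometric or shading edges. A minimal sketch of the weight computation (the sigma parameters are illustrative, not taken from the paper):

    #include <cmath>

    // Bilateral filter weight for one neighbor: the product of a spatial
    // Gaussian (screen-space distance) and a range Gaussian (intensity
    // difference). sigma_s and sigma_r are hypothetical tuning parameters.
    float bilateralWeight(float dx, float dy, float intensityDiff,
                          float sigma_s, float sigma_r)
    {
        float spatial = std::exp(-(dx * dx + dy * dy) / (2.0f * sigma_s * sigma_s));
        float range   = std::exp(-(intensityDiff * intensityDiff) /
                                 (2.0f * sigma_r * sigma_r));
        return spatial * range; // big intensity jumps (edges) get tiny weights
    }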
Another interesting tidbit from the paper:

"By using LOD voxels, significantly higher frame rates can be achieved, with minimal loss of image quality, because ray traversals are less deep, memory accesses are more coherent, and intersections with voxels are free, contrary to triangles (the voxel fills its parent node, therefore, the intersection is equal to the already computed intersection with the node). Furthermore, the LOD framework can also reduce the amount of aliasing artifacts, especially in case of highly tessellated models"

The quality of the indirect lighting looks pretty amazing for just 2 (filtered) random samples per pixel and is completely noise-free, as can be seen in this picture from the paper:


All the ray tracing is currently CPU-based (the GI algorithm runs at 1-2 fps on a quad-core CPU), but it would probably run at much higher, real-time framerates if implemented entirely on the GPU.

Thursday, January 26, 2012

Brigade 2 Blowout!

Jacco Bikker, one of the Brigade developers, posted some updates about the Brigade 2 path tracer:

- there is a new downloadable demo showing off the Blinn shader (Fermi only). I haven't been able to test it yet, but the screenshot looks sweet!

- a brand new video of the Reflect game using Brigade 2 running on 2 GTX 470 GPUs, which demonstrates some dramatic lighting effects and very low noise levels (for a real-time path traced interior scene)

All can be seen and downloaded from here: http://igad.nhtv.nl/~bikker/

There should also be a playable Reflect demo up by tomorrow!

GigaVoxels thesis online

Cyril Crassin has just posted his entire PhD thesis on GigaVoxels, the sparse voxel octree raycasting technology that supports efficient depth of field, soft shadows, animated voxel objects and indirect lighting, at

http://blog.icare3d.org/2012/01/phd-thesis-gigavoxels.html

There are over 200 pages of voxel raytracing goodies :)


Tuesday, January 24, 2012

Real-time path traced Sponza fly-through on 3 GPUs!

I sent the executable of the teapot in Sponza scene to a friend, szczyglo74 on YouTube, who has a much more powerful rig than my own (a PC with 3 GPUs: a GTX 590 (dual-GPU card) + a GTX 460) and who made some very cool real-time videos of the Sponza scene. Many thanks, szczyglo!

The maximum path depth in each of these videos is 4 (= 3 bounces max):

32 spp per frame (480x360, 1/4th render resolution, 8 fps): 

http://www.youtube.com/watch?v=fVAl-oKAL9I : an awesome video showing real-time convergence in most parts of the scene

32 spp per frame (640x480, 1/4th render resolution, 4.7 fps): 


8 spp per frame (480x360, 1/4th render resolution, ~21 fps): 


4 spp per frame (640x480, 1/4th render resolution, ~18 fps):



The above videos clearly show how much the number of samples per pixel per frame matters in indirectly lit areas: despite the low maximum path depth of 4, some detail remains visible in the corridors during navigation in the first two videos (32 spp per frame), while the last two videos (8 and 4 spp per frame) are clearly too dark in those regions, although they clear up very quickly once the camera is stationary. Note that these tests were made with a kernel that is not optimized for indirect lighting (no multiple importance sampling is used here). 

I'm quite happy with the sense of photorealism in these videos, especially considering that this is just brute-force path tracing: no caching, filtering or interpolation yet, nor anything like image reconstruction, adaptive sampling, importance sampling, bidirectional path tracing or eye path reprojection (all of which are interesting approaches). A textured version of Sponza should further increase the realism, which is something I will try in a next test.  
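For reference, the brute-force approach used in these tests boils down to a loop like the following per pixel and per frame (a hedged sketch with placeholder types and hooks, not Brigade's actual API):

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; };
    struct Rng  { /* PRNG state */ };
    struct Color {
        float r, g, b;
        Color& operator+=(const Color& c) { r += c.r; g += c.g; b += c.b; return *this; }
    };

    Ray   sampleCameraRay(int x, int y, Rng& rng);           // jittered primary ray
    Color tracePath(const Ray& ray, int maxDepth, Rng& rng); // radiance of one path

    // Per pixel, per frame: average 'spp' independent path samples, each path
    // limited to 'maxDepth' segments (depth 4 = at most 3 bounces).
    Color renderPixel(int x, int y, int spp, int maxDepth, Rng& rng)
    {
        Color sum = { 0.0f, 0.0f, 0.0f };
        for (int s = 0; s < spp; ++s)
            sum += tracePath(sampleCameraRay(x, y, rng), maxDepth, rng);
        sum.r /= spp; sum.g /= spp; sum.b /= spp; // more spp = less per-frame noise
        return sum;
    }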

Monday, January 23, 2012

Utah Teapot in Sponza continued

I've managed to get the "teapot in Sponza" scene to render slightly faster by treating all objects in the scene as static.

Youtube videos (rendered on a GeForce GTS 450):



Sunday, January 22, 2012

Optix 2.5 released

A few days ago, Nvidia released OptiX SDK 2.5 RC1, which stands out from prior releases thanks to a number of major improvements:

- out-of-core GPU ray tracing: scenes can now exceed the amount of available GPU RAM by up to 3x

- HLBVH2 support (Garanzha and Pantaleoni): replaces the previous LBVH builder and builds the BVH acceleration structure on the GPU in a fraction of the time a CPU builder needs, which allows completely dynamic scenes by rebuilding the acceleration structure in real-time every frame (e.g. the HLBVH2 paper reports build times of 10.5 ms on a GTX 480 for a model consisting of 1.76M fully dynamic polygons). HLBVH2 traversal speed is said to be comparable to that of a CPU-built BVH

This will greatly benefit real-time GPU path traced games and animations, as it not only reduces BVH build times by several orders of magnitude compared to CPU builders, but also eliminates the costly per-frame CPU-to-GPU transfer of the updated BVH that a CPU build requires

- the SDK path tracing sample is enhanced with multiple importance sampling
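For readers unfamiliar with multiple importance sampling: it combines samples from several strategies (typically light sampling and BRDF sampling) using weights that favor whichever strategy had the higher probability density for the sample in question. A minimal sketch of Veach's power heuristic, the most commonly used weighting (this is the general technique, not OptiX's actual sample code):

    // Power heuristic (beta = 2) for combining two sampling strategies.
    // nf/fPdf: sample count and pdf of the strategy that produced the sample;
    // ng/gPdf: sample count and pdf of the competing strategy.
    float powerHeuristic(int nf, float fPdf, int ng, float gPdf)
    {
        float f = nf * fPdf;
        float g = ng * gPdf;
        return (f * f) / (f * f + g * g); // in [0,1]; near 1 when fPdf dominates
    }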

Friday, January 20, 2012

Real-time browser based path tracer

While searching the net for ways to improve convergence in dynamic, path traced environments, I stumbled upon this neat web-based path tracer created by Edward Porten:


It uses progressive spatial caching and runs pretty fast despite being CPU-based.


Also check out some other browser-based demos from the same author at http://www.openprocessing.org/portal/?userID=6535

The Sphere flake tracer uses a cool-looking frameless rendering technique, roughly comparable to frame averaging.

"Hair Ball" and  "Terrain Ray Marching"  are awesome as well.

Thursday, January 19, 2012

Brigade 2 Teapot in Sponza test

Testing indirect lighting with motion blur in Brigade 2 with the teapot and the Sponza atrium:

Youtube videos:

http://www.youtube.com/watch?v=QCPCAZlS5QI (8 spp per frame)
Some pics:

This is a particularly nasty scene, since all the light comes from the skydome through the narrow roof opening (no portals or importance sampling are used; max path depth is 4). Nevertheless, it still converges rather quickly on a GTS 450 thanks to the motion blur trick. 

Wednesday, January 18, 2012

Brigade 2 motion blur test continued

I made some side-by-side comparisons of the Utah teapot scene (http://www.youtube.com/watch?v=KxELvSK3Gl0) to show the difference the motion blur technique makes in areas that are partially lit by indirect lighting. Both sides of each comparison use only 4 samples per pixel; this low sample rate is required to achieve playable framerates. Motion blur (frame averaging) is disabled in the left image, while the right image averages the pixels of the current frame with those of the previous 7 frames (equivalent to 32 samples per pixel). The difference in obscured regions is enormous. This is the quality that can be achieved in real-time (10+ fps) on a single current high-end GPU (GTX 570 or better): 


Edges and shadows become much more clearly defined:


Video (low framerate, rendered at 640x480 on a GTS 450):


It's amazing what such a simple trick can do for the quality of the image, without losing any detail. My next test will involve a third-person car camera like the one in http://www.youtube.com/watch?v=SOic3eE8wrs, which should work really well in combination with the motion blur (no sudden sideways movement).
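For the curious, the averaging described above amounts to a box filter over a ring buffer of the last N frames; a minimal sketch (my own illustration, not Brigade's implementation):

    #include <vector>

    // Keeps the last N framebuffers and outputs their mean. With 4 spp per
    // frame and N = 8 the displayed image effectively contains 32 spp, at the
    // cost of N frames of ghosting when the camera moves. (The first N-1
    // frames average against black; a real implementation would track the
    // fill count.)
    class FrameAverager {
        std::vector<std::vector<float>> frames; // ring buffer of RGB buffers
        size_t next = 0;
    public:
        FrameAverager(size_t n, size_t pixelCount)
            : frames(n, std::vector<float>(pixelCount * 3, 0.0f)) {}

        void addFrame(const std::vector<float>& rgb) {
            frames[next] = rgb;
            next = (next + 1) % frames.size();
        }

        void resolve(std::vector<float>& out) const {
            out.assign(frames[0].size(), 0.0f);
            for (const auto& f : frames)
                for (size_t i = 0; i < f.size(); ++i)
                    out[i] += f[i];
            for (float& v : out)
                v /= float(frames.size());
        }
    };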

Tuesday, January 17, 2012

New paper about noise reduction for real-time/interactive path tracing

"Practical noise reduction for progressive stochastic ray tracing with perceptual control"

Check it out here: http://www.karsten-schwenk.de/papers/papers_noisered.html

The "supplemental.zip" folder contains some very nice comparison videos.

Brigade 2 motion blurred Utah teapot


Small update on my previous post about motion blur in Brigade 2. I've made a small Bullet physics demo featuring a holy symbol of computer graphics, the Utah teapot:


The video was rendered on a low-end GTS 450. The scene is inspired by, and meant to be a real-time remake of, an animation rendered with GPU path tracing using SmallLuxGPU (the last scene in that video). 

Sunday, January 15, 2012

Brigade 2 motion blur test

Today I added cheap camera motion blur to the Brigade 2 path tracer using the OpenGL accumulation buffer. The technique blends the pixels of one or more previous frames with the current frame and works very well for real-time path traced dynamic scenes, provided the framerate is sufficiently high (10+ fps). The difference in image quality is huge: path tracing noise is drastically reduced, and since all calculations involving the accumulation buffer are hardware accelerated on the GPU, there is zero impact on rendering performance. Depending on the accumulation value, you essentially get two to ten times the number of samples per pixel for free, at the expense of slight blurring caused by the frame averaging (the blurring is actually not so bad, because it adds a nice cinematic effect). 
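The whole trick fits in a few lines. A minimal sketch using the standard (fixed-function) OpenGL accumulation buffer calls; the decay constant is an illustrative choice, not Brigade's actual value:

    #include <GL/gl.h>

    // Call after rendering the current frame. Blends it into an exponentially
    // decaying average of previous frames and writes the result back to the
    // color buffer. Requires a context created with an accumulation buffer.
    void accumulateMotionBlur(float keep /* e.g. 0.7f */)
    {
        glAccum(GL_MULT,   keep);        // scale the accumulated history
        glAccum(GL_ACCUM,  1.0f - keep); // add the current frame, scaled
        glAccum(GL_RETURN, 1.0f);        // copy the blend back for display
    }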

Below is a comparison from a stress test of an indoor scene without and with motion blur applied (rendered on a 8600M GT). The images were rendered with multiple importance sampling at only 1 sample per pixel to better show the differences (progressive convergence is disabled):


Comparison of the scene with the rotating ogre from a previous demo (see http://raytracey.blogspot.com/2011/12/videos-and-executable-demo-of-brigade-2.html), rendered at 1 sample per pixel (on a 8600M GT), with (right) and without (left) motion blur:


The following image has nothing to do with frame averaging, but comes from an earlier test of the MIS indoor scene featuring a highly detailed Alyx character (40k triangles) with a Blinn shader applied. It rendered very fast:

I'll upload some videos of the above tests soon. 

Monday, January 9, 2012

A new real-time sphere path tracer (CUDA and OpenCL)

Real-time path tracing of spheres on the GPU is hot it seems :-)


It looks very impressive and pretty. The path tracer owes its speed to running completely on the GPU and to a custom precomputed hashing algorithm. More info at http://bertolami.com/projectView.php?content=research_content&project=real-time-path-tracer. There are also downloadable executables further down the page.

Tokaspt (by tbp), Sfera (by LuxRender's Dade), the WebGL path tracer (by Evan Wallace), ... the list of real-time GPU path tracers featuring dynamic scenes with spheres keeps growing. 

Brigade 2 GI test

Just a small test with a Cornell box scene to check diffuse color bleeding in Brigade 2, rendered on my faithful 8600M GT. The female character is a high-poly version of Alyx from Half-Life 2 (character model from here); she contains almost 40k triangles and can be moved around the scene in real-time. The spheres are made of triangles:

Some more tests with an area light that show indirect lighting and subtle brownish/greenish color bleeding on the left side of the character:

These scenes use Brigade's multiple importance sampling kernel, which drastically reduces the noise in interior scenes compared to the kernel used in previous test scenes (those converged fast because they were open outdoor scenes lit by an HDR skydome). Brigade's MIS algorithm allows this kind of interior scene to converge extremely fast, as can be seen in this video made by Jacco Bikker, which uses two GTX 470s to render at real-time framerates with 80 spp per frame!
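The color bleeding itself comes from plain diffuse bounces: at every hit, a new direction is sampled over the hemisphere and the surface albedo tints whatever radiance arrives along it. The standard cosine-weighted hemisphere sample looks like this (a generic sketch, not Brigade's kernel):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Cosine-weighted direction on the hemisphere around +Z (Malley's method).
    // u1, u2 are uniform random numbers in [0,1). Because the pdf is
    // cos(theta)/pi, the cosine term of the rendering equation cancels and a
    // diffuse bounce simply multiplies the path throughput by the albedo.
    Vec3 cosineSampleHemisphere(float u1, float u2)
    {
        const float pi = 3.14159265358979f;
        float r   = std::sqrt(u1);
        float phi = 2.0f * pi * u2;
        return { r * std::cos(phi),
                 r * std::sin(phi),
                 std::sqrt(1.0f - u1) }; // z = cos(theta)
    }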

Tuesday, January 3, 2012

Brigade 2 website launched with source code + new videos and exe's

Great news today: the website for the Brigade 2 path tracer has been launched, complete with source code, so anyone can make their own real-time path traced game. The path tracing kernels themselves remain closed source and precompiled into a library.

The website can be found here: http://brigade.roenyroeny.com/

I have also developed two small demos showing real-time path traced interior scenes with animation. The first one is a simple study-room-like scene, in which the ceiling and back walls are removed to let more light into the scene (there is a problem with the normals of the chair):
 

The second one is a more complex bedroom scene (180k triangles). The scene is a free 3ds Max model from http://www.3dmodelfree.com, which I tweaked a little (I took out the ceiling, two walls and the curtains). The scene came without textures, so I had to set every material manually in the .mtl file. Some screens:
Diffuse (0.1, 0.1, 0.1) and specular (0.9) for the bed frame and closet:
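For illustration, a material like that looks roughly as follows as a Wavefront .mtl entry (the material name and specular exponent are hypothetical):

    # bedframe: dark diffuse surface with a strong specular component
    newmtl bedframe
    Kd 0.1 0.1 0.1   # diffuse reflectivity
    Ks 0.9 0.9 0.9   # specular reflectivity
    Ns 50            # specular exponent (illustrative value)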

Two videos showing real-time material tweaking and animation with physics:


The executable demo can be downloaded at  
(all CUDA architectures supported)

Expect many more demos made with the Brigade 2 path tracer in the coming weeks and months. Brigade is also going to be ported to OpenCL soon. 2012 is going to be a breakthrough year for real-time GPGPU path tracing.

Monday, January 2, 2012

Pepeland inspired Brigade 2 scene

Inspired by a Pepeland animation (one of the first truly photorealistic animations, rendered with Arnold in 1999), I've created a scene with the Brigade 2 path tracer in an attempt to achieve photorealism in real-time. Some pics (rendered on a 8600M GT):

The scene contains 110k triangles (the chair and robot are free models found on the net; the ogre is the same model from the previous demo). It renders in real-time at low resolution on a high-end GPU and is pretty amazing to see in action.

Videos and executable will follow soon.

Bungie talks about ray tracing and voxels

Gamasutra has an interview up with Bungie's senior graphics engineer, Hao Chen. The whole interview can be found at

http://www.gamasutra.com/view/news/39332/The_Big_Graphics_Problems_Bungie_Wants_To_Solve.php

Some interesting fragments:
I've certainly seen how even today people are having a lot of trouble rendering shadows without a lot of blockiness or dithering. 
HC: That's kind of the problem with computer game graphics these days. A lot of things people consider solved problems are actually quite far from being solved, and shadows are one of them. After all these years we don't have a very satisfactory shadow solution. They're improving; every generation of games they're improving, but they're nowhere near the perfect solution that people thought we already have. 
What do you think might be the answer? Your potential megatexture solution, or something else? 
HC: We are still far from seeing perfect shadows. Shadows are a byproduct of lighting. All-frequency shadows (shadows that are hard and soft in all the right places) are a byproduct of global illumination, and these things are notoriously hard in real time.
There's just not enough machine power, even in today's generation or the next generation, to be able to capture that kind of fidelity. There are also inherent limitations to the current techniques, such as shadow maps, for example. When the light is near the glancing angle of a shadow receiver, then it is impossible to do the correct thing.
With the current state of the art shadow techniques we can manage the resolution much better, and we can do high quality filtering, but we still have long ways to go to get where we need to be, even if we just talk about hard shadows from direct illumination.
I think megatextures could help, but still fundamentally there are things you cannot solve with our current shadow meshes. And until the performance supports real-time ray tracing and global illumination, we're going to continue seeing hack after hack for rendering shadows.
About the potential of using voxels for faster and more efficient global illumination:
HC: Voxels are very very interesting to us. For example, when we take advantage of voxelization, we basically voxelize our level and then we build these portalizations and clustering of our spaces based on the voxelization. And so voxelization, what it does is hide all the small geometry details. And in the regular data structures, it's very easy to reason out the space when it's voxelized versus dealing with individual polygons.
But besides this ability, there's also the very interesting possibility for us to use voxelization or a voxelized scene to do lighting and global illumination. We have some thoughts in that area that we might research in the future, but in general I think it's a very good direction for us to think about; to use voxelization to hide all the details of the scene geometry and sort of decouple the complexity of the scene from the complexity of lighting and visibility. In that way everything becomes easier in voxelization.