Wow, four years on… I pretty much quit working on this project, despite what I said in my last post, because I landed a job working on game physics simulation.
But today I fancy taking a look at the current state of the art in interactive photo-realistic rendering. Here is the best-looking offering I found on YouTube, courtesy of Unreal Engine 4…
It looks stunning, though I can’t see any caustics or refraction in there, and it does feel a bit plasticky.
Pretty close, but have they cracked it? If you have seen better examples than this please feel free to comment!
I’ve had some new ideas about how to take this project forward, so I’ve decided to continue working on it. These are motivated mainly by two factors: firstly, an increased understanding of the platforms I’m working with; and secondly, a flash of inspiration regarding the efficient calculation of global illumination.
In fact, I started working on this again a few weeks ago, but didn’t bother posting on the blog about it, ‘cos I figured I’d wait until I had something visually interesting to post.
But actually, I do have something at least mildly interesting to post, though it is not my own work. It is a picture of the kind of scene I hope to be able to render interactively once I implement these ideas, and it follows below.
In this post I summarise my progress towards interactive global illumination and explain why I have reached the end of the line with this project.
In my previous post I showcased my implementation of an interactive ray tracer on the GPU. I was hoping to add photon mapping in order to simulate caustics and indirect diffuse reflection, and also to generalise the possible shapes by using NURBS (so far I have been using spheres and cubes).
The following video demonstrates my work-in-progress photon-mapping implementation.
As you can see, it includes good-quality second- and third-order diffuse reflection.
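Those orders come from counting diffuse bounces during the photon-tracing pass. Here is a toy CPU-side sketch of the idea (hypothetical names and a single made-up surface with reflectance 0.5 — nothing like my actual shader): each photon hit is stored with a bounce-order tag, and Russian roulette decides whether the photon survives to the next bounce.

```cpp
#include <vector>
#include <random>

struct Photon { float pos[3]; float power; int order; };  // order = diffuse bounce count

// Trace one photon through up to three diffuse bounces, storing a record at
// each hit. Stored records with order 2 or 3 are what produce second- and
// third-order diffuse reflection at gather time.
void tracePhoton(std::vector<Photon>& map, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    const float reflectance = 0.5f;          // made-up diffuse surface
    float power = 1.0f;
    for (int order = 1; order <= 3; ++order) {
        map.push_back(Photon{{0, 0, 0}, power, order});  // hit position elided
        if (u(rng) >= reflectance) break;    // Russian roulette: photon absorbed
        // survived: continue with full power (roulette keeps the estimate unbiased)
    }
}
```

Tracing many photons this way naturally yields fewer and fewer records at each higher order, which is why higher-order bounces contribute progressively less light.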
My next step was to add cuboids, a move towards replicating the scene achieved with my interactive ray tracer (see previous post). The problem is that adding two different shape types to the ray-tracing and photon-mapping shaders exceeds their allowed instruction count, which can only be solved by reducing their parallelism and thus proportionally reducing performance.
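To make the instruction-count problem concrete, here is a CPU-side sketch (hypothetical helper names, not my shader code) of the two intersection routines a combined shader has to carry: a ray–sphere quadratic and a ray–box slab test. Every extra shape type adds another branch like these to every ray, and shader models cap the total instruction count.

```cpp
#include <cmath>
#include <algorithm>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray–sphere: nearest positive root of |o + t*d - c|^2 = r^2 (d normalised),
// or -1 on a miss.
double intersectSphere(Vec3 o, Vec3 d, Vec3 c, double r) {
    Vec3 oc = sub(o, c);
    double b = dot(oc, d);
    double disc = b*b - (dot(oc, oc) - r*r);
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 0 ? t : -1;
}

// Ray–AABB slab test: intersect the t-intervals for the x, y and z slabs;
// returns the entry distance, or -1 on a miss.
double intersectBox(Vec3 o, Vec3 d, Vec3 lo, Vec3 hi) {
    double tmin = 0, tmax = 1e30;
    double os[3] = {o.x, o.y, o.z},   ds[3]  = {d.x, d.y, d.z};
    double los[3] = {lo.x, lo.y, lo.z}, his[3] = {hi.x, hi.y, hi.z};
    for (int i = 0; i < 3; ++i) {
        double inv = 1.0 / ds[i];     // relies on IEEE infinity for axis-parallel rays
        double t0 = (los[i] - os[i]) * inv;
        double t1 = (his[i] - os[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return -1;
    }
    return tmin;
}
```

On the CPU this pair of functions is trivial; in a shader, both bodies (plus the per-shape dispatch) must fit inside the same instruction budget for every ray.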
I looked into replacing the cuboids and spheres with NURBS, but that seems overly complex.
In summary, this project has served as a great learning experience in C++, DirectX, and general graphics programming, and I am quite proud of the GPU-based interactive ray tracer I was able to implement, which you can see in the following video.
Running at 1600 by 900 resolution on a modest system. The performance is unfortunately halved by the video capture software I use; without it running in the background the program averages about 10 frames per second.
The scene is the same test scene as in the previous version, with two improvements: 1) the mirrored sphere now reflects the glass sphere properly (it is no longer black), and 2) the frame rate is much improved.
There are three reasons I am no longer worried about the frame rate, at least for the moment: 1) the way I plan to implement non-specular lighting in future should be quite a bit faster than my current method; 2) my system is a modest one; and 3) improving the frame rate is no longer a priority, and I suspect my code can be made more efficient later on.
In my opinion the next priorities are caustics (the focusing of light by the glass sphere onto the floor below it) and ambient lighting (e.g. walls reflecting light onto other objects). These effects should complete the photorealism of the image since, thanks to the method I plan to use, they will come hand in hand with other important effects such as soft shadows and anti-aliasing.
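Assuming the method in question is the photon mapping mentioned earlier, both effects fall out of the same density estimate: at a shading point you gather the photons stored nearby and divide their summed flux by the area searched. A minimal CPU sketch (scalar flux and a brute-force radius search for brevity; a real implementation would store RGB flux in a kd-tree):

```cpp
#include <vector>
#include <cmath>

struct Photon { float x, y, z; float flux; };  // stored photon: position + scalar flux

// Density estimate at point (px, py, pz): sum the flux of all photons within
// radius r and divide by the disc area pi * r^2.
float estimateRadiance(const std::vector<Photon>& map,
                       float px, float py, float pz, float r) {
    float sum = 0.0f;
    for (const Photon& ph : map) {
        float dx = ph.x - px, dy = ph.y - py, dz = ph.z - pz;
        if (dx*dx + dy*dy + dz*dz <= r*r) sum += ph.flux;   // inside search disc
    }
    return sum / (3.14159265f * r * r);
}
```

Caustics come from gathering photons that arrived via specular paths (e.g. through the glass sphere), while indirect wall-to-wall lighting comes from photons that arrived via diffuse bounces; the estimate itself is the same.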
Running at 1600 by 900 resolution on a modest system; the performance is unfortunately halved by the video capture software I use.
The scene is a classic test scene: a mirrored sphere and a glass sphere inside a box with blue, red, and white walls.
Whilst the scene shows off the optics nicely, it is immediately apparent that the frame rate is much lower than in my previous video, to the point where it becomes a bit of a stretch to call the program interactive.
The lower frame rate has two causes: 1) the video capture software I use unfortunately halves the frame rate; and 2) most of the time all of the rays are interacting with objects, whereas in the previous video most rays hit nothing at all, which again roughly halves the frame rate.
Another thing you may have noticed is that the reflection of the glass sphere in the mirrored sphere is almost completely black. That is because I have a maximum scattering depth of 3, so the program does not simulate enough scattering events to carry rays from the mirrored sphere, through the glass sphere, and off the blue wall to the light source.
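The arithmetic behind that black reflection can be seen in a toy model (a sketch, not my shader code): the path eye → mirrored sphere → into the glass sphere → out of the glass sphere → blue wall involves four scattering events before the light is reached, one more than a cap of 3 allows, so the contribution terminates at black.

```cpp
// Toy depth-capped trace: the ray must pass through `interactions` specular
// surfaces (mirror bounce, glass entry, glass exit, wall) before reaching the
// light. Each interaction consumes one unit of scattering depth; if the cap
// runs out first, the contribution is black (0), otherwise the light is seen (1).
float traceToy(int interactions, int depth, int maxDepth) {
    if (interactions == 0) return 1.0f;   // reached the light source
    if (depth >= maxDepth) return 0.0f;   // depth cap hit: pixel renders black
    return traceToy(interactions - 1, depth + 1, maxDepth);
}
```

With a four-interaction path, `traceToy(4, 0, 3)` terminates at black while `traceToy(4, 0, 4)` reaches the light, which is why raising the scattering depth would fix the reflection.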
The frame rate problem is not as much of a stumbling block as it might appear: I suspect my algorithms could be made more efficient in several ways, and the way I currently implement diffuse lighting is slower than the method I plan to use in future.
In the next version of the project I will either increase the scattering depth without decreasing the frame rate, or make my algorithms more efficient in order to improve the frame rate; probably the latter, since the frame rate is a more obvious issue than the scattering depth.
Running at 1600 by 900 resolution on a modest system.
The scene comprises a spherical light source (treated as a point light), a glass ball, a specularly and diffusely reflecting box (approximating, say, a very shiny plastic box), and a purely diffusely reflecting box (approximating a paper cube).
The next step will be to compose a better test scene.