- Cornell box
- Primitives
- Monte Carlo Integration
- Importance sampling - Cosine weighted hemisphere
- Material - Reflection Fresnel (dielectric/non-conducting)
- Material - Reflection Fresnel (conductor/metal)
- Material - Reflection (glossy and roughness)
- Depth of Field (DOF)
- Aperture shape
- Image projection - Spherical
- Image projection - Cylindrical
- Image projection - Parallel
- Image projection - Alpha channel transparency
- Smooth vertex normals
- Side project - Brilliant cut diamond (3D object)
Some images to illustrate renderer functionality.
It doesn't look like much to the world, but it is the "hello world" of path tracing. It is used to show the features a pathtracer gives you out of the box (as opposed to a raytracer), such as area lights, soft shadows, corner shadows, and color bleeding from nearby surfaces. It also shows the lack of point (and very small) lights, which are out-of-the-box features in raytracing.
Just look at 'em bleed’n walls!
Image size: 800x400 Samples per pixel: 65536 Max recursion depth: 8 Facets: 18 Spheres: 2 Frame render time: 22h59m38.170794209s
I have implemented three types of primitives for my pathtracer: triangles, spheres, and discs.
Image size: 1600x600 Samples per pixel: 8192 Max recursion depth: 4 Facets: 154807 (1 triangle, room, and 3 detailed Gophers) Spheres: 1 Discs: 1 Frame render time: 10h48m1.2195545s
A short animation can be found on Vimeo (though it should be played in a loop).
As the pathtracer sends multiple rays through each pixel, and each ray bounces and takes a different path than the others, some rays hit a light source and others do not. The resulting light (or lack thereof) of each ray is accumulated for the pixel and averaged. The process is called Monte Carlo integration, and the more rays fired and included in the average, the closer the result gets to the true intensity value for the pixel.
Monte Carlo integration is a calculation-intense process, and it is not obvious how many integration samples (rays per pixel) a given scene needs. The integration error (pixel intensity noise) shrinks as 1/√N, so to halve the noise you need four times the number of rays per pixel, and with that four times the rendering time, as it scales linearly with the number of rays fired.
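As a rough illustration of how such an estimate behaves, here is a minimal, self-contained Go sketch of Monte Carlo integration. The integrand is a toy stand-in for "radiance along one random ray through the pixel"; the renderer's actual sampling loop is of course more involved.

```go
package main

import (
	"fmt"
	"math/rand"
)

// monteCarlo averages n random samples of f. The estimate converges to the
// expected value of f, and the standard error shrinks as 1/sqrt(n).
func monteCarlo(n int, rng *rand.Rand, f func(*rand.Rand) float64) float64 {
	sum := 0.0
	for i := 0; i < n; i++ {
		sum += f(rng)
	}
	return sum / float64(n)
}

func main() {
	rng := rand.New(rand.NewSource(1))
	// Toy stand-in for "radiance along one random ray through the pixel":
	// integrate x^2 over [0,1), exact value 1/3. In the renderer, f would
	// instead trace one ray and return the light it picked up.
	for _, n := range []int{16, 256, 4096, 65536} {
		est := monteCarlo(n, rng, func(r *rand.Rand) float64 {
			x := r.Float64()
			return x * x
		})
		fmt.Printf("n=%6d estimate=%.5f (exact 0.33333)\n", n, est)
	}
}
```

Running it shows the estimate wobbling towards the exact value, with the error roughly halving for every fourfold increase in samples.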
For the image below I have intentionally chosen a scene that is tricky for pathtracing, with a very small and intense light in the ceiling: with a small number of rays per pixel most rays miss the light, which produces a lot of noise in the image. (A larger light source would drastically reduce the noise, but I wanted a noisy image throughout the sampling range for comparison.)
See also the Cornell box sample comparison in the Importance sampling - Cosine weighted hemisphere section.
Image size: 400x200 Samples per pixel: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384 Max recursion depth: 4 Facets: 12 Spheres: 2 Frame render time: 3.65s (1 sample/ray per pixel) Frame render time: 3m49.05s (1024 samples/rays per pixel) Frame render time: 1h0m37.03s (16384 samples/rays per pixel)
Summary: Cosine weighted hemisphere sampling converges faster, producing less noisy result images for fewer rays (and thus in less time).
In the recursive step of firing a new ray from an intersection point on a perfectly diffuse object, you would sample the hemisphere (around the intersection normal) uniformly at random for a perfectly diffuse result. However, as the light coming from that ray is multiplied by the cosine of the angle between the intersection normal and the ray heading vector (incoming light at low angles contributes barely any light at all to the intersection point), the rays fired close to the normal will contribute a whopping lot more than the rays close to the intersection point's tangent plane.
So, instead of firing a lot of rays from the intersection point where many of them contribute only a minuscule amount to the Monte Carlo integration sum, we skip multiplying each ray by the cosine of its angle. Instead we weigh all rays equally, but make sure to fire the rays close to the intersection normal, which contribute a lot, more often, and those close to the intersection point's tangent plane more seldom, in a cosine-distributed fashion. Given an infinite number of samples this turns out, mathematically, the same, but the cosine weighted distribution converges faster, producing less noisy result images for fewer rays and in less time.
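A minimal sketch of one common way to generate such a cosine distribution, Malley's method: pick a uniform point on the unit disc and project it up onto the hemisphere. The Vec3 type here is an illustrative stand-in for the renderer's own vector type.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// Vec3 is a minimal stand-in for the renderer's own vector type.
type Vec3 struct{ X, Y, Z float64 }

// cosineSampleHemisphere returns a direction in a local frame where the
// surface normal is +Z, distributed proportionally to cos(theta): dense
// near the normal, sparse near the tangent plane. It uses Malley's method,
// picking a uniform point on the unit disc and projecting it up onto the
// hemisphere.
func cosineSampleHemisphere(rng *rand.Rand) Vec3 {
	u1, u2 := rng.Float64(), rng.Float64()
	r := math.Sqrt(u1)      // disc radius
	phi := 2 * math.Pi * u2 // disc angle
	return Vec3{
		X: r * math.Cos(phi),
		Y: r * math.Sin(phi),
		Z: math.Sqrt(1 - u1), // cos(theta) of the sampled direction
	}
}

func main() {
	rng := rand.New(rand.NewSource(1))
	fmt.Printf("sampled direction: %+v\n", cosineSampleHemisphere(rng))
}
```

Because the sample density already matches the cosine factor in the rendering equation, the two cancel in the Monte Carlo estimator and every sample can be weighted equally, which is exactly the point.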
Image size: 800x400 Samples per pixel: 4 Max recursion: 4 Spheres: 2
Work in progress (WIP)
Fresnel reflections at low grazing angles on objects.
I based my Fresnel implementation on the Schlick approximation.
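For reference, a small self-contained sketch of the approximation; the parameter names are mine, not necessarily the renderer's. The demo loop shows how the reflectance climbs towards 1.0 at grazing angles, here for an air-to-water boundary.

```go
package main

import (
	"fmt"
	"math"
)

// schlick approximates the Fresnel reflectance at a dielectric boundary.
// cosTheta is the cosine of the angle between the incoming ray and the
// surface normal; n1 and n2 are the refractive indices on either side.
func schlick(cosTheta, n1, n2 float64) float64 {
	r0 := (n1 - n2) / (n1 + n2)
	r0 *= r0
	return r0 + (1-r0)*math.Pow(1-cosTheta, 5)
}

func main() {
	// Air to water (n = 1.333): reflectance climbs towards grazing angles.
	for _, deg := range []float64{0, 45, 80, 89} {
		cos := math.Cos(deg * math.Pi / 180)
		fmt.Printf("%2.0f° -> reflectance %.3f\n", deg, schlick(cos, 1.0, 1.333))
	}
}
```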
Image size: 1350x900 Samples per pixel: 12288 Max recursion depth: 6 Facets: 18 Spheres: 49
Image size: 800x600 Samples per pixel: 12288 Max recursion depth: 4 Facets: 256 Spheres: 2 Refraction index: 1.504 (porcelain)
Image size: 800x500 Samples per pixel: 16384 Max recursion depth: 8 Facets: 12 Spheres: 3 Refraction index: 1.333 (same as water)
Reflection is not just a single "mirror" parameter on materials; it is split into two parameters to be able to simulate metal properties: "glossiness" and "roughness".
Glossiness is the common "mirror" parameter that most tracers implement, i.e. the ordinary reflection control. A value of 0.0 gives no mirror effect at all and a value of 1.0 can give a perfect mirror (depending on the roughness value).
Roughness is how rough the mirror surface is, much like the real-world material brushed aluminium. It gives a non-sharp reflection. Roughness 0.0 is a perfectly clear mirror reflection, while roughness 1.0 is the same as diffuse reflection: a material with roughness 1.0 does not differ from a perfectly diffuse material, even at full (1.0) glossiness.
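As a sketch of one way such a glossy bounce can be generated (not necessarily how this renderer does it), the outgoing direction can be blended from the perfect mirror direction towards a cosine-distributed diffuse direction according to roughness. This reuses the Vec3 type from the sampling sketch above; the helper names are illustrative.

```go
// Vector helpers on the Vec3 type from the sampling sketch above.
func (a Vec3) Add(b Vec3) Vec3      { return Vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z} }
func (a Vec3) Sub(b Vec3) Vec3      { return Vec3{a.X - b.X, a.Y - b.Y, a.Z - b.Z} }
func (a Vec3) Scale(s float64) Vec3 { return Vec3{a.X * s, a.Y * s, a.Z * s} }
func (a Vec3) Dot(b Vec3) float64   { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }
func (a Vec3) Normalize() Vec3      { return a.Scale(1 / math.Sqrt(a.Dot(a))) }

// reflect mirrors the (normalized) incoming direction about the normal.
func reflect(in, normal Vec3) Vec3 {
	return in.Sub(normal.Scale(2 * in.Dot(normal)))
}

// glossyDirection blends the mirror direction towards a cosine-distributed
// diffuse direction (e.g. from cosineSampleHemisphere, rotated into the
// world frame around the normal): roughness 0 gives a sharp mirror,
// roughness 1 is indistinguishable from a perfectly diffuse bounce.
func glossyDirection(in, normal, diffuse Vec3, roughness float64) Vec3 {
	mirror := reflect(in, normal)
	return mirror.Scale(1 - roughness).Add(diffuse.Scale(roughness)).Normalize()
}
```

Glossiness would then control how much of the reflected light takes this specular path at all, as opposed to a plain diffuse bounce.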
Image size: 1350x900 Samples per pixel: 12288 Max recursion depth: 6 Facets: 18 Spheres: 49
Image size: 800x500 Samples: 1800 Max recursion: 6 Facets: 18 Spheres: 5 Total execution time: 14h6m26.331560583s
A short animation can be found on Vimeo (though it should be played in a loop).
Depth of Field with a configurable aperture at the camera. The depth of field depends on both aperture size (radius) and focal length.
Read the details on how DOF is implemented.
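A minimal thin-lens sketch of the idea, reusing the Vec3 helpers from the earlier sketches; the camera-space simplification (aperture in the XY plane) and the names are assumptions for illustration, not this renderer's actual implementation.

```go
// depthOfFieldRay jitters a primary ray for depth of field, in a
// simplified camera space where the aperture lies in the XY plane.
// origin/dir is the pinhole ray; focalDist is the distance to the plane
// of perfect focus and apertureRadius the lens radius.
func depthOfFieldRay(origin, dir Vec3, focalDist, apertureRadius float64,
	rng *rand.Rand) (newOrigin, newDir Vec3) {
	// The point along the pinhole ray that must stay in focus.
	focalPoint := origin.Add(dir.Scale(focalDist))
	// Uniform point on the disc-shaped aperture (sqrt for uniform area).
	r := apertureRadius * math.Sqrt(rng.Float64())
	phi := 2 * math.Pi * rng.Float64()
	newOrigin = origin.Add(Vec3{X: r * math.Cos(phi), Y: r * math.Sin(phi)})
	// All jittered rays converge at the focal point; everything off the
	// focal plane is smeared over the aperture shape, i.e. blurred.
	newDir = focalPoint.Sub(newOrigin).Normalize()
	return newOrigin, newDir
}
```

Points on the focal plane receive all their jittered rays at the same spot and stay sharp; everything nearer or farther is smeared over the aperture, which is the blur.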
Image size: 800x400 Amount samples/pixel: 2048 Max recursion: 4 Amount spheres: 6 Frame render time: 3h46m48.561010458s
A funny and fancy, but not so useful, feature is the ability to change the aperture shape. This has a visible effect in "night shots", much like in movies, with soft blurred out-of-focus shapes of lights at night.
A round aperture gives round blur shapes and other shapes of the aperture will give… other shapes.
Note that out-of-focus shapes in the foreground appear upside down and mirrored left to right, while out-of-focus shapes in the background appear "correct", as in the aperture.
Read the details on how DOF and free aperture shape is implemented.
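One straightforward way to support a free-form aperture, and a guess at the general technique rather than this renderer's actual code, is rejection sampling against a black-and-white mask image (bright = open). This assumes Go's standard image package and a math/rand source.

```go
// sampleApertureShape picks a point on the aperture by rejection sampling
// against a black-and-white mask image (bright = open).
func sampleApertureShape(mask image.Image, radius float64, rng *rand.Rand) (x, y float64) {
	b := mask.Bounds()
	for {
		// Uniform point in the [-1, 1) square.
		u, v := 2*rng.Float64()-1, 2*rng.Float64()-1
		// Map to mask pixels; keep the point only where the mask is open.
		px := b.Min.X + int((u+1)/2*float64(b.Dx()))
		py := b.Min.Y + int((v+1)/2*float64(b.Dy()))
		if r, _, _, _ := mask.At(px, py).RGBA(); r > 0x7fff {
			return u * radius, v * radius
		}
		// Note: this loops forever on an all-black mask; a real
		// implementation should guard against that.
	}
}
```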
A short animation with luminous balls and a star-shaped aperture can be found on Vimeo (though it should be played in a loop).
Spherical projection is made from equirectangular images and allows an image to be projected onto an object from all angles.
A nifty feature is that you can place your actual scene (objects), camera, and lighting within a sphere with spherical projection, and you get an environmental projection dome (sphere) as background.
Note that most equirectangular images are twice as wide as they are high: there are 360 degrees around the sphere and half as many degrees (180) from the bottom to the top. As long as the texture image has the proportions 2:1 (width to height), the "pixels" of the texture will be square (proportion 1:1) at the equator.
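The mapping itself is compact. A sketch, assuming a Y-up axis convention and using the Vec3 type from the earlier sketches:

```go
// sphericalUV maps a normalized direction d (from the sphere centre to the
// intersection point) to equirectangular texture coordinates in [0, 1]:
// u wraps 360° around the vertical axis, v runs 180° from pole to pole.
// The Y-up axis convention is an assumption.
func sphericalUV(d Vec3) (u, v float64) {
	u = 0.5 + math.Atan2(d.Z, d.X)/(2*math.Pi)
	v = 0.5 - math.Asin(d.Y)/math.Pi
	return u, v
}
```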
Samples per pixel: 1024 Max recursion: 8 Spheres: 4688
Cylindrical projection can be used for any image that is wrapped around a cylinder. The projection can of course be applied to any object.
The cylindrical projection projects an image around, and perpendicular to, a vector of a certain length from a start point.
The projection is defined by three vectors:
- projection origin - the start point of the v vector
- u - the start angle for the projected image (the image x-axis is wrapped 360° around the projection vector)
- v - the projection direction and height (the image y-axis)
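A sketch of how such a mapping can be computed from the three-vector definition above; the basis construction and names are illustrative, and the Vec3 helpers are the ones from the earlier sketches.

```go
// cylindricalUV maps a surface point p to cylinder texture coordinates
// using the three-vector definition above: origin is the start point,
// vVec the projection direction and height, uVec the zero angle.
func cylindricalUV(p, origin, uVec, vVec Vec3) (u, v float64) {
	vHat := vVec.Normalize()
	rel := p.Sub(origin)
	// Height along the axis, normalized by |v|, becomes the image y.
	v = rel.Dot(vHat) / math.Sqrt(vVec.Dot(vVec))
	// Build an orthogonal basis around the axis: uHat is the zero-angle
	// direction, wHat = vHat × uHat completes it.
	uHat := uVec.Sub(vHat.Scale(uVec.Dot(vHat))).Normalize()
	wHat := Vec3{
		X: vHat.Y*uHat.Z - vHat.Z*uHat.Y,
		Y: vHat.Z*uHat.X - vHat.X*uHat.Z,
		Z: vHat.X*uHat.Y - vHat.Y*uHat.X,
	}
	// Angle around the axis, measured from uVec, becomes the image x.
	angle := math.Atan2(rel.Dot(wHat), rel.Dot(uHat))
	u = angle / (2 * math.Pi)
	if u < 0 {
		u++ // wrap into [0, 1)
	}
	return u, v
}
```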
Image size: 1024x576 Samples per pixel: 16384 Max recursion depth: 4 Facets: 208780 Spheres: 2 Render date: 2023-03-18 Frame render time: 6h53m34.284895459s
Parallel projection can be used for any image that is projected straight onto a surface.
Textures defined by images with an alpha channel (most commonly the PNG file format) can define transparency in the projected image. Surface parts of an object where the projected texture has alpha = 0 (fully transparent) are treated as transparent and will be see-through, with or without refraction depending on the material setup.
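A small sketch of the alpha test using Go's standard image package; u and v are assumed to be texture coordinates in [0, 1). When the function returns true, the ray would simply continue through the surface.

```go
// alphaTransparent reports whether the texture sample at (u, v) is fully
// transparent. img is any decoded image (e.g. a PNG with an alpha
// channel); u and v are texture coordinates in [0, 1).
func alphaTransparent(img image.Image, u, v float64) bool {
	b := img.Bounds()
	x := b.Min.X + int(u*float64(b.Dx()))
	y := b.Min.Y + int(v*float64(b.Dy()))
	_, _, _, a := img.At(x, y).RGBA()
	return a == 0 // alpha = 0 means the ray passes straight through
}
```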
Vertex normals, as opposed to facet normals, can be interpolated over the facet for each intersection point to produce a visually smooth surface.
The path tracer can take a facet structure and (re-)calculate the vertex normals to produce a smooth surface between neighbouring facets.
The vertex normal calculation for a vertex shared by many facets is not weighted in any way; it is the plain average of the normals of all facets sharing the vertex.
However, you can specify how much the neighbouring facets are allowed to differ in angle from the current facet whose vertex normals you update.
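A sketch of how such a recalculation can look, reusing the Vec3 helpers from the earlier sketches; the Facet layout is hypothetical, not the renderer's actual structure.

```go
// Facet is a hypothetical triangle with indices into a shared vertex
// array, a geometric facet normal, and per-corner vertex normals.
type Facet struct {
	VertexIdx    [3]int
	Normal       Vec3
	VertexNormal [3]Vec3
}

// smoothVertexNormals sets each vertex normal to the unweighted average of
// the normals of all facets sharing that vertex, skipping neighbours whose
// facet normal deviates more than maxAngle (radians) from the current
// facet's normal.
func smoothVertexNormals(facets []Facet, maxAngle float64) {
	cosLimit := math.Cos(maxAngle)
	shared := map[int][]int{} // vertex index -> indices of facets using it
	for fi, f := range facets {
		for _, vi := range f.VertexIdx {
			shared[vi] = append(shared[vi], fi)
		}
	}
	for fi := range facets {
		for corner, vi := range facets[fi].VertexIdx {
			var sum Vec3
			for _, ni := range shared[vi] {
				// Only average neighbours facing roughly the same way.
				if facets[ni].Normal.Dot(facets[fi].Normal) >= cosLimit {
					sum = sum.Add(facets[ni].Normal)
				}
			}
			facets[fi].VertexNormal[corner] = sum.Normalize()
		}
	}
}
```

Note that the facet's own normal always passes the angle test, so every vertex normal averages at least one facet normal.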
One of the things I have been dying to render, as soon as I get my pathtracer calculation shit together, is diamonds.
To be able to catch the reflections and refractions of diamonds is a goal waiting to be achieved.
I realize I will most likely not get as far as refracting light differently at different wavelengths to show off some rainbows. It could be done, but it would be a pain in the a$$ to implement light sources and define realistic materials according to their distribution of energy across the visible spectrum. Not to mention the calculation burden of integrating those spectral distributions… It would probably take forever to render, as I am limited in how efficiently I can perform those calculations. And all that just to produce some rainbows…
But! I have prepared, and put some serious effort into, how to create 3D models of brilliant cut diamonds from parameters. Although you can fiddle with the parameters to change the aspects and proportions, the diamonds will be flawless in their surfaces and angles. Any distortions or flaws need to be added afterwards.
Read all about how a 3D model of a (perfect) brilliant cut diamond is constructed.