Functionality

Some images to illustrate renderer functionality.

Cornell box

It may not look like much, but it is the hello world of path tracing. It is used to show the path tracer features (as opposed to ray tracer features) that come out of the box, such as area lights, soft shadows, corner shadows, and color bleeding from nearby surfaces. It also shows what you do not get out of the box: point (and very small) lights, which are out-of-the-box features in ray tracing.

Just look at 'em bleed’n walls!

Cornell box
Figure 1. The classic Cornell box. Color bleeding.
Cornell box image details
Image size:             800x400
Amount samples/pixel:   65536
Max recursion depth:    8
Amount facets:          18
Amount spheres:         2
Frame render time: 22h59m38.170794209s

Primitives

I have implemented three types of primitives for my path tracer: triangles, spheres, and discs.
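As an illustration, here is a minimal sketch of a ray-sphere intersection test, the simplest of the three primitives to intersect. The vec3 type and helpers are stand-ins for this sketch only, not the renderer's actual types.

[source,go]
----
package sketch

import "math"

// vec3 is a minimal stand-in type for this sketch, not the renderer's actual vector type.
type vec3 struct{ X, Y, Z float64 }

func sub(a, b vec3) vec3    { return vec3{a.X - b.X, a.Y - b.Y, a.Z - b.Z} }
func dot(a, b vec3) float64 { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }

// sphereIntersect returns the closest positive distance t along the ray
// (origin o, unit direction d) to a sphere with center c and radius r,
// or false if the ray misses the sphere.
func sphereIntersect(o, d, c vec3, r float64) (float64, bool) {
	oc := sub(o, c)
	b := dot(oc, d)            // half of the quadratic "b" term
	cTerm := dot(oc, oc) - r*r // quadratic "c" term
	disc := b*b - cTerm
	if disc < 0 {
		return 0, false // no real roots, the ray misses the sphere
	}
	sq := math.Sqrt(disc)
	if t := -b - sq; t > 0 {
		return t, true // nearest intersection in front of the ray origin
	}
	if t := -b + sq; t > 0 {
		return t, true // the ray origin is inside the sphere
	}
	return 0, false
}
----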

The primitives triangle
Figure 2. The primitives implemented: triangles, spheres, and discs
Primitives image details
Image size:             1600x600
Amount samples/pixel:   8192
Max recursion depth:    4
Amount facets:          154807 (1 triangle, room, and 3 detailed Gophers)
Amount spheres:         1
Amount discs:           1
Frame render time: 10h48m1.2195545s

A short animation can be found at Vimeo (it should be played in a loop though).

Monte Carlo Integration

As the path tracer sends multiple rays through each pixel, and each ray bounces and takes a different path than the others, some rays hit a light source and others do not. The resulting light (or lack thereof) of each ray is accumulated for the pixel and averaged. The process is called Monte Carlo integration, and the more rays that are fired per pixel and included in the average, the closer the result gets to the true intensity value for the pixel.

Monte Carlo integration is a calculation intense process, and it is not obvious how many integration samples (rays per pixel) to use for each scene. The integration error (pixel intensity noise) decreases only with the square root of the number of rays per pixel, so halving the noise takes four times as many rays (and four times the rendering time, as render time scales linearly with the number of rays fired).
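A minimal sketch of the per-pixel averaging described above. The rgb type and the traceRay stub are hypothetical stand-ins, not the renderer's actual API.

[source,go]
----
package sketch

import "math/rand"

// rgb and traceRay are hypothetical stand-ins for this sketch,
// not the renderer's actual types or functions.
type rgb struct{ R, G, B float64 }

func traceRay(px, py float64, rnd *rand.Rand) rgb {
	// Fire a single ray through (px, py), follow its bounces,
	// and return the light it gathered (zero if it never reached a light).
	return rgb{}
}

// renderPixel Monte Carlo integrates one pixel: it averages the light
// gathered by `samples` independent rays fired through the pixel.
func renderPixel(x, y int, samples int, rnd *rand.Rand) rgb {
	var sum rgb
	for i := 0; i < samples; i++ {
		// Jitter the ray within the pixel so the samples cover the whole pixel area.
		c := traceRay(float64(x)+rnd.Float64(), float64(y)+rnd.Float64(), rnd)
		sum.R += c.R
		sum.G += c.G
		sum.B += c.B
	}
	n := float64(samples)
	return rgb{sum.R / n, sum.G / n, sum.B / n}
}
----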

For the image below I have intentionally chosen this scene, which is "tricky" for path tracing, with a very small and intense light in the ceiling, as it produces a lot of noise in the image when only a small number of rays are used and most of them miss the light. (A larger light source would drastically reduce the noise, but I wanted a noisy image throughout the sampling range for comparison.)

Cornell box at different numbers of Monte Carlo integration samples (rays per pixel)
Figure 3. Cornell box at different numbers of Monte Carlo integration samples (rays per pixel). No importance sampling is used, only simple uniform random hemisphere sampling.

See also the Cornell box sample comparison in the "Importance sampling - Cosine weighted hemisphere" section.

Monte Carlo integration with different numbers of samples (rays per pixel) - image details
Image size:             400x200
Amount samples/pixel:   1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384
Max recursion depth:    4
Amount facets:          12
Amount spheres:         2
Frame render time:      3.65s (    1 sample/ray per pixel)
Frame render time:   3m49.05s ( 1024 samples/rays per pixel)
Frame render time: 1h0m37.03s (16384 samples/rays per pixel)

Importance sampling - Cosine weighted hemisphere

Summary: Cosine weighted hemisphere sampling converges faster, producing less noisy result images for fewer rays (and thus in less time).

In the recursive step of firing a new ray from an intersection point on a perfectly diffuse object, you should sample the hemisphere (around the intersection normal) uniformly, or even randomly, for a perfectly diffuse result. However, as the light coming from that ray is multiplied by the cosine of the angle between the intersection normal and the ray heading vector (incoming light at low angles contributes very little light to the intersection point), the rays fired close to the normal contribute a whole lot more than the rays close to the tangent plane of the intersection point.

So, instead of firing a lot of rays from the intersection point where many of them contribute only a minuscule amount to the Monte Carlo integration sum, we do not multiply each ray by the cosine of the angle. Instead, we weigh all rays equally, but make sure we fire the rays close to the intersection normal, which contribute a lot, more often, and the rays close to the tangent plane of the intersection point more seldom, following a cosine distribution. Given an infinite number of samples the result is mathematically the same, but the cosine weighted distribution converges faster, producing less noisy result images for fewer rays and thus in less time.
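A minimal sketch of drawing a cosine weighted sample direction (the standard "sample a disc, project up to the hemisphere" construction), expressed in the local frame where the intersection normal is the +Z axis. The actual implementation may differ, and the resulting direction still has to be transformed into world space.

[source,go]
----
package sketch

import (
	"math"
	"math/rand"
)

// cosineSampleHemisphere returns a unit direction in the local frame where
// the intersection normal is the +Z axis. Directions close to the normal
// are drawn more often, with probability density cos(theta)/pi, so each
// sample can be weighted equally instead of being multiplied by cos(theta).
func cosineSampleHemisphere(rnd *rand.Rand) (x, y, z float64) {
	u1 := rnd.Float64()
	u2 := rnd.Float64()

	r := math.Sqrt(u1)      // radius on the unit disc
	phi := 2 * math.Pi * u2 // angle around the normal

	x = r * math.Cos(phi)
	y = r * math.Sin(phi)
	z = math.Sqrt(1 - u1) // height above the disc, equal to cos(theta)
	return x, y, z
}
----

With this probability density the cosine factor and the sampling density cancel in the Monte Carlo estimator, which is why all samples can be weighted equally.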

Uniform random
Figure 4. "Uniform random" sampling vs "Cosine weighted" sampling in a side by side comparison at different numbers of Monte Carlo integration samples (rays per pixel)
Uniform random sampled hemisphere. 4 samples/pixel.
Figure 5. Uniform random distribution hemisphere sampling of the intersection point hemisphere. Using 4 samples per pixel.
Cosine weighted hemisphere sampling. 4 samples/pixel.
Figure 6. Cosine weighted hemisphere sampling of the intersection point hemisphere. Using 4 samples per pixel. The faster convergence is shown in comparison to the uniform random sampled hemisphere.
Uniform random and cosine weighted hemisphere sampling image details
Image size:             800x400
Amount samples/pixel:   4
Max recursion:          4
Amount spheres:         2

Material - Reflection Fresnel (dielectric/non-conducting)

Work in progress (WIP)

Fresnel reflections at grazing angles on objects.

I based my Fresnel implementation on the Schlick approximation.
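A minimal sketch of the Schlick approximation itself; the parameter names are illustrative and not necessarily those used in the renderer.

[source,go]
----
package sketch

import "math"

// schlickReflectance approximates the Fresnel reflectance at a dielectric
// boundary between media with refraction indices n1 (outside) and n2 (inside).
// cosTheta is the cosine of the angle between the incoming ray and the
// surface normal; at grazing angles (cosTheta near 0) the reflectance
// approaches 1.
func schlickReflectance(cosTheta, n1, n2 float64) float64 {
	r0 := (n1 - n2) / (n1 + n2)
	r0 = r0 * r0
	return r0 + (1-r0)*math.Pow(1-cosTheta, 5)
}
----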

Fresnel reflection with different refraction indices and levels of glossiness
Figure 7. Light green spheres with Fresnel and glossy reflection. Increasing refraction index from left 1.000273 (air) to right 2.417 (diamond) and increasing glossiness from bottom 0.0 to top 1.0 (roughness is 0.0 for all spheres).
Fresnel and glossy image details
Image size:             1350x900
Amount samples/pixel:   12288
Max recursion depth:    6
Amount facets:          18
Amount spheres:         49
Fresnel reflection using refraction index 1.504
Figure 8. Fresnel reflection angle. Reflection increases at the very edge of the sphere (grazing angle). The surrounding medium is air and the sphere is white with the refraction index of porcelain (refraction index 1.504).
Fresnel angle image details
Image size:             800x600
Amount samples/pixel:   12288
Max recursion depth:    4
Amount facets:          256
Amount spheres:         2
Refraction index:       1.504 (porcelain)
Fresnel reflection using refraction index 1.33
Figure 9. Fresnel reflection. The left sphere has refraction index 1.333 (same as water). Note that the reflection increases at the very edge of the left sphere (grazing angle) and the reflection strength subsides towards the center of the sphere. The right sphere has no Fresnel, as it has the same refraction index as the surrounding air, but has matching common glossiness and roughness instead. Notice that the right sphere still has clear reflections of the walls at its center, while you cannot see any at the center of the left Fresnel sphere. (Fresnel reflection is really present all around the left sphere but is very weak at direct angles.)
Fresnel image details
Image size:             800x500
Amount samples/pixel:   16384
Max recursion depth:    8
Amount facets:          12
Amount spheres:         3
Refraction index:       1.333 (same as water)

Material - Reflection Fresnel (conductor/metal)

Work in progress (WIP)

Material - Reflection (glossy and roughness)

Reflection is not just a single "mirror" parameter on materials; it is split into two parameters to simulate metal-like properties. The two parameters are "glossiness" and "roughness".

Glossiness is the common "mirror" parameter that most tracers implement, that is, the normal reflection control. A value of 0.0 gives no mirror reflection at all and a value of 1.0 can give a perfect mirror (depending on the roughness value).

Roughness is how rough the mirror surface is, much like the real world material "brushed aluminum". It gives a non-sharp reflection. Roughness 0.0 gives a perfectly clear mirror reflection and roughness 1.0 is the same as diffuse reflection. A material with roughness 1.0 does not differ from a perfectly diffuse material, even though it has full (1.0) glossiness.
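One way these two parameters can be combined during a bounce, shown as a hedged sketch rather than this renderer's exact method: glossiness as the probability of a reflective bounce versus a diffuse one, and roughness as a blend between the perfect mirror direction and a random hemisphere direction. The vec3 helpers and the randomHemisphere function are stand-ins.

[source,go]
----
package sketch

import (
	"math"
	"math/rand"
)

// vec3 and the helpers below are minimal stand-ins for this sketch,
// not the renderer's actual vector type.
type vec3 struct{ X, Y, Z float64 }

func add(a, b vec3) vec3           { return vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z} }
func scale(a vec3, s float64) vec3 { return vec3{a.X * s, a.Y * s, a.Z * s} }
func dot(a, b vec3) float64        { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }
func normalize(a vec3) vec3        { return scale(a, 1/math.Sqrt(dot(a, a))) }

// bounceDirection is one way of combining the two parameters (not necessarily
// this renderer's exact method): glossiness is the probability that the bounce
// is a reflection rather than a diffuse bounce, and roughness blends the
// perfect mirror direction with a random hemisphere direction.
// in is the incoming ray direction, n the surface normal, and randomHemisphere
// is assumed to return a unit direction in the hemisphere around n.
func bounceDirection(in, n vec3, glossiness, roughness float64,
	randomHemisphere func(n vec3, rnd *rand.Rand) vec3, rnd *rand.Rand) vec3 {

	diffuse := randomHemisphere(n, rnd)
	if rnd.Float64() >= glossiness {
		return diffuse // diffuse bounce
	}
	// Perfect mirror reflection of the incoming direction about the normal.
	mirror := add(in, scale(n, -2*dot(in, n)))
	// Roughness 0 keeps the clear mirror direction, roughness 1 is fully diffuse.
	return normalize(add(scale(mirror, 1-roughness), scale(diffuse, roughness)))
}
----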

Reflective parameters glossiness and roughness
Figure 10. Light green spheres with the reflective parameters glossiness and roughness. Glossiness increasing from left 0.0 to right 1.0 and roughness increasing from bottom 0.0 to top 1.0.
Reflection image details
Image size:             1350x900
Amount samples/pixel:   12288
Max recursion depth:    6
Amount facets:          18
Amount spheres:         49
Cornell box with metallic settings
Figure 11. A Cornell box with "metallic like" settings.
Metallic cornell box details
Image size:        800x500
Amount samples:    1800
Max recursion:     6
Amount facets:     18
Amount spheres:    5
Total execution time: 14h6m26.331560583s

A short animation can be found at Vimeo (it should be played in a loop though).

Depth of Field (DOF)

Depth of Field with a configurable aperture at the camera. The depth of field depends on both aperture size (radius) and focal length.

Read the details on how DOF is implemented.
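A minimal sketch of a common thin-lens style approach, which may differ from the linked implementation: the camera ray origin is jittered within the aperture disc and the new ray is aimed at the point of perfect focus on the original ray. The vec3 helpers are stand-ins for this sketch.

[source,go]
----
package sketch

import (
	"math"
	"math/rand"
)

// vec3 and the helpers below are minimal stand-ins for this sketch,
// not the renderer's actual vector type.
type vec3 struct{ X, Y, Z float64 }

func add(a, b vec3) vec3           { return vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z} }
func sub(a, b vec3) vec3           { return vec3{a.X - b.X, a.Y - b.Y, a.Z - b.Z} }
func scale(a vec3, s float64) vec3 { return vec3{a.X * s, a.Y * s, a.Z * s} }

// depthOfFieldRay offsets the camera ray origin by a random point on the
// aperture disc and aims the new ray at the point of perfect focus that the
// original ray (origin, unit direction dir) reaches at focusDistance.
// right and up are unit vectors spanning the aperture plane at the camera.
func depthOfFieldRay(origin, dir, right, up vec3, aperture, focusDistance float64,
	rnd *rand.Rand) (newOrigin, newDir vec3) {

	// Uniform random point on a disc of radius `aperture`.
	r := aperture * math.Sqrt(rnd.Float64())
	phi := 2 * math.Pi * rnd.Float64()
	offset := add(scale(right, r*math.Cos(phi)), scale(up, r*math.Sin(phi)))

	focusPoint := add(origin, scale(dir, focusDistance)) // point of perfect focus
	newOrigin = add(origin, offset)
	d := sub(focusPoint, newOrigin)
	length := math.Sqrt(d.X*d.X + d.Y*d.Y + d.Z*d.Z)
	newDir = scale(d, 1/length)
	return newOrigin, newDir
}
----

With an aperture of 0 every ray starts at the same origin and the depth of field effect disappears.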

Table 1. Depth of field using aperture 12.0 (in scene units, not an f-number as in camera lenses) and a "view plane distance" (the distance to the point of perfect focus) of 2000.
Depth of field (none)
Depth of field
DOF Image details
Image size:             800x400
Amount samples/pixel:   2048
Max recursion:          4
Amount spheres:         6
Frame render time: 3h46m48.561010458s

Aperture shape

A funny and fancy, but not so useful, feature is the ability to change the aperture shape. This has an effect in "night shots", much like in movies where out of focus lights at night turn into soft blurred shapes.

A round aperture gives round blur shapes and other shapes of the aperture will give…​ other shapes.

Note that out of focus shapes in the foreground appear upside down and flipped left to right, while out of focus shapes in the background appear "correct", just as in the aperture.
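One simple way to support an arbitrary aperture shape, shown as a hedged sketch and not necessarily this renderer's method, is rejection sampling against a shape mask (for example a black and white aperture image).

[source,go]
----
package sketch

import "math/rand"

// sampleAperturePoint picks a random point inside an arbitrary aperture shape
// by rejection sampling: points are drawn uniformly in the bounding square
// [-radius, radius] x [-radius, radius] and kept only if the mask says they
// are inside the shape. insideShape is a hypothetical mask function (for
// example backed by a black and white aperture image), not this renderer's API,
// and it is assumed to cover a non-empty area.
func sampleAperturePoint(radius float64, insideShape func(x, y float64) bool,
	rnd *rand.Rand) (x, y float64) {
	for {
		x = (2*rnd.Float64() - 1) * radius
		y = (2*rnd.Float64() - 1) * radius
		if insideShape(x, y) {
			return x, y
		}
	}
}
----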

Different aperture shapes
Figure 12. Different aperture shapes for a matrix of luminous balls

A short animation with luminous balls and a star shaped aperture can be found at Vimeo (it should be played in a loop though).

Image projection - Spherical

Spherical projection is made from equirectangular images and allows an image to be projected onto an object from all angles.

A nifty feature is that you can place your actual scene (objects), camera, and lighting within a sphere with spherical projection, and you will get an environmental projection dome (sphere) as background.

Note that most equirectangular images are twice as wide as they are high. There are 360 degrees around the sphere and half as many degrees (180) from bottom to top. As long as the texture image has the proportions 2:1 (width to height), the "pixels" of the texture will be square (proportion 1:1) at the equator.
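A minimal sketch of mapping a direction on the sphere to equirectangular texture coordinates. The axis convention (y up) is an assumption for this sketch, not necessarily the renderer's.

[source,go]
----
package sketch

import "math"

// equirectangularUV maps a unit direction (dx, dy, dz) from the sphere center
// to texture coordinates u, v in [0, 1]. The image x-axis wraps 360 degrees
// around the sphere and the y-axis covers 180 degrees from bottom to top.
// The axis convention (y up) is an assumption for this sketch.
func equirectangularUV(dx, dy, dz float64) (u, v float64) {
	u = 0.5 + math.Atan2(dz, dx)/(2*math.Pi) // longitude, 360 degrees around
	v = 0.5 - math.Asin(dy)/math.Pi          // latitude, 180 degrees bottom to top
	return u, v
}
----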

Spherical projection
Figure 13. Spherical projection.
Spherical projection - environmental projection
Figure 14. Spherical projection as environmental projection on a large enclosing sphere
Image details
Amount samples/pixel:   1024
Max recursion:          8
Amount spheres:         4688

Image projection - Cylindrical

Cylindrical projection can be used for any image that is wrapped around a cylinder. The projection can, of course, be used on any object.

The cylindrical projection wraps the image around, and projects it perpendicular to, a vector of a certain length from a start point.

The projection is defined by three vectors:

  • projection origin - the start point of the v vector.

  • u - the start angle for the projected image (the image x-axis is wrapped 360° around projection vector)

  • v - the projection direction and height (the y-axis of the image)
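A minimal sketch of the mapping, using the three vectors from the list above. The vec3 helpers are stand-ins for this sketch, and u is assumed to be perpendicular to v.

[source,go]
----
package sketch

import "math"

// vec3 and the helpers below are minimal stand-ins for this sketch,
// not the renderer's actual vector type.
type vec3 struct{ X, Y, Z float64 }

func sub(a, b vec3) vec3           { return vec3{a.X - b.X, a.Y - b.Y, a.Z - b.Z} }
func scale(a vec3, s float64) vec3 { return vec3{a.X * s, a.Y * s, a.Z * s} }
func dot(a, b vec3) float64        { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }
func cross(a, b vec3) vec3 {
	return vec3{a.Y*b.Z - a.Z*b.Y, a.Z*b.X - a.X*b.Z, a.X*b.Y - a.Y*b.X}
}
func length(a vec3) float64 { return math.Sqrt(dot(a, a)) }
func normalize(a vec3) vec3 { return scale(a, 1/length(a)) }

// cylindricalUV maps a surface point p to texture coordinates using the three
// vectors from the list above: origin (the start point of the v vector),
// u (the start angle direction, assumed perpendicular to v for this sketch)
// and v (the projection direction and height). The image x-axis wraps 360
// degrees around v and the y-axis runs along v from origin to origin+v.
func cylindricalUV(p, origin, u, v vec3) (tu, tv float64) {
	axis := normalize(v)
	w := sub(p, origin)

	// Height along the axis, 0 at origin and 1 at the tip of v.
	tv = dot(w, axis) / length(v)

	// Angle around the axis measured from the start direction u.
	uhat := normalize(u)
	radial := sub(w, scale(axis, dot(w, axis))) // component of w perpendicular to the axis
	angle := math.Atan2(dot(radial, cross(axis, uhat)), dot(radial, uhat))
	tu = angle / (2 * math.Pi)
	if tu < 0 {
		tu += 1 // wrap into [0, 1)
	}
	return tu, tv
}
----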

Cylindrical projection
Figure 15. Cylindrical projection
Cylindrical projection image details
Image size:             1024x576
Amount samples/pixel:   16384
Max recursion depth:    4
Amount facets:          208780
Amount spheres:         2

Render date:            2023-03-18
Frame render time:      6h53m34.284895459s

Image projection - Parallel

Parallel projection can be used for any image that is projected straight onto a surface.

Parallel projection
Figure 16. Parallel projection. A circular disc and three spheres, all with parallel projection. One sphere shares the exact same projection as the disc. The second has a checker pattern and the third has a tree ring pattern projected onto them from different angles.

Image projection - Alpha channel transparency

Textures defined by image formats that support an alpha channel (most commonly the PNG file format) can define transparency in the projected image. Surface parts of any object with a projected texture where alpha = 0 (transparent) are treated as transparent and will be see-through, with or without refraction depending on the material setup.
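A minimal sketch of the decision at an intersection; the types and the texture sampling function are hypothetical stand-ins, not the renderer's actual API.

[source,go]
----
package sketch

// hitInfo and sampleTextureAlpha are hypothetical stand-ins for this sketch,
// not the renderer's actual types or API.
type hitInfo struct {
	U, V float64 // texture coordinates of the projected image at the hit point
}

// passThrough reports whether a hit on a textured surface should be ignored
// because the projected image is fully transparent (alpha == 0) there. The
// caller would then continue the ray from just past the intersection point,
// with or without refraction depending on the material setup.
func passThrough(hit hitInfo, sampleTextureAlpha func(u, v float64) float64) bool {
	return sampleTextureAlpha(hit.U, hit.V) == 0
}
----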

Projection using alpha channel for transparency
Figure 17. Transparency through texture image alpha channel.

Smooth vertex normals

Vertex normals, as opposed to facet normals, can be interpolated over the facet for each intersection point to produce a visually smooth surface.

The path tracer can take a facet structure and (re-)calculate the vertex normals to produce a smooth surface between neighbouring facets.

The vertex normal calculation for a vertex shared by many facets is not weighted in any way, but is the average normal of all shared facets.

However, you can specify how much the neighbouring facets are allowed to differ in angle from the current facet whose vertex normals you update.
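A minimal sketch of the recalculation described above; the types and data layout are stand-ins, not the renderer's actual structures. For each vertex of each facet, the vertex normal is the unweighted average of the normals of all facets sharing that vertex, excluding neighbours at a too sharp angle.

[source,go]
----
package sketch

import "math"

// vec3 and facet are minimal stand-ins for this sketch, not the renderer's
// actual types. Facets sharing a vertex are assumed to reference the same
// vertex index.
type vec3 struct{ X, Y, Z float64 }

func add(a, b vec3) vec3           { return vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z} }
func scale(a vec3, s float64) vec3 { return vec3{a.X * s, a.Y * s, a.Z * s} }
func dot(a, b vec3) float64        { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }
func normalize(a vec3) vec3        { return scale(a, 1/math.Sqrt(dot(a, a))) }

type facet struct {
	VertexIndices [3]int
	Normal        vec3    // facet (geometric) normal, assumed unit length
	VertexNormals [3]vec3 // per-vertex normals, filled in below
}

// smoothVertexNormals recalculates the vertex normals of every facet as the
// unweighted average of the normals of all facets sharing that vertex, but
// only includes neighbours whose facet normal differs from the current
// facet's normal by at most thresholdDegrees.
func smoothVertexNormals(facets []facet, thresholdDegrees float64) {
	cosThreshold := math.Cos(thresholdDegrees * math.Pi / 180)

	// Group facets by the vertices they share.
	sharing := map[int][]*facet{}
	for i := range facets {
		for _, vi := range facets[i].VertexIndices {
			sharing[vi] = append(sharing[vi], &facets[i])
		}
	}

	for i := range facets {
		f := &facets[i]
		for k, vi := range f.VertexIndices {
			sum := vec3{}
			for _, neighbour := range sharing[vi] {
				// Skip neighbours at a too sharp angle to the current facet.
				if dot(f.Normal, neighbour.Normal) < cosThreshold {
					continue
				}
				sum = add(sum, neighbour.Normal)
			}
			f.VertexNormals[k] = normalize(sum)
		}
	}
}
----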

Result of smooth vertex normals at different angle thresholds
Figure 18. Smooth vertex normal calculation on a Beethoven bust statue. Result of smooth vertex normal calculation at different angle thresholds for facets. Facets that are at a too sharp angle to each other (defined by the threshold value) are not smoothed together. A threshold value of 0 would not smooth any facet at all, as every facet would have its vertex normals equal to the facet normal. A threshold value of 180 would smooth all neighbouring facets in each vertex with each other. The angle values in the image are in degrees.
Result of smooth vertex normals at different angle thresholds
Figure 19. Smooth vertex normal calculation on Beethoven bust statue. 0 is "no smoothing" original and 180 is "smoothing of all" vertices of all shared facets.

Side project - Brilliant cut diamond (3D object)

Table 2. Images of a real (as opposed to a computer rendered) brilliant cut diamond
Diamond brilliant cut - side view
Diamond brilliant cut - top view

One of the things I have been dying to render, as soon as I get my path tracer calculation shit together, is diamonds.
To be able to catch the reflections and refractions of diamonds is a goal waiting to be achieved.

I realize I will most likely not get as far as different refraction of light at different wavelengths, showing off some rainbows. It could be done, but it would be a pain in the a$$ to implement light sources and define realistic materials according to their distribution of energy in the visual spectrum. Not to mention the calculation burden of integrating those spectral distributions… It would probably take forever to render, as I am limited in how efficiently I can perform those calculations. And just to produce some rainbows…

But! I have prepared, and put some serious effort into, how to create 3D models of brilliant cut diamonds from parameters. Although you can fiddle with the parameters to change the aspects and proportions, the diamonds will be flawless in their surfaces and angles. Any distortions or flaws need to be added afterwards.

Read all about how a 3d model of a (perfect) brilliant cut diamond is constructed.