Examples
The example projects slowly introduce engine features and also act as unit tests for specific functionality across all platforms.
You can check out the live WebAssembly/WebGL examples here.
- Empty Project
- Clear
- Basic Triangle
- Basic Texture
- Basic Compute
- Render Target
- Depth Test
- Depth Texture
- Texture Array
- Debug Text
- Buffer Multi Update
- Input
- ImGui
- Blend Modes
- Texture Formats
- Rasterizer State
- Geometry Primitives
- Stencil Buffer
- MSAA Resolve
- Render Target Mip Maps
- Multiple Render Targets
- Single Shadow
- Play Sound
- Audio Player
- Shader Toy
- Rigid Body Primitives
- Physics Constraints
- Complex Rigid Bodies
- Shadow Maps
- Volume Texture
- Instancing
- Skinning
- Vertex Stream Out
- Scriptable Renderer
- Scriptable Post Processing
- Dynamic Cubemap
- Signed Distance Field Shadows
- Subsurface Scattering
- Entities
- Area Lights
- Stencil Shadow Volumes
- Global Illumination
- Compute Demo (work in progress)
- IK (work in progress)
The first sample shows how to hook yourself into the pmtech entry point and create an empty window.
A very basic sample that shows the bare minimum rendering code required to bind the back buffer and clear it.
This sample introduces the first basic rendering functionality required to get a single triangle drawing into the back buffer: clear, shader loading, input layout, raster state, vertex buffer, set render target, viewport, and the draw arrays draw call.
All render states and resources are created through create functions which return a handle, so resources and states can be re-used and passed around easily. Typically a creation_params struct is used to initialise values; these structs are based on D3D11 desc structures and follow the same pattern.
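The pattern can be sketched like this; note the names below are illustrative only, not the real pmtech API:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the handle-based creation pattern described above.
// A desc-style params struct is filled in and passed to a create function,
// which returns an opaque handle rather than a pointer.
typedef uint32_t buffer_handle;

struct buffer_creation_params {
    // mirrors the d3d11 desc style: usage flags, size, optional initial data
    uint32_t    usage_flags = 0;
    uint32_t    buffer_size = 0;
    const void* data        = nullptr;
};

struct buffer_resource {
    buffer_creation_params params;
};

static std::vector<buffer_resource> s_buffers;

// handles index into an internal resource table, so they can be copied
// and passed around freely without exposing backend-specific objects
buffer_handle create_buffer(const buffer_creation_params& params)
{
    s_buffers.push_back(buffer_resource{params});
    return (buffer_handle)(s_buffers.size() - 1);
}
```

Because handles are plain integers they are cheap to store in component arrays and trivially serialisable, which suits the data-oriented style used elsewhere in the engine.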
Builds on top of the features introduced in the previous sample and introduces: texture loading, sampler states, texture binding, index buffers and the draw indexed draw call.
Sets up a basic compute pipeline which takes a read-write texture and converts the RGB input into a greyscale output. Currently compute is only supported on the Metal and D3D11 rendering paths because of the lack of compute support in OpenGL on macOS.
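The per-texel work the kernel performs is simple; a CPU-side sketch of one conversion looks like this (the weights here are the common Rec. 601 luma coefficients, which may differ from the sample's actual shader):

```cpp
#include <cassert>
#include <cstdint>

// CPU-side illustration of the compute kernel's per-texel work:
// weight the RGB channels by their perceived luminance and round.
uint8_t rgb_to_grey(uint8_t r, uint8_t g, uint8_t b)
{
    float y = 0.299f * r + 0.587f * g + 0.114f * b; // Rec. 601 luma
    return (uint8_t)(y + 0.5f);                     // round to nearest
}
```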
Draws a triangle into a render target and then draws the render target into the back buffer, introducing new rendering features: render target creation and binding.
Draws 2 triangles in front-to-back order and uses the depth test to correctly draw the teal triangle behind the gold one; without a working depth test the gold triangle would not be seen. This sample uses the back buffer's depth buffer, which is part of the swap chain.
Using similar code to the render target sample, this sample instead renders the triangle into a depth stencil target which is then rendered to the screen as a texture. This ensures that all rendering back-ends can create depth stencil targets and perform the associated steps to set up resource views or image views so the depth values can be read in a shader.
Simple demonstration / test of a texture array. The array texture is generated in the pmbuild pipeline from a container folder with an export.jsn file containing individual images; the images in the folder are sorted by name and packed into a texture array.
Draws debug text to the screen, introducing new rendering functionality: GPU buffer updates, constant buffers and constant buffer binding.
This example is to test graphics API backend conformance. If the test passes you will see 4 different coloured quads, if the test fails you will only be able to see 1 quad.
OpenGL and Direct3D11 allow buffers to be updated multiple times per frame: the contents of a buffer used in a draw call will be the same when the GPU consumes the command buffer as they were on the CPU when the draw call was issued.
Metal, Vulkan and other game console graphics APIs typically do not support this behaviour; the contents of a buffer during a draw call when the GPU consumes the command buffer will be those of the last buffer update.
To support this behaviour on platforms which do not natively support it, a ring buffer can be used to push dynamic draw data and preserve a buffer's contents at the time the CPU made the draw call. This behaviour is typically implemented by older graphics API drivers (Direct3D11 and earlier, OpenGL 3 and earlier, etc).
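A minimal sketch of the ring-buffer approach described above; names are illustrative and real code must also fence against the GPU before reusing a region:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Each update writes to a fresh region of a large buffer and returns the
// offset to bind, so earlier draw calls still see the data that was current
// when they were recorded, even after later updates.
struct ring_buffer {
    std::vector<uint8_t> memory;
    size_t               write_offset = 0;

    explicit ring_buffer(size_t size) : memory(size) {}

    // copy the dynamic draw data in and return the offset the draw should bind
    size_t push(const void* data, size_t size)
    {
        if (write_offset + size > memory.size())
            write_offset = 0; // wrap; real code must wait for the GPU here
        size_t offset = write_offset;
        std::memcpy(&memory[offset], data, size);
        write_offset += size;
        return offset;
    }
};
```

Two successive updates land at different offsets, so two draw calls recorded between them each consume the values they were issued with.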
Demonstrates how to obtain data from input devices; it displays keyboard, mouse and raw gamepad input data, as well as mapped gamepad data for known gamepad devices.
Implementation of the legendary dear imgui. This sample also introduces blend states and blending.
Provides a test bed for different blend modes. The application is configured using a pmfx config, blend_modes.jsn, which is used to supply different blend modes and render passes to render 2 images on top of each other. This test is useful to ensure all rendering backend blend modes are implemented correctly.
Loads and renders different texture formats which have been exported using texturec in the build pipeline. Some texture formats differ from platform to platform and are excluded using the pen::renderer caps of the current platform. This sample also serves as a good starting point for implementing new texture format support in the lower level graphics APIs.
Renders some cube meshes with different raster states: back-face, front-face and no-face culling plus wireframe fill mode to test rasteriser state across different platforms.
Renders built in primitives, these can also be used as physics collision volumes.
Very simple stencil buffer example: clears the stencil buffer to 0x22 and renders a cube with the stencil ref set to 0x22. Render passes are configured in stencil_buffer.jsn.
Demonstrates how pmfx views defined in msaa_resolve.jsn can choose to resolve their render targets after rendering is complete. The sample shows that a colour target can be resolved with average, a depth buffer with max, and an additional render target with a custom shader technique that applies a gradient and swaps colours.
Renders a simple scene into a render target and automatically generates mip maps for the render target. Uses glGenerateMipmap (OpenGL), MTLBlitCommandEncoder (Metal) and GenerateMips (D3D11).
Demonstrates very basic sound playback using Fmod.
Shows how the audio API can be used to do advanced audio manipulation such as EQ, pitch and volume, and to analyse the audio spectrum through FFT.
Introduces shader hot loading for shader-toy-like real-time shader editing. It also introduces the need to bind textures and constant buffers to different slots, which serves as a test for the shader compilation pipeline across shader platforms, and for uniform buffer bindings on platforms supporting GLSL 330, where location attributes cannot be used on uniform buffers.
Shows how to create and add rigid body primitives through the pmtech entity component system. Press (T) or click the hand icon on the tool bar to enter physics picking mode, you can click to grab physics objects and move them.
Shows how to attach point-to-point, hinge and six-degrees-of-freedom constraints to rigid bodies. Press (T) or click the hand icon on the tool bar to enter physics picking mode, you can click to grab physics objects and move them.
This sample introduces more complex rigid bodies:
- Convex hull rigid body - Generated from triangle list.
- Concave triangle mesh rigid body - Allowed only for static objects.
- Compound rigid body - Dynamic concave shapes can be created from a collection of convex bodies.
Press (T) or click the hand icon on the tool bar to enter physics picking mode, you can click to grab physics objects and move them.
Demonstrates a variable number of shadow maps via texture arrays: orthographic directional lights (2D texture array), perspective spot lights (2D texture array) and omnidirectional point lights (cube texture array).
Procedurally creates a 3D volume texture and renders it using ray marching.
Shows how the entities inside the entity component system can be grouped into instances to reduce draw call requirements. The sample can easily run at 60fps on a variety of different hardware, showcasing the excellent performance characteristics of the data-oriented entity component system. Even though the cubes in the example are instanced, they still get a unique transform and update per frame:
- Update 32k nodes modifying their rotation and invalidating their transform.
- Recalculate 32k local matrices from translation, rotation and scale transform.
- Transform 32k entities by parent to get world matrix.
- Recalculate 32k node AABBs from world matrix.
- Reverse combine all child node AABBs to generate the parent's containing AABB.
Loads a skinned mesh and applies an animation, showing how to use an animation controller and bind animations to rigs.
Uses vertex stream out (D3D) or transform feedback (OpenGL) to demonstrate how to skin a model once and render it many times via instancing. This strategy could be used to skin a model once and then render it multiple times into shadow maps or other buffers without having to pay the overhead of skinning during each pass.
- pmfx_renderer_demo.cpp
- pmfx_demo.jsn
- common.jsn
- editor_renderer.jsn
- deferred_renderer.jsn
- pmfx.h
- forward_render.pmfx
- deferred_render.pmfx
The sample can render entities in the component entity system with 100 lights in forward, deferred or z-prepass render mode (I plan to add fwd+, clustered and other methods here too). The entity component scene is rendered via a scene view renderer and all render state is defined in config files; configs can include one another and views can be inherited, making the system very powerful for creating different rendering strategies with minimal code duplication.
The post processing sample builds upon the systems explained in the data driven renderer example using pmfx. It renders some repeated Menger sponges and a gradient background in a pixel shader through ray marching. Post process layers of bloom, depth of field, colour correction and a CRT effect create the final image.
Uses pmfx to quickly set up multiple render passes of a scene into the faces of 2 cubemap textures. The dynamic cubemap textures are then applied to spheres which move around the scene; the camera for the cubemap rendering is positioned at the centre of each sphere.
This sample uses a pre-calculated 3D signed distance field (level-set) which was generated from triangular meshes placed in the scene. The volume texture was generated in the pmtech editor. The 3D texture is stored and loaded in DDS format and then ray marched during forward rendering for each point light. It demonstrates how static scenes can be dynamically lit by point lights with complex shadows due to the benefits of ray marching.
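The shadow ray march works by sphere tracing: step along the ray by the sampled distance and report shadowed if a surface is hit before the light. A hedged CPU-side sketch, with an analytic sphere standing in for the sampled 3D texture:

```cpp
#include <cassert>
#include <cmath>

// analytic signed distance to a sphere at the origin,
// standing in for a sample from the pre-computed 3D texture
float sd_sphere(float px, float py, float pz, float radius)
{
    return std::sqrt(px * px + py * py + pz * pz) - radius;
}

// sphere trace from (ox, oy, oz) along +x towards a light max_t away;
// returns true if the ray hits geometry before reaching the light
bool shadow_ray(float ox, float oy, float oz, float max_t)
{
    float t = 0.1f; // small bias to step off the starting surface
    for (int i = 0; i < 64 && t < max_t; ++i)
    {
        float d = sd_sphere(ox + t, oy, oz, 1.0f);
        if (d < 0.001f)
            return true; // surface hit: the point is in shadow
        t += d; // safe to advance by the distance to the nearest surface
    }
    return false;
}
```

Because each step advances by the full distance to the nearest surface, empty space is skipped quickly, which is what makes per-light shadow marching affordable.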
Demonstrates an implementation of the separable subsurface scattering technique; once again pmfx is used to configure the scene simply and easily. A pass is used to generate a shadow map for the head, the head is then rendered with forward lighting, and light transmittance is performed on the mesh using the shadow map to detect thickness. Finally a post process pass is used for light reflectance, which uses a wide Gaussian distribution.
The entities demo demonstrates how the entity component system can handle a large number of entity updates, rendering them multiple times into 4 shadow maps plus a main pass using forward lighting. Similar to the instancing demo, the torus is made of 64k cubes which all individually rotate but are rendered as instances.
This is an implementation of area lights with linearly transformed cosines. It adds the ability in pmtech to apply quad area lights with textures or animated shaders. Area lights are textured using texture arrays which have mip maps that can be updated on the fly. This sample makes good use of render target mip map generation in D3D and OpenGL, and via the blit command encoder in Metal, and can be used as a unit test for that functionality.
This demo shows how pmfx can be used to configure complex rendering strategies by implementing stencil shadow volumes. The scene is rendered multiple times per light with a 2 sided stencil test to generate shadow volumes and additive blending to add lighting.
Shadow volume extrusion is done in a vertex shader with a pre-computed edge mesh, where every edge from a polygon mesh has a degenerate quad which can be extruded in the light direction if it lies on the silhouette of the mesh as seen from the light.
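The silhouette test the extrusion relies on is simple: an edge is on the silhouette when one of its two adjacent faces points towards the light and the other points away. A small sketch of that test (names and the plain-array vector type are illustrative):

```cpp
#include <array>
#include <cassert>

using vec3 = std::array<float, 3>;

float dot3(const vec3& a, const vec3& b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// n0 and n1 are the normals of the two faces sharing an edge, l points
// towards the light; the edge's degenerate quad is extruded only when
// the faces disagree about facing the light
bool edge_on_silhouette(const vec3& n0, const vec3& n1, const vec3& l)
{
    return (dot3(n0, l) > 0.0f) != (dot3(n1, l) > 0.0f);
}
```

In the vertex shader version, each degenerate quad carries both face normals as attributes so this test can run per vertex without any CPU work.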
pmfx introduces "template" and "abstract" view types where the abstract view can be used to render template views multiple times.
This sample also acts as a good unit test for the rendering backends and a workout for the stencil buffer, which I often find to be an overlooked hardware feature these days.
Realtime global illumination, achieved by rendering shadow maps with colour information and projecting them into voxels stored in a 3D texture using a compute shader. Mip maps are generated for the 3D texture, which is then cone traced in real time to evaluate incoming irradiance. Stochastic sampling is used for the cone-traced rays, which results in a noisy image; temporal AA is used to smooth this out and allow for wider GI distributions.