
Use Bevy meshes as Nannou's rendering backend #966

Closed
tychedelia opened this issue Mar 14, 2024 · 0 comments
tychedelia commented Mar 14, 2024

So far, we've taken an incremental approach to moving our existing rendering algorithm into Bevy:

  • In #954 and the Bevy render PoC (#960), we ported the renderer to run in a ViewNode, which gave us full low-level control over a wgpu render pass.
  • To integrate more closely with Bevy, #964 (see also #962) uses Bevy's "mid-level" render APIs to express drawing in terms of render phase items. However, as outlined on the PR, this approach has some friction: we write data into a single Draw instance, and our renderer assumes we are drawing from a single set of matching GPU buffers.

While the approach taken in #964 works, we're interested in ditching our rendering code entirely and writing our vertex data directly into Bevy meshes. Concretely, each PrimitiveRender would produce a new logical mesh with a matching Bevy StandardMaterial, which could be used for texturing and, in future API extensions, other material features such as emissives, PBR, etc.
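As a rough illustration of that mapping, the sketch below splits one draw primitive into a mesh half and a material half. All of the type names here (Primitive, LogicalMesh, MaterialDesc) are hypothetical stand-ins for Bevy's Mesh and StandardMaterial, not the real APIs:

```rust
// Hypothetical stand-in for the vertex data Nannou produces per primitive.
struct Primitive {
    positions: Vec<[f32; 3]>,
    indices: Vec<u32>,
    texture: Option<u64>, // stand-in for a texture handle
}

// Stand-in for a Bevy `Mesh`: just the attribute and index buffers.
struct LogicalMesh {
    positions: Vec<[f32; 3]>,
    indices: Vec<u32>,
}

// Stand-in for a Bevy `StandardMaterial`: only the texturing state for now,
// with room to grow (emissives, PBR parameters, etc.).
struct MaterialDesc {
    texture: Option<u64>,
}

/// Split a primitive into the mesh data and the material state that would
/// back it. In a real integration the mesh half would be written into a
/// `Mesh` asset and the material half into a `StandardMaterial` asset.
fn primitive_to_mesh(p: Primitive) -> (LogicalMesh, MaterialDesc) {
    (
        LogicalMesh { positions: p.positions, indices: p.indices },
        MaterialDesc { texture: p.texture },
    )
}
```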

This should be understood as a "logical" mesh because our immediate-mode API presents some difficulties here: Bevy's assets are asynchronous, with the assumption that you pay the cost of uploading a persistent mesh (a sophisticated 3D model, etc.) up front and then work with it.

As such, we'll need to experiment with caching mesh assets to dynamically provision for Nannou's render primitives. There are a few approaches we might take here:

  1. Allocate new meshes as needed, rendering new primitives into the first available mesh. This has the benefit of being very simple and performing well when a sketch is relatively stable, but likely has poor worst-case characteristics in terms of memory use and performance.
  2. Sort primitives by the size of their vertex data to try to match them with equivalently sized existing gpu resources.
  3. Try to derive some kind of ad hoc persistence identifier (e.g., a hash of the draw data used to create the primitive) and optimistically associate primitives with meshes. This is likely error-prone and could run into pathological scenarios that are hard to handle.
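Approach 1 can be sketched as a growable pool where each primitive claims the first free slot, and all slots are released at the end of the frame. This is a minimal illustration with hypothetical types (MeshSlot stands in for a Bevy mesh handle), not the actual Nannou/Bevy API:

```rust
// Hypothetical stand-in for a cached Bevy mesh asset.
#[derive(Default)]
struct MeshSlot {
    in_use: bool,
    vertex_capacity: usize,
}

// Growable pool of mesh slots: primitives grab the first free slot,
// allocating a new one only when every existing slot is claimed.
#[derive(Default)]
struct MeshPool {
    slots: Vec<MeshSlot>,
}

impl MeshPool {
    /// Return the index of the first free slot, growing the pool if needed.
    fn acquire(&mut self, vertex_count: usize) -> usize {
        if let Some(i) = self.slots.iter().position(|s| !s.in_use) {
            let slot = &mut self.slots[i];
            slot.in_use = true;
            // Grow-only capacity mirrors the "stable sketch" case, where the
            // same primitives reuse the same buffers frame after frame.
            slot.vertex_capacity = slot.vertex_capacity.max(vertex_count);
            i
        } else {
            self.slots.push(MeshSlot { in_use: true, vertex_capacity: vertex_count });
            self.slots.len() - 1
        }
    }

    /// Release every slot at the end of the frame.
    fn end_frame(&mut self) {
        for slot in &mut self.slots {
            slot.in_use = false;
        }
    }
}
```

When a sketch draws the same primitives every frame, this degenerates to pure reuse; the poor worst case shows up when vertex counts vary wildly, since capacities only grow.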

While starting with a growable cache should be fine, we'll also need to figure out a long-term strategy for cache eviction, i.e. evicting a mesh if it hasn't been used in N frames.
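The "not used in N frames" policy could look something like the following frame-stamped cache. MeshId is a hypothetical stand-in for an asset handle, and the structure here is an assumption about how eviction might work, not existing Nannou code:

```rust
use std::collections::HashMap;

// Stand-in for a Bevy asset handle.
type MeshId = u64;

// Frame-based eviction: drop any cached mesh that has not been touched
// for more than `max_age` frames.
struct MeshCache {
    last_used: HashMap<MeshId, u64>,
    frame: u64,
    max_age: u64,
}

impl MeshCache {
    fn new(max_age: u64) -> Self {
        Self { last_used: HashMap::new(), frame: 0, max_age }
    }

    /// Record that a mesh was used this frame.
    fn touch(&mut self, id: MeshId) {
        self.last_used.insert(id, self.frame);
    }

    /// Advance one frame and evict stale entries, returning the evicted ids
    /// so the caller can free the underlying assets.
    fn end_frame(&mut self) -> Vec<MeshId> {
        self.frame += 1;
        let (frame, max_age) = (self.frame, self.max_age);
        let evicted: Vec<MeshId> = self
            .last_used
            .iter()
            .filter(|(_, &used)| frame - used > max_age)
            .map(|(&id, _)| id)
            .collect();
        for id in &evicted {
            self.last_used.remove(id);
        }
        evicted
    }
}
```

Returning the evicted ids keeps the policy decoupled from asset management: the caller decides whether to drop the Bevy asset handle or merely return the mesh to a free pool.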

TODO:

  • Consider any performance pitfalls. We're mostly thinking about re-allocating buffers for vertex data, but is changing material uniforms also problematic?
  • Better understand the asynchronous characteristics of assets. Even with caching, we probably still want blocking APIs?
  • Bevy's meshes don't have great APIs for "building up" a mesh the way we do; will this be a problem?