
Bevy Rendering

Bevy version: 0.16

Rendering in Bevy is done through the wgpu library, which allows Bevy to render in both native and web-based environments.

wgpu is a safe, portable graphics library for Rust based on the WebGPU API.

In your Bevy game there are two big things going on each frame:

  1. Simulation where we run our game logic
  2. Rendering where we actually draw things on the screen

These two parts run in parallel, and rendering is by far the more unfamiliar of the two, happening mostly behind the scenes.

Graphics can be rendered with your GPU or CPU. Bevy has moved further towards GPU-driven rendering in 0.16, which lets your GPU do more of the work of figuring out what to draw on the screen.

Because drawing things on screen is quite expensive, Bevy automatically uses the camera's viewpoint to determine which things to draw and which to ignore in a process called frustum culling.
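If you ever need to opt an entity out of culling (for example a mesh whose vertices are displaced by a shader, so its bounds are misleading), Bevy provides a NoFrustumCulling component. A minimal sketch:

use bevy::prelude::*;
use bevy::render::view::NoFrustumCulling;

// Spawn a sphere that is always submitted for drawing, even when its
// bounding volume falls outside the camera's view frustum.
fn spawn_unculled_sphere(
  mut commands: Commands,
  mut meshes: ResMut<Assets<Mesh>>,
  mut materials: ResMut<Assets<StandardMaterial>>,
) {
  commands.spawn((
    Mesh3d(meshes.add(Sphere::new(1.))),
    MeshMaterial3d(materials.add(StandardMaterial::default())),
    NoFrustumCulling,
  ));
}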

The Bevy graphics stack

The graphics stack consists of 5 layers:

+-----------------------------+
|     Bevy Rendering Engine   |
|-----------------------------|
| - Applies materials, meshes |
| - Computes transforms       |
| - Submits draw commands     |
+-----------------------------+
              |
              v
+-----------------------------+
|            wgpu             |
|-----------------------------|
| - Safe abstraction over     |
|   WebGPU inspired by Vulkan |
| - Handles resource mgmt,    |
|   GPU memory allocation     |
| - Multi-backend (Vulkan,    |
|   Metal, DX12, WebGPU)      |
+-----------------------------+
              |
              v
+-----------------------------+
|     Graphics API (Backend)  |
|-----------------------------|
| - Vulkan / Metal / DX12 /   |
|   WebGPU / OpenGL           |
| - Used by wgpu to talk to   |
|   GPU drivers               |
+-----------------------------+
              |
              v
+-----------------------------+
|     GPU Driver (Vendor)     |
|-----------------------------|
| - Provided by AMD/NVIDIA/   |
|   Intel, etc.               |
| - Implements low-level API  |
| - Handles scheduling, memory|
|   transfers, shader mgmt    |
+-----------------------------+
              |
              v
+-----------------------------+
|        GPU Hardware         |
|-----------------------------|
| - Executes commands         |
| - Renders frames            |
| - Manages internal VRAM     |
+-----------------------------+

Down at the bottom of the stack is the GPU hardware; we don't interface with this directly.

Then there are the drivers. A driver is a low-level piece of software that abstracts hardware specific details and provides a standardized interface to the operating system.

These drivers can come from the vendor or a third party, and they can be open or closed source. They interface with the kernel space of your computer, not the user space.

The actual user-space APIs to these drivers are the graphics APIs. These are usually DirectX, Metal, OpenGL and Vulkan. This layer is relatively settled and works well.

Then comes the safety and resource allocation layer: wgpu. Its API looks a lot like Vulkan's, but it helps solve some of Vulkan's problems around safety and resource allocation.

wgpu is an implementation of the WebGPU standard. This standard is more restrictive and higher-level than Vulkan. It is designed for safety, portability, and use in browser contexts, so it avoids Vulkan’s complexity and potential for misuse.

In OpenGL these features come for free. Well, almost: OpenGL's hidden memory management and global state make it harder to reason about performance, which is part of why APIs like Vulkan and WebGPU exist.

In Vulkan, this needs to be explicitly implemented. The GPU has its own memory, and Vulkan has a limited allocator which lets you request GPU memory buffers. You need a second allocator on top of that one to allocate arbitrary-sized memory buffers. This is the kind of work that wgpu is doing for us.

Finally we have Bevy's rendering engine, which applies all our textures, meshes, and materials, calculates the transforms that place them relative to our camera, and sends this information to wgpu, which tells our GPU what to render.

WGPU

At its heart, wgpu is helping us safely allocate each wgpu::Buffer, which is a contiguous block of memory allocated on the GPU.
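A rough sketch of what that looks like through Bevy's RenderDevice wrapper around wgpu (the label and size here are arbitrary, and a real renderer would store the buffer somewhere for later use):

use bevy::prelude::*;
use bevy::render::render_resource::{BufferDescriptor, BufferUsages};
use bevy::render::renderer::RenderDevice;

// Ask wgpu, via Bevy's RenderDevice, for 1 KiB of GPU memory that we could
// use as a vertex buffer and copy data into later.
fn allocate_vertex_buffer(render_device: Res<RenderDevice>) {
  let _buffer = render_device.create_buffer(&BufferDescriptor {
    label: Some("my_vertex_buffer"), // shows up in GPU debugging tools
    size: 1024,                      // size in bytes
    usage: BufferUsages::VERTEX | BufferUsages::COPY_DST,
    mapped_at_creation: false,
  });
}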

The next level up would be a wgpu::Texture which represents an image stored in GPU memory. Textures are usually accessed in shaders through views and samplers.

Then there are pipelines. A pipeline is a highly specialized program that runs on the GPU, composed as a sequence of stages.

Shaders are small programs that run on the GPU within these pipelines. They are written in the WGSL shading language and are highly parallel by default, making them fast to run when dispatched to the GPU.

GPU vs CPU rendering

Originally Bevy was very CPU-based. Your CPU would figure out which objects were visible on the screen, applying frustum and occlusion culling, and then write all of this to your GPU using one of the graphics APIs we mentioned earlier.

In GPU-driven rendering the CPU only sends the transform information of our objects: it submits the objects that might need to be rendered, and the GPU works out which ones should actually be drawn.

Render graph

Bevy provides an extendable graph-structured rendering system, where input nodes pass data to output nodes. These nodes are held inside the RenderGraph, a stateless structure that holds your stateful nodes.

In a similar way to your App, it has a separate runner that iterates over this graph to actually render things.

These graphs are made up of Nodes, Edges and Slots.

  1. Nodes are responsible for generating draw calls and operating on input and output slots.
  2. Edges specify the order of execution for nodes and connect input and output slots together.
  3. Slots describe the render resources created or used by the nodes.

Adding an input node to a render graph allows graphs to be nested inside one another.
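As a rough sketch (the exact trait signature shifts a little between Bevy versions, and HelloNode and HelloNodeLabel are made-up names), a custom node implements the Node trait and records its GPU work inside run:

use bevy::prelude::*;
use bevy::render::render_graph::{Node, NodeRunError, RenderGraphContext, RenderLabel};
use bevy::render::renderer::RenderContext;

// Hypothetical label used to refer to this node when wiring up edges.
#[derive(Debug, Hash, PartialEq, Eq, Clone, RenderLabel)]
struct HelloNodeLabel;

// A node that does nothing yet; a real node would record commands through
// the render context's command encoder.
struct HelloNode;

impl Node for HelloNode {
  fn run<'w>(
    &self,
    _graph: &mut RenderGraphContext,
    _render_context: &mut RenderContext<'w>,
    _world: &'w World,
  ) -> Result<(), NodeRunError> {
    Ok(())
  }
}

The node still has to be added to a RenderGraph (or one of its sub graphs) and connected with edges before it will run.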

The Render Pipeline

There are 5 steps to the Bevy rendering pipeline:

  1. Extract where all the info required to render is taken from our game world
  2. Prepare where we set up all the vertex data and write the vertex buffer
  3. Queue where we get the pipeline created, set up the bind groups, and add entities to a "render phase" (the list of items we will use to perform our draw calls)
  4. Render Graph which sits outside the normal ECS and system flows and calls out to each node to generate draw calls
  5. Draw Functions where we use the RenderCommands generated in the last step to actually perform the drawing on the screen

Extraction

The goal of this step is to extract all the data we need from the game world so both simulation and rendering can continue without worrying about each other.

It acts as a sync point: both simulation and rendering lock during this step and cannot continue until it is finished.

Because of its blocking nature it's important to keep this step as fast as possible and just copy values without running any heavy algorithms.

Two systems are added to your application to handle this: one to extract the camera view and another to extract the UI nodes. The camera extraction produces an ExtractedView component, which is wrapped into the render phase used during the queue step.

We need the ExtractedView, which takes our camera's view into account, so we can project our nodes onto the screen relative to its viewpoint.
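A minimal sketch of an extraction system, assuming a made-up Glowing marker component in the main world and an ExtractedGlows resource in the render world:

use bevy::prelude::*;
use bevy::render::Extract;

// Hypothetical marker component that lives in the main (simulation) world.
#[derive(Component)]
struct Glowing;

// Render-world resource holding the copied data for the later steps.
#[derive(Resource, Default)]
struct ExtractedGlows(Vec<(Entity, GlobalTransform)>);

// Runs in the render app's ExtractSchedule: just copy values, nothing heavy.
fn extract_glows(
  mut extracted: ResMut<ExtractedGlows>,
  glowing: Extract<Query<(Entity, &GlobalTransform), With<Glowing>>>,
) {
  extracted.0.clear();
  for (entity, transform) in glowing.iter() {
    extracted.0.push((entity, *transform));
  }
}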

Prepare

Here our goal is to write the vertices and bind group data to the UiMeta resource.
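A minimal sketch of the idea, assuming a render-world system with access to the GPU device and queue (the 256 zeroed bytes stand in for real vertex data, and a real prepare system would cache its buffer instead of recreating it every frame):

use bevy::prelude::*;
use bevy::render::render_resource::{BufferDescriptor, BufferUsages};
use bevy::render::renderer::{RenderDevice, RenderQueue};

fn prepare_placeholder_vertices(
  render_device: Res<RenderDevice>,
  render_queue: Res<RenderQueue>,
) {
  // Allocate a small GPU buffer for this frame's vertices.
  let buffer = render_device.create_buffer(&BufferDescriptor {
    label: Some("placeholder_vertices"),
    size: 256,
    usage: BufferUsages::VERTEX | BufferUsages::COPY_DST,
    mapped_at_creation: false,
  });
  // Copy the bytes from the CPU into the GPU-side buffer.
  render_queue.write_buffer(&buffer, 0, &[0u8; 256]);
}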

Queue

Here we are setting up our UiPipeline by telling the GPU how we laid out our vertex data in the previous prepare step.

We then set up the vertex and fragment shaders. Bevy also caches pipelines: it checks a handle to this pipeline to see whether it has changed, and reuses the cached result otherwise.

Render Graph

In this step we create an acyclic render graph containing all the steps to render our nodes on the screen.

Render Graphs are a way to logically model GPU command construction in a modular way. Graph Nodes pass GPU resources like Textures and Buffers (and sometimes Entities) to each other, forming a directed acyclic graph.

When a Graph Node runs, it uses its graph inputs and the Render World to construct GPU command lists.

Render Graphs also support sub graphs (basically namespaced graphs) that can be called by any node (e.g. a "2d" sub graph and a "3d" sub graph for different parts of the game or multiple windows).

This will also call out to wgpu to begin a render pass.

Draw functions

Our render pass has begun, and we start rendering our phase items.

For every item we added to the render phase we call draw on our DrawFunction.

Draw is actually just a trait we can implement ourselves, or we can use a RenderCommand.

A RenderCommand can itself be a tuple of RenderCommands.

Textures

A texture refers to a two-dimensional image that is used to add details, colors, and patterns to the surface of a 3D model. Textures are typically created in image editing software such as Photoshop or GIMP.

They can be simple images, such as a picture of wood grain, or complex maps that define different aspects of the material, such as a diffuse map, normal map, specular map, etc.

Textures are often stored as files in formats like PNG or JPEG.
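Loading one in Bevy just goes through the asset server; here the file path is a placeholder:

use bevy::prelude::*;

// Keep the handle around so we can attach the image to a material later.
#[derive(Resource)]
struct WoodTexture(Handle<Image>);

fn load_wood_texture(mut commands: Commands, asset_server: Res<AssetServer>) {
  let handle: Handle<Image> = asset_server.load("textures/wood_grain.png");
  commands.insert_resource(WoodTexture(handle));
}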

Materials

A Material defines how light interacts with the surface of an object. It determines the visual properties, such as color, reflectivity, shininess, transparency, and more.

Your textures are applied to a material. You can think of a texture as the visual detail of what an object should look like, while the material contains the rules for how that texture should appear in a given environment.

In Bevy, materials are defined using shaders, which are programs that run on the GPU and calculate how light interacts with the geometry of an object.

Physically based rendering (PBR) uses a series of properties to mimic real-life materials.
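A small sketch of a PBR material in practice using StandardMaterial (the texture path and property values are illustrative):

use bevy::prelude::*;

fn spawn_wooden_cube(
  mut commands: Commands,
  mut meshes: ResMut<Assets<Mesh>>,
  mut materials: ResMut<Assets<StandardMaterial>>,
  asset_server: Res<AssetServer>,
) {
  let material = materials.add(StandardMaterial {
    base_color: Color::WHITE,
    base_color_texture: Some(asset_server.load("textures/wood_grain.png")),
    perceptual_roughness: 0.8, // rougher surfaces scatter light more diffusely
    metallic: 0.0,             // wood is a dielectric, not a metal
    ..default()
  });

  commands.spawn((
    Mesh3d(meshes.add(Cuboid::new(1., 1., 1.))),
    MeshMaterial3d(material),
    Transform::default(),
  ));
}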

Meshes

When rendering a game object, the mesh provides the underlying geometry on which textures and materials are applied.

The mesh's vertices store position information, which determines the shape and structure of the object. Edges connect the vertices, and faces define the polygons that form the visible surface of the object.

Textures are often mapped onto the mesh using UV coordinates, which define how the texture image is wrapped around the geometry. UV coordinates assign specific points on the mesh's surface to corresponding pixels in the texture image.

The material is responsible for determining how light is reflected or absorbed by different parts of the mesh, giving it a specific visual appearance.

In Bevy we are provided with some built-in meshes that represent common shapes:

3D

Cuboid, Sphere, Cylinder, Capsule3d, Cone, Torus, and Plane3d are some of the built-in 3D primitives.

2D

Circle, Rectangle, Triangle2d, Capsule2d, Annulus, and RegularPolygon are some of the built-in 2D primitives.

Rendering entities

We can actually render our entities on the screen by giving them a Material (which contains a Texture) and a Mesh:

use bevy::{
  color::palettes::css::RED,
  math::prelude::*, prelude::*
};

fn main() {
  App::new()
    .add_plugins(DefaultPlugins)
    .add_systems(Startup, setup)
    .run();
}

fn setup(
  mut commands: Commands,
  mut meshes: ResMut<Assets<Mesh>>,
  mut materials: ResMut<Assets<ColorMaterial>>,
  asset_server: Res<AssetServer>,
) {
  // Spawn our viewport so we can see things
  commands.spawn(Camera2d);

  let red: Color = RED.into();
  let circle = Circle::new(50.);

  // Circle mesh
  commands.spawn((
    Mesh2d(meshes.add(circle)),
    MeshMaterial2d(materials.add(ColorMaterial::from(red))),
    Transform::from_xyz(-150., 0., 0.)
  ));

  // Sprite
  commands.spawn((
    Sprite { image: asset_server.load("enemy.png"), ..default() },
    Transform::from_translation(Vec3::new(-50., 0., 0.))
  ));
}

The calls to into() let the compiler figure out how to convert the more basic types into the ones the functions expect without us doing it by hand. It does so through the From and Into trait implementations on the respective types.

When we called materials.add or meshes.add we inserted an Asset into our Assets collection and got back a Handle. This also emits an AssetEvent::Added which we can read in other systems if we want to.
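For example, a small sketch of a system that listens for newly added mesh assets:

use bevy::asset::AssetEvent;
use bevy::prelude::*;

// Reacts to mesh assets that were added via meshes.add or finished loading.
fn log_added_meshes(mut events: EventReader<AssetEvent<Mesh>>) {
  for event in events.read() {
    if let AssetEvent::Added { id } = event {
      info!("new mesh asset: {id:?}");
    }
  }
}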

Text

The default text size in Bevy is 24px. To change any of the default styles we spawn one of the two text components (Text or Text2d) together with our own TextFont and TextColor.

We can either render the text as part of the UI using Text:

fn spawn_text_in_ui(asset_server: Res<AssetServer>, mut commands: Commands) {
  let font = asset_server.load("fonts/FiraSans-Bold.ttf");
  let text_font = TextFont {
    font: font.clone(),
    font_size: 60.0,
    ..default()
  };
  let text_color = TextColor(Color::WHITE);
  let text = Text::new("translation");

  commands.spawn((text_font, text_color, text));
}

Or render it as part of the scene with Text2d:

fn spawn_text_in_scene(
  asset_server: Res<AssetServer>,
  mut commands: Commands,
) {
  commands.spawn((
    TextFont {
      font: asset_server.load("fonts/FiraSans-Bold.ttf"),
      font_size: 100.0,
      ..default()
    },
    TextColor(Color::WHITE),
    Text2d::new("Hello, Bevy!"),
    TextLayout::new_with_justify(JustifyText::Center),
    Transform::from_xyz(0., 0., 0.),
  ));
}

Lighting

Lighting can be controlled by spawning lighting-specific components on positioned entities.

You can control how far from the camera a DirectionalLight's shadows are calculated using a CascadeShadowConfigBuilder.
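A sketch of both ideas together, with illustrative values: a positioned PointLight plus a sun-like DirectionalLight whose shadow cascades are tuned with CascadeShadowConfigBuilder:

use bevy::pbr::CascadeShadowConfigBuilder;
use bevy::prelude::*;

fn spawn_lights(mut commands: Commands) {
  // A local light source at a position in the scene.
  commands.spawn((
    PointLight {
      intensity: 1_500_000.0, // physical units (lumens)
      shadows_enabled: true,
      ..default()
    },
    Transform::from_xyz(4., 8., 4.),
  ));

  // A sun-like light; the cascade config controls how far from the camera
  // its shadows are calculated.
  commands.spawn((
    DirectionalLight {
      shadows_enabled: true,
      ..default()
    },
    Transform::from_rotation(Quat::from_euler(EulerRot::XYZ, -0.8, 0.4, 0.)),
    CascadeShadowConfigBuilder {
      first_cascade_far_bound: 4.0,
      maximum_distance: 40.0,
      ..default()
    }
    .build(),
  ));
}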