Bevy Sprites
A Sprite represents an image we want to render. In Bevy, a Sprite holds display data, such as color, size, and flip settings, for a particular texture. When we talk about textures we mean raster image data: a two-dimensional picture stored as a matrix of pixel color values. We load these images as assets and attach them to entities with a Sprite component, which renders them in our game using the GPU.
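To make the "matrix of pixel color values" idea concrete, here is a minimal plain-Rust sketch of a texture, with hypothetical names (this is not Bevy's actual `Image` type):

```rust
// A texture as a 2D matrix of pixel color values (hypothetical sketch,
// not Bevy's actual `Image` type).
struct Texture {
    width: usize,
    height: usize,
    pixels: Vec<[u8; 4]>, // row-major RGBA values
}

impl Texture {
    // A solid-color image, like a placeholder sprite.
    fn filled(width: usize, height: usize, rgba: [u8; 4]) -> Self {
        Self { width, height, pixels: vec![rgba; width * height] }
    }

    // Look up the color value stored at column x, row y.
    fn pixel(&self, x: usize, y: usize) -> [u8; 4] {
        self.pixels[y * self.width + x]
    }
}

fn main() {
    let tex = Texture::filled(4, 4, [255, 0, 0, 255]);
    println!("pixel (2, 3) = {:?}", tex.pixel(2, 3));
}
```

Bevy stores more than this (format, sampler settings, and so on), but at its core a texture is exactly such a grid of color values.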
Rendering a sprite
To render a sprite we add a SpriteBundle to an entity:
fn spawn_player(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        SpriteBundle {
            transform: Transform::default(),
            texture: asset_server.load("sprites/ball.png"),
            ..default()
        },
        Player,
    ));
}
This will spawn an image in the center of the screen. The size of the sprite will be its natural image dimensions.
Changing the sprite size
We can control the size of our sprite by providing a custom size when we add it:
fn spawn_player_with_custom_size(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
) {
    commands.spawn((
        SpriteBundle {
            texture: asset_server.load("sprites/ball.png"),
            sprite: Sprite {
                custom_size: Some(Vec2::new(100., 100.)),
                ..default()
            },
            ..default()
        },
        Player,
    ));
}
We could also have chosen to set the scale of its Transform.
Sprites load from your assets folder
Bevy will load your assets from the ./assets folder by default. So when we write:
asset_server.load("sprites/ball.png")
Bevy will load ./assets/sprites/ball.png. The load method returns a Handle<Image>, which we add as a component to our player entity.
Remember that the AssetServer will not load the image immediately (even though it can feel that fast). Instead, it returns a Handle<Image> right away, which we add to our entity; the sprite won't actually render until the image has fully loaded.
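The handle pattern itself can be sketched without Bevy: a store hands back a lightweight id immediately, and the actual data arrives later. All names here are hypothetical, for illustration only:

```rust
use std::collections::HashMap;

// Hypothetical sketch of the handle pattern: handles are cheap and
// returned immediately; the underlying asset may not exist yet.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Handle(u64);

#[derive(Default)]
struct AssetStore {
    next_id: u64,
    loaded: HashMap<Handle, Vec<u8>>, // image bytes, once they arrive
}

impl AssetStore {
    // "Start loading": returns a handle right away, like `asset_server.load`.
    fn load(&mut self) -> Handle {
        let handle = Handle(self.next_id);
        self.next_id += 1;
        handle
    }

    // Called later, once the bytes are actually available.
    fn finish_loading(&mut self, handle: Handle, bytes: Vec<u8>) {
        self.loaded.insert(handle, bytes);
    }

    // Systems that consume the asset must tolerate it not being ready.
    fn get(&self, handle: Handle) -> Option<&Vec<u8>> {
        self.loaded.get(&handle)
    }
}

fn main() {
    let mut store = AssetStore::default();
    let handle = store.load();
    // Not loaded yet: a renderer would simply skip this sprite.
    println!("ready before load: {}", store.get(handle).is_some());
    store.finish_loading(handle, vec![1, 2, 3]);
    println!("ready after load: {}", store.get(handle).is_some());
}
```

This is why the sprite silently shows nothing until the image arrives: consumers of the handle check for the data and skip the entity when it is missing.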
Use a SpriteBundle to spawn sprites
The SpriteBundle just contains a pair of Sprite and Handle<Image> components, along with a collection of other components that position the entity for the renderer:
// https://docs.rs/bevy/latest/bevy/sprite/prelude/struct.SpriteBundle.html
pub struct SpriteBundle {
    // Unique to a sprite bundle:
    pub sprite: Sprite,
    pub texture: Handle<Image>,
    // `SpatialBundle` components:
    pub visibility: Visibility,
    pub inherited_visibility: InheritedVisibility,
    pub view_visibility: ViewVisibility,
    pub transform: Transform,
    pub global_transform: GlobalTransform,
}
Compare that to a SpatialBundle, which groups all the components needed to position an entity correctly for rendering:
// https://docs.rs/bevy/latest/bevy/prelude/struct.SpatialBundle.html
pub struct SpatialBundle {
    pub visibility: Visibility,
    pub inherited_visibility: InheritedVisibility,
    pub view_visibility: ViewVisibility,
    pub transform: Transform,
    pub global_transform: GlobalTransform,
}
A Sprite is just a component describing exactly how to display the texture we attached to the entity:
// https://docs.rs/bevy/latest/bevy/sprite/struct.Sprite.html
#[repr(C)]
pub struct Sprite {
    pub color: Color,
    pub flip_x: bool,
    pub flip_y: bool,
    pub custom_size: Option<Vec2>,
    pub rect: Option<Rect>,
    pub anchor: Anchor,
}
The #[repr(C)] attribute tells the Rust compiler to lay this struct out exactly as a C compiler would. That gives the struct a stable, predictable memory layout, which matters when the data crosses an FFI or GPU boundary on its way through the renderer.
Sprites are anchored
If your camera is centered at (0, 0), you will notice the sprite shows up in the dead center of the screen. That's because the default Anchor for a sprite is Anchor::Center.
The anchor field of your Sprite determines how the sprite is positioned relative to its transform:
// https://docs.rs/bevy/latest/bevy/sprite/enum.Anchor.html
pub enum Anchor {
    Center,
    BottomLeft,
    BottomCenter,
    BottomRight,
    CenterLeft,
    CenterRight,
    TopLeft,
    TopCenter,
    TopRight,
    Custom(Vec2),
}
The Custom(Vec2) value is scaled by the size of the sprite, so you do not need to know the exact pixel size of your sprite; you just use relative values. Top left is (-0.5, 0.5) and center is (0., 0.).
Stacking sprites
When you spawn a sprite, the z value of its Transform determines the order in which it is drawn. If you want one sprite to render in front of another, give it a higher z value than the sprite you want in the background.
fn setup_game(mut commands: Commands, asset_server: Res<AssetServer>) {
    // z is 0, so the grass tile will be drawn behind
    commands.spawn((
        SpriteBundle {
            transform: Transform::default(),
            texture: asset_server.load("sprites/grass.png"),
            ..default()
        },
        Tile,
    ));
    // z is 1, so the ball will be drawn in front of our grass tile
    commands.spawn((
        SpriteBundle {
            transform: Transform::from_xyz(0., 0., 1.),
            texture: asset_server.load("sprites/ball.png"),
            ..default()
        },
        Player,
    ));
}
This will show the ball.png sprite on top of the grass.png sprite.
Creating a sprite sheet
Sometimes we want sprites that are animated, or whose textures we can change at runtime. To do this we use a SpriteSheetBundle. This is the same as a SpriteBundle except that it also takes a TextureAtlas, which wants a TextureAtlasLayout.
A TextureAtlasLayout describes how to break up a sprite sheet, such as a tilemap, into individual textures. We provide the layout parameters once and can then use that layout to slice up any sprite sheet with the same dimensions.
To load a sprite sheet, a common pattern is to do the loading in a resource that implements FromWorld:
#[derive(Resource)]
struct PlayerSpriteSheet(Handle<TextureAtlasLayout>);

impl FromWorld for PlayerSpriteSheet {
    fn from_world(world: &mut World) -> Self {
        let texture_atlas = TextureAtlasLayout::from_grid(
            (24, 24).into(), // The size of each image
            7,               // The number of columns
            1,               // The number of rows
            None,            // Padding
            None,            // Offset
        );
        let mut texture_atlases = world
            .get_resource_mut::<Assets<TextureAtlasLayout>>()
            .unwrap();
        let texture_atlas_handle = texture_atlases.add(texture_atlas);
        Self(texture_atlas_handle)
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .init_resource::<PlayerSpriteSheet>()
        .run();
}
In this case we are creating a new texture atlas layout that we can apply to an image to get our sprite sheet. Our layout is initialized with:
- 7 columns
- 1 row
- a 24x24 pixel size for each frame of our sprite sheet
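What a grid layout computes can be sketched as plain arithmetic: each frame index maps to a pixel rectangle on the sheet. This is a sketch of the slicing idea, not Bevy's implementation:

```rust
// Sketch of grid slicing: given a frame index in a sprite sheet laid
// out in `columns`-wide rows of `tile_w` x `tile_h` pixel tiles,
// compute the pixel rectangle (min corner, max corner) of that frame.
fn grid_rect(
    index: u32,
    columns: u32,
    tile_w: u32,
    tile_h: u32,
) -> ((u32, u32), (u32, u32)) {
    let col = index % columns;
    let row = index / columns;
    let min = (col * tile_w, row * tile_h);
    let max = (min.0 + tile_w, min.1 + tile_h);
    (min, max)
}

fn main() {
    // Frame 3 of a 7-column, 1-row sheet of 24x24 tiles.
    let (min, max) = grid_rect(3, 7, 24, 24);
    println!("frame 3: min = {:?}, max = {:?}", min, max);
}
```

Changing the `index` of a TextureAtlas at runtime simply selects a different one of these rectangles, which is the basis of frame-by-frame animation.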
Then we can easily reference this layout and create sprite sheets in our other systems:
fn spawn_player_sprite(
    mut commands: Commands,
    sprite_atlas: Res<PlayerSpriteSheet>,
    asset_server: Res<AssetServer>,
) {
    let sprite: Handle<Image> = asset_server.load("animations/player.png");
    commands.spawn((
        SpriteBundle {
            texture: sprite.clone(),
            ..default()
        },
        TextureAtlas {
            layout: sprite_atlas.0.clone(),
            index: 0,
        },
    ));
}
A texture atlas splits up the image according to a Vec<Rect> whose entries give each texture's pixel position on the sprite sheet. The same layout we just created can be used on many images, which is convenient if you use many fixed-size sprite sheets.
Texture atlases
It's common to store many textures in a single file. A sprite sheet or tilemap can contain hundreds of textures laid out on a 2D grid. Bevy uses the concept of a TextureAtlas, which holds a TextureAtlasLayout and an index, to make working with these types of files much easier.
Each TextureAtlasLayout is also an Asset, like your images, meshes, etc. So adding one involves inserting it into the Assets<TextureAtlasLayout> collection and keeping a Handle<TextureAtlasLayout> to reference it.
Typically we store these references in a resource so we can easily reference them in our other systems. But there is nothing stopping you from storing it as a component on an entity.
Texture atlas builder
A TextureAtlas assumes you will give it a single image file that it splits up into many textures. Some workflows instead separate sprite sheets into many individual files. If we have many separate images and want to combine them into a single sprite sheet, we can use a TextureAtlasBuilder.
First, we need to load our textures early in our game. This could be done in a FromWorld implementation like the example above, but for variety's sake we will load them from a startup system:
// https://github.com/bevyengine/bevy/blob/release-0.14.2/examples/2d/texture_atlas.rs
#[derive(Resource, Default)]
struct RpgSpriteFolder(Handle<LoadedFolder>);

fn load_textures(mut commands: Commands, asset_server: Res<AssetServer>) {
    // load multiple, individual sprites from a folder
    let folder = RpgSpriteFolder(asset_server.load_folder("textures/rpg"));
    commands.insert_resource(folder);
}
This is going to load every file in ./assets/textures/rpg.
Later, after they have loaded, we can create our texture atlas:
fn create_texture_atlas(
    loaded_folders: Res<Assets<LoadedFolder>>,
    rpg_sprite_handles: Res<RpgSpriteFolder>,
    mut atlases: ResMut<Assets<TextureAtlasLayout>>,
    mut textures: ResMut<Assets<Image>>,
) {
    let mut builder = TextureAtlasBuilder::default();
    let folder = loaded_folders.get(&rpg_sprite_handles.0).unwrap();
    for handle in folder.handles.iter() {
        let id = handle.id().typed_unchecked::<Image>();
        let Some(texture) = textures.get(id) else {
            warn!("Texture not loaded: {:?}", handle.path().unwrap());
            continue;
        };
        builder.add_texture(Some(id), texture);
    }
    let (atlas, texture) = builder.build().unwrap();
    textures.add(texture);
    atlases.add(atlas);
}
For advanced use cases like font atlases or dynamic shadow maps there is also the DynamicTextureAtlasBuilder, which can be updated cheaply at runtime instead of being finalized. However, its use for animated sprite sheets is not encouraged.
Getting the bounding box of transformed sprites
The sprite's image dimensions may not match the physical size of the image on our screen. How a sprite renders is determined by its Transform scale, or by a custom_size if one is provided. This means that a sprite's image size on your monitor (the physical size) may not be the same as its logical size in our game world.
So to get the true logical size we should apply the same scaling to the original size of the image. An easy way is to use a Rect to represent it:
fn print_sprite_bounding_boxes(
    sprites: Query<(&Transform, &Handle<Image>), With<Sprite>>,
    assets: Res<Assets<Image>>,
) {
    for (transform, image_handle) in &sprites {
        let image_size = assets.get(image_handle).unwrap().size_f32();
        info!("image_dimensions: {:?}", image_size);
        info!("position: {:?}", transform.translation);
        info!("scale: {:?}", transform.scale);
        let scaled = image_size * transform.scale.truncate();
        info!("scaled_image_dimension: {:?}", scaled);
        let bounding_box =
            Rect::from_center_size(transform.translation.truncate(), scaled);
        info!("bounding_box: {:?}", bounding_box);
    }
}
The console output would be:
image_dimensions: Vec2(288.0, 288.0)
position: Vec3(0.0, 0.0, 0.0)
scale: Vec3(0.1, 0.1, 0.1)
scaled_image_dimension: Vec2(28.800001, 28.800001)
bounding_box: Rect {
    min: Vec2(-14.400001, -14.400001),
    max: Vec2(14.400001, 14.400001),
}
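Stripped of Bevy types, the math above is just a scale followed by a center-size rectangle. Here is a plain-Rust sketch that reproduces those numbers:

```rust
// Sketch of the bounding-box math, without Bevy types:
// image size * transform scale = scaled size, centered on the translation.
fn bounding_box(
    center: (f32, f32),
    image_size: (f32, f32),
    scale: (f32, f32),
) -> ((f32, f32), (f32, f32)) {
    let scaled = (image_size.0 * scale.0, image_size.1 * scale.1);
    let half = (scaled.0 / 2.0, scaled.1 / 2.0);
    (
        (center.0 - half.0, center.1 - half.1), // min corner
        (center.0 + half.0, center.1 + half.1), // max corner
    )
}

fn main() {
    // A 288x288 image at the origin, scaled down to 10%.
    let (min, max) = bounding_box((0.0, 0.0), (288.0, 288.0), (0.1, 0.1));
    println!("min = {:?}, max = {:?}", min, max);
}
```

Note the slight floating-point noise in the output (28.800001 rather than 28.8): 0.1 is not exactly representable as an f32, so the scaled size picks up a tiny rounding error.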
Clicking on our sprites
The code above returns the logical bounds of our sprite in our game world. However, if we wanted to click on the sprite we would run into a big issue: the mouse cursor position needs to be projected according to the view of our current Camera. In the past this had to be done manually by calculating normalized device coordinates, but recent versions of the Camera component provide methods that help us out:
fn cast_cursor_ray(
    windows: Query<&Window>,
    cameras: Query<(&Camera, &GlobalTransform)>,
) {
    let window = windows.single();
    let (camera, position) = cameras.single();
    // check if the cursor is inside the window and get its position
    // then, ask bevy to convert it into world coordinates, and truncate to discard Z
    if let Some(world_position) = window
        .cursor_position()
        .and_then(|cursor| camera.viewport_to_world(position, cursor))
        .map(|ray| ray.origin.truncate())
    {
        info!("World coords: {}/{}", world_position.x, world_position.y);
    }
}
A normalized device coordinate is a Vec3 that ranges over (-1, 1) in the x and y directions and (0, 1) along the z axis.
These normalized device coordinates let us cast a ray from the mouse into the game world, giving the logical position of our cursor regardless of how zoomed in or out our camera is.
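The first step of that projection, from a cursor position in window pixels to normalized device coordinates, can be sketched directly. This assumes the cursor origin is at the top-left of the window with y growing downward, which is the convention Bevy's cursor_position uses:

```rust
// Sketch: window cursor position (origin top-left, y down, in pixels)
// to normalized device coordinates (x and y in -1..1, y up).
fn cursor_to_ndc(cursor: (f32, f32), window: (f32, f32)) -> (f32, f32) {
    let ndc_x = (cursor.0 / window.0) * 2.0 - 1.0;
    // Flip y: window coordinates grow downward, NDC grows upward.
    let ndc_y = 1.0 - (cursor.1 / window.1) * 2.0;
    (ndc_x, ndc_y)
}

fn main() {
    // The center of an 800x600 window maps to the NDC origin.
    println!("{:?}", cursor_to_ndc((400.0, 300.0), (800.0, 600.0)));
    // The top-left corner maps to (-1, 1).
    println!("{:?}", cursor_to_ndc((0.0, 0.0), (800.0, 600.0)));
}
```

viewport_to_world does this conversion and then un-projects the NDC point through the camera's matrices, which is why we never need to write this by hand anymore.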
We could then check whether these coordinates fall within the Rect of our bounding box to register a click event and react to it in other systems.
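The hit test itself is a simple containment check. Here is a plain-Rust sketch, where min and max are the corners of a bounding box like the one computed earlier:

```rust
// Sketch of a point-in-rect hit test for click detection.
fn rect_contains(min: (f32, f32), max: (f32, f32), p: (f32, f32)) -> bool {
    p.0 >= min.0 && p.0 <= max.0 && p.1 >= min.1 && p.1 <= max.1
}

fn main() {
    // The bounding box of our scaled-down ball sprite from earlier.
    let (min, max) = ((-14.4, -14.4), (14.4, 14.4));
    // A click at the world origin hits the sprite...
    println!("hit: {}", rect_contains(min, max, (0.0, 0.0)));
    // ...while a click far to the right misses it.
    println!("hit: {}", rect_contains(min, max, (100.0, 0.0)));
}
```

Both the click position and the bounding box are in world coordinates here, which is exactly what makes the comparison valid regardless of camera zoom.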
How sprites are rendered
Sprites are rendered by loading our textures onto our GPU through wgpu to render them efficiently.
To do any post-processing we can set up a specific Camera that renders only our sprites and attach our extra processing to it.
When a sprite is consumed by the rendering system, your Sprite components are extracted into ExtractedSprite values before being handed to the rendering pipeline.
Finally, sprites are batched into entities containing a SpriteBatch component, which helps make rendering faster. This batching is sensitive to the z ordering mentioned earlier, which determines the rendering order.
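The interaction between z ordering and batching can be sketched in plain Rust: sprites are sorted by z, and runs of consecutive sprites that share a texture collapse into one batch. This is a hypothetical simplification, not Bevy's renderer:

```rust
// Sketch of z-sensitive batching: sort sprites by z, then each change
// of texture along the sorted order starts a new batch.
fn count_batches(mut sprites: Vec<(&str, f32)>) -> usize {
    sprites.sort_by(|a, b| a.1.total_cmp(&b.1));
    let mut batches = 0;
    let mut last_texture: Option<&str> = None;
    for (texture, _z) in sprites {
        if last_texture != Some(texture) {
            batches += 1; // texture changed: a new batch begins
        }
        last_texture = Some(texture);
    }
    batches
}

fn main() {
    // Interleaving textures along z forces extra batches...
    let interleaved = vec![("grass", 0.0), ("ball", 1.0), ("grass", 2.0)];
    println!("batches: {}", count_batches(interleaved));
    // ...while grouping same-texture sprites in z keeps batches small.
    let grouped = vec![("grass", 0.0), ("grass", 1.0), ("ball", 2.0)];
    println!("batches: {}", count_batches(grouped));
}
```

This is why tile layers that share a texture and a z value render cheaply, while alternating textures across z levels costs extra draw calls.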
Bevy will automatically update the visibility components (such as ViewVisibility) for sprites that are off screen, and the rendering system will not try to render them.
Pixel perfect rendering
By default, image sampling is smoothed, which makes edges blur slightly when sprites are scaled or sit off the pixel grid. This can cause "bleed" in your sprites, and lines may appear at the edges of your sprites where they shouldn't be.
When working on a mostly sprite-based game, you'll want to prevent blurry sprites by setting the ImagePlugin to use nearest-neighbor sampling:
fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(ImagePlugin::default_nearest()))
        .init_resource::<PlayerSpriteSheet>()
        .run();
}
The default is linear interpolation, which can leave the edges of sprites looking blurred. Nearest-neighbor sampling keeps hard edges and makes sprites look pixel perfect.
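The difference between the two sampling modes can be sketched on a one-dimensional row of pixels. This is a simplified sketch; real GPU samplers work in 2D with normalized texture coordinates:

```rust
// Sketch: sampling a 1D row of pixel values at a fractional coordinate.
// Nearest-neighbor snaps to the closest pixel; linear blends neighbors.
fn sample_nearest(pixels: &[f32], x: f32) -> f32 {
    pixels[(x.round() as usize).min(pixels.len() - 1)]
}

fn sample_linear(pixels: &[f32], x: f32) -> f32 {
    let i = x.floor() as usize;
    let j = (i + 1).min(pixels.len() - 1);
    let t = x - x.floor();
    pixels[i] * (1.0 - t) + pixels[j] * t // blend between the two pixels
}

fn main() {
    // A hard edge between a black pixel (0.0) and a white pixel (1.0).
    let edge = [0.0, 1.0];
    // Sampling at a fractional position: nearest keeps one of the two
    // original values, while linear produces an in-between gray (blur).
    println!("nearest: {}", sample_nearest(&edge, 0.4));
    println!("linear: {}", sample_linear(&edge, 0.4));
}
```

The gray values that linear filtering produces at hard edges are exactly the "bleed" described above; nearest-neighbor never invents intermediate colors, so pixel art stays crisp.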