Tainted Coders

Bevy ECS

ECS stands for Entity Component System and is a programming paradigm (like Model View Controller) where we store and access our data in a way that maximizes performance.

As with any programming paradigm we name and box things taxonomically:

  • Entities represent the “things” in your world, usually with a simple ID
  • Components represent the data the “things” in your world own
  • Systems enumerate the components and affect the rest of your program

This way of thinking is probably less intuitive than something like MVC programming, where we group the ideas of entities, components and systems together into one object. Breaking them apart means the programmer has to be more explicit, and takes them further away from their natural perception of reality.

So why bother? Performance.

Squeezing performance out of our CPU

To squeeze speed out of our computers we need to be careful about the way we store our data in memory. Our CPU is going to be more performant if we can get it to give us data from its cache instead of fetching it from RAM which is much more expensive.

The more we can keep our data in contiguous arrays, the better our CPU will be at using its cache, which avoids unnecessary trips to our computer's much slower main memory.

When I say “contiguous array” I mean every item in our array should be useful and sequential, so that when we use it the CPU cache can predict our accesses correctly. We are being more sympathetic to our CPU, and in exchange our CPU works harder to save us time accessing memory.

An array of [1, 0, 0, 0, 5, 0, 0, 7] could be contiguous if the 0 means something. But if we are using the 0 as a null value slot then our access patterns would not match what the cache expects.

The big idea in ECS is to store everything in contiguous arrays in a way that matches how we would use them in our game logic.

Game engines loop over our game logic, and each tick we want to do something with the “things” in our game. Usually that means modifying the data they hold, such as moving their positions.

In a classic object oriented game we might lay out our player like this:

struct Position {
    x: f32,
    y: f32,
}

struct Velocity {
    dx: f32,
    dy: f32,
}

struct Points(f32);

struct Player {
    points: Points,
    position: Position,
    velocity: Velocity,
}

fn main() {
    let mut players: Vec<Player> = vec![
        Player {
            points: Points(1.0),
            position: Position { x: 0.0, y: 0.0 },
            velocity: Velocity { dx: 1.0, dy: 1.0 },
        },
        Player {
            points: Points(1.0),
            position: Position { x: 0.0, y: 0.0 },
            velocity: Velocity { dx: 2.0, dy: 2.0 },
        },
        // More players...
    ];

    loop {
        // Iterate over all players and update their points and positions
        for player in players.iter_mut() {
            player.points.0 += 1.0;
            player.position.x += player.velocity.dx;
            player.position.y += player.velocity.dy;
        }
    }
}

This approach is quite concise and easy to read, but it takes a hit in terms of its performance.

Efficient memory layout

Our memory layout would look like:

Player 1: [Points1, Position1, Velocity1]
Player 2: [Points2, Position2, Velocity2]
Player 3: [Points3, Position3, Velocity3]

When your CPU fetches data from memory and puts it into the cache it does so in a fixed block called a “cache line”. Common cache line sizes range from 32 to 512 bytes, with 64 bytes being a prevalent choice in modern CPUs.

Your CPU will grab the whole cache line, even if only a portion of the data is actually needed. In doing so your CPU is guessing that things you store together in memory are likely to be accessed together (spatial locality).

So our cache lines might look like:

Cache Line 1: [Points1, Position1]
Cache Line 2: [Velocity1, Points2]
Cache Line 3: [Position2, Velocity2]
... etc

When you read a variable, it gets loaded into the cache, and when you access it again later your CPU will check its cache first. But if you loaded other data in the interim, it may have been evicted and will have to be fetched from memory again (a cache miss).

When we perform our game loop above and fetch the data for each player, we would be bouncing all over our cache trying to fetch the missing data, possibly evicting other cache lines and getting cache misses resulting in worse performance.

Because we are accessing entities which hold a lot of data and enumerating all of the data for each entity our cache is getting obliterated.

So what's the alternative?

Well, we could store our game data in “structures of arrays” (SoA) instead of our previous “array of structures” (AoS).

struct Entity {
    id: u32,
}

struct PositionComponent {
    x: f32,
    y: f32,
}

struct VelocityComponent {
    dx: f32,
    dy: f32,
}

struct World {
    entities: Vec<Entity>,
    positions: Vec<PositionComponent>,
    velocities: Vec<VelocityComponent>,
}

fn update_positions_and_velocities(world: &mut World) {
    for (position, velocity) in world.positions.iter_mut().zip(world.velocities.iter()) {
        position.x += velocity.dx;
        position.y += velocity.dy;
    }
}

Now when we load our world parameter, our memory is laid out like:

Velocities: [Velocity1, Velocity2, Velocity3]
Positions: [Position1, Position2, Position3]

Because we’ve stored arrays of the same type, each component array is contiguous in memory.

So when we enumerate these arrays our memory access patterns match what the CPU is predicting, and our cache lines are less likely to be thrashed. We access each item sequentially, which maximizes the efficiency of our cache.

Entities help us avoid passing references to our data

Okay, so by using the entities and components parts of our ECS we get better memory performance. But there is also the question of how we manage our references and pointers.

In our example above we got rid of our Player struct and it became implicit: the player is the index into each array. To rebuild the third “thing” in our game world, we just access the third item of each of the arrays.

This is a powerful abstraction because we can avoid passing around references to the data on our arrays. Instead we can pass around this index of our entities and when we want the data we can request it from one place.
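As a sketch of this idea (plain Rust, with hypothetical names; not Bevy's actual implementation), an entity can be nothing more than a shared index into the component arrays:

```rust
// A minimal sketch of entity-as-index. `Entity` is just a typed wrapper
// around an index that is valid for every component array in the world.
#[derive(Clone, Copy)]
struct Entity(usize);

struct Position { x: f32, y: f32 }
struct Velocity { dx: f32, dy: f32 }

struct World {
    positions: Vec<Position>,
    velocities: Vec<Velocity>,
}

impl World {
    // Spawning pushes one element onto every array and returns the shared index.
    fn spawn(&mut self, position: Position, velocity: Velocity) -> Entity {
        self.positions.push(position);
        self.velocities.push(velocity);
        Entity(self.positions.len() - 1)
    }

    // Rebuilding the "thing" is just indexing each array with the same entity.
    fn position(&self, entity: Entity) -> &Position {
        &self.positions[entity.0]
    }
}

fn main() {
    let mut world = World { positions: Vec::new(), velocities: Vec::new() };
    let player = world.spawn(
        Position { x: 0.0, y: 0.0 },
        Velocity { dx: 1.0, dy: 1.0 },
    );
    // We pass around the lightweight `Entity` instead of references into the arrays.
    println!("x = {}", world.position(player).x);
}
```

Copying an `Entity` is trivially cheap, so it can be handed to any system without borrowing the underlying storage.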

By localizing our memory access within our systems we can perform disjoint queries of our data in parallel with each other for even more performance gains.

Archetypes help combinations of components stay together in memory

Our systems typically iterate over entities based on the groups of components they have. However, those components can be scattered across different arrays or structures of arrays.

To get around this some ECS frameworks (Bevy included) introduce archetypes.

Archetypes address this inefficiency by organizing entities with similar component compositions into one table. Each archetype owns a table which represents a specific combination of components. Entities that share the same component composition are placed into the table of that archetype.

Archetypes ensure that entities with the same component composition are stored in contiguous memory locations. This allows systems to access the necessary components in a sequential and cache-friendly manner.

By grouping entities with similar component compositions, archetypes eliminate redundancy in component storage. Each archetype has its own set of component arrays, and entities within the same archetype share the same array instances. This reduces memory overhead compared to storing components for each entity individually.

Archetypes also enable batch processing of entities with similar component compositions. Systems can process multiple entities within the same archetype simultaneously, taking advantage of data parallelism. This can be beneficial for operations such as updating positions, applying physics, or performing AI calculations.
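A rough sketch of the idea (plain Rust, with hypothetical names; real archetype storage is considerably more involved): the world keeps one table per unique component combination, and each table stores its entities' components in parallel columns:

```rust
use std::collections::HashMap;

// A minimal sketch of archetype-style storage. Each archetype is keyed by
// the set of component names it contains and owns one contiguous column
// per component type.
#[derive(Default)]
struct Table {
    entities: Vec<u32>,
    positions: Vec<(f32, f32)>, // column for Position
    healths: Vec<f32>,          // column for Health
}

fn main() {
    let mut archetypes: HashMap<Vec<&'static str>, Table> = HashMap::new();

    // Entities 0 and 1 share the same composition, so they land in one table
    // and their components stay contiguous in memory.
    let table = archetypes.entry(vec!["Health", "Position"]).or_default();
    table.entities.push(0);
    table.positions.push((0.0, 0.0));
    table.healths.push(100.0);
    table.entities.push(1);
    table.positions.push((5.0, 5.0));
    table.healths.push(80.0);

    // A system that wants (Position, Health) iterates this one table sequentially.
    for (pos, hp) in table.positions.iter().zip(table.healths.iter()) {
        println!("pos = {:?}, hp = {}", pos, hp);
    }
}
```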

So that explains why Bevy chose to use an archetypal ECS as the core of its framework. Let's see how it actually works in Bevy specifically:

ECS with Bevy

Bevy is an archetypal ECS built with Rust. It uses a combination of entities, components and systems to build up your game logic in a way that is both expressible and more performant than other programming paradigms.


Entities

Entities hold unique identifiers. Components are associated with these unique IDs. We can think of an entity as the primary key of a row in a traditional SQL database.
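As a sketch (assuming a recent Bevy version), spawning hands back the Entity id, which we can store and use to look the "thing" up later:

```rust
use bevy::prelude::*;

// Spawning returns an `Entity`: a lightweight, copyable id.
fn spawn_player(mut commands: Commands) {
    let entity: Entity = commands.spawn_empty().id();
    info!("spawned {:?}", entity);
}
```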

Read more about entities.


Components

Components are your columns. They are associated with a particular Entity. Each component type holds only a small amount of data, and entities are composed of many of these components.

The benefit of separating the identity of our game world objects from the data they hold is that we can query for only the components we need in each system.

If two systems need different data they might be able to run in parallel with each other which can lead to gains above what we discussed before.

In Bevy, components are Rust structs stored in a World and attached to an Entity.
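For example (a sketch assuming a recent Bevy version; the component names are illustrative), defining and attaching components looks like:

```rust
use bevy::prelude::*;

// A component is a plain Rust struct with a derive.
#[derive(Component)]
struct Position {
    x: f32,
    y: f32,
}

// Marker components carry no data at all.
#[derive(Component)]
struct Player;

fn spawn(mut commands: Commands) {
    // A tuple of components attaches all of them to one new entity.
    commands.spawn((Player, Position { x: 0.0, y: 0.0 }));
}
```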

Read more about components.


Systems

Systems are how we affect the world. Each system declares which components, or groups of components, it needs to run, and the App provides those specific components each game tick.

In Bevy these are plain Rust functions, and could even be closures (anonymous functions, lambdas).

Examples: move system, damage system

By default systems run in parallel with each other and their order is non-deterministic.
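A sketch of a typical system (assuming a recent Bevy version; the component names are illustrative): the function signature declares the data it needs via a Query, and Bevy can schedule it in parallel with systems touching disjoint data:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Position { x: f32, y: f32 }

#[derive(Component)]
struct Velocity { dx: f32, dy: f32 }

// A system is just a function; its parameters declare what it reads and writes.
fn movement(mut query: Query<(&mut Position, &Velocity)>) {
    for (mut position, velocity) in query.iter_mut() {
        position.x += velocity.dx;
        position.y += velocity.dy;
    }
}
```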

Read more about systems.


Apps

Apps are similar to Rack from the Ruby ecosystem. They schedule middleware to run at various points in the game loop, similar to an HTTP request/response cycle.

They can be used to add systems, resources, states and other things to our core game loop.

fn main() {
    App::new()
        .add_systems(Startup, startup_system)
        .add_systems(Update, normal_system)
        .run();
}

Read more about apps.


Worlds

Entities, components and resources are stored in this container.

We can think of a World as similar to the env hash from Rack. It's the collection of storage which is used to CRUD its state.

A World reference is passed to functions which use its data structure to fetch and persist entities and components.

Read more about worlds.


Bundles

To make it more ergonomic to spawn entities with particular components, we can spawn a group of components at once using a Bundle.

To make a Bundle we implement the Bundle trait which allows for insertion or removal of components.

Every type which implements Component also implements Bundle.

A tuple of bundles can also itself be a bundle with some clever macros that Bevy uses. However this is limited to a max length of 15, which can be extended by using a tuple of nested bundles.
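As a sketch (assuming a recent Bevy version; PlayerBundle and its fields are illustrative names), a custom bundle is a struct of components with a derive:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Position { x: f32, y: f32 }

#[derive(Component)]
struct Velocity { dx: f32, dy: f32 }

// A bundle groups components that are usually spawned together.
#[derive(Bundle)]
struct PlayerBundle {
    position: Position,
    velocity: Velocity,
}

fn spawn_player(mut commands: Commands) {
    commands.spawn(PlayerBundle {
        position: Position { x: 0.0, y: 0.0 },
        velocity: Velocity { dx: 1.0, dy: 1.0 },
    });
}
```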

Read more about bundles.


Archetypes

A group of entities that all share the same components. Each World has one archetype for each unique combination of components it contains.

Archetypes are locally unique to the World they are in.

  • Archetype with ID 0 is EMPTY
  • Archetype with ID u32::MAX is INVALID

Archetypes and bundles form a graph. Adding or removing a bundle moves an Entity to a new Archetype. Edges are used to cache the results of these moves.

Read more about archetypes.


Resources

A singleton piece of data with no corresponding Entity.

Examples: asset storage, events, system state

A counter would be an example, something that counts but is unrelated to any specific entity.

Only one resource of each type can be stored in a World at any given time.

There are also non-send resources, which can only be accessed on the main thread.

Each resource type is identified uniquely by its TypeId.
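A sketch of defining and using a resource (assuming a recent Bevy version; Score is an illustrative name):

```rust
use bevy::prelude::*;

// A resource is singleton, global data identified by its type.
#[derive(Resource)]
struct Score(u32);

// Systems access resources through Res / ResMut parameters.
fn increment_score(mut score: ResMut<Score>) {
    score.0 += 1;
}

fn main() {
    App::new()
        .insert_resource(Score(0))
        .add_systems(Update, increment_score)
        .run();
}
```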


Schedules

Schedules are the executors of systems. They embody the rules for parallel and/or ordered serial execution.

Query filters

Query filters are specified by type parameters on a system's Query arguments.
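For example (a sketch assuming a recent Bevy version; component names are illustrative), With narrows a query to entities carrying a component without fetching its data:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Position { x: f32, y: f32 }

#[derive(Component)]
struct Player;

// `With<Player>` filters to entities that have the Player component,
// without actually reading Player's (empty) data.
fn player_positions(query: Query<&Position, With<Player>>) {
    for position in query.iter() {
        info!("player at ({}, {})", position.x, position.y);
    }
}
```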

Change detection

Kind of like in Rails, you can ask components and resources whether they have changed:

fn my_system(resource: Res<MyResource>) {
    if resource.is_changed() {
        println!("My resource was mutated!");
    }
}

Normally change detection is triggered by either DerefMut or AsMut. It can also be triggered manually via set_changed, or avoided for values that compare equal via set_if_neq.

Both DerefMut and AsMut will update the change tick and trigger the change handling methods.


Commands

Command buffers give us the ability to queue up changes to our World without directly accessing it. This is important for thread-safe parallel execution.

An alternative to using commands would be an exclusive system, which blocks parallel execution. That way our changes apply immediately, and we don't have to worry about their ordering relative to other systems.
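A sketch of commands in use (assuming a recent Bevy version; Enemy is an illustrative name): structural changes are queued and applied later, when the command buffer is flushed:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Enemy;

// Despawns are queued through Commands rather than applied immediately,
// so this system can still run in parallel with others.
fn cull_enemies(mut commands: Commands, query: Query<Entity, With<Enemy>>) {
    for entity in query.iter() {
        commands.entity(entity).despawn();
    }
}
```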

Read more about commands.


Events

Message bus style event store that can be accessed with EventReader and EventWriter.

EventWriter will push events to a queue to be consumed by EventReaders.

EventReader will consume events from a queue, ensuring that a system reading the events only ever consumes each event once.
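A sketch of the two halves (assuming a recent Bevy version; the Damage event is an illustrative name):

```rust
use bevy::prelude::*;

#[derive(Event)]
struct Damage { amount: u32 }

// Writers push events onto a queue...
fn attack(mut writer: EventWriter<Damage>) {
    writer.send(Damage { amount: 5 });
}

// ...and each reader tracks its own cursor, so it sees every event once.
fn apply_damage(mut reader: EventReader<Damage>) {
    for damage in reader.read() {
        info!("took {} damage", damage.amount);
    }
}
```

The event type also has to be registered on the App, e.g. with add_event::&lt;Damage&gt;().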

Read more about events.


Diagnostics

Used to record information that should be passed to LogDiagnosticsPlugin. Built-in diagnostics are available under bevy::diagnostic::*.

You can create your own custom diagnostics:

// All diagnostics should have a unique DiagnosticId.
// For each new diagnostic, generate a new random number.
pub const SYSTEM_ITERATION_COUNT: DiagnosticId =
    DiagnosticId::from_u128(0); // placeholder: use a freshly generated random u128

fn setup_diagnostic_system(mut diagnostics: ResMut<Diagnostics>) {
    // Diagnostics must be initialized before measurements can be added.
    // In general it's a good idea to set them up in a "startup system".
    diagnostics.add(Diagnostic::new(
        SYSTEM_ITERATION_COUNT,
        "system_iteration_count",
        10,
    ));
}

fn my_system(mut diagnostics: ResMut<Diagnostics>) {
    // Add a measurement of 10.0 for our diagnostic each time this system runs.
    diagnostics.add_measurement(SYSTEM_ITERATION_COUNT, || 10.0);
}

Read more about diagnostics.