Tainted Coders

Bevy Events

Bevy version: 0.16

Events let us communicate between systems. We can write events in one system and read them in another, which lets us decouple what happened from what should happen.

These events can be sent to one (or both) of these places:

  1. Event streams for communication between systems
  2. Observers for triggering immediate behavior

Each event type we add to our App is registered in the EventRegistry, which manages a separate Events<T> stream for each event type.

Each event is tracked individually by its EventId. These IDs auto-increment in the order the events were sent.

EventReader and EventWriter are the system parameters used to consume these events each tick.

The event streams

An event stream is an Events<T> resource, which is basically a wrapper around a pair of Vec<T> buffers with some convenient accessors.

The EventReader system parameter tracks whether your system has read an event. Each reader contains a Local<EventCursor> that holds the system's progress in reading from the Events<T> resource.

Streams are double buffered

Events<T> is a collection that acts as a double buffered queue.

Double buffering ensures each system has an opportunity to see every event, freeing systems from having to care about the exact ordering within a frame.

To illustrate this double buffering, imagine a game with a system that publishes a PlayerDetected event whenever the player is spotted.

fn main() {
  App::new()
    .add_plugins(DefaultPlugins)
    .add_systems(
      Update,
      (
        on_player_detected,
        detect_player.after(on_player_detected)
      )
    )
  .run();
}

When you start your game both buffers start out empty:

Buffer A (current): []
Buffer B (previous): []

Then we publish an event from detect_player using an EventWriter, it pushes it to the buffer:

Buffer A (current): [PlayerDetected]
Buffer B (previous): []

Unfortunately, detect_player runs after our reading system on_player_detected. If we only had one buffer and wiped it at the end of the frame, on_player_detected would never get to read the published event.

Instead, at the end of this game tick the buffers swap: the oldest buffer (Buffer B) is cleared and becomes our current write buffer.

Buffer A (previous): [PlayerDetected]
Buffer B (current): []

In our example, the on_player_detected system can still read the event from Buffer A, which holds the previous frame's events.

Even if more events from other systems were published, Buffer A would still hold the last frame's event.

Buffer A (previous): [PlayerDetected]
Buffer B (current): [PlayerDetected]

This lets the systems earlier in the frame catch up by reading the last frame's events.

Finally, at the end of the next tick, the oldest buffer (Buffer A) is cleared and set as the current buffer to write events to, leaving our buffers looking like:

Buffer A (current): []
Buffer B (previous): [PlayerDetected]

In this way our systems have access to both this frame's and the last frame's events, so it shouldn't matter whether a system is ordered before or after the one that writes the event.
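The buffer swap above can be sketched in plain Rust. This is a simplified stand-in for Events<T>, not Bevy's actual implementation:

```rust
// A minimal double-buffered event queue, mimicking how Events<T>
// keeps both this frame's and last frame's events readable.
struct DoubleBuffered<T> {
    current: Vec<T>,  // events written this frame
    previous: Vec<T>, // events written last frame
}

impl<T> DoubleBuffered<T> {
    fn new() -> Self {
        Self { current: Vec::new(), previous: Vec::new() }
    }

    // Writers push into the current buffer.
    fn write(&mut self, event: T) {
        self.current.push(event);
    }

    // Readers can see events from both frames.
    fn read(&self) -> impl Iterator<Item = &T> + '_ {
        self.previous.iter().chain(self.current.iter())
    }

    // Called once per tick: drop the oldest events and swap buffers,
    // like Events<T>::update does.
    fn update(&mut self) {
        std::mem::swap(&mut self.previous, &mut self.current);
        self.current.clear();
    }
}

fn main() {
    let mut events = DoubleBuffered::new();
    events.write("PlayerDetected");

    // Still readable after one update (the next frame)...
    events.update();
    assert_eq!(events.read().count(), 1);

    // ...but dropped after two updates, like unread Bevy events.
    events.update();
    assert_eq!(events.read().count(), 0);
}
```

After one update the event is still readable; after two it is gone, which is why unread events must be consumed within a frame of being sent.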

Adding events

Events are defined just like our resources and components.

We define events as a type that derives Event:

// A simple marker event
#[derive(Event)]
struct PlayerKilled;

// A tuple struct event
#[derive(Event)]
struct PlayerDetected(Entity);

// An event with fields
#[derive(Event)]
struct PlayerDamaged {
  entity: Entity,
  damage: f32,
}

Then we add our event to our App, similar to how we manage our assets:

fn main() {
  App::new()
    .add_event::<PlayerKilled>();
}

When we add_event, Bevy adds a system for handling that specific type: event_update_system.

This system runs each frame, cleaning up unconsumed events by calling Events<T>::update. If this function were never called, our events would grow unbounded, eventually exhausting memory.

This also means that if your events are not consumed within the next frame, they will be cleaned up and dropped silently.

Writing events to the stream

Events are written to a double buffered queue. This just means that events produced are stored for two frames.

This prevents a situation where a system scheduled earlier in the frame misses the event, which would otherwise be common because Bevy runs our systems in parallel with no guaranteed order.

For example, if you defined a system to run only conditionally, it's possible to miss events during the frames it was not called.

To write events to the stream we use an EventWriter. Any two systems that use the same event writer type will not run in parallel, since both need mutable access to Events<T>.

fn detect_player(
  mut events: EventWriter<PlayerDetected>,
  players: Query<(Entity, &Transform), With<Player>>,
) {
  for (entity, transform) in players.iter() {
    // ...
    events.write(PlayerDetected(entity));
  }
}

Each EventWriter can only write events for one type that is known at compile time. There may be times when you don't know this type; as a workaround you can send type-erased events through your Commands:

commands.queue(|w: &mut World| {
  w.send_event(MyEvent);
});

Reading events from the stream

This double buffering strategy means that we must consume our events steadily each frame or risk losing them. We can read events from our systems with an EventReader that consumes events from our buffers:

fn react_to_detection(
  mut events: EventReader<PlayerDetected>
) {
  for event in events.read() {
    // Do something with each event here
  }
}

An EventReader system parameter tracks the consumption of these events on a per-system basis using a Local<EventCursor>, which guarantees each system an opportunity to read each event exactly once.
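The cursor mechanism can be sketched in plain Rust. These are hypothetical simplified types; Bevy's real EventCursor also has to account for the double buffers:

```rust
// A simplified event buffer; each reader keeps its own cursor,
// like the Local<EventCursor> inside an EventReader.
struct EventBuffer<T> {
    events: Vec<T>,
}

struct Cursor {
    read: usize, // how many events this reader has already seen
}

impl<T: Clone> EventBuffer<T> {
    fn new() -> Self {
        Self { events: Vec::new() }
    }

    fn write(&mut self, event: T) {
        self.events.push(event);
    }

    // Hand the reader every event it has not seen yet, then advance
    // its cursor so a second read returns nothing new.
    fn read(&self, cursor: &mut Cursor) -> Vec<T> {
        let unseen = self.events[cursor.read.min(self.events.len())..].to_vec();
        cursor.read = self.events.len();
        unseen
    }
}

fn main() {
    let mut buffer = EventBuffer::new();
    let mut reader_a = Cursor { read: 0 };
    let mut reader_b = Cursor { read: 0 };

    buffer.write("PlayerDetected");

    // Each reader independently sees the event exactly once.
    assert_eq!(buffer.read(&mut reader_a).len(), 1);
    assert_eq!(buffer.read(&mut reader_a).len(), 0);
    assert_eq!(buffer.read(&mut reader_b).len(), 1);
}
```

Because each cursor is per-reader state, one system consuming an event never hides it from another system.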

If you have many different types of events you want handled the same way you can use a generic system and an Events resource:

fn handle_event<T: Event>(
  mut events: ResMut<Events<T>>
) {
  // We can clear events this frame
  events.clear();

  // Or clear events next frame (bevy default)
  events.update();

  // Or consume our events right here and now
  for event in events.drain() {
    // ...
  }
}

fn main() {
  App::new()
    .init_resource::<Events<PlayerKilled>>()
    .add_systems(Update, handle_event::<PlayerKilled>)
    .run();
}

So we can say that the Events<T> resource represents a collection of all events of that type that occurred over the last two update calls.

Observers

An Observer is a callback system that listens for a Trigger. Each trigger is for a specific event type.

There are two types of observers:

  1. Broadcast observers
  2. Entity observers

It is important to note that when you send events using an EventWriter, they do not trigger our observers. We have to trigger them manually, usually using Commands:

// Triggering a broadcast observer
commands.trigger(SomeEvent);

// Triggering an entity observer
commands.entity(the_entity).trigger(SomeEvent);

This is not the same as writing an event to an event stream with an EventWriter. These events are sent directly to the observer and handled immediately, not stored in your Events<T> collection.

Bevy has built-in triggers for each part of the component lifecycle:

Type       Description
OnAdd      Triggers when a component is added to an entity that did not already have it
OnInsert   Triggers whenever a component is inserted, whether new or replacing an existing value
OnRemove   Triggers when a component is removed
OnReplace  Triggers when a component's value is replaced because the entity already had that component

Broadcast observers

If we want a broadcast observer that listens to events globally we can add the observer to our App definition:

#[derive(Component, Debug)]
struct Position(Vec2);

#[derive(Component)]
struct Enemy;

fn on_respawn(
  trigger: Trigger<OnAdd, Enemy>,
  query: Query<(&Enemy, &Position)>,
) {
  let (enemy, position) = query.get(trigger.target()).unwrap();
  println!("Enemy was respawned at {:?}", position);
}

fn main() {
  App::new()
    .add_plugins(DefaultPlugins)
    .add_observer(on_respawn)
    .run();
}

Any time an Enemy component is added, our on_respawn system will fire. Observer-based systems must have a Trigger as their first argument.

Entity observers

If we want more fine-grained control, we can use an entity observer to react to events that are triggered on a specific entity:

#[derive(Component)]
struct Boss;

#[derive(Event)]
struct BossSpawned;

fn on_boss_spawned(
  trigger: Trigger<BossSpawned>,
  query: Query<(&Enemy, &Position)>,
) {
  if let Ok((enemy, position)) = query.get(trigger.target()) {
    println!("Boss was spawned at {:?}", position);
  }
}

fn spawn_boss(mut commands: Commands) {
  let boss = commands
    .spawn((Enemy, Boss, Position(Vec2::ZERO)))
    .observe(on_boss_spawned)
    .id();

  // Trigger on the entity itself, so its entity observer fires
  commands.entity(boss).trigger(BossSpawned);
}

Observers added this way are created as an EntityObserver, which uses component hooks to invoke our system only for that specific entity.

These entity events can bubble up a hierarchy through ChildOf components.

When events are propagated they are re-sent to their next target while keeping track of where they started through original_target. This propagation continues until the chain reaches a dead-end, or the observer handling the propagation manually stops it.
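Propagation can be sketched in plain Rust as a walk up a child-to-parent map. The entity ids and parent map here are hypothetical, not Bevy's actual traversal code:

```rust
use std::collections::HashMap;

// Stand-in for Bevy's Entity ids.
type Entity = u32;

// Walk up a child -> parent map (like ChildOf), collecting every
// entity the event is re-sent to, until the chain dead-ends.
fn propagate(parents: &HashMap<Entity, Entity>, start: Entity) -> Vec<Entity> {
    let original_target = start; // remembered across re-sends
    let mut targets = vec![original_target];
    let mut current = start;
    while let Some(&parent) = parents.get(&current) {
        targets.push(parent);
        current = parent;
    }
    targets
}

fn main() {
    // player (2) is a child of ship (1); the ship has no parent.
    let mut parents = HashMap::new();
    parents.insert(2, 1);

    // Triggering on the player reaches the player first, then the ship.
    assert_eq!(propagate(&parents, 2), vec![2, 1]);

    // Triggering on the ship dead-ends immediately.
    assert_eq!(propagate(&parents, 1), vec![1]);
}
```

A real observer can also stop this chain early, which the sketch omits.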

Triggers

The generic arguments for Trigger can be somewhat confusing:

pub struct Trigger<'w, E, B: Bundle = ()> {
  // ...
}

The first generic argument E is the event.

The second generic argument B is optional. A good mental model is to think that the second generic argument is only for Bevy's built-in triggers. Any observers you create will only take one.

You should be aware that when using Bevy's built-in trigger events, the second generic B can be a bundle of components, and it acts as a filter over those components using OR, not AND, logic.

So this trigger:

fn on_respawn(
  trigger: Trigger<OnAdd, (Enemy, Person)>,
  // ...
)

Is going to trigger when an Enemy OR a Person is added.

Event propagation

You can set events to automatically propagate themselves according to a relationship.

#[derive(Event)]
#[event(auto_propagate, traversal = &'static ChildOf)]
struct LocationTravelled;

This means that a LocationTravelled event triggered on a specific entity would be automatically triggered again on its parent if the entity has a ChildOf(Entity) component.

#[derive(Component, Default)]
struct Ship;

#[derive(Component)]
struct Player;

fn spawn_player(mut commands: Commands) {
  let ship = commands.spawn(Ship).id();
  let player = commands.spawn((Player, ChildOf(ship))).id();

  // Trigger the event on the player; it will propagate up to the ship.
  commands.entity(player).trigger(LocationTravelled);
}

In this example, even though we triggered the event on the Player, an additional event for the Ship would also be sent to any relevant observers, because ChildOf traversal bubbles events from child to parent.

Choosing events vs observers

Observers can be easier to reason about if we care about the effects of our events happening within a single frame.

Observers are processed when a key (the Trigger) is used to lookup a set of values (the observing systems) which are iterated in an arbitrary order and handled immediately.

Another good use case for observers is handling events from only certain entities and not others of the same type, since observers can be triggered per entity.

On the other hand, systems that read Events<T> can be put in a specific order. Observers in 0.16 cannot be explicitly ordered. This becomes a problem if your observers depend on other observers having run before them.

Events also help you decouple systems from each other. We can send an event and not have to control who exactly consumes our event. For observers, the producers of the event need to explicitly know who to trigger the event for.

This table shows the key differences:

                         Observers                    Events
Optimal event frequency  Infrequent                   Frequent
Handler                  Handles a single event       Can handle many events together
Latency                  Immediate                    Up to 1 frame
Event propagation        Bubbling                     None
Scope                    World or Entity              World
Ordering                 No explicit order            Ordered
Coupling                 High                         Low

There can be multiple ways of defining the same behavior between the two so it is more up to your game's specific constraints.

Examples of good use cases for observers:

  1. Reacting to component lifecycle events like OnAdd or OnRemove
  2. Entity-specific reactions that only some entities should handle
  3. Infrequent events that must be handled immediately, within the same frame

Examples of good use cases for events:

  1. High-frequency gameplay messages that can be handled together in batches
  2. Workflows where consuming systems must run in a specific order
  3. Decoupling event producers from many independent consumers