Audio is queued by adding components to our entities, which the AudioPlugin
then links up to an audio sink. An audio sink is typically a physical or virtual
device that receives audio data and produces sound.
Playing audio
We can trigger our sounds to play by spawning an AudioBundle
on any entity.
fn play_background_audio(
    asset_server: Res<AssetServer>,
    mut commands: Commands,
) {
    // Create an entity dedicated to playing our background music
    commands.spawn(AudioBundle {
        source: asset_server.load("background_audio.ogg"),
        settings: PlaybackSettings::LOOP,
    });
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, play_background_audio)
        .run();
}
Once the asset has loaded, the music will play in a loop until the entity we spawned is despawned or the component is removed.
The actual playing of this audio happens in a system added by the AudioPlugin. That system attaches an AudioSink component to the entity carrying the AudioBundle we just added, which we can use to control playback.
The source must be in one of the file formats supported by Bevy:
- wav
- ogg
- flac
- mp3
There are a few different playback settings that are built in:

| Setting | Description |
| --- | --- |
| `PlaybackSettings::ONCE` | Plays the associated audio only once |
| `PlaybackSettings::LOOP` | Loops the audio |
| `PlaybackSettings::DESPAWN` | Plays the audio once, then despawns the entity |
| `PlaybackSettings::REMOVE` | Plays the audio once, then removes the audio components from the entity |
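`PlaybackSettings::DESPAWN` is a convenient fit for one-shot sound effects, since the throwaway entity cleans itself up when playback finishes. A minimal sketch of that pattern (the `jump.ogg` asset name and the spacebar trigger are our own placeholder choices):

```rust
use bevy::prelude::*;

// Plays a hypothetical jump sound whenever space is pressed. Each
// press spawns a throwaway entity that despawns itself once the
// audio finishes, so no cleanup system is needed.
fn play_jump_sound(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    keyboard_input: Res<ButtonInput<KeyCode>>,
) {
    if keyboard_input.just_pressed(KeyCode::Space) {
        commands.spawn(AudioBundle {
            // Placeholder asset name
            source: asset_server.load("jump.ogg"),
            settings: PlaybackSettings::DESPAWN,
        });
    }
}
```

Added as an `Update` system, this spawns one short-lived audio entity per press rather than reusing a single sink.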
Controlling playback
To control the playback of our AudioBundle we can use the AudioSink component that the AudioPlugin added when we spawned our entity:
// Marker component assumed to be attached to the entity
// playing our background music
#[derive(Component)]
struct MusicBox;

fn volume_system(
    keyboard_input: Res<ButtonInput<KeyCode>>,
    music_box_query: Query<&AudioSink, With<MusicBox>>,
) {
    if let Ok(sink) = music_box_query.get_single() {
        if keyboard_input.just_pressed(KeyCode::Equal) {
            sink.set_volume(sink.volume() + 0.1);
        } else if keyboard_input.just_pressed(KeyCode::Minus) {
            sink.set_volume(sink.volume() - 0.1);
        }
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, play_background_audio)
        .add_systems(Update, volume_system)
        .run();
}
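One thing this system does not guard against is repeatedly pressing minus and driving the requested volume below zero. A small helper can clamp the stepped value before it reaches `set_volume` (this helper and its `0.0..=2.0` range are our own addition, not part of Bevy's API):

```rust
/// Step a volume by `delta`, clamped to a sensible range.
/// 1.0 is the unmodified volume; values above it amplify the source.
fn step_volume(current: f32, delta: f32) -> f32 {
    (current + delta).clamp(0.0, 2.0)
}
```

In the system above, the adjustment would then read `sink.set_volume(step_volume(sink.volume(), 0.1));`.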
Spatial audio
The examples above play a flat, unmodified sound for whatever source we feed our bundle.
To change our spatial audio settings globally, we can configure the audio plugin:
use bevy::audio::{SpatialScale, AudioPlugin};

// 100 pixels per audio unit
const AUDIO_SCALE: f32 = 1. / 100.;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(AudioPlugin {
            default_spatial_scale: SpatialScale::new_2d(AUDIO_SCALE),
            ..default()
        }))
        .run();
}
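The spatial scale multiplies emitter and listener translations to map world coordinates into audio space, so with a scale of `1. / 100.` a hundred pixels of separation becomes one audio unit of distance. A tiny illustration of that mapping (our own sketch of the concept, not Bevy's internal code):

```rust
const AUDIO_SCALE: f32 = 1. / 100.;

/// Map a 2D world-space translation (in pixels) into the coordinate
/// space that distance attenuation and panning are computed over.
fn to_audio_space(world: (f32, f32), scale: f32) -> (f32, f32) {
    (world.0 * scale, world.1 * scale)
}
```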
Then, to play our sounds, we can add a listener with a SpatialListener component and move it relative to whatever entity is emitting the sound:
// Marker component for our player entity
#[derive(Component)]
struct Player;

fn play_2d_spatial_audio(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
) {
    // Spawn our emitter. Spatial playback needs a transform on the
    // emitter and `spatial: true` in the playback settings.
    commands.spawn((
        Player,
        SpatialBundle::default(),
        AudioBundle {
            source: asset_server.load("flight_of_the_valkaries.ogg"),
            settings: PlaybackSettings::LOOP.with_spatial(true),
        },
    ));
    // Spawn our listener
    commands.spawn((
        SpatialListener::new(100.), // Gap between the ears
        SpatialBundle::default(),
    ));
}
This will spawn a player entity with the sound emitting from its position, so, for example, other players around it would hear the sound according to how far away from us they are.
Volume
There are two separate sources of volume for our apps:
- Global volume
- Audio sink volume
To change the global volume we modify the GlobalVolume
resource:
use bevy::audio::Volume;

fn change_global_volume(mut volume: ResMut<GlobalVolume>) {
    volume.volume = Volume::new(0.5);
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .insert_resource(GlobalVolume::new(0.2))
        .add_systems(Startup, change_global_volume)
        .run();
}
Then, for individual audio sinks, we can use their public interface within our systems to modify each sink's own volume:
fn volume_system(
    keyboard_input: Res<ButtonInput<KeyCode>>,
    music_box_query: Query<&AudioSink, With<MusicBox>>,
) {
    if let Ok(sink) = music_box_query.get_single() {
        if keyboard_input.just_pressed(KeyCode::Equal) {
            sink.set_volume(sink.volume() + 0.1);
        } else if keyboard_input.just_pressed(KeyCode::Minus) {
            sink.set_volume(sink.volume() - 0.1);
        }
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, volume_system)
        .run();
}
Internals
Internally, Bevy uses rodio to decode these sources.
The AudioBundle is made up of both a source and some settings which control the playback:
// https://github.com/bevyengine/bevy/blob/66f72dd25bb9e3f3d035f2f14dcbcd25674f968c/crates/bevy_audio/src/audio.rs#L240
pub type AudioBundle = AudioSourceBundle<AudioSource>;

// https://github.com/bevyengine/bevy/blob/66f72dd25bb9e3f3d035f2f14dcbcd25674f968c/crates/bevy_audio/src/audio.rs#L252
pub struct AudioSourceBundle<Source = AudioSource>
where
    Source: Asset + Decodable,
{
    // Asset containing the audio data to play.
    pub source: Handle<Source>,
    // Initial settings that the audio starts playing with.
    // If you would like to control the audio while it is playing,
    // query for the [`AudioSink`][crate::AudioSink] component.
    // Changes to this component will *not* be applied to already-playing audio.
    pub settings: PlaybackSettings,
}
The Decodable trait is what allows Bevy to convert the source file into a rodio-compatible rodio::Source type. Types that implement this trait hold raw sound data that is then converted into an iterator of samples.
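To give a feel for the shape of that trait, here is a paraphrased sketch (consult the bevy_audio source for the authoritative definition and bounds on your Bevy version):

```rust
// Paraphrased sketch of bevy_audio's Decodable trait, not verbatim.
pub trait Decodable: Send + Sync + 'static {
    // The type of a single audio sample.
    type DecoderItem: rodio::Sample + Send + Sync;
    // An iterator of samples that also implements rodio::Source,
    // which is what rodio ultimately plays back.
    type Decoder: rodio::Source + Send + Iterator<Item = Self::DecoderItem>;
    // Build a fresh decoder over the raw sound data.
    fn decoder(&self) -> Self::Decoder;
}
```

Implementing this (together with `Asset`) is how custom sources, such as procedurally generated tones, can flow through the same `AudioSourceBundle` machinery as file-based audio.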