The Next-Gen Dynamic Sound System
of Killzone Shadow Fall
Take Away
• Promote experimentation
• Own your tools to allow it to happen
• Reduce “Idea to Game” time as much as possible
Take Away
• If a programmer is in the creative loop, creativity suffers
• Don't be afraid to give designers a visual programming tool
• Generating code from assets is easy and powerful
Andreas Varga, Senior Tech Programmer
Anton Woldhek, Senior Sound Designer
Who are Andreas & Anton? Why would you listen to them? Well, we worked on sound for these games…
Andreas Varga: Killzone Shadow Fall, Killzone 3, Killzone 2, Manhunt 2, Max Payne 2: The Fall of Max Payne
Anton Woldhek: Killzone Shadow Fall, Killzone 3, Killzone 2 (creator of the Game Audio Podcast)
We work at Guerrilla, which is one of Sony’s 1st party studios. Here’s a view of our sound studios
Killzone 2 Killzone 3
After Killzone 3, we started working on Killzone Shadow Fall, which was a launch title for the PS4. This was planned as a next-gen title from the beginning.
Here's a short video showing how the game looks and sounds.
Andreas: But what exactly does next-gen mean?
We thought next-gen sound (initially) wouldn't be about fancy new DSP but about faster iteration and less optimization. To allow the creation of more flexible sounds that truly can become a part of the game design. We also wanted to make sure that the sound team works with the same basic workflow as the rest of the company. So our motivation was to build a cutting-edge sound system and tool set, but don't build a new synth.
What is Next-Gen Audio?
• Emancipating sound, the team and the asset
• Instantly hearing work in game, never reload
• Extendable system, reusable components
• High run-time performance
• Don't worry about the synth
Anton
Why Emancipate?
• Audio is separate from rest
• Physical: sound proof rooms
• Mental: closed off sound engine and tools
• Should actively pursue integration
Anton
Why Integrate?
• Gain a community of end users that:
• Can help with complex setup
• Teach you new tricks
• Find bugs
• Benefit mutually from improvements
Anton
Motivation for Change
Previous generation very risk averse, failing was not possible.
Charts comparing the number of ideas we had, ideas we implemented, and ideas we shipped with, for KZ2/KZ3 versus KZ SF.
We could experiment much more! More of the ideas shipped in the game! No bad ideas!
Old Workflow
Nuendo → export → WAV Assets → import → 3rd Party Sound Tool → reload → Play Game
Wasted Time at each step: ~10s, 5s…30s, ~2mins
New Workflow
Nuendo → export → WAV Assets → hot-load → Running Game
Sound Tool → hot-load → Running Game
Wasted Time: ~10s
Video of iterating on a single sound by re-exporting its wav files from Nuendo and syncing the changes with the running game.
No More Auditioning Tool
• Game is the inspiration
• Get sound into the game quickly
• No interruptions
• Why audition anywhere else?
How to Emancipate?
• Make sound engine and tools in-house
• Allows deep integration
• Immediate changes, based on designer feedback
• No arbitrary technical limitations
Andreas
Extendable, Reusable and High Performance?
April 11, 2011
Max/MSP, Pure Data
Anton: Sound designers are already used to data flow environments such as Max/MSP and Pure Data.
Andreas: But while that's one good way of thinking about our system, it's also a bit different from them. Our system doesn't work at the sample level or audio rate, i.e. we're not using our graphs for synthesis, but just for playback of already existing waveform data.
⚠️ Technical Details Ahead! ⚠️
• We will show code
• But you don’t have to be afraid
• Sound designers will never have to look at it
Graph Sound System
• Generate native code from data flow
• Execute resulting program every 5.333ms (187Hz)
• Used to play samples and modulate parameters
• NOT running actual synthesis
• Dynamic behavior
Sound graph: Input X and Input Y flow through Node A, Node B and Node C to Output Z; each node is backed by a file (A.cpp, B.cpp, C.cpp), combined into Sound.cpp.
The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.

Sound.cpp:

A(inA, outA) { … }
B(inB, outB) { … }
C(inX, inY, outZ) { … }

Sound(X, Y, Z)
{
    A(X, outA);
    B(Y, outB);
    C(outA, outB, Z);
}
Sound.cpp → compile → Sound.o (likewise Sound2.cpp, Sound3.cpp, Sound4.cpp → Sound2.o, Sound3.o, Sound4.o) → link → Dynamic Library (PRX) → hot-load → Execute Sound() in Game
So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
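As an illustration of the hot-load step, here is a minimal sketch using a plain POSIX dynamic library in place of the PS4's PRX loader; the library name ./libsounds.so and exporting the generated Sound() with C linkage are assumptions for this sketch, not Guerrilla's actual pipeline:

// Minimal sketch of the hot-load idea, using POSIX dlopen() in place
// of the PS4's PRX module loader. Assumes the pipeline just linked the
// generated graphs into ./libsounds.so (hypothetical name) and that
// Sound() is exported with C linkage.
#include <dlfcn.h>
#include <stdio.h>

typedef void (*SoundFn)(bool on_start, bool on_stop);

int main()
{
    void* lib = dlopen("./libsounds.so", RTLD_NOW);
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    // Look up the generated graph function and execute it, as the
    // engine would once per synth frame.
    SoundFn sound = (SoundFn)dlsym(lib, "Sound");
    if (sound)
        sound(true, false); // trigger the on_start event

    dlclose(lib); // unload before hot-loading a newer build
    return 0;
}

Swapping in a freshly linked library while sounds are playing also means retiring voices that reference the old code; the talk does not go into those details.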
The simplest graph sound example we can come up with
Let's get to know the system by building a very basic sound.
Graph: an Inputs node (on_start, on_stop; these are the game parameters) feeds a Wave node (inStart, inStop, inWave (sample data), plus other voice parameters: Dry Gain, Wet Gain, LFE Gain, etc.).
• Wave node is used to play sample data
• The on_start input is activated when the sound starts
• When inStart is active, a voice is created and played
• So this graph plays one sample when the sound starts
But what about the code?

#include <Wave.cpp>

void Sound(bool _on_start_, bool _on_stop_)
{
    bool outAllVoicesDone = false;
    float outDuration = 0;

    Wave(_on_start_, false, false, WaveDataPointer,
         VoiceParameters, ..., &outAllVoicesDone, &outDuration);
}

Andreas: This is the result of the code generator; it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang's optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph becomes a parameter of the graph function, and each node input is also a parameter of the node function.
But the nice thing is that the code is not immediately visible. It's there, and as a programmer you can use it to debug what's going on, but a sound designer would only see the graph representation.
Anton: Now let's inject the sound into the game and play it, for testing purposes.
Video showing how a sound is created from scratch and injected into the running game.
Andreas: Now let's add some more dynamic behaviour to the sound. As you can see in the blue inputs node, I've added an input called "inDistanceToListener" which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter of the wave node. The rest is the same as in the previous example.
Btw: As you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes it easier to do certain calculations. In case you prefer dB full-scale, then we have a very simple node you can put in front to convert from dB to linear values.
void Sound(bool _on_start_, bool _on_stop_,
           float inDistanceToListener)
{
    float outYValue = 0;
    EvaluateCurve(CurveDataPointer, inDistanceToListener,
                  &outYValue);
    …
}

(The rest of the slide's code is cut off; as described above, outYValue drives the low-pass cutoff input of the Wave node.)
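EvaluateCurve itself is engine code and is not shown in the talk; a curve-lookup node of this shape is typically a piecewise-linear interpolation over the curve's data points. A sketch under that assumption (the point-array layout and explicit count are inventions for self-containment; the real call presumably bakes them into CurveDataPointer):

// Sketch of a piecewise-linear curve lookup (data layout assumed).
struct CurvePoint { float x, y; };

void EvaluateCurve(const CurvePoint* inPoints, const int inCount,
                   const float inX, float* outY)
{
    // Clamp outside the defined range.
    if (inX <= inPoints[0].x)         { *outY = inPoints[0].y; return; }
    if (inX >= inPoints[inCount-1].x) { *outY = inPoints[inCount-1].y; return; }

    // Find the segment containing inX and interpolate linearly.
    for (int i = 1; i < inCount; ++i)
    {
        if (inX <= inPoints[i].x)
        {
            const float t = (inX - inPoints[i-1].x) /
                            (inPoints[i].x - inPoints[i-1].x);
            *outY = inPoints[i-1].y + t * (inPoints[i].y - inPoints[i-1].y);
            return;
        }
    }
}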
Video showing effect of distance-based low pass filtering example
Adding Nodes
• Create C++ file with one function
• Create small description file for node
• List inputs and outputs
• Define …
A hypothetical node following this recipe is sketched below.
I guess by now you can see how this system allowed us to build powerful dynamic sounds, just limited by the s…
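As a concrete (hypothetical) instance of the recipe, a complete node can be a single small function; inputs are const parameters and outputs are pointers, matching the nodes shown in this talk:

// Scale.cpp: a hypothetical node, one C++ file with one function.
void Scale(const float inValue, const float inGain, float* outValue)
{
    *outValue = inValue * inGain;
}

The accompanying description file would then name the node and list inValue and inGain as inputs and outValue as an output, so the tool can draw the box and the code generator knows the call signature; the talk does not show the file format itself.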
Math example
Convert semitones to pitch ratio
SemitonesToPitchRatio: inSemitones → outPitchRatio

void SemitonesToPitchRatio(const float inSemitones,
                           float* outPitchRatio)
{
    *outPitchRatio = Math::Pow(2.0f, inSemitones / 12.0f);
}
Logic example
Compare two values
Compare: inA, inB → outGreater, outSmaller, outEqual
Here's another simple example, for comparisons.

void Compare(const float inA, const float inB,
             bool* outSmaller, bool* outGreater,
             bool* outEqual)
{
    *outSmaller = inA < inB;
    *outGreater = inA > inB;
    *outEqual   = inA == inB;
}
Modulation example
Sine LFO
SineLFO: inFrequency, inAmplitude, inOffset, inPhase → outValue
This is the sine LFO node, it will output …

void SineLFO(const float inFrequency,
             const float inAmplitude,
             const float inOffset, const float inPhase,
             float* outValue, …)
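The slide's code is cut off after outValue, and the talk doesn't show how a node keeps state between the 187 Hz updates. One plausible shape, purely as a sketch, is a phase accumulator that the engine persists per node instance:

#include <math.h>

// Hypothetical stateful completion of SineLFO: ioPhaseAccum stands in
// for whatever per-instance storage the real engine provides.
static const float kSynthFramePeriod = 5.333f / 1000.0f; // seconds per update

void SineLFO(const float inFrequency, const float inAmplitude,
             const float inOffset, const float inPhase,
             float* outValue, float* ioPhaseAccum)
{
    // Advance the phase by one synth frame, then sample the sine.
    *ioPhaseAccum += 2.0f * 3.14159265f * inFrequency * kSynthFramePeriod;
    *outValue = inOffset + inAmplitude * sinf(*ioPhaseAccum + inPhase);
}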
Nodes built from Graphs
Generate a random pitch within a range of semitones
RandomRangedPitchRatio: inTrigger, … → outPitchRatio
As you can see here, it uses a combination of the random range node, to get a random semitone value, which it then converts to a pitch ratio. Since every graph can itself be used as a node, this compound graph becomes a reusable node; a code sketch follows below.
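Following the code-generation scheme from earlier, the generator would emit roughly the following for this compound graph; RandomRange's exact name and signature are assumptions:

#include <stdlib.h>

void SemitonesToPitchRatio(const float inSemitones, float* outPitchRatio); // defined above

// Hypothetical random-range node: picks a value in [inMin, inMax]
// whenever inTrigger is active.
void RandomRange(const bool inTrigger, const float inMin,
                 const float inMax, float* outValue)
{
    if (inTrigger)
        *outValue = inMin + (inMax - inMin) * (rand() / (float)RAND_MAX);
}

// What the generator might emit for the compound graph: a graph that
// is itself callable as a node, chaining RandomRange into the
// SemitonesToPitchRatio node shown above.
void RandomRangedPitchRatio(const bool inTrigger,
                            const float inMinSemitones,
                            const float inMaxSemitones,
                            float* outPitchRatio)
{
    float semitones = 0.0f;
    RandomRange(inTrigger, inMinSemitones, inMaxSemitones, &semitones);
    SemitonesToPitchRatio(semitones, outPitchRatio);
}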
When you notice repeated work…
…abstract it!
Graph: an Inputs node (Start, ChanceToPlay) feeds a Random node (inRange100), a Compare node (inA, inB → outGreater, outSmaller, outEqual), an OR node, and a Delay node (inEvent, inTime = 0.05 → outEvent) …
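Parts of this graph's wiring are cut off above, so the following is an interpretation rather than the shipped logic: roll a random number in [0, 100], compare it against ChanceToPlay, and only pass the start event through when the roll is below the threshold (the 0.05 s Delay is left out here):

#include <stdlib.h>

// Interpretation of the "chance to play" graph (not the shipped code).
bool ChanceToPlay(const bool inStart, const float inChancePercent)
{
    if (!inStart)
        return false;

    const float roll = 100.0f * (rand() / (float)RAND_MAX); // Random, inRange100
    return roll < inChancePercent;                          // Compare
}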
Experimenting Should Be Safe and Fun
• If your idea cannot fail, you’re not going to go to uncharted territory.
• We tried …
Examples
Fire Rate Variation
• Experiment: How to make battle sound more interesting
• Idea: Slight variation in fire rate (see the sketch below)
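The talk doesn't show the graph for this experiment; the core idea can be sketched as a small jitter on the interval between shots, so overlapping guns in a battle drift out of phase. The function name and the ±5% figure are illustrative:

#include <stdlib.h>

// Hypothetical helper: vary the time until the next shot slightly.
float NextShotInterval(const float inBaseInterval, // seconds between shots
                       const float inVariation)    // e.g. 0.05 for ±5%
{
    const float r = 2.0f * (rand() / (float)RAND_MAX) - 1.0f; // [-1, 1]
    return inBaseInterval * (1.0f + inVariation * r);
}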
What the Mind Hears, …
• Experiment: Birds should react when you fire gun
• Birds exist as sound only
• Use sound-to-sound messaging
Bird Sound Logic
Diagram sequence: the Gun Sound and the Bird Sound share a GunFireState flag via set/get.
• Not shooting, GunFireState is FALSE → the bird sound plays “Bird Loop”
• Shooting → the gun sound sets GunFireState to TRUE
• The bird sound gets GunFireState (TRUE) → plays “Birds Flying Off”, then waits 30 seconds
• When shooting stops, GunFireState reads FALSE again → the bird sound returns to “Bird Loop”
Any sound can send a message…
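A sketch of that mechanism: one sound writes a shared flag, another reads it every synth frame. The talk shows only the set/get arrows and the GunFireState name, so the storage and function shapes here are assumptions:

// Sketch of sound-to-sound messaging through a shared flag.
static bool sGunFireState = false;

// In the gun sound's graph, every synth frame: "set"
void GunSoundUpdate(const bool inFiring)
{
    sGunFireState = inFiring;
}

// In the bird sound's graph, every synth frame: "get"
void BirdSoundUpdate(bool* outPlayBirdLoop, bool* outPlayBirdsFlyingOff)
{
    *outPlayBirdsFlyingOff = sGunFireState;  // birds scatter...
    *outPlayBirdLoop = !sGunFireState;       // ...or keep chirping
    // (The real graph also waits 30 seconds before the loop resumes.)
}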
Material Dependent Environmental Reactions (MADDER)
• Inspiration: Guns have this explosive force in the world when the…
See http://www.youtube.com/watch?v=nMkiDguYtGw
MADDER
• Inputs required:
• Distance to closest surface
• Direction of closest surface
• Material of closest surface
MADDER raycast
Diagram: rays swept in a circle around the Listener.
• Sweeping raycast around listener
• After one revolution, closest hit point is used
• Distance, angle and material are fed to the sound as inputs (sketched below)
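A sketch of the sweep, advancing one ray per update and publishing the closest hit after a full revolution; CastRay() is a stand-in declaration for the engine's physics query, and the ray count per revolution is an assumption:

struct RayHit { bool valid; float distance; float angle; int material; };

// Stand-in for the engine's physics raycast from the listener (declaration only).
RayHit CastRay(float originX, float originY, float angleRadians);

// Called once per update: advance the sweep by one ray; after a full
// revolution, publish the closest hit (distance, angle, material).
void MadderSweep(const float listenerX, const float listenerY,
                 float* ioAngle, RayHit* ioClosest, RayHit* outResult)
{
    const float kStep = 2.0f * 3.14159265f / 64.0f; // 64 rays/revolution (assumed)

    RayHit hit = CastRay(listenerX, listenerY, *ioAngle);
    if (hit.valid && (!ioClosest->valid || hit.distance < ioClosest->distance))
        *ioClosest = hit;

    *ioAngle += kStep;
    if (*ioAngle >= 2.0f * 3.14159265f) // one revolution complete
    {
        *ioAngle = 0.0f;
        *outResult = *ioClosest;        // fed to the sound as inputs
        ioClosest->valid = false;       // start the next revolution fresh
    }
}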
Graph: an Inputs node (on_start, inIsFirstPerson, inWallDistance, inWallAngle, inWallMaterial); inIsFirstPerson gates the FirstPerson part through an AND node and the ThirdPerson part through a NOT node, and the MADDER part receives the wall inputs …
Graph: an Inputs node (Start, SpeedOfSound, PitchModifier, FiringRate, inWallDistance, inWallAngle, inWallMaterial, MaterialSamples, IsSilencedGun, …) feeds, repeated 4x, a SelectSample node (inMaterial, inMaterialSamples → outWave) that picks the sample for the hit material, wired into a Wave node (inStart, inWave, inAzimuth, inPitchRatio, inDryGain, inWetGain) …
Video showing MADDER effect for 4 different materials
MADDER Prototype
Video showing initial MADDER prototype with 4 different materials in scene.
MADDER Four-angle raycast
Diagram: quadrant boundaries at -45°, +45°, -135° and +135° around the listener.
• Divide in four quadrants around listener
• One sweeping raycast in each quadrant
Graph inputs, now per quadrant: inWallDistanceFront, inWallAngleFront, inWallMaterialFront; inWallDistanceRight, inWallAngleRight, inWallMaterialRight; and likewise for the other quadrants …
Four-angle MADDER
Video showing final 4-angle MADDER with 4 materials placed in the scene
MADDER video (30 seconds)
MADDER off: capture from final game with MADDER disabled
MADDER on: capture from final game with MADDER enabled
Creative Workflow
Diagrams contrasting the two loops: with a Coder sitting between the Sound Designer's IDEA and the GAME, versus the Sound Designer taking an IDEA into the GAME and reaching DONE directly.
Gun Tails
• Experiment with existing data
• Wall raycast
• Inside/Outside
• Rounds Fired
How to Debug?
What kind of debugging support did we have? Obviously a sound designer is creating much more complex logic n...
Manually Placed Debug Probes
Graph: a SineLFO node (inFrequency = 0.5, inAmplitude = 0.5, inOffset = 1, inPhase = 0.0) with its output wired into a DebugProbe node (inDebugName, inDebugValue).
• Just connect any output to visualize it
• Shows debug value on game screen
Can miss quick changes because game frame rate is lower than sound update rate!
Anton: These are place…
Automatically Embedded Debug Probes

EvaluateCurve(CurveDataPointer, inDistanceToListener,
              &outYValue);
DEBUG_PROBE(0, (void*)…
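The macro's arguments are cut off on the slide, so the expansion below is an assumption; the point is that a probe embedded in the generated code runs at the sound update rate, and compiles away when debugging is off:

#include <stdio.h>

// Hypothetical recording backend: a real implementation would write
// into a ring buffer that the tool reads back; printf stands in here.
static void RecordProbe(int inId, float inValue)
{
    printf("probe %d = %f\n", inId, inValue);
}

// Hypothetical expansion of the auto-embedded probe macro.
#if defined(SOUND_DEBUG)
    #define DEBUG_PROBE(id, valuePtr) RecordProbe((id), *(const float*)(valuePtr))
#else
    #define DEBUG_PROBE(id, valuePtr) ((void)0) // compiled out in release
#endif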
Post Mortem
So what went wrong and what went right?
Timeline
Andreas: We started working on PS4 technology very early; many parts of the PS4 hardware were still undefined when…
February 2011: Pre-production begins, hardly any PS4 hardware details yet
November 2011: PC prototype of sound engine …
“You’re making sound designers do programming work, that’ll be a disaster! The game will crash all the time!”
This didn't …
“This is an unproven idea, why don’t we just use some middleware solution?”
There were doubts that we could create new tech …
Used by other Disciplines
• Graph system became popular
• Used for procedural rigging
• Used for gameplay behavior
• More …
The fact that the graph system was available in the editor that the whole studio is using meant …
Compression Codecs
A little bit more information to show where we came from:
On PS3 most sounds were ADPCM, a few were PCM, …
Charts: sample counts by codec (PCM, ADPCM, MP3) for KZ3 (PS3) versus KZSF (PS4), axis 0 to 10000; ~20 MB for in-memory samples on KZ3.
Sample Rates on PS3
We had a wide variety of sample rates on the PS3, because it was the main parameter we used to optimiz…
Chart: number of samples (axis 0 to 700) per sample rate, ranging from 4000 Hz up to 48000 Hz.
The Future…
• Extend and improve
• Create compute shaders from graphs
• More elaborate debugging support
Andreas: Where do we go from here? We're planning to extend and improve this system in the future, and use it f…
The Next-Gen Dynamic Sound System of Killzone Shadow Fall
We'll describe our new audio run-time and toolset that was built specifically for Killzone Shadow Fall on PlayStation 4. However, the ideas used are widely applicable and the focus is on integration with the game engine and fast iteration, with special attention to shortcuts to get your creative spark translated into in-game sounds as quickly as possible. We will demonstrate our implementation of these demands, which is a next-gen sound system that was designed to combine artistic freedom with high run-time performance. It should be interesting to both creative as well as technical minds who are looking for inspiration on what to expect from a modern sound design environment. To emphasize the performance advantage, we will show the point of view of both the sound designer and the programmer simultaneously. We will use examples starting with simple sounds and build up to increasingly more complex dynamic ones to illustrate the benefits of this unique approach.

Published in: Software
0 Comments
1 Like
Statistics
Notes
  • Be the first to comment

No Downloads
Views
Total Views
1,225
On Slideshare
0
From Embeds
0
Number of Embeds
3
Actions
Shares
0
Downloads
10
Comments
0
Likes
1
Embeds 0
No embeds

No notes for slide

The Next-Gen Dynamic Sound System of Killzone Shadow Fall

  1. 1. The Next-Gen Dynamic Sound System of Killzone Shadow Fall
  2. 2. Take Away
  3. 3. Take Away • Promote experimentation
  4. 4. Take Away • Promote experimentation • Own your tools to allow it to happen
  5. 5. Take Away • Promote experimentation • Own your tools to allow it to happen • Reduce “Idea to Game” time as much as possible
  6. 6. Take Away
  7. 7. Take Away • If a programmer is in the creative loop, creativity suffers
  8. 8. Take Away • If a programmer is in the creative loop, creativity suffers • Don't be afraid to give designers a visual programming tool
  9. 9. Take Away • If a programmer is in the creative loop, creativity suffers • Don't be afraid to give designers a visual programming tool • Generating code from assets is easy and powerful
  10. 10. Andreas Varga Anton Woldhek Senior Tech Programmer Senior Sound Designer Who are Andreas & Anton? Why would you listen to them? Well, we worked on sound for these games…
  11. 11. Andreas Varga Killzone Shadow Fall Killzone 3 Killzone 2 Manhunt 2 Max Payne 2: The Fall of Max Payne Anton Woldhek Killzone Shadow Fall Killzone 3 Killzone 2 ! Creator of the Game Audio Podcast Senior Tech Programmer Senior Sound Designer Who are Andreas & Anton? Why would you listen to them? Well, we worked on sound for these games…
  12. 12. We work at Guerrilla, which is one of Sony’s 1st party studios. Here’s a view of our sound studios
  13. 13. Killzone 2 Killzone 3
  14. 14. After Killzone 3, we started working on Killzone Shadow Fall, which was a launch title for the PS4. This was planned as a next-gen title from the beginning.
  15. 15. Here's a short video showing how the game looks and sounds.
  16. 16. Here's a short video showing how the game looks and sounds.
  17. 17. Andreas: But what exactly does next-gen mean? We thought next gen sound (initially) wouldn’t be about fancy new DSP but about faster iteration and less optimization. To allow the creation of more flexible sounds that truly can become a part of the game design. We also wanted to make sure that the sound team works with the same basic work flow as the rest of the company. So our motivation was to build a cutting edge sound system and tool set, but don't build a new synth.
  18. 18. What is Next-Gen Audio? Anton
  19. 19. What is Next-Gen Audio? • Emancipating sound, the team and the asset Anton
  20. 20. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload Anton
  21. 21. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload • Extendable system, reusable components Anton
  22. 22. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload • Extendable system, reusable components • High run-time performance Anton
  23. 23. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload • Extendable system, reusable components • High run-time performance • Don’t worry about the synth Anton
  24. 24. Why Emancipate? Anton
  25. 25. Why Emancipate? • Audio is separate from rest Anton
  26. 26. Why Emancipate? • Audio is separate from rest • Physical: sound proof rooms Anton
  27. 27. Why Emancipate? • Audio is separate from rest • Physical: sound proof rooms • Mental: closed off sound engine and tools Anton
  28. 28. Why Emancipate? • Audio is separate from rest • Physical: sound proof rooms • Mental: closed off sound engine and tools • Should actively pursue integration Anton
  29. 29. Why Integrate? Anton
  30. 30. Why Integrate? • Gain a community of end users that: Anton
  31. 31. Why Integrate? • Gain a community of end users that: • Can help with complex setup Anton
  32. 32. Why Integrate? • Gain a community of end users that: • Can help with complex setup • Teach you new tricks Anton
  33. 33. Why Integrate? • Gain a community of end users that: • Can help with complex setup • Teach you new tricks • Find bugs Anton
  34. 34. Why Integrate? • Gain a community of end users that: • Can help with complex setup • Teach you new tricks • Find bugs • Benefit mutually from improvements Anton
  35. 35. Motivation for Change
  36. 36. Motivation for Change Previous generation very risk averse, failing was not possible
  37. 37. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible
  38. 38. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible
  39. 39. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible
  40. 40. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible We could experiment much more! {
  41. 41. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible More of the ideas shipped in the game! { We could experiment much more! {
  42. 42. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible More of the ideas shipped in the game! { We could experiment much more! { } No
 bad
 ideas!
  43. 43. Old Workflow Nuendo
  44. 44. Old Workflow Nuendo
  45. 45. Old Workflow Nuendo export WAV
 Assets
  46. 46. Old Workflow Nuendo export 3rd Party Sound Tool WAV
 Assets import
  47. 47. Old Workflow Nuendo export 3rd Party Sound Tool WAV
 Assets import reload Play Game
  48. 48. Old Workflow Nuendo export 3rd Party Sound Tool WAV
 Assets import reload Play Game Wasted Time Wasted Time Wasted Time ~10s 5s…30s ~2mins
  49. 49. New Workflow Nuendo Sound Tool Running Game
  50. 50. New Workflow Nuendo export Sound Tool WAV
 Assets hot-load Running Game hot-load
  51. 51. New Workflow Nuendo export Sound Tool WAV
 Assets hot-load Running Game hot-load Wasted Time ~10s
  52. 52. Video of iterating on a single sound by re-exporting its wav files from Nuendo and syncing the changes with the running game.
  53. 53. Video of iterating on a single sound by re-exporting its wav files from Nuendo and syncing the changes with the running game.
  54. 54. No More Auditioning Tool
  55. 55. No More Auditioning Tool • Game is the inspiration
  56. 56. No More Auditioning Tool • Game is the inspiration • Get sound into the game quickly
  57. 57. No More Auditioning Tool • Game is the inspiration • Get sound into the game quickly • No interruptions
  58. 58. No More Auditioning Tool • Game is the inspiration • Get sound into the game quickly • No interruptions • Why audition anywhere else?
  59. 59. How to Emancipate? Andreas
  60. 60. How to Emancipate? • Make sound engine and tools in-house Andreas
  61. 61. How to Emancipate? • Make sound engine and tools in-house • Allows deep integration Andreas
  62. 62. How to Emancipate? • Make sound engine and tools in-house • Allows deep integration • Immediate changes, based on designer feedback Andreas
  63. 63. How to Emancipate? • Make sound engine and tools in-house • Allows deep integration • Immediate changes, based on designer feedback • No arbitrary technical limitations Andreas
  64. 64. Extendable, Reusable
 and
 High Performance?
  65. 65. April 11, 2011
  66. 66. April 11, 2011
  67. 67. April 11, 2011
  68. 68. ! Anton: Sound designers are already used to data flow environments such as Max/MSP and puredata. ! Andreas: But while that’s one good way of thinking about our system, it’s also a bit different from them. Our system doesn’t work at on the sample level or audio rate, i.e. we’re not using our graphs for synthesis, but just for playback of already existing waveform data.
  69. 69. ! Max/MSP Anton: Sound designers are already used to data flow environments such as Max/MSP and puredata. ! Andreas: But while that’s one good way of thinking about our system, it’s also a bit different from them. Our system doesn’t work at on the sample level or audio rate, i.e. we’re not using our graphs for synthesis, but just for playback of already existing waveform data.
  70. 70. ! Max/MSP Pure Data Anton: Sound designers are already used to data flow environments such as Max/MSP and puredata. ! Andreas: But while that’s one good way of thinking about our system, it’s also a bit different from them. Our system doesn’t work at on the sample level or audio rate, i.e. we’re not using our graphs for synthesis, but just for playback of already existing waveform data.
  71. 71. ⚠️ Technical Details Ahead! ⚠️
  72. 72. ⚠️ Technical Details Ahead! ⚠️ • We will show code
  73. 73. ⚠️ Technical Details Ahead! ⚠️ • We will show code • But you don’t have to be afraid
  74. 74. ⚠️ Technical Details Ahead! ⚠️ • We will show code • But you don’t have to be afraid • Sound designers will never have to look at it
  75. 75. Graph Sound System
  76. 76. Graph Sound System • Generate native code from data flow
  77. 77. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz)
  78. 78. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz) • Used to play samples and modulate parameters
  79. 79. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz) • Used to play samples and modulate parameters • NOT running actual synthesis
  80. 80. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz) • Used to play samples and modulate parameters • NOT running actual synthesis • Dynamic behavior
  81. 81. Sound Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  82. 82. Sound Node A Node C Node B Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  83. 83. Sound Node A Node C Node B Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  84. 84. Sound Node A Node C Node B A.cpp B.cpp C.cpp Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  85. 85. A.cpp B.cpp C.cpp The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  86. 86. A.cpp B.cpp C.cpp Sound.cpp The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  87. 87. Sound.cpp A(inA, outA) { … } B(inB, outB) { … } C(inX, inY, outZ) { … } The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  88. 88. Sound(X, Y, Z) { A(X,outA); B(Y,outB); C(outA,outB,Z); } Sound.cpp A(inA, outA) { … } B(inB, outB) { … } C(inX, inY, outZ) { … } The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  89. 89. Sound.cpp So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  90. 90. Sound.cpp compile So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  91. 91. Sound.cpp compile Sound.o So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  92. 92. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  93. 93. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o link Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  94. 94. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Dynamic Library (PRX) link Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  95. 95. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Dynamic Library (PRX) link hot-load Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  96. 96. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Dynamic Library (PRX) link Execute Sound() in Game hot-load Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline the C++ code that was generated for the graph is then compiled into an object file, and together with other object files gets linked into a dynamic library, which are loaded as PRX files on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different tough: We’ve first experimented with LLVM using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  97. 97. The most simple graph sound example we can come up with Lets get to know the system by building a very basic sound
  98. 98. Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  99. 99. • Wave node is used to play sample data Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  100. 100. • Wave node is used to play sample data • The on_start input is activated when the sound starts Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  101. 101. • Wave node is used to play sample data • The on_start input is activated when the sound starts • When inStart is active, a voice is created and played Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  102. 102. • Wave node is used to play sample data • The on_start input is activated when the sound starts • When inStart is active, a voice is created and played • So this graph plays one sample when the sound starts Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  103. 103. But what about the code? Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  104. 104. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  105. 105. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  106. 106. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  107. 107. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } #include <Wave.cpp> Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
108. 108. The nice thing is that the code is not immediately visible. It’s there, and as a programmer you can use it to debug what’s going on, but a sound designer only ever sees the graph representation.
  109. 109. Anton: Now let's inject the sound into the game and play it, for testing purposes. Video showing how a sound is created from scratch and injected into the running game.
115. 115. Andreas: Now let’s add some more dynamic behaviour to the sound. As you can see in the blue inputs node, I’ve added an input called “inDistanceToListener”, which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter on the wave node. The rest is the same as in the previous example. By the way: as you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes certain calculations easier. If you prefer dB full-scale, we have a very simple node you can put in front to convert from dB to linear values.
121. 121.

void Sound(bool _on_start_, bool _on_stop_, float inDistanceToListener)
{
    float outYValue = 0;
    EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue);

    bool outAllVoicesDone = false;
    float outDuration = 0;
    Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, outYValue, &outAllVoicesDone, &outDuration);
}

#include <EvaluateCurve.cpp>
#include <Wave.cpp>

It’s getting a bit more complicated now, but it’s still easy enough to see what’s going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The EvaluateCurve node is now called, and since the Wave node depends on the output of the EvaluateCurve node, the Wave node has to be called after it. The code generator uses these dependencies to define the order in which it calls each node’s function.
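To make that ordering step concrete, here is a minimal sketch of how a generator can topologically sort node calls before emitting code. This is our illustration of the general technique (Kahn’s algorithm); the Node representation and function names are assumptions, not Guerrilla’s actual generator.

#include <cstdio>
#include <queue>
#include <string>
#include <vector>

// Hypothetical node representation: each node lists the nodes it depends on.
struct Node
{
    std::string name;           // e.g. "EvaluateCurve", "Wave"
    std::vector<int> dependsOn; // indices of upstream nodes
};

// Kahn's algorithm: a node is emitted only after all its inputs are emitted.
std::vector<int> TopologicalOrder(const std::vector<Node>& nodes)
{
    std::vector<std::vector<int>> dependents(nodes.size());
    std::vector<int> pending(nodes.size(), 0);
    for (size_t i = 0; i < nodes.size(); ++i)
    {
        pending[i] = (int)nodes[i].dependsOn.size();
        for (int dep : nodes[i].dependsOn)
            dependents[dep].push_back((int)i);
    }

    std::queue<int> ready;
    for (size_t i = 0; i < nodes.size(); ++i)
        if (pending[i] == 0)
            ready.push((int)i);

    std::vector<int> order;
    while (!ready.empty())
    {
        int n = ready.front(); ready.pop();
        order.push_back(n);
        for (int d : dependents[n])
            if (--pending[d] == 0)
                ready.push(d);
    }
    return order; // order.size() < nodes.size() would indicate a cycle
}

int main()
{
    // The distance-filter graph from the slide: Wave depends on EvaluateCurve.
    std::vector<Node> graph = { { "EvaluateCurve", {} }, { "Wave", { 0 } } };
    for (int i : TopologicalOrder(graph))
        std::printf("call %s(...);\n", graph[i].name.c_str());
}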
122. 122. Video showing the effect of the distance-based low pass filtering example.
129. 129. Adding Nodes • Create C++ file with one function • Create small description file for node • List inputs and outputs • Define the default values • No need to recompile game, just add assets I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes available. But why is it truly extendable? Because it’s really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation. Our code generator collects the little snippets of C++ code for each node and creates a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked against anything, so they are very small and lightweight. This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
130. 130. Math example Convert semitones to pitch ratio SemitonesToPitchRatio inSemiTones outPitchRatio Andreas: Let’s look at an example node for a mathematical problem. The obvious ones like add or multiply are trivial, but here’s a useful one: converting from musical semitones to a pitch ratio value. So for instance, inputting a value of 12 semitones would result in a pitch ratio of 2, i.e. an octave.
131. 131.

void SemitonesToPitchRatio(const float inSemitones, float* outPitchRatio)
{
    *outPitchRatio = Math::Pow(2.0f, inSemitones / 12.0f);
}

SemiTonesToPitchRatio.cpp

This shows how easy it is to create a new node. The only additional information that needs to be created is a metadata file that describes the input and output values and their defaults.
133. 133. Logic example Compare two values Compare inA outGreater inB outSmaller outEqual Here’s another simple example, for comparing values. Typically you have different parts that depend on the comparison result, so it’s useful to have “smaller” and “greater than” available at the same time. This also saves us from using additional nodes.
134. 134.

void Compare(const float inA, const float inB, bool* outSmaller, bool* outGreater, bool* outEqual)
{
    *outSmaller = inA < inB;
    *outGreater = inA > inB;
    *outEqual = inA == inB;
}

Compare.cpp

The implementation is just as simple. All the comparison results are stored in output parameters.
136. 136. Modulation example Sine LFO SineLFO inFrequency outValue inAmplitude inOffset inPhase This is the sine LFO node; it outputs a sine wave at a given frequency and amplitude. It’s a slightly more complicated node.
140. 140.

void SineLFO(const float inFrequency, const float inAmplitude, const float inOffset,
             const float inPhase, float* outValue, float* phase)
{
    *outValue = inOffset + inAmplitude * Math::Sin((*phase + inPhase) * M_TWO_PI);

    float new_phase = *phase + inFrequency * GraphSound::GetTimeStep();
    if (new_phase > 1.0f)
        new_phase -= 1.0f;
    *phase = new_phase;
}

SineLFO.cpp

Let’s look at how this is implemented. It uses two functions implemented in our engine: one for calculating the sine and one for getting the time step. If game time is scaled (for example when running the game in slow motion), the sound time step is adjusted as well; however, it’s also possible to have sounds that are not affected by this. Normally graph execution is stateless: every time it runs it does exactly the same thing, because it doesn’t store any information and so can’t depend on previous runs. This node, however, is an example of something that requires state, and that’s possible on a per-node basis. You can see it uses a special “phase” parameter, which is stored separately for each instance of the node and passed in as a pointer. The state system lets us store any arbitrary set of data, for example a full C++ object, or just a simple value like in this example. The Wave node also uses this to store information about the voices it has created.
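As a rough illustration of the per-instance state mechanism described above, here is a minimal sketch of how a generated graph function might carry a stateful node’s data between runs. The SoundState struct and the wiring are our assumptions; only the SineLFO signature comes from the slide.

// SineLFO as declared on the slide above; its definition lives in SineLFO.cpp.
void SineLFO(const float inFrequency, const float inAmplitude, const float inOffset,
             const float inPhase, float* outValue, float* phase);

// Hypothetical per-instance state block: one slot per stateful node in the graph.
struct SoundState
{
    float sineLfoPhase = 0.0f; // persists across graph updates
};

// A generated graph function could receive the state block alongside the graph
// inputs and pass the address of each node's slot as that node's state parameter.
void Sound(SoundState* state, const float inFrequency)
{
    float outValue = 0.0f;
    SineLFO(inFrequency, /*inAmplitude*/ 1.0f, /*inOffset*/ 0.0f, /*inPhase*/ 0.0f,
            &outValue, &state->sineLfoPhase);
    // outValue would feed downstream nodes here
}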
142. 142. Nodes built from Graphs Generate a random pitch within a range of semitones RandomRangedPitchRatio inTrigger outPitchRatio inMinSemitone inMaxSemitone Math or logic processing that starts to appear often in sounds can be abstracted into its own node, without having to know any programming. Here’s an example of a node built from an actual graph.
146. 146. As you can see here, it uses a combination of the random range node, to get a random semitone value, which it then converts to a pitch ratio. It also uses the sample & hold node to make sure the value doesn’t change, as normally the random value would be recalculated at every update of the sound, i.e. once every 5.333 ms. Creating a node like this is basically as simple as selecting a group of nodes, right-clicking and choosing “Collapse nodes”; the editor does the rest.
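As a sketch of what such a collapsed node boils down to, the graph above could expand to something like the following. SemitonesToPitchRatio is the node shown earlier; the RandomRange and SampleAndHold signatures are our assumptions about those nodes, not the actual engine code.

// Assumed node signatures; only SemitonesToPitchRatio was shown on a slide.
void RandomRange(const float inMin, const float inMax, float* outValue);
void SampleAndHold(const bool inTrigger, const float inValue, float* heldValue);
void SemitonesToPitchRatio(const float inSemitones, float* outPitchRatio);

// Possible expansion of the collapsed RandomRangedPitchRatio node.
void RandomRangedPitchRatio(const bool inTrigger,
                            const float inMinSemitone, const float inMaxSemitone,
                            float* outPitchRatio, float* heldSemitones)
{
    // The random value would be recomputed at every 5.333 ms update...
    float randomSemitones = 0.0f;
    RandomRange(inMinSemitone, inMaxSemitone, &randomSemitones);

    // ...so sample & hold freezes it until the next trigger.
    SampleAndHold(inTrigger, randomSemitones, heldSemitones);

    SemitonesToPitchRatio(*heldSemitones, outPitchRatio);
}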
  149. 149. When you notice repeated work… …abstract it!
157. 157. Inputs Start ChanceToPlay Random inRange100 Compare inA outGreater inB outSmaller outEqual OR Delay inEvent inTime outEvent 0.05 Wave inAzimuth inStart inWave AND Random inRange150 Ramp inTrigger outValue inStartValue inTargetValue 0 Select Random Wave
162. 162. Experimenting Should Be Safe and Fun • If your idea cannot fail, you’re not going into uncharted territory. • We tried a lot • Some of it worked, some didn’t • Little or no code support - all in audio design
  163. 163. Examples
  169. 169. Fire Rate Variation • Experiment: How to make battle sound more interesting • Idea: Slight variation in fire rate
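A back-of-the-envelope sketch of the idea, purely illustrative: jitter the interval between shots by a few percent so that overlapping guns drift out of phase. The helper and the percentages are assumptions, not the shipped tuning.

#include <cstdlib>

// Hypothetical helper: uniform random float in [lo, hi].
static float RandomRangeValue(float lo, float hi)
{
    return lo + (hi - lo) * (float)std::rand() / (float)RAND_MAX;
}

// Jitter the time between shots by +/-5% so two guns of the same type
// firing together stop sounding like one machine.
float NextShotDelay(const float baseInterval)
{
    return baseInterval * RandomRangeValue(0.95f, 1.05f);
}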
  175. 175. What the Mind Hears, … • Experiment: Birds should react when you fire gun • Birds exist as sound only • Use sound-to-sound messaging system • Gun sounds notify bird sounds of shot
176. 176. Bird Sound Logic (not shooting): GunFireState is FALSE, so the bird sound plays “Bird Loop”.
177. 177. Bird Sound Logic (shooting): the player starts shooting; GunFireState is still FALSE and “Bird Loop” still plays.
178. 178. Bird Sound Logic (shooting): the gun sound sets GunFireState to TRUE.
179. 179. Bird Sound Logic (shooting): the bird sound gets GunFireState and sees TRUE.
180. 180. Bird Sound Logic (shooting): the bird sound plays “Birds Flying Off”, then waits 30 seconds.
181. 181. Bird Sound Logic (not shooting): shooting stops; GunFireState is still TRUE while the bird sound waits.
182. 182. Bird Sound Logic (not shooting): the gun sound sets GunFireState back to FALSE, and the bird sound reads it.
183. 183. Bird Sound Logic (not shooting): the bird sound resumes playing “Bird Loop”.
184. 184. Bird Sound Logic (not shooting): GunFireState is FALSE and “Bird Loop” plays. Any sound can send a message which can be received by any other sound.
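A minimal sketch of how such a sound-to-sound messaging system could look, assuming the simplest possible design of named boolean flags shared by all playing sounds; the class and method names are our own illustration, not the actual engine API.

#include <string>
#include <unordered_map>

// Shared message board: any sound graph can set a flag, any other can read it.
class SoundMessageBoard
{
public:
    void Set(const std::string& name, bool value) { mFlags[name] = value; }

    bool Get(const std::string& name) const
    {
        auto it = mFlags.find(name);
        return it != mFlags.end() && it->second;
    }

private:
    std::unordered_map<std::string, bool> mFlags;
};

// Usage, mirroring the bird example:
//   gun graph:  board.Set("GunFireState", true);   // on shot
//   bird graph: if (board.Get("GunFireState")) { /* play "Birds Flying Off" */ }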
187. 187. Material Dependent Environmental Reactions (MADDER) • Inspiration: Guns have this explosive force in the world when they’re fired • Things that are nearby start to rattle
  188. 188. See http://www.youtube.com/watch?v=nMkiDguYtGw
  195. 195. MADDER • Inputs required: • Distance to closest surface • Direction of closest surface • Material of closest surface
200. 200. MADDER raycast • Sweeping raycast around listener • After one revolution, the closest hit point is used • Distance, angle and material are passed to the sound (diagram: Distance, Angle, Material relative to the Listener)
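A sketch of the sweep loop under stated assumptions: one ray per update rotating around the listener, with the closest hit published once per revolution. CastRay, the step size, and the publishing callback are all illustrative; the engine’s real raycast interface is not shown in the talk.

#include <cfloat>

// Hypothetical engine hook: cast a ray from the listener in a horizontal
// direction (radians) and report hit distance and material; false on miss.
bool CastRay(float angleRadians, float* outDistance, int* outMaterial);

struct MadderHit { float distance; float angle; int material; };

// One ray per update, rotating around the listener; after a full revolution
// the closest hit is handed to the sound graph.
void MadderSweepUpdate(float* angle, MadderHit* best, void (*publish)(const MadderHit&))
{
    const float kTwoPi = 6.2831853f;

    float distance = 0.0f;
    int material = -1;
    if (CastRay(*angle, &distance, &material) && distance < best->distance)
        *best = { distance, *angle, material };

    *angle += kTwoPi / 64.0f;   // 64 rays per revolution (illustrative)
    if (*angle >= kTwoPi)       // revolution complete
    {
        publish(*best);         // distance/angle/material go to the sound inputs
        *angle = 0.0f;
        *best = { FLT_MAX, 0.0f, -1 };
    }
}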
206. 206. Inputs on_start inIsFirstPerson inWallDistance inWallAngle inWallMaterial FirstPerson inStart AND ThirdPerson inStart NOT MADDER Content Package MADDER inWallDistance PitchModifier inWallAngle inWallMaterial Start SpeedOfSound MaterialSamples IsSilencedGun IsBigGun 330 1.0 False False
  215. 215. 4x Wave inAzimuth inPitchRatio inDryGain inWetGain inStart inWave SelectSample inMaterial outWave inMaterialSamples Inputs Start SpeedOfSound PitchModifier FiringRate inWallDistance inWallAngle inWallMaterial MaterialSamples IsSilencedGun IsBigGun Falloff Unit Voice Selector Delay Unit
  216. 216. Video showing MADDER effect for 4 different materials
  218. 218. MADDER Prototype Video showing initial MADDER prototype with 4 different materials in scene.
223. 223. -45° +45° -135° +135° MADDER Four-angle raycast • Divide into four quadrants around listener • One sweeping raycast in each quadrant • Closest hit point in each quadrant is taken
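A small sketch of how the quadrant binning could work, using the boundaries on the slide (front spanning -45° to +45°, and so on). The function is our illustration, not engine code.

#include <cmath>

// Returns 0 = front, 1 = right, 2 = back, 3 = left, for an azimuth in degrees
// relative to the listener (0° = straight ahead), assuming the slide's
// -45°/+45°/±135° quadrant boundaries.
int QuadrantForAzimuth(float azimuthDegrees)
{
    // Shift so the front quadrant starts at 0°, then wrap into [0, 360).
    float a = std::fmod(azimuthDegrees + 45.0f, 360.0f);
    if (a < 0.0f)
        a += 360.0f;
    return (int)(a / 90.0f);
}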
  228. 228. Inputs inWallDistanceFront inWallAngleFront inWallMaterialFront inWallDistanceRight inWallAngleRight inWallMaterialRight inWallDistanceBack inWallAngleBack inWallMaterialBack inWallDistanceLeft inWallAngleLeft inWallMaterialLeft MADDER inWallDistance inWallAngle inWallMaterial MADDER inWallDistance inWallAngle inWallMaterial MADDER inWallDistance inWallAngle inWallMaterial MADDER inWallDistance inWallAngle inWallMaterial
  229. 229. Four-angle MADDER Video showing final 4-angle MADDER with 4 materials placed in the scene
  231. 231. MADDER video (30 seconds) MADDER off Capture from final game with MADDER disabled
  233. 233. MADDER on Capture from final game with MADDER enabled
237. 237. Creative Workflow (diagram): IDEA → Sound Designer → GAME → DONE, with the Coder no longer in the loop.
  245. 245. Gun Tails • Experiment with existing data • Wall raycast • Inside/Outside • Rounds Fired
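As a purely illustrative sketch of how that existing data could drive tail selection: the enum, thresholds and helper below are assumptions, not the shipped setup.

// Pick a gun-tail variation from data the graph already receives.
enum class TailType { OutdoorOpen, OutdoorNearWall, Indoor };

TailType SelectGunTail(bool isInside, float wallDistance, int roundsFired)
{
    if (isInside)
        return TailType::Indoor;
    // Close walls or long bursts could bias toward the "near wall" tail.
    if (wallDistance < 10.0f || roundsFired > 20)
        return TailType::OutdoorNearWall;
    return TailType::OutdoorOpen;
}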
246. 246. How to Debug? What kind of debugging support did we have? Sound designers are now creating much more complex logic, which means bugs will creep in, and we need to be able to find problems such as incorrect behaviour quickly.
250. 250. Manually Placed Debug Probes • Just connect any output to visualize it • Shows debug value on game screen SineLFO inFrequency outValue inAmplitude inOffset inPhase DebugProbe inDebugName inDebugValue sine 0.5 0.5 1 0.0 Anton: These are placed by designers into their sounds to query individual outputs of nodes. The values are shown as a curve on screen, which is useful for debugging a single value and how it changes over time, but not very useful for quickly changing things such as boolean events: since the sound graphs execute multiple times per game frame, but the debug visualisation is only drawn at the game frame rate, a quick change can be missed. In the final version of the game, these nodes are completely ignored.
252. 252. Manually Placed Debug Probes: can miss quick changes because the game frame rate is lower than the sound update rate!
257. 257. Automatically Embedded Debug Probes EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); DEBUG_PROBE(0, (void*)&outYValue) • All values are recorded, nothing is lost • Simple in-game debugger to show data • Scrub through recording Andreas: Each graph is generated in two versions: one without debugging code (for the final game) and one with automatically generated debug probe code for every node. We emit these DEBUG_PROBE macros into the graph functions, so we can easily disable the debug support at compile time. When executing a graph with debugging enabled, the code in the macro collects the values of the inputs and outputs of every node (and of the graph itself) and passes them to the engine, which stores the information.
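A minimal sketch of how the two-version build could be wired up; the guard name and the RecordProbeValue engine call are assumptions based on the description above.

// Debug build: forward every probed value to the engine for recording.
// Final build: the macro expands to nothing and adds zero cost.
#ifdef GRAPH_DEBUG_PROBES
    void RecordProbeValue(int probeIndex, const void* value); // hypothetical engine call
    #define DEBUG_PROBE(index, valuePtr) RecordProbeValue((index), (valuePtr));
#else
    #define DEBUG_PROBE(index, valuePtr)
#endif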
258. 258. Post Mortem So what went wrong and what went right?
266. 266. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet November 2011 PC prototype of sound engine August 2012 Moved over to PS4 hardware February 2013 PS4 announcement demo (still using software codecs) June 2013 E3 demo (now using ACP hardware decoding) November 2013 Shipped as launch title New system was up and running within 6 months! Andreas: We started working on PS4 technology very early; many parts of the PS4 hardware were still undefined when we started. At one point we even expected the hardware to do more than decoding, i.e. we assumed there might be a way of executing our own DSP code on it, and thought we might have to write custom DSP code for that chip. At that time no middleware vendors had been disclosed yet, so we couldn’t really talk to them about any of this. We therefore had to play it safe and designed the new sound system around an abstract synth. We created an initial prototype implementation for use in our PC build, and later switched to PS4 hardware. Anton: As a designer you become used to being an alpha tester for the hardware; we became good audio testers. Other designers hardly noticed the transition from PC to PS4 hardware, as the synth was the same on both.
267. 267. “You’re making sound designers do programming work, that’ll be a disaster! The game will crash all the time!” This didn’t really happen. We had a few buggy nodes, but in general things worked out fine. We made sure that new nodes are peer-reviewed like any other code, especially if written by a non-programmer. And since nodes are so simple and limited in scope, it’s hard to make something that truly breaks the game.
268. 268. “This is an unproven idea, why don’t we just use some middleware solution?” There were doubts that we could create new tech and a toolset with a user-friendly workflow in time. The safer thing would have been to license a middleware solution, but at the time we had to make this decision, no 3rd parties had been disclosed on PS4 yet, and we couldn’t be sure they would properly support the PS4 audio hardware. We were also keen on having a solution that’s integrated with our normal asset pipeline, which allowed us to share the tech built for audio with other disciplines. Sound designers using the same workflow as artists and game designers is a benefit that middleware can’t give us: new nodes created by e.g. game programmers are immediately useful for sound designers, and vice versa. All of these reasons led us to push forward with our own tech, to make something that really fits the game and our way of working. Anton: We also wouldn’t have been able to do the kind of deep tool integration that we have now.
274. 274. Used by other Disciplines • Graph system became popular • Used for procedural rigging • Used for gameplay behavior • More nodes were added • Bugs were fixed quicker The fact that the graph system was available in the editor the whole studio uses meant that it was directly available to all other disciplines.
  279. 279. Compression Codecs KZ3 (PS3) samples PCM ADPCM MP3 0 2500 5000 7500 10000 KZSF (PS4) samples PCM ADPCM MP3 0 2500 5000 7500 10000 Large, don’t need precise timing Need low latency ~20 MB for in-memory samples ~300 MB for in-memory samples A little bit more information to show were we came from: On PS3 most sounds were ADPCM, a few were PCM, only streaming sounds used MP3 with various bit rates. We had about 20 MB budgeted for in-memory samples. This generation we ended up using a combination of PCM and MP3 for in-memory sounds, with a total of about 300MB of memory used at run-time, that's roughly 16x as much as on PS3, but still the same percentage of the whole memory (roughly 4%). We didn't use any ADPCM samples anymore (good riddance). The split between PCM and MP3 is roughly 50:50. We used PCM for anything that needs to loop or seek with sample accuracy, and MP3 for larger sounds that don’t require precise timing, such as the tails of guns, for example. Obviously we relied on the audio coprocessor of the PS4 to decode our MP3 data. We’ve used ATRAC9 for surround streams, but those are not in-memory.
  280. 280. Compression Codecs KZ3 (PS3) samples PCM ADPCM MP3 0 2500 5000 7500 10000 KZSF (PS4) samples PCM ADPCM MP3 0 2500 5000 7500 10000 We’ve completely stopped using ADPCM (VAG) ! Large, don’t need precise timing Need low latency ~20 MB for in-memory samples ~300 MB for in-memory samples A little bit more information to show were we came from: On PS3 most sounds were ADPCM, a few were PCM, only streaming sounds used MP3 with various bit rates. We had about 20 MB budgeted for in-memory samples. This generation we ended up using a combination of PCM and MP3 for in-memory sounds, with a total of about 300MB of memory used at run-time, that's roughly 16x as much as on PS3, but still the same percentage of the whole memory (roughly 4%). We didn't use any ADPCM samples anymore (good riddance). The split between PCM and MP3 is roughly 50:50. We used PCM for anything that needs to loop or seek with sample accuracy, and MP3 for larger sounds that don’t require precise timing, such as the tails of guns, for example. Obviously we relied on the audio coprocessor of the PS4 to decode our MP3 data. We’ve used ATRAC9 for surround streams, but those are not in-memory.
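As a rough sketch of the selection logic just described — the struct, function name and size threshold below are illustrative assumptions, not the actual asset-pipeline code:

    // Hypothetical sketch of the PCM-vs-MP3 choice described above.
    #include <cstddef>

    enum class Codec { PCM, MP3 };

    struct SoundAsset {
        std::size_t sizeBytes;     // uncompressed size at 48 kHz
        bool needsSampleAccuracy;  // loops or seeks with sample precision
    };

    Codec ChooseCodec(const SoundAsset& asset) {
        // Looping/seeking sounds stay PCM: compressed frames make
        // sample-accurate positioning awkward and add decode latency.
        if (asset.needsSampleAccuracy)
            return Codec::PCM;
        // Large one-shot sounds (e.g. gun tails) go to MP3 and are
        // decoded by the PS4's audio coprocessor at run-time.
        const std::size_t kLargeSoundThreshold = 256 * 1024;  // assumed cutoff
        return asset.sizeBytes >= kLargeSoundThreshold ? Codec::MP3 : Codec::PCM;
    }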
281. Sample Rates on PS3
[Chart: histogram of sample counts per sample rate on PS3, ranging from 4000 Hz up to 48000 Hz. Annotations: “Using sample rate to optimize memory usage! 😩”; “On PS4 we use 48 kHz exclusively!”]
We had a wide variety of sample rates on the PS3, because sample rate was the main parameter we used to optimize the memory usage of sounds. This led to some very low quality samples in the game. On the PS4 we only used samples at 48 kHz. The main way to optimize was to switch them to MP3 encoding and adjust the bit rate.
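A rough worked example of why bit rate replaces sample rate as the optimization knob: one second of mono 16-bit PCM at 48 kHz is 48,000 samples × 2 bytes ≈ 94 KB, while the same second encoded as MP3 at 128 kbps is 128,000 / 8 = 16,000 bytes ≈ 16 KB — about a 6x saving with the full frequency range intact. Halving the sample rate, by contrast, only saves 2x and audibly cuts the bandwidth.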
285. The Future…
• Extend and improve
• Create compute shaders from graphs
• More elaborate debugging support
Andreas: Where do we go from here? We’re planning to extend and improve this system in the future, and use it for more purposes, e.g. changing the code generator to allow the creation of compute shaders. We’ll also add more elaborate debugging support, and lots of new nodes of course. We’ll have more detailed profiling support to measure the execution time of the graph for each node, allowing us to identify bottleneck nodes that need to be optimised. We’re also thinking about different ways to use this system, such as creating samples on the fly for variations. In this scenario we would add nodes that allow waveforms to be output into buffers, which can then be played back at a later point in time.
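A minimal sketch of what the planned per-node profiling could look like, assuming a Node interface like the earlier example; all names here are hypothetical, not the shipped system:

    // Hypothetical per-node timing wrapper for finding bottleneck nodes.
    #include <chrono>
    #include <cstdint>

    struct NodeStats {
        std::uint64_t totalNanoseconds = 0;
        std::uint64_t updateCount = 0;
    };

    // Wraps each node update in a timer so hot nodes show up in a report.
    template <typename NodeT>
    void ProfiledUpdate(NodeT& node, NodeStats& stats,
                        const float* inputs, float* outputs) {
        const auto start = std::chrono::steady_clock::now();
        node.Update(inputs, outputs);
        const auto end = std::chrono::steady_clock::now();
        stats.totalNanoseconds += std::chrono::duration_cast<
            std::chrono::nanoseconds>(end - start).count();
        ++stats.updateCount;
    }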