My talk at CMMR 2013 on the Mood Conductor system.
Mood Conductor is an interactive system that allows audience members to communicate emotional directions to performers in order to conduct improvised music performances. It consists of three main technical components: a smartphone-friendly web application used by the audience, a server-side application that aggregates and clusters the audience-indicated emotion coordinates in the arousal-valence space, and a visualisation client that creates a video projection used by the musicians as guidance. This projection also provides visual feedback for the audience.
1. The Mood Conductor System: Audience and Performer Interaction using Mobile Technology and Emotion Cues
György Fazekas, Mathieu Barthet and Mark Sandler
Centre for Digital Music
Queen Mary University of London
School of Electronic Engineering and Computer Science
10th International Symposium on Computer Music Multidisciplinary Research (CMMR’13)
2. Outline
• Motivation
• Music and Emotion
• Outline of the Mood Conductor System
• A quick video demonstration
• Implementation details
• Interactive performances and data collection
• Evaluation
• Conclusions
3. Motivation
• Classic concert situation: the audience listens to the music played by the performers in a passive manner:
  • typically, interaction with the performers is not possible,
  • apart from conventional means (e.g. cheering / discontent).
• Goal: create an interaction between audience and performers acting on musical expression or on the improvised composition itself.
4. Motivation
• Introduce a new chain of communication:
  • Performer <--> Listener
  • vs. the classic chain of communication C -> P -> L
  • P: performer(s); L: listener(s); C: composer
• Use emotion cues as a means of communication between performers and the audience.
5. Why Emotion?
• Research provides strong evidence of the ability of music to express or induce emotion (Schubert, 1999; Sloboda and Juslin, 2001).
• Recent work (van Zijl and Sloboda, 2010) also showed that performers experience both music-related and practice-related emotions.
6. Music and Emotion
• Core emotions can be well represented using a continuous two-dimensional space (Russell, 1980; Thayer, 1986), where the dimensions correspond to arousal and valence.
• Arousal is related to excitation or energy.
• Valence is related to pleasantness.
7. Music and Emotion
• For ease of use, we developed an interface that fuses the dimensional and categorical models of emotion (a minimal sketch of such a fusion follows below).
• This is made available as a web-based app suitable for mobile devices.
• URL: http://bit.ly/moodxp
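The fusion of the two models can be made concrete with a small sketch: a handful of mood words are pinned to (valence, arousal) coordinates, and a tap on the AV plane is labelled with the nearest word. The word list, the coordinates and the Python form are illustrative assumptions; the slides do not describe the actual mapping used by the web app.

```python
import math

# Hypothetical anchor words on the (valence, arousal) plane, both in [-1, 1].
# These positions are assumptions for illustration, not the app's own layout.
MOOD_ANCHORS = {
    "happy":   ( 0.8,  0.5),
    "excited": ( 0.5,  0.9),
    "angry":   (-0.7,  0.8),
    "tense":   (-0.4,  0.6),
    "sad":     (-0.7, -0.5),
    "tired":   (-0.2, -0.8),
    "calm":    ( 0.4, -0.6),
    "content": ( 0.7, -0.3),
}

def nearest_mood(valence, arousal):
    """Label a dimensional (valence, arousal) input with the closest category."""
    return min(MOOD_ANCHORS,
               key=lambda word: math.dist((valence, arousal), MOOD_ANCHORS[word]))
```

For example, nearest_mood(0.6, -0.4) returns "content", i.e. the label closest to a point in the positive-valence, low-arousal quadrant.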
8. Mood Conductor System
• Audience-indicated emotion cues are
  • collected using a server,
  • clustered in real-time, and
  • visualised.
• Both the audience and the performers can see the visualisation.
• The system also logs all data.
12. System Architecture
• The system has 3 core components (a server-side sketch follows below):
  • client interface: mobile application
  • Mood Conductor server
  • visualisation client
• The collected emotion responses are grouped in real-time using a time-constrained clustering process.
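As a rough illustration of how the mobile client and the Mood Conductor server might communicate, the sketch below accepts an emotion cue over HTTP, logs it, and leaves a hook for the clustering and visualisation stages. The use of Flask, the /cue endpoint, and the JSON field names are assumptions made for the example; the slides do not specify the actual protocol or framework.

```python
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
cue_log = []  # every received cue is kept, since the system logs all data

@app.route("/cue", methods=["POST"])           # hypothetical endpoint name
def receive_cue():
    body = request.get_json()
    cue = {
        "valence": float(body["valence"]),     # assumed field names
        "arousal": float(body["arousal"]),
        "t": time.time(),
    }
    cue_log.append(cue)
    # ...the cue would then be passed to the time-constrained clustering and
    # the updated blobs pushed to the visualisation client...
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run(port=8080)
```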
13. Real-time Clustering
• User input is organised into N clusters that correspond to blobs Bi (i = 1, 2, ..., N) visualised on screen.
• Each cluster is associated with the 3-tuple (xi, ci, ti), where
  • xi is the spatial centre of the cluster on the AV plane,
  • ci is the number of observations (user inputs) associated with cluster i, and
  • ti represents the time of the cluster object’s construction.
14. Real-time Clustering
• A user input, represented by S = (xs, ts), is clustered using a spatial and temporal kernel:
  [kernel equation not reproduced in this export]
• where two server-side parameters represent the spatial and temporal tolerances.
15. Real-time Clustering
• New clusters are spawned if nBs < 1.
• Otherwise, the input is assigned to the cluster B that minimises d(xs, xi) over all Bi (i = 1, 2, ..., N).
• The parameters of B are updated such that cB ← cB + 1 and tB ← ts, that is, the input count is increased and the time is reset (a code sketch of the full clustering step follows below).
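The sketch below puts slides 13-15 together: each cluster is the 3-tuple (xi, ci, ti), a new blob is spawned when nBs < 1, and otherwise the spatially nearest blob B is updated. Because the kernel equation is not reproduced in this export, the sketch assumes a simple box kernel in which nBs counts the blobs lying within both tolerances; SIGMA and TAU stand in for the unnamed server-side spatial and temporal tolerance parameters, and their values are guesses.

```python
import math
from dataclasses import dataclass

SIGMA = 0.15   # assumed spatial tolerance on the normalised AV plane
TAU = 10.0     # assumed temporal tolerance in seconds

@dataclass
class Cluster:
    x: tuple     # xi: (valence, arousal) centre of the blob
    c: int       # ci: number of user inputs assigned to the blob
    t: float     # ti: time of construction / last update

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def n_bs(clusters, x_s, t_s):
    """nBs under the assumed box kernel: blobs within both tolerances."""
    return sum(1 for b in clusters
               if dist(x_s, b.x) <= SIGMA and (t_s - b.t) <= TAU)

def add_input(clusters, x_s, t_s):
    """One clustering step for a user input S = (xs, ts)."""
    if n_bs(clusters, x_s, t_s) < 1:
        clusters.append(Cluster(x=x_s, c=1, t=t_s))         # spawn a new blob
    else:
        b = min(clusters, key=lambda bl: dist(x_s, bl.x))   # nearest blob B
        b.c += 1                                            # cB <- cB + 1
        b.t = t_s                                           # tB <- ts (reset)
    return clusters
```

A new cue would then be handled as add_input(clusters, (valence, arousal), time.time()), with the resulting blobs rendered by the visualisation client.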
16. Interactive Performances
• Interactive performances were held in several locations with different music ensembles:
  • Wilton’s Music Hall, Resonance festival, CMMR 2012, London, UK (with VoXP)
  • Strasbourg Cathedral, exhibitronic#2, Electroacoustic Music Festival, Strasbourg, France (with VoXP)
  • Harold Pinter Drama Studio, New Musical Interfaces’ concert, QMUL, London, UK (rock band)
  • Hack the Barbican, Barbican Centre, London, UK
  • ACII 2013 conference, Geneva, Switzerland (with VoXP)
• User data was logged for further analyses.
17. Interactive Performances
• Cathedral of Strasbourg (with VoXP):
  • The highest number of responses occurred along the diagonal corresponding to tiredness vs. energy in Thayer’s model, with a high number of responses in the negative-low (melancholy, dark, atmospheric) and positive-high (humour, silly, fun) quadrants.

venue | audience size | unique IPs | duration (min.) | number of emotion cues | average responses per sec.
Cathedral of Strasbourg | ~150 | 465 | 15 | 5392 | 6.22
Harold Pinter Drama Studio | ~45 | 68 | 29 | 5429 | 3.72
18. Interactive Performances
• Harold Pinter Drama Studio (with rock band):
  • Similar overall interaction pattern, with a notable difference:
  • a more emphasised cluster of mood indications can be observed in the quadrant corresponding to negative valence and high arousal (aggressive, energetic, brutal).
19. Qualitative Observations
• Interaction using the system gradually evolves and features different interaction types:
  • exploratory interaction
    • typically occurs in the first phase of the performance
  • occasional game-like interaction
    • audience members converge in different quadrants
  • genuinely musical interaction
    • occurs when audiences’ understanding of the system deepens and the interaction slows
20. Evaluation
• The collected data allows
  • simulating (replaying) performances and
  • fine-tuning system parameters (a replay sketch follows below).
• A survey-based evaluation was used to measure how musicians and audiences assess the system.
• This is discussed in a companion paper presented during the poster session on Wednesday.
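Below is a minimal sketch of how a logged performance could be replayed for offline parameter tuning. The CSV log format (one timestamp, valence, arousal triple per line) and the function names are assumptions; consume stands for whatever processes a cue, for instance the add_input step sketched after slide 15.

```python
import csv
import time

def replay(log_path, consume, speed=1.0):
    """Re-feed logged emotion cues at their original (or scaled) pace."""
    with open(log_path, newline="") as f:
        rows = [(float(t), float(v), float(a)) for t, v, a in csv.reader(f)]
    if not rows:
        return
    prev = rows[0][0]
    for t, v, a in rows:
        time.sleep(max(0.0, (t - prev) / speed))  # keep the original pacing
        prev = t
        consume((v, a), t)                        # e.g. the clustering step
```

Running the same log through the clustering with different tolerance settings makes it possible to compare the resulting blob sequences without staging a new performance.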
21. Survey-based Evaluation
• 89% of the audience participants acknowledged the novelty of the performance and the possibility to get actively involved in it.
• 52% of performers found the point-cloud-based visual feedback confusing.
• A new system was created that adds a continuous emotion trajectory to the visualisation and improves the mobile interface.
23. Conclusions
• Mood Conductor opens a new communication channel between the audience and musicians that proved to be effective in several public improvised music performances.
• Mood Conductor allows for examining the interaction between artists and audience using technology.
• The recorded data may be used in music emotion studies and analysed in the context of the recorded audio.
24. Conclusions
• We identified the need to automatically adjust the clustering parameters to the audience size (one illustrative heuristic is sketched below).
• It may be possible to improve the visualisation by employing different clustering strategies or visualisation models.
• An intriguing research question is how to define a reliable and objective measure of coherency that reflects the overall quality of communication between musicians and the audience.
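As a purely illustrative example of the kind of adjustment called for above, the heuristic below widens the temporal tolerance when few people are sending cues (so blobs survive longer) and tightens it for large, fast-responding audiences. Neither the functional form nor the constants come from the paper; they are assumptions made only to show the idea.

```python
def temporal_tolerance(active_users, base_tau=10.0):
    """Scale the temporal tolerance (seconds) to the current audience size."""
    if active_users <= 0:
        return base_tau
    # clamp the scaling so the tolerance stays within 0.5x and 3x of the base
    return base_tau * max(0.5, min(3.0, 30.0 / active_users))
```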