10th International Symposium on Computer Music Multidisciplinary Research
The Mood Conductor System: Audience and Performer Interaction using Mobile Technology and Emotion Cues
György Fazekas, Mathieu Barthet and Mark Sandler
Centre for Digital Music
Queen Mary University of London
School of Electronic Engineering and Computer Science
10th International Symposium on Computer Music Multidisciplinary Research (CMMR'13)
Outline
• Motivation
• Music and Emotion
• Outline of the Mood Conductor System
• A quick video demonstration
• Implementation details
• Interactive performances and data collection
• Evaluation
• Conclusions
Motivation
• Classic concert situation: the audience listens to the music played by performers in a passive manner:
  • typically, interaction with the performers is not possible
  • apart from conventional means (e.g. cheering / discontent).
• Goal: create an interaction between audience and performers acting on musical expression or the improvised composition itself.
Motivation
• Introduce a new chain of communication:
  • Performer <--> Listener
  • vs. the classic chain of communication C -> P -> L
  • P: performer(s); L: listener(s); C: composer
• Use emotion cues as a means of communication between performers and the audience.
Why Emotion?
• Research provides strong evidence of the ability of music to express or induce emotion (Schubert, 1999; Sloboda and Juslin, 2001).
• Recent work (van Zijl and Sloboda, 2010) also showed that performers experience both music-related and practice-related emotions.
Music and Emotion
• Core emotions can be well represented using a continuous two-dimensional space (Russell, 1980; Thayer, 1986),
  • where the dimensions correspond to arousal and valence.
• Arousal is related to excitation or energy.
• Valence is related to pleasantness.
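The arousal-valence (AV) plane also makes it easy to attach categorical mood words to dimensional coordinates. A minimal sketch of that idea follows; the mood words and their coordinates are purely illustrative, not the ones used by Mood Conductor:

```python
# Hypothetical mood words placed on the arousal-valence (AV) plane.
# Coordinates (valence, arousal) are illustrative values in [-1, 1].
MOOD_COORDS = {
    "happy": (0.8, 0.6),
    "calm":  (0.6, -0.6),
    "angry": (-0.7, 0.7),
    "sad":   (-0.6, -0.5),
}

def nearest_mood(valence, arousal):
    """Return the mood word whose AV coordinates are closest to the input."""
    return min(
        MOOD_COORDS,
        key=lambda m: (MOOD_COORDS[m][0] - valence) ** 2
                    + (MOOD_COORDS[m][1] - arousal) ** 2,
    )
```

Such a lookup is one simple way to fuse categorical labels with a dimensional input, which is the direction the interface described next takes.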
Music and Emotion
• For ease of use, we developed an interface that fuses dimensional and categorical models of emotion.
• This is made available as a web-based app suitable for mobile devices.
• URL: http://bit.ly/moodxp
Mood Conductor System
• Audience-indicated emotion cues are
  • collected using a server,
  • clustered in real-time, and
  • visualised.
• Both the audience and the performers can see the visualisation.
• The system also logs all data.
Visualisation examples
System Architecture
• The system has three core components:
  • client interface: mobile application
  • Mood Conductor server
  • visualisation client
• The collected emotion responses are grouped in real-time using a time-constrained clustering process.
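The data flow between the three components can be illustrated with a minimal in-memory sketch. The class and method names below are illustrative stand-ins, not the actual Mood Conductor implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EmotionCue:
    """One audience input: a position on the AV plane plus a timestamp."""
    valence: float
    arousal: float
    timestamp: float

@dataclass
class MoodConductorServer:
    """Minimal stand-in for the server component: collects cues from the
    mobile clients, logs everything, and hands batches to the visualiser."""
    pending: List[EmotionCue] = field(default_factory=list)
    log: List[EmotionCue] = field(default_factory=list)

    def receive(self, cue: EmotionCue) -> None:
        # Cue arrives from a mobile client; keep it for the visualiser
        # and append it to the permanent log ("the system also logs all data").
        self.pending.append(cue)
        self.log.append(cue)

    def drain_for_visualiser(self) -> List[EmotionCue]:
        # Hand the accumulated cues to the visualisation client.
        batch, self.pending = self.pending, []
        return batch
```

In the real system the mobile clients talk to the server over the web and the visualisation client renders the clustered cues on a shared screen; this sketch only shows the message flow.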
Real-time Clustering
• User input is organised using N clusters that correspond to blobs Bi (i = 1, 2, ..., N) visualised on screen.
• Each cluster is associated with the 3-tuple (xi, ci, ti), where
  • xi is the spatial centre of the cluster on the AV plane,
  • ci is the number of observations or user inputs associated with cluster i, and
  • ti represents the time of the cluster object's construction.
Real-time Clustering
• User input represented by S = (xs, ts) is clustered using a spatial and temporal kernel,
• where two server-side parameters represent the spatial and temporal tolerances.
Real-time Clustering
• New clusters are spawned if nbs < 1, i.e. when no existing cluster matches the input within both tolerances.
• Otherwise, the input is assigned to the cluster B that minimises d(xs, xi) over all Bi (i = 1, 2, ..., N).
• The parameters of B are updated such that the input count is increased and the time is reset.
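The time-constrained clustering described above can be sketched in a few lines. This is a minimal sketch under stated assumptions: the function names, the Euclidean distance, and the tolerance values are illustrative — the actual kernel and server-side parameter values are not given on the slides:

```python
import math

class Cluster:
    """A blob Bi = (xi, ci, ti): spatial centre, input count, last-update time."""
    def __init__(self, x, t):
        self.x = x   # (valence, arousal) centre on the AV plane
        self.c = 1   # number of user inputs assigned to the cluster
        self.t = t   # time of construction, reset on each update

def distance(a, b):
    """Euclidean distance on the AV plane (illustrative choice of metric)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def add_input(clusters, xs, ts, eps_space=0.15, eps_time=5.0):
    """Assign input S = (xs, ts) to the nearest matching cluster, or spawn one.

    eps_space and eps_time stand in for the server-side spatial and
    temporal tolerance parameters.
    """
    # Clusters matching both the spatial and the temporal tolerance.
    candidates = [b for b in clusters
                  if distance(xs, b.x) <= eps_space and ts - b.t <= eps_time]
    if not candidates:
        # No matching cluster: spawn a new one.
        clusters.append(Cluster(xs, ts))
        return clusters[-1]
    # Otherwise assign to the cluster minimising d(xs, xi).
    best = min(candidates, key=lambda b: distance(xs, b.x))
    best.c += 1   # input count is increased
    best.t = ts   # the time is reset
    return best
```

Resetting the cluster time on each assignment keeps actively reinforced blobs alive while stale ones stop attracting input, which is what makes the clustering time-constrained.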
Interactive Performances
• Interactive performances were held in several locations with different music ensembles:
  • Wilton's Music Hall, Resonance festival, CMMR 2012, London, UK (with VoXP)
  • Strasbourg Cathedral, exhibitronic#2, Electroacoustic Music Festival, Strasbourg, France (with VoXP)
  • Harold Pinter Drama Studio, New Musical Interfaces' concert, QMUL, London, UK (rock band)
  • Hack the Barbican, Barbican Centre, London, UK
  • ACII 2013 conference, Geneva, Switzerland (with VoXP)
• User data was logged for further analysis.
Interactive Performances
• Cathedral of Strasbourg (with VoXP):
  • the highest number of responses occurred along the diagonal corresponding to tiredness vs. energy in Thayer's model, with a high number of responses in the negative-low (melancholy, dark, atmospheric) and positive-high (humour, silly, fun) quadrants.

venue                      | audience size | unique IPs | duration (min.) | number of emotion cues | average responses per sec.
Cathedral of Strasbourg    | ~150          | 465        | 15              | 5392                   | 6.22
Harold Pinter Drama Studio | ~45           | 68         | 29              | 5429                   | 3.72
Interactive Performances
• Harold Pinter Studio (with rock band):
  • similar overall interaction pattern, with a notable difference:
  • a more emphasised cluster of mood indications can be observed in the quadrant corresponding to negative valence and high arousal (aggressive, energetic, brutal).
Qualitative observations
• Interaction using the system gradually evolves and features different interaction types:
  • exploratory interaction
    • typically occurs in the first phase of the performance
  • occasional game-like interaction
    • audience members converge in different quadrants
  • genuinely musical interaction
    • occurs when the audience's understanding of the system deepens and the interaction slows
Evaluation
• The collected data allows
  • simulating (replaying) performances and
  • fine-tuning system parameters.
• A survey-based evaluation was used to measure how musicians and audiences assess the system.
  • This is discussed in a companion paper presented during the poster session on Wednesday.
Survey-based evaluation
• 89% of the audience participants acknowledged the novelty of the performance and the possibility to get actively involved in it.
• 52% of performers found the point-cloud-based visual feedback system confusing.
• A new system was created that adds a continuous emotion trajectory to the visualisation and improves the mobile interface.
2nd Survey-based evaluation
Conclusions
• Mood Conductor opens a new communication channel between the audience and musicians that proved to be effective in several public improvised music performances.
• Mood Conductor allows for examining the interaction between artists and audience using technology.
• The recorded data may be used in music emotion studies, and analysed in the context of recorded audio.
Conclusions
• Identified the need for automatically adjusting clustering parameters to the audience size.
• It may be possible to improve the visualisation by employing different clustering strategies or visualisation models.
• An intriguing research question is to define a reliable and objective measure of coherency that reflects the overall quality of communication between musicians and the audience.

  • 24. 10th International Symposium on Computer Music Multidisciplinary Research Conclusions • Iden*fied  the  need  for  automa*cally  adjus*ng  clustering   parameters  to  the  audience  size. • It  may  be  possible  to  improve  the  visualisa*on  by   employing  different  clustering  strategies  or  visualisa*on   models • An  intriguing  research  ques*on  is  to  define  a  reliable  and   objec*ve  measure  of  coherency  that  reflects  the  overall   quality  of  communica*on  between  musicians  and  the   audience.