
Sound Events and Emotions: Investigating the Relation of Rhythmic Characteristics and Arousal

A variety of recent studies in Audio Emotion Recognition (AER) report high performance and retrieval accuracy. However, most of these works treat music as the sound content conveying the identified emotions, and rhythm-related acoustic cues are found to be one of the fundamental musical means of conveying emotion. Although music is an important part of everyday life, humans are also surrounded by numerous non-linguistic and non-musical sounds, generally defined as sound events (SEs). Despite the considerable impact of SEs on humans, investigations of AER from SEs remain scarce: only a few recent studies address SEs and AER, presenting a semantic connection between an SE and the emotion it triggers in the listener. In this work we systematically investigate the connection between rhythm-related characteristics of a wide range of common SEs with semantic content and the arousal of the listener. To this aim, several feature evaluation and classification tasks are conducted using different ranking and classification algorithms. High accuracy results are obtained, demonstrating a significant relation between the rhythmic characteristics of SEs and the elicited arousal.
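The pipeline the abstract describes, extracting rhythm-related features from sound events, ranking them, and classifying listener arousal, can be sketched as follows. This is a minimal illustration on synthetic data: the feature names are invented for the example (not the paper's feature set), correlation-based ranking stands in for the paper's ranking algorithms, and a nearest-centroid classifier stands in for its classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a sound-event corpus: each row holds
# rhythm-related features (names are illustrative, not the paper's).
feature_names = ["tempo_bpm", "onset_rate", "beat_strength", "pulse_clarity"]
n = 200
X = rng.normal(size=(n, len(feature_names)))
# Binary arousal label loosely driven by the first two features, plus noise.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Feature evaluation: rank features by |correlation| with the arousal label.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
ranking = [feature_names[j] for j in np.argsort(scores)[::-1]]
print("feature ranking:", ranking)

# Classification: nearest-centroid classifier, scored on a held-out split.
split = n // 2
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]
centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Xte[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
acc = (pred == yte).mean()
print(f"hold-out accuracy: {acc:.2f}")
```

With this construction the two informative features come out on top of the ranking and the classifier clearly beats chance, mirroring the shape (though not the specifics) of the feature-evaluation and classification tasks reported in the work.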

Sound Events and Emotions: Investigating the
Relation of Rhythmic Characteristics and Arousal
K. Drossos 1 R. Kotsakis 2 G. Kalliris 2 A. Floros 1
1Digital Audio Processing & Applications Group, Audiovisual Signal Processing
Laboratory, Dept. of Audiovisual Arts, Ionian University, Corfu, Greece
2Laboratory of Electronic Media, Dept. of Journalism and Mass Communication, Aristotle
University of Thessaloniki, Thessaloniki, Greece
Agenda
1. Introduction
Everyday life
Sound and emotions
2. Objectives
3. Experimental Sequence
Experimental Procedure’s Layout
Sound corpus & emotional model
Processing of Sounds
Machine learning tests
4. Results
Feature evaluation
Classification
5. Discussion & Conclusions
6. Future work
Everyday life
Stimuli
Many stimuli, e.g.
Visual
Aural
Interaction with stimuli
Reactions
Emotions felt

Everyday life
Stimuli
Many stimuli, e.g.
Visual
Aural
Structured form of sound
General sound
Interaction with stimuli
Reactions
Emotions felt
Sound and emotions
Music and emotions
Music
Structured form of sound
Primarily used to mimic and extend voice characteristics
Enhancing the conveyance of emotion(s)
Related to emotions through (but not only) MIR and MER
Music Information Retrieval (MIR)
Music Emotion Recognition (MER)
  • 17–22. Sound and emotions — Music and emotions (cont’d): research results & applications include accuracy up to 85% in large music databases; as opposed to the typical “Artist/Genre/Year” classification, music can be categorised and retrieved according to emotion, with preliminary applications in emotion-based music synthesis.
  • 23–28. Sound and emotions — From music to sound events: sound events are a non-structured/general form of sound carrying additional information about the source, the environment, and the sound-producing mechanism; they are apparent almost any time, if not always, in everyday life.
  • 29–34. Sound and emotions — From music to sound events (cont’d): sound events are used in many applications (audio interactions/interfaces, video games, soundscapes, artificial acoustic environments); they trigger reactions and elicit emotions.
  • 35. 1. Introduction (Everyday life; Sound and emotions) 2. Objectives 3. Experimental Sequence (Experimental Procedure’s Layout; Sound corpus & emotional model; Processing of Sounds; Machine learning tests) 4. Results (Feature evaluation; Classification) 5. Discussion & Conclusions 6. Future work
  • 36–39. Objectives of the current study — Data: aural stimuli can elicit emotions; music is a structured form of sound, whereas sound events are not; music’s rhythm is arousing. Research question: is sound events’ rhythm arousing too?
  • 41–46. Experimental Procedure’s Layout: emotional model selection; annotated sound corpus collection; processing of sounds (pre-processing, feature extraction); machine learning tests.
  • 47–52. Sound corpus & emotional model — General characteristics: the sound corpus is the International Affective Digital Sounds (IADS) set (167 sounds, 6 s duration each, variable sampling frequency); the emotional model is continuous, with sounds annotated on Arousal–Valence–Dominance using the Self-Assessment Manikin (S.A.M.).
  • 53–58. Processing of Sounds — Pre-processing: normalisation; segmentation/windowing with different window lengths, ranging from 0.8 to 2.0 seconds in 0.2-second increments; clustering into two classes (Class A: 24 members; Class B: 143 members).
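The segmentation/windowing step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the hop/overlap behaviour is not stated on the slides, so non-overlapping windows are assumed here, and the 44.1 kHz rate and random signal are placeholders for the variably-sampled IADS clips.

```python
import numpy as np

def segment(signal, sr, win_sec):
    """Split a mono signal into non-overlapping windows of win_sec seconds (tail dropped)."""
    win = int(round(win_sec * sr))
    n = len(signal) // win
    return signal[: n * win].reshape(n, win)

sr = 44100
x = np.random.default_rng(0).standard_normal(6 * sr)   # toy 6-second clip, as in IADS
segments = {round(float(w), 1): segment(x, sr, w)
            for w in np.arange(0.8, 2.01, 0.2)}        # 0.8 ... 2.0 s, 0.2 s steps
```

Each entry of `segments` is then a (num_windows, window_samples) matrix from which per-segment features can be extracted.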
  • 59–63. Processing of Sounds — Feature extraction: only rhythm-related features were extracted, per segment, with statistical measures applied to each, yielding a set of 26 features in total. Extracted features: beat spectrum, onsets, tempo, fluctuation, event density, pulse clarity. Statistical measures: mean, standard deviation, gradient, kurtosis, skewness.
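The statistical summarisation named on the slide can be sketched in plain NumPy. The exact definitions used in the original work are assumptions here: "gradient" is read as the mean local slope of the feature series, and kurtosis as excess kurtosis.

```python
import numpy as np

def summarise(series):
    """Compute the slide's five statistics over a per-frame rhythm-feature series."""
    x = np.asarray(series, dtype=float)
    m, s = x.mean(), x.std()
    z = (x - m) / s                            # standardised values
    return {
        "mean": m,
        "std": s,
        "gradient": np.mean(np.gradient(x)),   # mean local slope (one plausible reading)
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4) - 3.0,     # excess kurtosis
    }

stats = summarise([0.1, 0.4, 0.2, 0.9, 0.3])   # e.g. a toy onset-strength series
```

Applying such a summary to each rhythm descriptor per segment is one way the slide's 26-dimensional feature set could be assembled.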
  • 64–66. Machine learning tests — Feature evaluation. Objectives: identify the most valuable features for arousal detection, and the dependencies between the selected features and the different window lengths. Algorithms used: InfoGainAttributeEval and SVMAttributeEval (WEKA software).
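As a rough analogue of WEKA's InfoGainAttributeEval (a sketch, not the original tool), information gain can be computed for discretised features against a binary high/low-arousal class; the toy feature bins below are hypothetical.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(feature, labels):
    """H(class) minus H(class | discretised feature value)."""
    h = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        h -= mask.mean() * entropy(labels[mask])
    return h

y = np.array([0, 0, 0, 1, 1, 1])              # toy low/high arousal labels
f_informative = np.array([0, 0, 0, 1, 1, 1])  # bin perfectly tracks the class
f_noise = np.array([0, 1, 0, 1, 0, 1])        # bin carries little class information
gains = {"informative": info_gain(f_informative, y),
         "noise": info_gain(f_noise, y)}
```

Ranking features by such gain values is what produces the upper/lower feature groups reported in the results.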
  • 67–69. Machine learning tests — Classification. Objectives: arousal classification based on rhythm, and dependencies between the different window lengths and the classification results. Algorithms used: artificial neural networks, logistic regression, and K-Nearest Neighbours (WEKA software).
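The WEKA classifiers themselves are not reproduced here; as a minimal stand-in, a from-scratch k-nearest-neighbour classifier over a toy rhythm-feature matrix shows the shape of the classification step (two arousal classes, as in the Class A/Class B split). The cluster centres and resubstitution evaluation are purely illustrative.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority vote over the k nearest training points (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)      # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]         # labels of the k nearest
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

rng = np.random.default_rng(1)
X_a = rng.normal(0.0, 0.3, (20, 4))   # toy "low arousal" feature vectors
X_b = rng.normal(2.0, 0.3, (20, 4))   # toy "high arousal" feature vectors
X = np.vstack([X_a, X_b])
y = np.array([0] * 20 + [1] * 20)
preds = knn_predict(X, y, X)          # resubstitution, for illustration only
accuracy = (preds == y).mean()
```

In the actual study, cross-validated accuracy of such classifiers over the 26 rhythm features is what yields the scores reported next.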
  • 71–74. Feature evaluation — General results: two groups of features were formed (13 features each), an upper and a lower group; the rank within each group was not always the same, but the upper and lower groups remained constant for all window lengths.
  • 75. Feature evaluation — Upper group: beat spectrum std, event density std, onsets gradient, fluctuation kurtosis, beat spectrum gradient, pulse clarity std, fluctuation mean, fluctuation std, fluctuation skewness, onsets skewness, pulse clarity kurtosis, event density kurtosis, onsets mean.
  • 76–81. Classification — General results: the features were relatively uncorrelated, with variations across the different window lengths. Accuracy results: highest score 88.37% (window length 1.0 second, logistic regression); lowest score 71.26% (window length 1.4 seconds, artificial neural network).
  • 83–88. Feature evaluation — Features: the most informative features were rhythm’s periodicity at the auditory channel (fluctuation), onsets, event density, beat spectrum, and pulse clarity, independent of semantic content.
  • 89–92. Classification results — Algorithm-related: LR’s minimum score was 81.44%; KNN’s minimum score was 82.05%. Results-related: sound events’ rhythm affects arousal; thus, sound’s rhythm in general affects arousal.
  • 94–95. Future work — Features & dimensions: other features related to arousal; connection of features with valence.