Presentation slides for a paper in the Artificial Intelligence course, on models for melody identification.
"Towards a Computational Model of Melody Identification in Polyphonic Music"
Paper presented at the workshop “Music, Mind, Invention”, 30–31 March 2012, Ewing, NJ.
When a 2D Fourier Transform is applied to the piano-roll plots often used in sequencer software, the resulting 2D graphic is a novel music visualization that reveals internal musical structure. The visualization converts the set of musical notes from the notation display of the piano-roll plot into a display of structure over time and spectrum within a set musical time period. The transformation is reversible, which means it can also be used as a novel interface for editing music. The concept is demonstrated by software that reads MIDI files and creates the visualization with the Fast Fourier Transform (FFT) algorithm. The software shows the visualization live, in real time, during playback of MIDI files or from input on a connected MIDI keyboard. The resulting display is independent of pitch transposition or tempo. This visualization approach can be used for musicology studies, music fingerprinting, comparing composition styles, and as a new creative composition method.
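The core of the pipeline can be sketched in a few lines: a piano roll is just a 2D pitch-by-time matrix, so its 2D spectrum is a single FFT call. This is a minimal sketch, not the paper's software; MIDI parsing and the real-time display are omitted, and the note data is a hypothetical four-note figure.

```python
import numpy as np

# A piano roll as a binary matrix: rows = MIDI pitches (0-127),
# columns = time steps. Hypothetical ascending figure C-D-E-F.
roll = np.zeros((128, 64))
for step, pitch in enumerate([60, 62, 64, 65]):
    roll[pitch, step * 16:(step + 1) * 16] = 1.0

# 2D FFT of the roll; the magnitude spectrum (shifted so the zero
# frequency is centered) is the visualization itself.
spectrum = np.fft.fft2(roll)
magnitude = np.abs(np.fft.fftshift(spectrum))

# The transform is invertible, so the note data can be recovered,
# which is what makes the visualization usable as an editing interface.
recovered = np.fft.ifft2(spectrum).real
print(np.allclose(recovered, roll))  # True
```

Pitch transposition and tempo changes only shift or rescale the roll, which is why the magnitude display is largely unaffected by them.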
Demixing Commercial Music Productions via Human-Assisted Time-Frequency Masking (dhia_naruto)
This convention paper has been reproduced from the author’s advance manuscript, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for the contents. Additional papers may be obtained by sending request and remittance to Audio Engineering Society, 60 East 42nd Street, New York, New York 10165-2520, USA; also see www.aes.org. All rights reserved. Reproduction of this paper, or any portion thereof, is not permitted without direct permission from the Journal of the Audio Engineering Society.
Avaliação Heurística de um Ambiente Virtual para Análise de Rotas de Execução... (Ronildo Oliveira)
Presentation slides for a paper in the Software Maintenance course.
"Avaliação Heurística de um Ambiente Virtual para Análise de Rotas de Execução de Software"
A relevância da participação em centros acadêmicos para a formação complement... (Ronildo Oliveira)
Presentation slides for a paper presented at the Encontros Universitários UFC 2016.
"A relevância da participação em centros acadêmicos para a formação complementar em computação"
Relato de Experiência de Monitoria da Disciplina de Estrutura de Dados, Estr... (Ronildo Oliveira)
Presentation slides for a paper presented at the Encontros Universitários UFC 2016.
"Relato de Experiência de Monitoria da Disciplina de Estrutura de Dados, Estrutura de Dados Avançada e do Projeto Almoço com Código"
This document discusses 11 applications of machine learning to music research, focusing on expressive music performance. It describes two approaches - learning at the note level and learning at the structure level. For the note level approach, it uses a system called IBL-SMART that learns rules to determine loudness and tempo for each note. For the structure level approach, it analyzes musical structures like phrases and learns prototypical expression shapes associated with them. It presents experiments applying these approaches to classical pieces, finding the structure level approach produced more musically convincing results.
This document discusses measuring melodic complexity through algorithmic methods and psychological experiments. The key points are:
1) Various algorithms were tested to measure melodic complexity, including entropies, Zipf complexity, and n-gram redundancy of pitch, interval, and duration sequences.
2) Two listening experiments with 47 subjects found that judged melodic complexity correlated most strongly with simply counting the notes.
3) When controlling for note count, first-order metrical entropy showed the highest correlation with judged complexity, indicating that meter is an important dimension of perceived melodic complexity.
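The entropy measures in point 1 are straightforward to compute. As a hedged sketch (the melody and the choice of unigram pitch entropy are illustrative assumptions, not the paper's exact metric):

```python
from collections import Counter
from math import log2

def first_order_entropy(sequence):
    """Shannon entropy (in bits) of the symbol distribution of a sequence.
    A melody that repeats one note has entropy 0; more varied sequences
    score higher, which is the intuition behind entropy-based complexity."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical melody as MIDI pitch numbers.
melody = [60, 62, 64, 62, 60, 60, 67, 65]
print(round(first_order_entropy(melody), 3))
```

The same function applies unchanged to interval or duration sequences, which is how the pitch, interval, and duration variants in point 1 differ only in their input.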
Is Independent (Re-)Creation Likely To Happen In Pop Music? (ESCOM 2009) (Klaus Frieler)
We present an empirical study in which strong arguments were found that independent re-creations are not unlikely to happen in pop music, given the stylistic and cognitive constraints.
Audio Morphing for Percussive Sound Generation (a3labdsp)
The aim of audio morphing algorithms is to combine two or more sounds to create a new sound with intermediate timbre and duration. During the last two decades several efforts have been made to improve morphing algorithms in order to obtain more realistic and perceptually relevant sounds. In this paper we present an automatic audio morphing technique applied to percussive musical instruments. Based on preprocessing of the sound references in frequency domain and linear interpolation in time domain, the presented approach allows one to generate high quality hybrid sounds at a low computational cost. Several results are reported in order to show the effectiveness of the proposed approach in terms of audio quality and acoustic perception of the generated hybrid sounds, taking into consideration different percussive samples. Mean opinion score and multidimensional scaling were used to compare the presented approach with existing state of the art techniques.
This document provides an overview of the PHYS207: Physics of Music course. It introduces key topics that will be covered such as sound waves, resonance, vibration of strings and membranes, harmonics, consonance and dissonance. The course will investigate relationships between the perceptual and physical attributes of musical sound using elementary physics concepts and will explore questions about the roles of imperfections and sound in music.
Computational models of symphonic music (Emilia Gómez)
Computational models of symphonic music face various challenges due to the genre's formal complexity, long durations, complex instrumentation, and overlapping sources. Researchers are developing approaches to address melody extraction, structural analysis, source separation, and music visualization for symphonic works. For melody extraction, current methods perform best on simple excerpts but struggle with density and complexity, indicating the need for combined audio-score approaches. Structural analysis of symphonies requires consideration of tonality, orchestration, and discrepancies between expert analyses. Source separation aims to isolate instrument sections from multi-channel recordings.
Intelligent real-time music accompaniment for constraint-free improvisation (Andreas Floros)
Computational Intelligence encompasses tools that allow fast convergence and adaptation to several problems, which makes them eligible for real-time implementations. The paper at hand discusses the use of intelligent algorithms (i.e., Differential Evolution and Genetic Algorithms) for the creation of an adaptive system able to provide real-time automatic music accompaniment to a human improviser. The main goal of the presented system is to generate accompanying music based on the local human musician’s tonal, rhythmic, and intensity playing style, incorporating no prior knowledge about the improviser’s intentions. Compared to existing systems previously proposed, this work introduces a constraint-free improvisation environment where the most important musical characteristics are automatically adapted to the human performer’s playing style, without any prior information. This allows the improviser maximal control over the tonal, rhythmic, and intensity improvisation directions.
This document summarizes a research paper that introduces a probabilistic model for analyzing line spectra, which are sets of prominent frequency components, in musical instrument sounds. The model assumes observations in a time frame are generated by a mixture of notes composed of partials and noise. For piano music specifically, the model introduces fundamental frequency and inharmonicity coefficient as parameters for each note that can be estimated from line spectra using an Expectation-Maximization algorithm. The paper applies this technique to unsupervised estimation of tuning and inharmonicity across the range of a piano from a recorded musical piece.
Graphical visualization of musical emotions (Pranay Prasoon)
The document discusses graphical visualization of musical emotions using artificial neural networks. 13 audio features are extracted from Hindustani classical music clips labeled as happy or sad. An ANN model with backpropagation algorithm is trained on 70% of data, validated on 15% and tested on 15%. The model correctly classified 15 of 17 happy clips and 21 of 22 sad clips. Testing was repeated 10 times with over 90% accuracy each time, showing the model effectively recognizes musical emotions. Future work involves expanding the model to recognize additional emotions and incorporating physiological features.
The document discusses the mathematical underpinnings of musical tuning systems. It begins by explaining how vibration and the wave equation relate to the physics of sound production in instruments. It then discusses how the human ear perceives sound as a Fourier transform. The document explores how rational frequency ratios between notes allow for matching of harmonic partials, producing consonant intervals. It frames musical tuning systems as subsets of the rational numbers that preserve consonant intervals under multiplication. Finally, it introduces the concept of a homomorphism from the rank-3 module defined by the primes 2, 3, and 5 to a rank-1 module, in order to make the set of available pitches finite within an octave.
There are two aspects of dissonance perception: learned/top-down and innate/bottom-up. Sensory dissonance can be modeled using either auditory models based on the auditory periphery or curve-mapping models based on empirical data. Computer programs that simulate sensory dissonance processing can estimate the degree of dissonance for a given sound. The models were tested on piano music, drone music, and synthesized chords by comparing their predictions of dissonance to participant ratings. The curve-mapping models predicted ratings reasonably well for isolated chords and drone music but not piano music, possibly due to non-sensory influences on ratings for more complex music.
Audio Art Authentication and Classification with Wavelet Statistics (Waqas Tariq)
An experimental computational technique for audio art authentication is presented. Specifically, the computational techniques used in painting/drawing art authentication are transformed from two-dimensional (image) into one-dimensional (audio) methods. The statistical model consists of first- and higher-order wavelet statistics. Classification is performed with a multi-dimensionally scaled 3D visual model. Results from analyses of music/silence discrimination, audio art authentication, genre classification, and audio fingerprinting are demonstrated.
Harmony Search as a Metaheuristic Algorithm (Xin-She Yang)
This document discusses Harmony Search, a metaheuristic algorithm inspired by music improvisation. It outlines the fundamental steps of Harmony Search, analyzing why it is an effective metaheuristic approach. It also briefly reviews other popular metaheuristics like particle swarm optimization, comparing their similarities and differences to Harmony Search. The document provides examples applying Harmony Search to optimization problems and discusses ways to improve and develop new variants of the algorithm.
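The fundamental steps referred to above — memory consideration, pitch adjustment, and random selection — can be sketched as a minimal minimizer. This is an illustrative sketch under assumed parameter values (hms, hmcr, par, bw) and a hypothetical test function, not Yang's reference implementation.

```python
import random

def harmony_search(f, dim=2, bounds=(-5.0, 5.0), hms=10,
                   hmcr=0.9, par=0.3, bw=0.2, iters=2000, seed=1):
    """Minimize f over a box using the basic Harmony Search steps:
    memory consideration (rate hmcr), pitch adjustment (rate par,
    bandwidth bw), and random selection otherwise."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Harmony memory: hms candidate solutions and their scores.
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: scores[i])
        if f(new) < scores[worst]:             # replace the worst harmony
            memory[worst], scores[worst] = new, f(new)
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Hypothetical test problem: the sphere function, minimum 0 at the origin.
sol, val = harmony_search(lambda v: sum(x * x for x in v))
print(sol, val)
```

The three branches mirror the musical metaphor: a player recalls a note from rehearsed harmonies, bends it slightly, or plays something entirely new.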
The document discusses developing a model to compose monophonic world music using deep learning techniques. It proposes using a bi-axial recurrent neural network with one axis representing time and the other representing musical notes. The network will be trained on a dataset of MIDI files describing pitch, timing, and velocity of notes. It will also incorporate information from music theory on scales, chords, and other elements extracted from sheet music files. The goal is to generate unique musical sequences while adhering to music theory rules. The model aims to address the problem of composing long durations of background music for public spaces in an automated way.
Music Information Retrieval is about retrieving information from music entities.
The slides introduce the basic concepts of the music language, pass through different kinds of music representations, and end by describing some low-level features used when dealing with music entities.
The KUSC classical music dataset for audio key finding (ijma)
In this paper, we present a benchmark dataset based on the KUSC classical music collection and provide baseline key-finding comparison results. Audio key finding is a basic music information retrieval task; it forms an essential component of systems for music segmentation, similarity assessment, and mood detection. Due to copyright restrictions and a labor-intensive annotation process, audio key-finding algorithms have only been evaluated using small proprietary datasets to date. To create a common base for systematic comparisons, we have constructed a dataset comprising more than 3,000 excerpts of classical music. The excerpts are made publicly accessible via commonly used acoustic features such as pitch-based spectrograms and chromagrams. We introduce a hybrid annotation scheme that combines the use of title keys with expert validation and correction of only the challenging cases. The expert musicians also provide ratings of key recognition difficulty. Other meta-data include instrumentation. As a demonstration of the use of the dataset, and to provide initial benchmark comparisons for evaluating new algorithms, we conduct a series of experiments reporting the key determination accuracy of four state-of-the-art algorithms. We further show the importance of considering factors such as estimated tuning frequency, key strength or confidence value, and key recognition difficulty in key finding. In the future, we plan to expand the dataset to include meta-data for other music information retrieval tasks.
The document discusses the mathematical connections between music and art, focusing on the golden ratio. It provides background on the golden ratio, including its relationship to the Fibonacci sequence and its prevalence in nature, architecture, and the human body. Examples are given of how the golden ratio is incorporated into musical structures like time signatures and note lengths. In art, the golden ratio is seen in famous works like the Mona Lisa and influences techniques like composition.
Constantine Kotropoulos, Associate Professor, Aristotle University of Thessaloniki, Department of Informatics, Sparse and Low Rank Representations in Music Signal Analysis
The document discusses mobile game development, mentioning the LibGDX framework, which enables cross-platform development, and covers topics such as sprites, animations, collisions, and splitting classes by responsibility. The author also shares a link to the source code of a game that can be built in 2 hours.
Documento de Requisitos do Sistema - Meu Telefone (Ronildo Oliveira)
The document presents the functional and non-functional requirements of the Meu Telefone system, which lets users top up credit, check their balance and packages, and track usage. It describes the use cases, actors, business rules, and diagrams representing the system architecture.
Calculo I - Uma Breve Introdução ao Estudo de IntegraisRonildo Oliveira
1) O documento apresenta uma breve introdução sobre o estudo de integrais definidas e indefinidas, incluindo definições, métodos de cálculo e exemplos.
2) Aborda conceitos como primitivas, integrais indefinidas e definidas, método de substituição e integral de Riemann.
3) Inclui uma tabela de integrais comuns e exemplos numéricos de cálculo.
1) O documento discute deadlocks em sistemas operacionais, incluindo suas condições, detecção e prevenção.
2) É apresentado o Algoritmo do Banqueiro para evitar deadlocks alocando recursos de forma segura.
3) As técnicas de detecção incluem modelagem de impasses usando grafos de recursos e algoritmos para identificar ciclos nesses grafos.
Este documento discute vários tópicos relacionados a sistemas operacionais, incluindo gerenciamento de memória, sistemas de arquivos, E/S e multiprocessamento. Aborda conceitos como tabelas de páginas, alocação de memória, fragmentação, drivers de dispositivo, RAID e impasses. Faz referência a um livro texto sobre sistemas operacionais modernos.
Este documento discute conceitos básicos de sistemas operacionais, incluindo processos, espaços de endereçamento, sistemas de arquivos, entrada e saída, proteção, e diferentes arquiteturas como monolíticas, em camadas, microkernels e client-servidor.
Fases do desenvolvimento de software baseado no código de ética.Ronildo Oliveira
O documento discute as principais fases do desenvolvimento de software, incluindo levantamento de requisitos, projeto, implementação, testes e manutenção. A ética no desenvolvimento de software é destacada como um princípio fundamental nas diferentes fases.
Exercícios Resolvidos - Arquitetura e Organização de ComputadoresRonildo Oliveira
A empresa de tecnologia anunciou um novo smartphone com câmera aprimorada, maior tela e melhor desempenho. O dispositivo também possui recursos adicionais de inteligência artificial e segurança de dados aprimorados. O lançamento do novo smartphone está programado para o final deste ano.
Curso Android - 02 configuração do ambiente (Tutorial de Instalação Eclipse +...Ronildo Oliveira
Este documento descreve as configurações necessárias para instalar o ambiente de desenvolvimento Android, incluindo o Eclipse IDE, SDK Android, e como criar um dispositivo virtual.
O documento descreve o sistema operacional Android, incluindo sua história, versões, arquitetura e como desenvolver aplicativos para a plataforma usando Java no Eclipse ou outros ambientes de desenvolvimento.
O documento descreve conceitos fundamentais do Android, incluindo atividades e seu ciclo de vida. Uma atividade representa uma tela e controla eventos nela. Cada atividade possui um ciclo de vida definido por métodos como onCreate(), onStart(), onResume() que gerenciam seu estado conforme mudanças no aplicativo ou dispositivo.
A atividade passa por vários estados durante seu ciclo de vida incluindo onCreate(), onStart(), onResume(), onPause(), onStop() e onDestroy(). A atividade pode ser reaberta pelo usuário ou destruída se o processo precisar de mais memória.
A arquitetura do Android é dividida em aplicações, quarto de aplicações, bibliotecas e núcleo Linux. O quarto de aplicações gerencia atividades, janelas e recursos. As bibliotecas incluem gerenciamento de mídia, SQLite, WebKit e a máquina virtual Dalvik. O núcleo Linux controla dispositivos como display, câmera e Bluetooth.
Este documento apresenta um minicurso sobre desenvolvimento de aplicações para a plataforma Android. Apresenta os ministrantes Ronildo Oliveira da Silva e Derig Almeida Vidal, explica o que é Android, suas versões, estrutura, conceitos básicos como Activity, R.java, findViewById, Manifest e Layout. Finaliza com um passo a passo para criar um projeto Hello World e referências bibliográficas.
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
The Building Blocks of QuestDB, a Time Series Databasejavier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
Challenges of Nation Building-1.pptx with more important
Towards a Computational Model of Melody Identification in Polyphonic Music
1. Towards a Computational Model of Melody Identification in Polyphonic Music
Søren Tjagvad Madsen¹, Gerhard Widmer²
¹ Austrian Research Institute for Artificial Intelligence, Vienna
² Department of Computational Perception, Johannes Kepler University, Linz
IJCAI (International Joint Conference on Artificial Intelligence)
Ronildo Oliveira da Silva
January 9, 2017
2. Contents
1 Introduction
2 Complexity and Melody Perception
3 A Computational Model
4 Experiments
5 Discussion
Søren Madsen, Gerhard Widmer. Towards a Computational Model of Melody Identification in Polyphonic Music. 2 / 22
3. Introduction
1 Melody is a central dimension in almost all music.
2 It is not easy to define the concept of ‘melody’.
3 In a sense, which notes constitute the melody is defined by where listeners perceive the most interesting things to be going on in the music.
4 This paper presents first steps towards a simple, robust computational model of automatic melody note identification, based on results from musicology and music psychology.
5 We introduce a simple, straightforward measure of melodic complexity based on entropy, and present an algorithm for predicting the most likely melody note at any point in a piece.
4. Complexity and Melody Perception
The basic motivation for our model of melody identification is the observation that there seems to be a connection between the complexity of a musical line and the amount of attention a listener will devote to it.
We show that the complexity or information content of a sequence of notes may be directly related to the degree to which the note sequence is perceived as being part of the melody. Two kinds of complexity measures are considered:
a measure of complexity based only on note-level entropies;
measures based on pattern compression and top-down heuristics derived from music theory.
5. A Computational Model
The basic idea of the model is to calculate a series of complexity values locally. Based on this series of local complexity estimates, the melody is then reconstructed note by note by a simple algorithm. The information measures are calculated from the structural core of the music alone: a digital representation of the printed score, such as a MIDI (Musical Instrument Digital Interface) file.
6. The Sliding Window
The algorithm operates by examining in turn a small subset of the notes in the score: a fixed-length window is slid from left to right over the score. The next window position is determined by:
1 the offset of the first ending note in the current window;
2 the onset of the next note after the current window.
7. The Sliding Window
From the notes belonging to the same voice (instrument) in the window, we calculate a complexity value. We do this for each voice present in the window.
8. Entropy Measures in Musical Dimensions
Shannon’s entropy [Shannon, 1948] is a measure of the randomness or uncertainty in a signal. If the predictability is high, the entropy is low, and vice versa:
a uniform distribution of events means low predictability (high entropy);
a non-uniform distribution means high predictability (low entropy).
Let X = {x1, x2, ..., xn} and p(x) = Pr(X = x). X could for example be the set of MIDI pitch numbers, and p(x) would then be the probability (estimated by the frequency) of a certain pitch.
9. Entropy Measures in Musical Dimensions
Let X = {x1, x2, ..., xn} and p(x) = Pr(X = x); the entropy H(X) is then defined as:
H(X) = − Σ_{x∈X} p(x) log2 p(x)
p(x) would then be the probability (estimated by the frequency) of a certain pitch.
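The entropy computation above can be sketched in a few lines of Python (an illustration with frequency-estimated probabilities, not the authors' implementation):

```python
from collections import Counter
from math import log2

def entropy(events):
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x), with p estimated by frequency."""
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Four equally likely MIDI pitches: maximal uncertainty for four symbols.
print(entropy([60, 62, 64, 65]))  # 2.0 bits
# A repeated pitch is fully predictable.
print(entropy([60, 60, 60, 60]))  # 0.0 bits
```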
10. Entropy Measures in Musical Dimensions
We calculate the entropy of ‘features’ extracted from the notes in monophonic lines, using features related to the pitch and duration of the notes:
1 Pitch class (C): count the occurrences of the different pitch classes present (the term pitch class refers to the ‘name’ of a note);
2 MIDI interval (I): count the occurrences of each melodic interval present (e.g., minor second up, major third down, ...);
3 Note duration (D): count the number of note duration classes present, where duration classes are derived by discretisation (a duration is given its own class if it is not within 10% of an existing class).
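The three event extractors can be sketched as follows. The greedy, first-match assignment order in the duration discretisation is an assumption, since the slides only state the 10% rule; feeding these event sequences into a Shannon entropy computation yields HC, HI and HD:

```python
def pitch_classes(midi_pitches):
    """Pitch class = MIDI pitch modulo 12 (C=0, C#=1, ..., B=11)."""
    return [p % 12 for p in midi_pitches]

def melodic_intervals(midi_pitches):
    """Signed intervals in semitones between consecutive notes."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

def duration_classes(durations, tol=0.10):
    """Assign a duration to an existing class if within 10% of it, else open a new class."""
    centers, labels = [], []
    for d in durations:
        for i, c in enumerate(centers):
            if abs(d - c) <= tol * c:
                labels.append(i)
                break
        else:
            centers.append(d)
            labels.append(len(centers) - 1)
    return labels

print(pitch_classes([60, 64, 72]))         # [0, 4, 0]: C, E, C
print(melodic_intervals([60, 64, 59]))     # [4, -5]
print(duration_classes([1.0, 1.05, 0.5]))  # [0, 0, 1]
```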
11. Entropy Measures in Musical Dimensions
With each measure we extract events from a given sequence of notes and calculate the entropy from the frequencies of these events (HC, HI, HD).
So far, rhythm and pitch are treated separately. We have also included a measure HCID weighting the above three measures:
HCID = (1/4)(HC + HI) + (1/2) HD
Entropy is also defined for a pair of random variables with joint distribution:
H(X, Y) = − Σ_{x∈X} Σ_{y∈Y} p(x, y) log2 p(x, y)
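A small sketch of the weighted combination and the joint entropy, again with probabilities estimated by relative frequency (illustrative only):

```python
from collections import Counter
from math import log2

def h_cid(h_c, h_i, h_d):
    """Weighted combination from the slides: HCID = (HC + HI)/4 + HD/2."""
    return 0.25 * (h_c + h_i) + 0.5 * h_d

def joint_entropy(pairs):
    """H(X, Y) = -sum over (x, y) of p(x, y) log2 p(x, y)."""
    counts = Counter(pairs)
    n = len(pairs)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(h_cid(2.0, 2.0, 1.0))                             # 1.5
print(joint_entropy([(0, 1), (1, 0), (0, 1), (1, 0)]))  # 1.0 bit
```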
12. An Alternative: Complexity via Compression
The entropy function is a purely statistical measure related to the frequency of events; no relationships between events are measured. For example, the sequences abcabcabc and abcbcacab result in the same entropy value, although the first is clearly more regular. A compression-based measure can capture such repeated patterns.
13. Predicting Melody Notes
The prediction period pi is the interval between the beginning of window wi and the beginning of window wi+1. Let o(pi) denote the windows overlapping pi. Then:
The average complexity value for each voice present in the windows in o(pi) is calculated.
The voices are ranked according to their average complexity over o(pi).
Every note in wi gets its melody attribute set to true if it is part of the winning voice, and to false otherwise.
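The per-period decision rule above can be sketched as follows; the dict-of-averages input is a hypothetical representation for illustration, not the authors' data structure:

```python
def winning_voices(period_complexities):
    """Pick, per prediction period, the voice with the highest average complexity.

    `period_complexities` is a list of dicts mapping voice name to its average
    complexity over the windows overlapping that period (hypothetical format).
    Notes of the winning voice would get melody=True, all others melody=False.
    """
    return [max(voices, key=voices.get) for voices in period_complexities]

periods = [
    {"violin 1": 2.1, "violin 2": 1.4, "cello": 0.8},
    {"violin 1": 1.0, "violin 2": 1.8, "cello": 0.9},
]
print(winning_voices(periods))  # ['violin 1', 'violin 2']
```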
14. Predicting Melody Notes
15. The Musical Test Corpus
1 Haydn, F.J.: String Quartet No. 58, Op. 54 No. 2, in C major, 1st movement
2 Mozart, W.A.: Symphony No. 40 in G minor (KV 550), 1st movement
16. The Musical Test Corpus: Annotating Melody Notes
17. The Musical Test Corpus: Evaluation Method
We can now measure how well the predicted notes correspond to the annotated melody in the score. We express this in terms of recall (R) and precision (P) values:
Recall is the number of correctly predicted notes (true positives, TP) divided by the total number of notes in the melody.
Precision is TP divided by the total number of notes predicted (TP + FP, where FP are false positives).
F(R, P) = 2RP / (R + P)
A high rate of correctly predicted notes results in high values of recall, precision and F-measure (close to 1.0).
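A small sketch of the evaluation from raw note counts (the counts below are made-up numbers, not results from the paper):

```python
def evaluate(tp, fp, fn):
    """Recall, precision and F-measure from note counts."""
    recall = tp / (tp + fn)     # correct predictions / all annotated melody notes
    precision = tp / (tp + fp)  # correct predictions / all predicted notes
    f_measure = 2 * recall * precision / (recall + precision)
    return recall, precision, f_measure

r, p, f = evaluate(tp=60, fp=40, fn=20)
print(r, p, round(f, 2))  # 0.75 0.6 0.67
```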
18. Results
We performed prediction experiments with four different window sizes
(1-4 seconds) and with the six different entropy measures.
19. Results
20. Results
We can conclude that there is indeed a correlation between melody and complexity in both pieces. The precision value of 0.60 in the best symphony experiment, with a resulting F-measure of 0.51 (window size 3 seconds), tells us that 60% of the predicted notes in the symphony are truly melody notes.
In the string quartet, the second violin alternates between a single note and notes from a descending scale, making that voice very attractive to the model (lots of different notes and intervals) while the ‘real melody’ is less complex.
21. Discussion
In our opinion, the current results, though based on a rather limited test
corpus, indicate that it makes sense to consider musical complexity as an
important factor in computational models of melody perception.
(MADSEN, 2015)
22. References I
MADSEN, S. T.; WIDMER, G. Towards a Computational Model of Melody Identification in Polyphonic Music. IJCAI, 2015.