This document discusses using Clojure's spec library to generate computer music. It describes modeling different elements of music like drums/breakbeats, basslines, and vocal samples as data. The document demonstrates slicing up breakbeats and composing them together, generating bass sequences, and including ragga vocal samples to build up sample-based jungle tracks. It reflects on sharing the spec definitions to enable collaborative music composition and generation between backend processes and frontend interfaces. Areas for future work mentioned include adding cognitive understanding to generated music and building self-healing live performances.
20. The Amen Break
• a drum solo (breakbeat) from "Amen Brother", a 1969 track by The Winstons
• one of the most sampled songs in history
• revered for its timbre and rhythm
21. Notation
• music scenes develop their own vernacular
• notation can allow easier, more intuitive communication
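As a tiny sketch of what such vernacular notation can look like as data (the pattern format below is made up for illustration, not taken from the talk): a step-sequencer string where `x` marks a hit reads at a glance and parses trivially.

```clojure
;; Hypothetical step-sequencer notation: one character per 16th note,
;; \x = hit, \- = rest. Parsing yields the indices of the hits.
(defn parse-pattern [s]
  (keep-indexed (fn [i c] (when (= c \x) i)) s))

(parse-pattern "x---x---x-x-----")
;; => (0 4 8 10)
```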
25. Composing breaks
The different renditions of the Amen break and other breakbeats are composable. We can mix and match parts of different breaks.
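A minimal sketch of that idea (the slice names and eight-step layout are made up for illustration): if each rendition of a break is a vector of labelled slices, mixing and matching is plain sequence manipulation.

```clojure
;; Hypothetical representation: a break is a vector of slice keywords,
;; one per eighth-note step of the bar.
(def amen  [:kick :kick :snare :hat :ghost :kick :snare :hat])
(def think [:kick :hat :snare :kick :kick :hat :snare :ghost])

;; Compose a new break from the first half of one rendition
;; and the second half of another - plain data, plain functions.
(defn splice [a b]
  (vec (concat (take 4 a) (drop 4 b))))

(splice amen think)
;; => [:kick :kick :snare :hat :kick :hat :snare :ghost]
```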
41. What's in a track?
• drums (break)
• bass
• vocal samples
42. What's in a track?
ragga |ˈraɡə| noun [mass noun]: a style of dance music originating in Jamaica and derived from reggae, in which a DJ improvises lyrics over a sampled or electronic backing track.
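A sketch of how these three layers might be modelled with clojure.spec (the spec names and value sets here are hypothetical; the talk's actual specs are richer):

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical specs for the three layers of a jungle track.
(s/def ::drums (s/coll-of #{:kick :snare :hat :ghost} :kind vector?))
(s/def ::bass  (s/coll-of (s/int-in 24 49)))    ; MIDI notes, low register
(s/def ::vocal #{:ragga-chat :rewind :selecta}) ; sampled shouts
(s/def ::track (s/keys :req [::drums ::bass ::vocal]))

(s/valid? ::track {::drums [:kick :snare]
                   ::bass  [36 38 36]
                   ::vocal :rewind})
;; => true
```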
56. What worked
• the end result is decent
• I now have a new jam buddy
• it's scalable!
• single data specification I can share between backend and frontend
57. What didn't work
• clojure.spec generators are unaware of the context
• making custom synthesisers
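To illustrate the first point (with hypothetical specs; generation needs org.clojure/test.check on the classpath): bars are generated independently, so spec alone can't make bar two "answer" bar one. One workaround is to generate the whole phrase as a single value with gen/fmap.

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

(s/def ::bar (s/coll-of #{:kick :snare :hat} :kind vector? :count 8))

;; Context-free: two bars drawn independently - spec has no way to know
;; the second bar should relate to the first.
(def two-bars (gen/sample (s/gen ::bar) 2))

;; Workaround: build the context in by generating a phrase as one value,
;; deriving bar two from bar one (here, a naive mirrored "answer").
(def phrase-gen
  (gen/fmap (fn [bar] [bar (vec (reverse bar))])
            (s/gen ::bar)))

(def phrase (gen/generate phrase-gen))
```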
We'll have some fun with clojure.spec.
We'll model a genre of music with it.
We'll make music.
Finally, I'll reflect on generative music and the direction it's heading.
You can find this in talks elsewhere.
This is a computer's rendition of me - I especially like the chalice neck.
I've been composing and performing electronic music since around 2007.
I used to make procedural real-time 3D visuals - that scene was called the demoscene back then.
But I'm also a software tester by trade - I like to break stuff and use it in novel ways.
I'm approaching this more from the perspective of a musician on a vision quest.
You might be wondering how clojure.spec applies to generating music? Well, let me state a hypothesis. ... If music is data, we should be able to use whatever tools we know for manipulating data to manipulate music.
I will be using spec.
Chris Ford's library leipzig which you may be familiar with if you've seen his great talks. There's a lot of Chris Ford fans here in the audience.
and finally overtone for the actual audio (up till 1.9.0-alpha8)
this is all pretty standard stuff for Clojure and music
As you can probably imagine, I did not build a universal music making machine.
Instead I just focus on one particular type.
So what kind of music will I be showing?
I resisted the temptation to do something completely crazy - as it's not Paris and it's not 1924. Instead I decided to pick a genre that I think is fairly simple and "canonical" but also close to my heart.
And it's Jungle. [PLAY SONG]
A good way to explain it is that it's a flavour of drum and bass with a twist. Let me give you the context of what we'll be creating.
When speaking about computer music we can speak of really 2 kinds of data. One is the audio itself, whether it's a .wav file or a buffer of numbers which is ultimately converted from digital to analogue and finally to air vibration. The other is the symbolic representation of music - sheet music, MIDI files, guitar tabs. This data is a blueprint for musical performance.
To an extent we can translate between these domains. We want to synthesise audio from a symbolic representation but also we may want to analyse or fingerprint audio to go the other way round. To create jungle we will rely on both sources of data - jungle is heavily sample-based making it an interesting choice for autonomous composition.
The birth of jungle, and also of drum and bass and hip-hop, is linked to the emergence of digital samplers. Their availability, lower cost and ease of use compared to earlier tape techniques meant the democratisation of musical production. Tons of people were sampling vinyl records, whatever they could get their hands on, and using samplers to loop, slice, stretch, pitch-shift and reverse bits of audio.
So what's the bare minimum that I think constitutes a classical jungle track?
Let's start with the drums as that's the core of the jungle sound.
The breakbeat is literally a break in the song when the pitched instruments stop playing and the drummer plays solo. These most often come from funk, soul or gospel tracks.
Syncopated means that the accent is placed on the "weak" parts of the beat.
Here's the most iconic breakbeat of them all.
Even if you're not into jungle you've probably heard it in everything from NWA's "Straight Outta Compton" to TV commercials and the Futurama theme song.
It's called the 'Amen break'. Heck it even has its own Chardonnay- that's how big of a deal this is. Notice this pattern at the top is a transcription of the drum pattern for the crash cymbal, ride, snare and bass-drum. Let's listen to it again. Maybe if you have good eyesight you can follow along
OK, so we know that we want to use sampled breakbeat - how do we represent them symbolically? If you look at the wine bottle you'll have the instruments separated out. But this is not really what we're working with. It's really a single layer, the breakbeat is the instrument. I wanted to create a notation for using breakbeats that feels natural. When thinking about a beat I hear in my head something close to the sounds themselves. Since my first language is Polish
It seems only appropriate to call it the Polish Jungle notation
At its simplest we can think of this as a set of single-syllable onomatopoeic sounds which correspond to the dominant percussion sound happening at the time.
But this is enough information to get us going. As in most genres of electronic music, the key is repetition, so we define simple rules for how many times a specific sound can be repeated and build a collection we can use.
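The repetition rules can be sketched with clojure.spec along these lines. The syllables and the repetition bounds here are invented for illustration; the talk's actual specs may differ:

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; Hypothetical single-syllable sounds, one per dominant percussion hit.
(s/def ::sound #{:bo :do :tsy})

;; A run is one sound repeated between one and three times.
(s/def ::run (s/cat :sound ::sound :times (s/int-in 1 4)))

(defn expand-run
  "Turn a conformed run into the repeated sounds it stands for."
  [{:keys [sound times]}]
  (repeat times sound))

;; Sample candidate runs from the generator:
;; (map #(expand-run (s/conform ::run %)) (gen/sample (s/gen ::run) 4))
```

Because `::run` is an ordinary spec, the same definition both validates hand-written patterns and generates new ones.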
Spec doesn't force you to use it in any one particular way.
There is an interesting property - these rules can be applied not just to the Amen break but to other breaks as well.
To do this we need to prepare our sample base.
I load up a bunch of them as audio buffers, define a function which plays back a specific slice of the buffer, and pick some different renditions of the Amen break to begin with. Here's how they sound:
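A rough sketch of this setup in overtone is below. The file path is a placeholder, and the exact ugen arguments may need adjusting for a real session; this just illustrates the idea of a slice-playing synth:

```clojure
(require '[overtone.live :refer :all])

;; Hypothetical sample path - substitute your own breakbeat file.
(def amen (load-sample "samples/amen.wav"))

;; Play a slice of a buffer: start offset and duration in seconds.
(defsynth slicer [buf 0 start 0.0 dur 0.2 amp 1.0]
  (let [frames (* start (buf-sample-rate:kr buf))          ; seconds -> frames
        env    (env-gen (envelope [1 1 0] [dur 0.01])      ; hold, then fade
                        :action FREE)]
    (out 0 (* amp env
              (play-buf 2 buf (buf-rate-scale:kr buf) 1 frames)))))

;; e.g. (slicer :buf amen :start 0.0 :dur 0.21)
```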
Here we play a single kick drum sound.
We can map the syllables to selected parts of the break.
Now we can generate a sequence from the spec earlier. We flatten the repetitions, pick 4 bars of 8 sounds. Rhythmise just takes a collection of amen sounds and arranges them in time using absolute values.
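The syllable-to-slice mapping and the time arrangement can be sketched like this. The slice offsets are made up for illustration, and this `rhythmise` is a simplified stand-in for the one in the talk:

```clojure
;; Hypothetical mapping from syllable to [start duration] in seconds.
(def amen-slices
  {:bo  [0.00 0.21]   ; kick
   :do  [0.21 0.21]   ; snare
   :tsy [0.42 0.21]}) ; hat/ride

;; Arrange a flat sequence of syllables in time, one slice per step,
;; using absolute time values.
(defn rhythmise [sounds step]
  (map-indexed (fn [i sound]
                 {:time (* i step) :sound sound})
               sounds))

;; (rhythmise [:bo :tsy :do :tsy] 0.25)
;; => ({:time 0.0 :sound :bo} {:time 0.25 :sound :tsy} ...)
```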
Finally we can play our breaks together. Repeating the entire sequence 2 times, playing at a brisk 172 beats per minute.
I mentioned that we can mix and match different breakbeat sounds - here I'm doing so at runtime; this is a pattern I commonly use.
In the future we could expand our model to perform even wilder variations, with per-note effect triggers. There are really a lot of variations we can do to replicate the kinds of edits that human musicians do.
Now that we've set up the scene for slicing breakbeats let's talk a bit about the bass.
Now these sequences are rather rudimentary. But bass in jungle plays a rather simple role.
Typically we'll see a simple sine oscillator used for generating the bass like in this example.
Bass is obviously a pitched instrument, meaning that the height of the sound varies to create a melody. When speaking about melodies, at least in the West, we think of them as sequences of notes.
Here I'm representing pitch in the usual fashion - as an integer operating within two octaves - and the duration is represented by the familiar rational number notation used in sheet music.
But playing pitch as random integers is like having a monkey sit at the piano - they may get it right sometimes, but we'll likely be annoyed with our primate companion first. Instead we can compose leipzig's scale functions to transform our random integers into random integers on a scale of our choosing.
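This step might be sketched as follows. The two-octave range and the choice of E minor are assumptions for illustration:

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen]
         '[leipzig.scale :as scale])

;; Random scale degrees spanning roughly two octaves.
(s/def ::degree (s/int-in -7 8))

;; Compose leipzig's scale functions: degrees -> MIDI pitches in E minor,
;; so the monkey at the piano at least stays in key.
(def in-key (comp scale/E scale/minor))

;; Eight random degrees, mapped onto the scale:
;; (map in-key (gen/sample (s/gen ::degree) 8))
```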
It's not great but at least I don't hate it which is good.
Alright let's add our final spice to the mix - the vocal samples. And specifically, we'll be using ragga samples.
Not to be confused with the raga of Indian classical music.
These sound something like this.
Nothing fancy here, just loading a set of these samples and specifying their timing.
Let's hear all of them together.
So now that we know how to build a short sequence, what is stopping us from building an entire song?
We can specify an arrangement using the familiar "intro, verse, breakdown, reprise, outro" pattern and even generate the artist, song names.
Specify which instruments should be present
Even generate some artist names
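An arrangement spec in this spirit could look something like the sketch below. The section lengths and the keys are invented; the point is that the song's large-scale structure becomes generable data too:

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; Each section is some number of bars (bounds are assumptions).
(s/def ::bars (s/int-in 8 33))
(s/def ::intro ::bars)
(s/def ::verse ::bars)
(s/def ::breakdown ::bars)
(s/def ::reprise ::bars)
(s/def ::outro ::bars)

;; The familiar arrangement pattern as a single spec'd map.
(s/def ::arrangement
  (s/keys :req-un [::intro ::verse ::breakdown ::reprise ::outro]))

;; Generate a candidate arrangement:
;; (gen/generate (s/gen ::arrangement))
```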
But why stop there when we can make whole discographies - from the individual edits up to the macro level of whole libraries of music.
If I want to perform a 2-3 hour set of music - if I have a compositional tool to which I can delegate some of the work I can focus on other things.
I feel like I could..
Take on the world! So you see where this is going.
Is there any practical reason why you'd want to do this? Would I even want to listen to all that music? The examples you've heard today are "OK" at best. It's a start.
There are some other interesting properties of specs that play to our advantage. We want to build a web app where people can listen to tunes and rate them. Providing feedback back to the generating process? We could do that!
Generate documentation with live example data? sure we can do that.
The end result is decent enough to use these breakbeats. As a musical perfectionist, this is a big gain.
Generation of elements independently of each other meant that there was really no chance for the generator to build context-aware sequences.
I tried to generate parameters for overtone synths, but this posed a similar problem: one parameter changes - say the resonance of a filter - and this affects the volume; but if the volume is also a generated parameter, then I can get pretty unpleasant results.
OK, this is the start of the rant part.
Perception, acoustics, mood, context
There is always a tension between design and appropriation, but as William Gibson wrote - the street finds its own uses for things.
There are certain shorthands for thinking about such systems: "Oh, it's all generated - there is no human aspect to it." Or: "It's not autonomous at all - it's all controlled by the human creator." I don't think either of these propositions is true. Rather, agency is hybridised.
What happens when 2 artists with their computers jam music and video together.
A lot of things are hidden or blackboxed in technical systems.
This project opens up the black box of computing. We may feel that technological goods like laptops or phones are "clean", but the cost of their manufacture and operation is often elsewhere.
Self-healing live performance - you played out of key? Let me generate a conforming note and play it instead. Stuart Halloway said yesterday that spec is preventing him from doing stupid things. Why not apply that to musical performance? There is a quote attributed to Syd Barrett, the late frontman of Pink Floyd - it's not important to know which notes to play, just which notes NOT to play.
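The self-healing idea might be sketched with spec validation and generation like this. The choice of E minor and the pitch-class encoding are assumptions for illustration:

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; Pitch classes of E natural minor: E F# G A B C D.
(def e-minor-classes #{4 6 7 9 11 0 2})

;; A MIDI note is "in key" if its pitch class belongs to the scale.
(s/def ::in-key (s/and int? #(contains? e-minor-classes (mod % 12))))

(defn heal
  "If the played note conforms to the key, keep it;
   otherwise generate a conforming note and play that instead."
  [note]
  (if (s/valid? ::in-key note)
    note
    (gen/generate (s/gen ::in-key))))
```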
The second thing I want to explore is to expose a live stream of generated music. I'm a big fan of jungletrain.net, which is a human-run radio station.
I want to explore the cognitive aspect - if we want our code to make us more creative then it needs to know more about the idiosyncrasies of how you think.
Using specs themselves as data.
Modeling a type of music but he also acknowledges the cultural bias in removing the musical data from its performance context.
Otherwise we might find ourselves living in a world with an overabundance of mediocre music. And on that optimistic note, let me play you out with a little tune: