Slides presented at a three-hour local AI music course in Taiwan in Oct 2021. Part 1: a brief introduction to music information retrieval (+analysis, +generation).
A set of slides introducing the application of machine learning to music-related applications; intended for an audience without a computer science background.
Presented at the Machine Learning class at Chalmers, Gothenburg.
http://www.cse.chalmers.se/research/lab/courses.php?coid=9
Trying to connect their theoretical machine learning class with industry examples.
From Idea to Execution: Spotify's Discover Weekly - Chris Johnson
Discover Weekly is a mixtape of 30 highly personalized songs, curated and delivered to Spotify's 75M active users every Monday. It received high acclaim in the press and reached 1B streams within its first 10 weeks. In this slide deck we dive into the narrative of how Discover Weekly came to be, highlighting technical challenges, data-driven development, and the machine learning models used to power our recommendations engine.
From the NYC Machine Learning meetup on Jan 17, 2013: http://www.meetup.com/NYC-Machine-Learning/events/97871782/
Video is available here: http://vimeo.com/57900625
These are the slides I used for my talk at the BIG Track at the Web Conference 2019. This is a very similar talk to the one I gave at the celebration kickoff of the Chalmers AI Research Centre in Gothenburg in March 2019. It has a bit more content and reflects some of the most recent work we are doing at Spotify Research. I am posting these again as people are asking for the slides. Thank you.
UPDATED VERSION (2011): http://www.slideshare.net/plamere/music-recommendation-and-discovery
As the world of online music grows, music 2.0 recommendation systems become an increasingly important way for music listeners to discover new music.
Recommenders such as Last.fm and Pandora have enjoyed commercial and critical success. But how well do these systems really work? How good are the recommendations? How far into The Long Tail do these recommenders reach?
In this tutorial we look at the current state of the art in music recommendation. We examine current commercial and research systems, focusing on the advantages and disadvantages of the various recommendation strategies. We look at some of the challenges in building music recommenders and explore some of the ways that MIR techniques can be used to improve future recommenders.
Scala Data Pipelines for Music Recommendations - Chris Johnson
Are you still building data pipelines with Java and Python? Are you curious about the current buzz in the Big Data community surrounding Scala as a data processing environment? In this talk I'll discuss how Spotify migrated its music recommendations pipeline from Python to Scala. I'll dive into the language specific features that make Scala the ideal candidate for big data processing as well as highlight the rich set of tools and APIs that we take advantage of to process music recommendations for our 50 Million active users including Scalding, Breeze, Kafka, Spark, Parquet, Driven and Zeppelin.
At BCS Search Solutions 2018, I gave a talk about the work on search we are doing at Spotify. The talk described what search means in the context of Spotify, how it differs from what we know about search, and the challenges associated with understanding user intents and mindsets in an "entertainment" context. The talk also discussed various efforts at Spotify to understand why users submit search queries, what they expect, how they assess their search experience, and how Spotify responds to these search queries. This is work done with many colleagues at Spotify in Boston, London, New York and Stockholm, and our wonderful summer interns.
Machine Learning and Big Data for Music Discovery at Spotify - Ching-Wei Chen
Spotify is the world’s largest on-demand music streaming company, with over 100 million active users who generate around 2TB of interaction data every day. With over 30 million songs to choose from, discovery and personalization play an essential role in helping users discover the best music for them. In this talk, given at the newly opened Galvanize space in NYC in March 2017, we’ll explain how Spotify uses Latent Space Models and Deep Learning to power features such as Discover Weekly and Release Radar.
Algorithmic Music Recommendations at Spotify - Chris Johnson
In this presentation I introduce various Machine Learning methods that we utilize for music recommendations and discovery at Spotify. Specifically, I focus on Implicit Matrix Factorization for Collaborative Filtering, how to implement a small scale version using python, numpy, and scipy, as well as how to scale up to 20 Million users and 24 Million songs using Hadoop and Spark.
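As a rough illustration of the small-scale version the abstract mentions, here is a sketch of implicit-feedback matrix factorization trained with alternating least squares in plain numpy, in the spirit of Hu, Koren & Volinsky (2008). The toy play counts and all hyperparameter values are invented for illustration and are not Spotify's actual data or settings.

```python
# Minimal sketch of implicit-feedback matrix factorization via
# alternating least squares (ALS). Play counts are turned into a binary
# preference matrix P and per-observation confidence weights C.
import numpy as np

def implicit_als(plays, factors=2, alpha=40.0, reg=0.1, iters=10):
    """Factorize a (users x items) play-count matrix into latent factors."""
    n_users, n_items = plays.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n_users, factors)) * 0.01  # user factors
    Y = rng.standard_normal((n_items, factors)) * 0.01  # item factors
    P = (plays > 0).astype(float)   # binary preference (played or not)
    C = 1.0 + alpha * plays         # confidence grows with play count
    I = reg * np.eye(factors)
    for _ in range(iters):
        # Solve for user factors with item factors fixed, then vice versa.
        for u in range(n_users):
            Cu = np.diag(C[u])
            X[u] = np.linalg.solve(Y.T @ Cu @ Y + I, Y.T @ Cu @ P[u])
        for i in range(n_items):
            Ci = np.diag(C[:, i])
            Y[i] = np.linalg.solve(X.T @ Ci @ X + I, X.T @ Ci @ P[:, i])
    return X, Y

# Tiny toy example: 4 listeners, 5 songs, raw play counts.
plays = np.array([[5, 3, 0, 0, 1],
                  [4, 0, 0, 1, 0],
                  [0, 1, 5, 4, 0],
                  [0, 0, 4, 5, 0]], dtype=float)
X, Y = implicit_als(plays)
scores = X @ Y.T   # predicted preference for every (user, song) pair
```

The dense `np.diag` solve is only workable at toy scale; scaling to tens of millions of users and songs is exactly what the Hadoop/Spark portion of the talk addresses.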
These are the slides of a talk about some of our research at Spotify, as part of the celebration kickoff of Chalmers AI Research Centre in Gothenburg. I always like to make a story in my talk, and this time I wanted to reflect on the "push" (think recommender system) and "pull" (think search) paradigms. I am using this quote from Nicholas Belkin and Bruce Croft from their Communications of the ACM article published in 1992 to frame my story: "We conclude that information retrieval and information filtering are indeed two sides of the same coin. They work together to help people get the information needed to perform their tasks."
These are the slides of my talk at the 2019 Netflix Workshop on Personalization, Recommendation and Search (PRS). This talk is based on previous talks on research we are doing at Spotify, but here I focus on the work we do on personalizing Spotify Home, with respect to success, intent & diversity. The link to the workshop is https://prs2019.splashthat.com/. This is research from various people at Spotify, and has been published at RecSys 2018, CIKM 2018 and WWW (The Web Conference) 2019.
Building Data Pipelines for Music Recommendations at Spotify - Vidhya Murali
In this talk, we will get into the architectural and functional details of how we build scalable and robust data pipelines for music recommendations at Spotify. We will also discuss some of the challenges and give an overview of work to address them.
Music Information Retrieval is about retrieving information from music entities.
The slides will introduce the basic concepts of the music language, passing through different kind of music representations and it will end up describing some low level features that are used when dealing with music entities.
Yi-Hsuan Yang is an Associate Research Fellow with Academia Sinica. He received his Ph.D. degree in Communication Engineering from National Taiwan University in 2010, and became an Assistant Research Fellow in Academia Sinica in 2011. He is also an Adjunct Associate Professor with the National Tsing Hua University, Taiwan. His research interests include music information retrieval, machine learning and affective computing. Dr. Yang was a recipient of the 2011 IEEE Signal Processing Society (SPS) Young Author Best Paper Award, the 2012 ACM Multimedia Grand Challenge First Prize, and the 2014 Ta-You Wu Memorial Research Award of the Ministry of Science and Technology, Taiwan. He is an author of the book Music Emotion Recognition (CRC Press 2011) and a tutorial speaker on music affect recognition in the International Society for Music Information Retrieval Conference (ISMIR 2012). In 2014, he served as a Technical Program Co-chair of ISMIR, and a Guest Editor of the IEEE Transactions on Affective Computing and the ACM Transactions on Intelligent Systems and Technology.
The current revolution in the music industry presents great opportunities and challenges for music recommendation systems. Recommendation systems are now central to music streaming platforms, which are rapidly increasing in listenership and becoming the top source of revenue for the music industry. It is increasingly common for a music listener to simply access music rather than purchase and own it in a personal collection. In this scenario, recommendation no longer calls for a one-shot recommendation for the purpose of a track or album purchase, but for the recommendation of a listening experience, comprising a very wide range of challenges, such as sequential recommendation, or conversational and contextual recommendations. Recommendation technologies now impact all actors in the rich and complex music industry ecosystem (listeners, labels, music makers and producers, concert halls, advertisers, etc.).
Interactive Recommender Systems with Netflix and Spotify - Chris Johnson
Interactive recommender systems enable the user to steer the received recommendations in the desired direction through explicit interaction with the system. In the larger ecosystem of recommender systems used on a website, they are positioned between a lean-back recommendation experience and an active search for a specific piece of content. Beyond this aspect, we will discuss several parts that are especially important for interactive recommender systems, including the following: design of the user interface and its tight integration with the algorithm in the back-end; computational efficiency of the recommender algorithm; and choosing the right balance between exploiting the feedback from the user so as to provide relevant recommendations, and enabling the user to explore the catalog and steer the recommendations in the desired direction.
In particular, we will explore the field of interactive video and music recommendations and their application at Netflix and Spotify. We outline some of the user-experiences built, and discuss the approaches followed to tackle the various aspects of interactive recommendations. We present our insights from user studies and A/B tests.
The tutorial targets researchers and practitioners in the field of recommender systems, and will give the participants a unique opportunity to learn about the various aspects of interactive recommender systems in the video and music domain. The tutorial assumes familiarity with the common methods of recommender systems.
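The balance the abstract describes, between exploiting user feedback to show relevant items and letting the user explore the catalog, can be sketched with a toy epsilon-greedy rule. This is our own illustrative choice; the tutorial does not specify this mechanism, and the item names and scores below are made up.

```python
# Toy exploit/explore trade-off: with probability epsilon, recommend a
# random catalog item (exploration); otherwise recommend the item the
# user's feedback currently scores highest (exploitation).
import random

def pick_recommendation(scores, epsilon=0.1, rng=random.Random(42)):
    """scores: dict mapping item id -> relevance estimated from feedback."""
    if rng.random() < epsilon:
        return rng.choice(list(scores))   # explore the catalog
    return max(scores, key=scores.get)    # exploit current estimates

scores = {"song_a": 0.9, "song_b": 0.4, "song_c": 0.1}
picks = [pick_recommendation(scores) for _ in range(100)]
```

Most picks exploit the top-scored item while a small fraction surfaces other catalog items, which is the trade-off the tutorial discusses at much greater depth.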
Machine Learning for Creative AI Applications in Music (May 2018) - Yi-Hsuan Yang
Machine Learning for Creative AI Applications in Music, slides presented at the Fifth Taiwanese Music and Audio Computing Workshop (http://mac.citi.sinica.edu.tw/tmac18/)
Research at MAC Lab, Academia Sinica, in 2017 - Yi-Hsuan Yang
Some research projects we did in 2017 at the Music & Audio Computing (MAC) Lab, Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan. It includes three parts: 1) vocal separation, 2) music generation, 3) AI DJ.
Annotating Music Collections: How Content-Based Similarity Helps to Propagate... - Oscar Celma
In this paper we present a way to annotate music collections by exploiting audio similarity. In this sense, similarity is used to propose labels (tags) to yet unlabeled songs, based on the content–based distance between them. The main goal of our work is to ease the process of annotating huge music collections, by using content-based similarity distances as a way to propagate labels among songs.
We present two different experiments. The first one propagates labels related to the style of the piece, whereas the second experiment deals with mood labels. On the one hand, our approach shows that, starting from a music collection 40% annotated with styles and using content-based similarity, the collection can be automatically annotated up to 78% (that is, 40% already annotated and the rest, 38%, only using propagation), with a recall greater than 0.4. On the other hand, for a smaller music collection 30% annotated with moods, the collection can be automatically annotated up to 65% (i.e. 30% plus 35% using propagation).
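The propagation idea in this abstract can be sketched as a nearest-neighbor rule: an unlabeled song receives the tag of its closest labeled neighbor in a content-based feature space, and only when the distance is small enough. The 2-D feature vectors, tags, and threshold below are made up for illustration; the paper's actual audio features and acceptance criteria differ.

```python
# Hypothetical sketch of content-based label propagation: copy the tag
# of the nearest labeled song, but only for sufficiently close matches.
import numpy as np

def propagate_labels(features, labels, threshold=1.0):
    """features: (n, d) array; labels: list with None for unlabeled songs."""
    labeled = [i for i, l in enumerate(labels) if l is not None]
    out = list(labels)
    for i, l in enumerate(labels):
        if l is not None:
            continue
        # Euclidean distance from song i to every labeled song.
        dists = [np.linalg.norm(features[i] - features[j]) for j in labeled]
        if min(dists) <= threshold:   # propagate only confident matches
            out[i] = labels[labeled[int(np.argmin(dists))]]
    return out

feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9], [9.0, 0.0]])
tags = ["jazz", None, "metal", None, None]
print(propagate_labels(feats, tags))  # the far-away last song stays unlabeled
```

The distance threshold plays the role of the recall/precision trade-off the experiments measure: loosening it annotates more of the collection at the cost of noisier tags.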
Anat Gilboa's thesis presentation to the University of Virginia School of Engineering and Applied Science.
Over the course of her final semester, Anat and Dr. Qi tested various methods to determine similarity between songs using features extracted from metadata in the Million Song Dataset.
Human Perception and Recognition of Musical Instruments: A Review - Editor IJCATR
Musical instruments are the soul of music; the instrument and the player are the two fundamental components of music. Over the past decade, a new research field has grown around the identification, retrieval, classification, and recognition of musical instruments and the management of large sets of music, known as Music Information Retrieval. This paper attempts to review the methods, features, and databases used.
Would you like some inspirational sheet music? Do you listen to music? For those who do not know, musicians use sheet music, written or printed on paper with musical notation, to perform compositions. Typically, sheet music consists of a single, independently composed song or piece.
Automatic Music Composition with Transformers, Jan 2021 - Yi-Hsuan Yang
An up-to-date version of slides introducing our ongoing projects on automatic music composition at the Yating Music AI Team of the Taiwan AI Labs (https://ailabs.tw/), focusing on introducing the following two publications from our group.
[1] "Pop Music Transformer: Beat-based modeling and generation of expressive Pop piano compositions," in Proc. ACM Multimedia, 2020.
[2] "Compound Word Transformer: Learning to compose full-song music over dynamic directed hypergraphs," in Proc. AAAI 2021.
For the last version of the slides, please visit: https://www2.slideshare.net/affige/research-on-automatic-music-composition-at-the-taiwan-ai-labs-april-2020/edit?src=slideview
Research on Automatic Music Composition at the Taiwan AI Labs, April 2020 - Yi-Hsuan Yang
Slides introducing our ongoing projects on automatic music composition at the Yating Music AI Team of the Taiwan AI Labs (https://ailabs.tw/). The following URLs link to some demo audio files we have put on SoundCloud: all of them were fully automatically generated without any manual post-processing or editing.
@ai_piano demo: https://soundcloud.com/yating_ai/sets/ai-piano-generation-demo-202004
@ai_piano+drum demo: https://soundcloud.com/yating_ai/sets/ai-pianodrum-generation-demo-202004
@ai_guitar demo: https://soundcloud.com/yating_ai/ai-guitar-tab-generation-202003/s-KHozfW0PTv5
ISMIR 2019 tutorial: Generating music with generative adversarial networks (GANs) - Yi-Hsuan Yang
Slides Hao-Wen Dong and I presented at the ISMIR 2019 tutorial on "Generating Music with GANs—An Overview and Case Studies". More info: https://salu133445.github.io/ismir2019tutorial/
Learning to Generate Jazz & Pop Piano Music from Audio via MIR Techniques - Yi-Hsuan Yang
This set of slides briefly describes what we have been working on at the Yating Music AI team at the Taiwan AI Labs. We are going to present these as two demo papers at the 20th annual conference of the International Society for Music Information Retrieval (ISMIR).
20190625 Research at Taiwan AI Labs: Music and Speech AI - Yi-Hsuan Yang
A very brief introduction of what we have been working on at the AI Labs on "music AI" (specifically, automatic music composition/generation) and "speech AI" (specifically, Mandarin ASR).
Machine learning for creative AI applications in music (Nov 2018) - Yi-Hsuan Yang
An up-to-date overview of our recent research on music/audio and AI. It contains four parts:
* AI Listener: source separation (ICMLA'18a) and sound event detection (IJCAI'18)
* AI DJ: music thumbnailing (TISMIR'18) and music sequencing (AAAI'18a)
* AI Composer: melody generation (ISMIR'17), lead sheet generation (ICMLA'18b), multitrack pianoroll generation (AAAI'18b), and instrumentation generation (arxiv)
* AI Performer: CNN-based score-to-audio generation (AAAI'19)
Invited talk at Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain, 2011/12/15.
This talk describes a computational framework for automatically predicting the emotion we perceive when listening to music. Based on this computational framework, novel emotion-based music organization, browsing, and retrieval methods can be created to provide users an intuitive, easy-to-use, and effective way to access music information.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
The new frontiers of AI in RPA with UiPath Autopilot™ - UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the processes of developing and using Automations.
📕 Together we will look at some examples of using Autopilot across different tools in the UiPath Suite:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
2. About Me
• Ph.D. National Taiwan University 2010
• Research Professor, Music & AI Lab, Academia Sinica, since 2011
• Chief Music Scientist, Taiwan AI Labs, 2019/3‒2023/2
• Over 200 publications
2
3. About the Music and AI Lab @ Sinica
About Academia Sinica
National academy of Taiwan, founded in 1928
About 1,000 Full/Associate/Assistant Researchers
About Music and AI Lab (musicai)
Since Sep 2011
Members
PI [me]
research assistants
PhD/master students
3
4. About the Music AI Team @ Taiwan AI Labs
About Taiwan AI Labs
Privately-funded research organization, founded by Ethan Tu (PTT) in 2017
Three main research areas: 1) HCI, 2) medicine, 3) smart city
About the Music AI team
Members
scientist [me; since March 2019]
ML engineers (for models)
musicians
program manager
software engineers (for frontend/backend)
4
(an image of our musicians)
5. Outline
• Types of music related research/products
• Fundamentals of music signal processing
• Types of data
9
6. Types of Music Related Research/Products
10
• Intelligent ways to analyze, retrieve, and create music
1. Music information analysis: music → features
2. Music information retrieval: query → music
3. Music generation: X → music
7. Types of Music Related Research/Products
1. Music information
“analysis”
11
automatic page turner
automatic Karaoke scoring
interactive concert
8. Types of Music Related Research/Products
1. Music information
“analysis”
12
chord recognizer music browsing assistant
9. Types of Music Related Research/Products
2. Music information “retrieval”
• Search
‒ through keywords/labels (genre, instrument, emotion)
13
10. Types of Music Related Research/Products
2. Music information “retrieval”
• Search
‒ through keywords/labels (genre, instrument, emotion)
14
musical event localization
J.-Y. Liu and Y.-H. Yang, "Event localization in music auto-tagging," MM 2016
11. Types of Music Related Research/Products
2. Music information “retrieval”
• Search
‒ through keywords/labels (genre, instrument, emotion)
‒ through audio examples (humming, audio recording)
15
12. Types of Music Related Research/Products
2. Music information “retrieval”
• Search
‒ through keywords/labels (genre, instrument, emotion)
‒ through audio examples (humming, audio recording)
16
…
13. Types of Music Related Research/Products
2. Music information “retrieval”
• Match
‒ to match 1) a video clip, 2) a photo slideshow, 3) song lyrics, or 4) a given context
‒ cross-domain retrieval
17
14. Types of Music Related Research/Products
2. Music information “retrieval”
• Discover
‒ recommendation: diversity, serendipity, explanations
18
15. Types of Music Related Research/Products
2. Music information “retrieval”
• Discover
‒ recommendation: diversity, serendipity, explanations
19
16. Types of Music Related Research/Products
(diagram: Context ‒ Music ‒ User)
Context
• Activity: driving, studying, working, walking
• Mood: happy, sad, angry, relaxed
• Location: home, work, public place
• Social company: alone, w/ friends, w/ strangers
User
• age, gender, personality, cultural background, musical background
2. Music information “retrieval”
• Discover
‒ Context-aware Music Recommendation
17. Types of Music Related Research/Products
3. Music creation
21
18. Types of Music Related Research/Products
3. Music creation
22
https://www.youtube.com/watch?v=k1DgNfz1g_s
19. Types of Music Related Research/Products
3. Music creation
23
20. Types of Music Related Research/Products
3. Music creation
24
http://www.inside.com.tw/2016/05/04/positive-grid-bias-head
21. Types of Music Related Research/Products
3. Music creation
25
https://youtu.be/rL5YKZ9ecpg?t=50m
22. Types of Music Related Research/Products
1. Music information analysis
• Education, data visualization
2. Music information retrieval
• Search: through keywords (genre, instrument, emotion) or
audio examples (humming or audio recording)
• Match: cross-domain retrieval
• Discover: recommendation
3. Music creation
• Google Magenta, Smule AutoRap, Samsung Hum-On,
Positive Grid, Yamaha Vocaloid
26
23. ML in Music: “Music Info Retrieval/Analysis”
28
Music transcription (audio2score)
• audio → note (pitch, onset, offset)
• audio → instrument (flute, cello)
• audio → meter (4/4)
• audio → key (E-flat major)
Music semantic labeling
• audio → genre (classical)
• audio → emotion (yearning)
• audio → other attributes (slow/fast)
(diagram: an “AI listener” maps an existing song’s audio to a score and semantic labels, with applications in music retrieval, education, archival, etc.)
24. ML in Music: “Music Generation/Synthesis”
29
(diagram: a random seed drives an “AI composer” to produce the score of a new song, which an “AI performer” (score2audio) renders into audio; the transcription and semantic-labeling blocks repeat from the previous slide)
25. ML in Music: “Music Generation/Synthesis”
30
(diagram: an “AI listener” extracts features, score, and labels from existing songs, and an “AI DJ” turns them into the audio of a new song: remix, mashup, etc; the transcription and semantic-labeling blocks repeat from the previous slides) (image from the Internet)
26. Music AI Research
• Four broad topics
audio → audio: signal processing
audio → score: transcription
score → score: composition
score → audio: synthesis
31
27. Outline
• Types of music related research/products
• Fundamentals of music signal processing
• Types of data
32
28. Fundamentals of Music Signal Processing
• Pitch: which notes are played?
• Rhythm: how fast?
• Timbre: which instrument(s)?
33
Mozart’s Variationen
(1st phrase)
33. Music Information Analysis
• music → features
38
8. Semantic description
─ genre: pop, classical, jazz, rock, R&B, “Tai,” “aboriginal”
─ emotion: happy, angry, sad, relaxed
─ usage: at party, working, driving, reading, sleeping, romance
─ theme: lonely, breakup, celebration, in love, friend, battle
─ vocal timbre: aggressive, breathy, duet, emotional, rapping, screaming
(figures: genre, listening context, emotion)
34. Fundamentals of Music Signal Processing
Pitch ♪♪♪ ♪♪♪ ♪♪♪
Rhythm
Timbre ♪♪
39
Karaoke scorer, chord recognizer, page turner
35. Fundamentals of Music Signal Processing
Pitch ♪♪♪ ♪
Rhythm ♪♪♪
Timbre ♪♪♪ ♪
40
instrument classifier, content ID, Spotify Running
36. Fundamentals of Music Signal Processing
Pitch ♪♪♪ ♪♪♪ ♪♪♪
Rhythm ♪♪♪ ♪♪♪ ♪♪♪
Timbre ♪♪♪ ♪♪♪ ♪♪♪
41
similarity search or recommendation
music emotion or genre recognizer
automatic music video generation
37. Fundamentals of Music Signal Processing
42
• Listens to music: tempo, instrumentation, key, time signature, energy, harmonic & timbral structures
• Reads about music: lyrics, blog posts, reviews, playlists and discussion forums
• Learns about trends: online music behavior, i.e., who's talking about which artists this week, what songs are being streamed or downloaded
• Not everything is in audio
38. Fundamentals of Music Signal Processing
• Let’s have a look at what we can extract from audio anyway
• Time-domain waveform
43
39. Fundamentals of Music Signal Processing
• Frequency-domain representation
• Spectrogram (obtained by Short-Time Fourier Transform)
44
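As a concrete illustration, the Short-Time Fourier Transform can be written in a few lines of NumPy. This is a minimal sketch, not the pipeline from the slides; the frame length (`n_fft = 1024`) and hop size (`hop = 256`) are arbitrary example values:

```python
import numpy as np

def stft(x, n_fft=1024, hop=256):
    """Short-Time Fourier Transform: slice the signal into overlapping
    windowed frames and take the FFT of each frame."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies: n_fft//2 + 1 bins
    return np.fft.rfft(frames, axis=1).T  # shape: (freq bin, time frame)

# A 440 Hz sine sampled at 22,050 Hz should concentrate its energy
# near frequency bin 440 / (22050 / 1024), i.e. around bin 20, in every frame.
sr = 22050
t = np.arange(sr) / sr
spec = np.abs(stft(np.sin(2 * np.pi * 440.0 * t)))
```

Plotting `20 * np.log10(spec + 1e-6)` with time on the horizontal axis and frequency on the vertical axis gives exactly the kind of spectrogram image shown on the slide.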
40. Fundamentals of Music Signal Processing
• Pitch
• Simple for monophonic signals (almost table lookup)
• Challenging for polyphonic signals; known as multi-pitch estimation (MPE)
‒ overlapping partials
‒ missing fundamentals
45
(figure: octave (8ve) relationships among the partials of overlapping notes)
L. Su and Y.-H. Yang, "Combining spectral and temporal representations for multipitch estimation of polyphonic music," TASLP 2015
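For the monophonic case, a classic baseline is autocorrelation peak picking: a periodic signal matches a copy of itself shifted by one period. The sketch below only illustrates why monophonic pitch is "almost table lookup"; the 50–1000 Hz search range is an assumption of the example, and polyphonic MPE is far harder than this:

```python
import numpy as np

def pitch_autocorr(x, sr, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of a monophonic frame by
    finding the lag at which the signal best matches itself."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag range to search
    lag = lo + np.argmax(ac[lo:hi])           # best-matching period
    return sr / lag

# A pure 220 Hz tone (A3) should come back close to 220 Hz.
sr = 16000
t = np.arange(int(0.05 * sr)) / sr
f0 = pitch_autocorr(np.sin(2 * np.pi * 220.0 * t), sr)
```

The returned frequency is quantized to integer lags; real trackers interpolate around the peak for sub-sample accuracy.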
41. Fundamentals of Music Signal Processing
• Tempo: beats per minute (bpm)
• Onset detection, downbeat estimation, tempo estimation, beat tracking, rhythm pattern extraction
48
(figures: energy-based vs. spectrum-based onset detection)
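The energy-based onset idea can be turned into a toy tempo estimator: build an onset-strength envelope from frame-to-frame energy rises, autocorrelate it, and pick the lag whose period lies in a plausible BPM range. This is only a sketch under simplifying assumptions; real systems also face the tempo-octave ambiguity (60 vs. 120 BPM), which the restricted BPM range below sidesteps:

```python
import numpy as np

def estimate_tempo(x, sr, hop=500, bpm_range=(90, 180)):
    """Toy energy-based tempo estimator: onset envelope -> autocorrelation
    -> pick the periodicity inside the allowed BPM range."""
    n_frames = len(x) // hop
    energy = np.array([np.sum(x[i * hop:(i + 1) * hop] ** 2)
                       for i in range(n_frames)])
    onset = np.maximum(0.0, np.diff(energy))   # keep energy rises only
    onset = onset - onset.mean()
    ac = np.correlate(onset, onset, mode="full")[len(onset) - 1:]
    frame_rate = sr / hop                      # envelope frames per second
    lags = np.arange(1, len(ac))
    bpms = 60.0 * frame_rate / lags
    mask = (bpms >= bpm_range[0]) & (bpms <= bpm_range[1])
    best = lags[mask][np.argmax(ac[1:][mask])]
    return 60.0 * frame_rate / best

# A click track with one impulse every 0.5 s is a 120 BPM signal.
sr = 16000
x = np.zeros(sr * 5)
x[::8000] = 1.0   # a click every 8000 samples = every 0.5 s
tempo = estimate_tempo(x, sr)
```

Spectrum-based onset detection replaces the scalar energy with spectral flux (positive change per frequency band), which catches soft, non-percussive onsets that pure energy misses.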
42. Fundamentals of Music Signal Processing
• Timbre: difference in time-frequency distribution
50
43. Fundamentals of Music Signal Processing
• Timbre: difference in time-frequency distribution
‒ odd-to-even harmonic ratio, decay rate, vibrato, etc.
51
(spectrogram examples: piano solo vs. human voice)
44. Fundamentals of Music Signal Processing
• Spectrogram, or its reduced-dimension version “Mel-spectrogram,” is usually considered a “raw” feature representation of music
• Can be treated as an image and then processed by convolutional neural nets (CNN)
52
figure made by Sander Dieleman
http://benanne.github.io/2014/08/05/spotify-cnns.html
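The reduction from spectrogram to Mel-spectrogram is a fixed linear map: a bank of triangular filters spaced evenly on the mel scale. A minimal sketch, using the common HTK-style mel formula; the band count of 64 is just an example value:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)   # HTK-style mel scale

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels=64):
    """Triangular filters evenly spaced in mel; multiplying an
    (n_fft//2 + 1)-bin power spectrum by this matrix yields n_mels bands."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    centers = mel_to_hz(mels)                        # band edges in Hz
    bin_freqs = np.linspace(0.0, sr / 2, n_fft // 2 + 1)
    fb = np.zeros((n_mels, len(bin_freqs)))
    for i in range(n_mels):
        left, center, right = centers[i], centers[i + 1], centers[i + 2]
        up = (bin_freqs - left) / (center - left)      # rising slope
        down = (right - bin_freqs) / (right - center)  # falling slope
        fb[i] = np.maximum(0.0, np.minimum(up, down))
    return fb

fb = mel_filterbank(22050, 1024)   # shape (64, 513)
```

Given a power spectrogram `S` of shape (513, T), `fb @ S` (usually followed by a log) is the 64-band Mel-spectrogram image that a CNN would take as input.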
45. Fundamentals of Music Signal Processing
• Chromagram: a better “timbre-invariant” feature representation for pitch-related tasks (e.g. chord recognition, cover song identification)
‒ merge all the frequency bins with the same note name (C, C#, D, D#, …)
‒ 12-dim vector for each time frame
53
figure made by Meinard Müller
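The bin-merging step in the bullets above can be written directly: map each FFT bin to its nearest MIDI note and sum all bins whose note names agree, folded across octaves into 12 pitch classes. A minimal sketch, assuming equal temperament with A4 = 440 Hz:

```python
import numpy as np

def chromagram(spec_mag, sr, n_fft):
    """Fold a magnitude spectrogram (n_fft//2 + 1 bins x T frames)
    into 12 pitch classes (C=0, C#=1, ..., B=11)."""
    freqs = np.arange(1, n_fft // 2 + 1) * sr / n_fft   # skip the DC bin
    midi = np.round(69 + 12 * np.log2(freqs / 440.0)).astype(int)
    chroma = np.zeros((12, spec_mag.shape[1]))
    for pc in range(12):
        # sum all non-DC bins whose nearest note has this pitch class
        chroma[pc] = spec_mag[1:][midi % 12 == pc].sum(axis=0)
    return chroma

# A 440 Hz tone (A4) should land in pitch class A (index 9),
# whatever instrument produced it -- that is the timbre invariance.
sr, n_fft = 22050, 4096
t = np.arange(n_fft) / sr
frame = np.sin(2 * np.pi * 440.0 * t) * np.hanning(n_fft)
spec = np.abs(np.fft.rfft(frame))[:, None]   # a single frame
ch = chromagram(spec, sr, n_fft)
```

Because every octave of a note collapses into the same dimension, two performances of the same chord progression with different instrumentation yield similar chromagrams, which is what makes this useful for cover song identification.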
46. Fundamentals of Music Signal Processing
• Source separation can sometimes be helpful
‒ harmonic/percussive separation: given a mixture, separate the percussive part from the harmonic part
‒ harmonic: pitch-related info
‒ percussive: tempo-related info
54
(a) original (b) harmonic (c) percussive
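One widely used recipe for this separation is median filtering (due to FitzGerald, 2010): steady harmonic partials form horizontal ridges in the spectrogram while percussive hits form vertical ones, so median-smoothing along time favors the former and along frequency the latter. A minimal hard-mask sketch, assuming SciPy is available (real systems typically use soft masks):

```python
import numpy as np
from scipy.ndimage import median_filter

def hpss(spec_mag, kernel=17):
    """Split a magnitude spectrogram into harmonic and percussive parts
    by comparing time-smoothed and frequency-smoothed versions."""
    harm = median_filter(spec_mag, size=(1, kernel))  # smooth along time
    perc = median_filter(spec_mag, size=(kernel, 1))  # smooth along frequency
    harmonic_mask = harm >= perc
    return spec_mag * harmonic_mask, spec_mag * ~harmonic_mask

# Toy spectrogram: a steady tone (horizontal line) plus a drum hit
# (vertical line). HPSS should route each to the right output.
S = np.zeros((64, 64))        # (frequency bins, time frames)
S[20, :] = 1.0                # sustained tone at one frequency
S[:, 40] = 1.0                # broadband hit at one time frame
H, P = hpss(S)
```

Each separated magnitude can be recombined with the mixture phase and inverted back to audio, which is roughly what the (a)/(b)/(c) examples on the slide illustrate.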
47. Fundamentals of Music Signal Processing
• Source separation can sometimes be helpful
‒ singing voice separation: given a mixture, separate the singing voice from the accompaniment
55
48. Fundamentals of Music Signal Processing
• Pitch, tempo, and timbre play different roles in different tasks
• Spectrogram: a basic feature representation
• Multipitch estimation: for better pitch info
• Source separation: might improve the extraction of pitch, tempo, and also timbre
• Feature design (based on domain knowledge) versus feature learning (data-driven; deep learning)
56
49. Outline
• Types of music related research/products
• Fundamentals of music signal processing
• Types of data
57
50. Types of Data
• Music audio data
─ not shareable due to copyright issues and business interests
─ however, audio features can be shared
─ or, start with copyright free music
58
Free Music Archive
51. Types of Data
• Music listening data
‒ from social platforms via e.g., last.fm API, Spotify API
‒ from Twitter: #nowplaying dataset
59
52. Types of Data
• Big music text data
─ score, lyrics, review, playlist, tags, Wikipedia, etc
─ not everything is in audio
─ some of them are easier to get from non-audio data
60
53. Types of Data
• Big sensor data?
─ sensors attached to “things” or “human beings”
61
54. Data Science in Music
• The missing “D” in Data Science: domain knowledge
• Music information retrieval
= musicology
+ signal processing
+ machine learning
+ others
62
55. Resources
• Conference proceedings
‒ Int’l Soc. Music Information Retrieval Conf. (ISMIR)
‒ Int’l Conf. Acoustics, Speech, and Signal Processing (ICASSP)
‒ AAAI, IJCAI, ICML, NeurIPS, ICLR, ACM MM
• Transactions
‒ Transactions of the Int’l Soc. Music Information Retrieval (TISMIR)
‒ IEEE Trans. Audio, Speech and Language Processing (TASLP)
‒ IEEE Trans. Multimedia (TMM)
63