Machine learning for creative AI applications in music (2018 Nov), by Yi-Hsuan Yang
An up-to-date overview of our recent research on music/audio and AI. It contains four parts:
* AI Listener: source separation (ICMLA'18a) and sound event detection (IJCAI'18)
* AI DJ: music thumbnailing (TISMIR'18) and music sequencing (AAAI'18a)
* AI Composer: melody generation (ISMIR'17), lead sheet generation (ICMLA'18b), multitrack pianoroll generation (AAAI'18b), and instrumentation generation (arXiv)
* AI Performer: CNN-based score-to-audio generation (AAAI'19)
Research at MAC Lab, Academia Sinica, in 2017, by Yi-Hsuan Yang
Some research projects we did in 2017 at the Music & Audio Computing (MAC) Lab, Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan. It includes three parts: 1) vocal separation, 2) music generation, 3) AI DJ.
Research on Automatic Music Composition at the Taiwan AI Labs, April 2020, by Yi-Hsuan Yang
Slides introducing our ongoing projects on automatic music composition at the Yating Music AI Team of the Taiwan AI Labs (https://ailabs.tw/). The following URLs link to some demo audio files we have put on SoundCloud: all of them were fully automatically generated without any manual post-processing or editing.
@ai_piano demo: https://soundcloud.com/yating_ai/sets/ai-piano-generation-demo-202004
@ai_piano+drum demo: https://soundcloud.com/yating_ai/sets/ai-pianodrum-generation-demo-202004
@ai_guitar demo: https://soundcloud.com/yating_ai/ai-guitar-tab-generation-202003/s-KHozfW0PTv5
Automatic Music Composition with Transformers, Jan 2021, by Yi-Hsuan Yang
An up-to-date version of slides introducing our ongoing projects on automatic music composition at the Yating Music AI Team of the Taiwan AI Labs (https://ailabs.tw/), focusing on introducing the following two publications from our group.
[1] "Pop Music Transformer: Beat-based modeling and generation of expressive Pop piano compositions," in Proc. ACM Multimedia, 2020.
[2] "Compound Word Transformer: Learning to compose full-song music over dynamic directed hypergraphs," in Proc. AAAI 2021.
For the previous version of the slides, please visit: https://www2.slideshare.net/affige/research-on-automatic-music-composition-at-the-taiwan-ai-labs-april-2020/edit?src=slideview
Machine Learning for Creative AI Applications in Music (2018 May), by Yi-Hsuan Yang
Machine Learning for Creative AI Applications in Music, slides presented at the Fifth Taiwanese Music and Audio Computing Workshop (http://mac.citi.sinica.edu.tw/tmac18/)
20190625 Research at Taiwan AI Labs: Music and Speech AI, by Yi-Hsuan Yang
A very brief introduction of what we have been working on at the AI Labs on "music AI" (specifically, automatic music composition/generation) and "speech AI" (specifically, Mandarin ASR).
Yi-Hsuan Yang is an Associate Research Fellow with Academia Sinica. He received his Ph.D. degree in Communication Engineering from National Taiwan University in 2010, and became an Assistant Research Fellow at Academia Sinica in 2011. He is also an Adjunct Associate Professor at National Tsing Hua University, Taiwan. His research interests include music information retrieval, machine learning, and affective computing. Dr. Yang was a recipient of the 2011 IEEE Signal Processing Society (SPS) Young Author Best Paper Award, the 2012 ACM Multimedia Grand Challenge First Prize, and the 2014 Ta-You Wu Memorial Research Award of the Ministry of Science and Technology, Taiwan. He is the author of the book Music Emotion Recognition (CRC Press 2011) and was a tutorial speaker on music affect recognition at the International Society for Music Information Retrieval Conference (ISMIR 2012). In 2014, he served as a Technical Program Co-chair of ISMIR, and as a Guest Editor of the IEEE Transactions on Affective Computing and the ACM Transactions on Intelligent Systems and Technology.
A set of slides introducing the application of machine learning to music-related problems, intended for an audience without a computer science background.
Slides presented at a three-hour local AI music course in Taiwan in Oct 2021; part 1: a brief introduction to music information retrieval (both analysis and generation).
Learning to Generate Jazz & Pop Piano Music from Audio via MIR Techniques, by Yi-Hsuan Yang
This set of slides briefly describes what we have been working on at the Yating Music AI team at the Taiwan AI Labs. We are going to present this work as two demo papers at the 20th annual conference of the International Society for Music Information Retrieval (ISMIR).
ISMIR 2019 tutorial: Generating music with generative adversarial networks (GANs), by Yi-Hsuan Yang
Slides Hao-Wen Dong and I presented at the ISMIR 2019 tutorial on "Generating Music with GANs—An Overview and Case Studies". More info: https://salu133445.github.io/ismir2019tutorial/
A co-presentation by Thomas Crenshaw of PBS and Javaun Moradi of NPR.
A look at NPR and PBS's APIs past and present and how they've supported our product roadmaps. We'll also give a glimpse at where we're headed.
PBS’ Tom Crenshaw and NPR’s Javaun Moradi discuss the PBS and NPR APIs. Topics include how radio, television, and dual-licensee stations can leverage the PBS and NPR APIs to innovate and build audience on their websites, mobile devices, and beyond. Tom and Javaun discuss retrieving API content for use on station sites, putting station content into our APIs for reuse elsewhere, and finding station information based on location or call letters. They share their ideas on where the public media APIs are headed, and they look forward to hearing your questions, feedback, and pain points.
Research in artificial intelligence (AI) is known to have impacted medical diagnosis, stock trading, robot control, and several other fields. Perhaps less well known is the contribution of AI to the field of music. Nevertheless, artificial intelligence and music (AIM) has long been a common subject at several conferences and workshops, including the International Computer Music Conference, the Computing Society Conference, and the International Joint Conference on Artificial Intelligence.
Annotating Music Collections: How Content-Based Similarity Helps to Propagate..., by Oscar Celma
In this paper we present a way to annotate music collections by exploiting audio similarity. In this sense, similarity is used to propose labels (tags) for as-yet unlabeled songs, based on the content-based distance between them. The main goal of our work is to ease the process of annotating huge music collections, by using content-based similarity distances as a way to propagate labels among songs.
We present two different experiments. The first one propagates labels related to the style of the piece, whereas the second experiment deals with mood labels. On the one hand, our approach shows that, starting from a music collection annotated at 40% with styles and using content-based propagation, the collection can be automatically annotated up to 78% (that is, 40% already annotated and the rest, 38%, only using propagation), with a recall greater than 0.4. On the other hand, for a smaller music collection annotated at 30% with moods, the collection can be automatically annotated up to 65% (i.e. 30% plus 35% using propagation).
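As a rough illustration of the propagation idea (a hedged sketch, not the authors' exact method: the feature space, Euclidean distance, and majority vote below are illustrative assumptions), labels can be copied from annotated songs to unlabeled ones via their nearest labeled neighbors in a content-based feature space:

```python
import numpy as np

def propagate_labels(features, labels, k=3):
    """Assign each unlabeled song the majority label of its k nearest
    labeled neighbors in a content-based feature space.
    `features`: (n_songs, n_dims) array; `labels`: list with None for
    unlabeled songs. Euclidean distance is an illustrative choice."""
    labeled = [i for i, y in enumerate(labels) if y is not None]
    out = list(labels)
    for i, y in enumerate(labels):
        if y is not None:
            continue
        # Distances from song i to every labeled song.
        dists = np.linalg.norm(features[labeled] - features[i], axis=1)
        nearest = [labels[labeled[j]] for j in np.argsort(dists)[:k]]
        # Majority vote among the k nearest labeled songs.
        out[i] = max(set(nearest), key=nearest.count)
    return out

# Toy example: two clusters in feature space, one labeled song in each.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
tags = ["jazz", None, "metal", None]
print(propagate_labels(feats, tags, k=1))  # → ['jazz', 'jazz', 'metal', 'metal']
```

In the paper's setting a confidence threshold on the distance would additionally control recall; here every unlabeled song simply receives a label.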
Social Tags and Music Information Retrieval (Part I), by Paul Lamere
Part 1 of the Social Tags and Music Information Retrieval Tutorial. Abstract: Social Tags are free text labels that are applied to items such as artists, playlists and songs. These tags have the potential to have a positive impact on music information retrieval research. In this tutorial we describe the state of the art in commercial and research social tagging systems for music. We explore some of the motivations for tagging. We describe the factors that affect the quantity and quality of collected tags. We present a toolkit that MIR researchers can use to harvest and process tags. We look at how tags are collected and used in current commercial and research systems. We explore some of the issues and problems that are encountered when using tags. We present current MIR-related research centered on social tags and suggest possible areas of exploration for future research.
This is a presentation I made (in French) at the Siestes Electroniques Music Festival in Toulouse, in June 2013.
It starts with a brief history of music distribution and then gets into the details of digital music and streaming.
The slides for my seminar on adaptive music at the Charles University in Prague // Introduction to the topic of adaptive music // Music Design // Sequence Music Engine // On development of the Kingdom Come: Deliverance soundtrack.
eMarketer Webinar: Mobile Messaging Trends—Tapping into SMS, Mobile Email and..., by eMarketer
eMarketer's Catherine Boyle discusses the latest mobile messaging trends and how leading brands are beginning to use a variety of channels to engage consumers.
Music data is scary, beautiful and exciting, by Brian Whitman
Brian Whitman, co-Founder/CTO of the Echo Nest, shows off some scary things, some beautiful things and some exciting things about the integration of data into music.
I've been making software and hardware to make music instead of actually making music for ten years and I blame dorkbot. "Scared Straight" style, I'll show you the things I presented exactly ten years ago at the very first dorkbot-nyc and then do some group therapy to try to convince at least one of you to put away the Italian microcontrollers with python interpreters running Conway's Game of Creative Death and go make something beautiful that your mother will love. She misses you, you know.
How Spotify uses large scale Machine Learning running on top of Hadoop to power music discovery. From the NYC Predictive Analytics meetup: http://www.meetup.com/NYC-Predictive-Analytics/events/129778152/
Approximate nearest neighbor methods and vector models – NYC ML meetup, by Erik Bernhardsson
Nearest neighbors refers to something that is conceptually very simple. For a set of points in some space (possibly many dimensions), we want to find the closest k neighbors quickly.
This presentation covers a library called Annoy, built by me, that helps you do (approximate) nearest neighbor queries in high-dimensional spaces. We go through vector models, how to measure similarity, and why nearest neighbor queries are useful.
Spotify uses a range of Machine Learning models to power its music recommendation features including the Discover page and Radio. Due to the iterative nature of training these models they suffer from IO overhead of Hadoop and are a natural fit to the Spark programming paradigm. In this talk I will present both the right way as well as the wrong way to implement collaborative filtering models with Spark. Additionally, I will deep dive into how Matrix Factorization is implemented in the MLlib library.
Algorithmic Music Recommendations at SpotifyChris Johnson
In this presentation I introduce various Machine Learning methods that we utilize for music recommendations and discovery at Spotify. Specifically, I focus on Implicit Matrix Factorization for Collaborative Filtering, how to implement a small scale version using python, numpy, and scipy, as well as how to scale up to 20 Million users and 24 Million songs using Hadoop and Spark.
Music Recommendations at Scale with SparkChris Johnson
Spotify uses a range of Machine Learning models to power its music recommendation features including the Discover page, Radio, and Related Artists. Due to the iterative nature of these models they are a natural fit to the Spark computation paradigm and suffer from the IO overhead incurred by Hadoop. In this talk, I review the ALS algorithm for Matrix Factorization with implicit feedback data and how we’ve scaled it up to handle 100s of Billions of data points using Scala, Breeze, and Spark.
How does having 30 million songs in our pocket affect how we listen to music? In this data-driven and demo-laden talk we’ll explore the behavior of today’s music listener. We’ll look at how today’s easy and ubiquitous access to nearly all of recorded music is changing how a listener organizes, discovers and experiences music. By exploring big music data being collected by organizations such as Spotify and The Echo Nest we can get a deeper and more nuanced view of how today’s listener really interacts with their music.
I've got 10 million songs in my pocket. Now what? Paul Lamere
The proverbial 'celestial jukebox' has become a reality. With today's online music services a music fan is never more than a few clicks away from being able to listen to nearly any song that has ever been recorded. Recommender systems can play a key role in this new music ecosystem, helping listeners explore, discover, organize and share music. However, in many ways music recommendation is very different than recommendation in other well-studied domains such as books and movies. In this talk we explore how recommender systems can be used in the music space, and the particular challenges that the music domain presents to the designers of recommender systems.
Finding Music With Pictures: Using Visualization for DiscoveryPaul Lamere
Slides from my SXSW 2011 talk. Here's the abstract:
With so much music available, finding new music that you like can be like finding a needle in a haystack. We need new tools to help us to explore the world of music, tools that can help us separate the wheat from the chaff. In this panel we will look at how visualizations can be used to help people explore the music space and discover new, interesting music that they will like. We will look at a wide range of visualizations, from hand drawn artist maps, to highly interactive, immersive 3D environments. We'll explore a number of different visualization techniques including graphs, trees, maps, timelines and flow diagrams and we'll examine different types of music data that can contribute to a visualization. Using numerous examples drawn from commercial and research systems we'll show how visualizations are being used now to enhance music discovery and we'll demonstrate some new visualization techniques coming out of the labs that we'll find in tomorrow's music discovery applications.
The Echo Nest workshop for Boston Music Hack DayPaul Lamere
This is a slide deck for the Echo Nest API workshop presented at the Boston Music Hack Day on November 21, 2010. Note that the live presentation has music and video that is not present in this deck
Using Visualizations for Music DiscoveryPaul Lamere
As the world of online music grows, tools for helping people find new and interesting music in these extremely large collections become increasingly important. In this tutorial we look at one such tool that can be used to help people explore large music collections: information visualization. We survey the state-of-the-art in visualization for music discovery in commercial and research systems. Using numerous examples, we explore different algorithms and techniques that can be used to visualize large and complex music spaces, focusing on the advantages and the disadvantages of the various techniques. We investigate user factors that affect the usefulness of a visualization and we suggest possible areas of exploration for future research.
This is the slide set for the ISMIR 2009 tutorial by Justin Donaldson and Paul Lamere. Note that this presentation originally included a large number of audio and video samples which are not included in this slide deck due to space considerations.
Music recommendation is broken - automatic music recommenders make mistakes that no human would ever make. In this talk, we will explore why recommenders make such dumb mistakes and we will explore some of the new ideas coming from recommendation and music researchers to help make music recommendations better.
Slides from the SXSW 2009 Panel. Speakers: Paul Lamere from The Echo Nest, and Anthony Volodkin from The Hype Machine.
Social Tags and Music Information Retrieval (Part II)Paul Lamere
Part 2 of the slides for the Social Tags and Music Information Retrieval Tutorial - Abstract: Social Tags are free text labels that are applied to items such as artists, playlists and songs. These tags have the potential to have a positive impact on music information retrieval research. In this tutorial we describe the state of the art in commercial and research social tagging systems for music. We explore some of the motivations for tagging. We describe the factors that affect the quantity and quality of collected tags. We present a toolkit that MIR researchers can use to harvest and process tags. We look at how tags are collected and used in current commercial and research systems. We explore some of the issues and problems that are encountered when using tags. We present current MIR-related research centered on social tags and suggest possible areas of exploration for future resear
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report was prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Echo nest-api-boston-2012
2. The Echo Nest Solution
understanding music content and consumers
Rich Music Data: content + culture
12 years of R&D at MIT, Columbia and Berkeley
Our API is our product
3. Our API
Our API is our product. Everything a customer can do, so can you.
developer.echonest.com
4. Artist API
2 million artists
• Search
• Similar
• Familiarity
• Hottttnesss
• Bios
• Blogs
• Terms
• News
• Reviews
• Images
• Video
• Location
• Suggest
• Extract
5. SIMILAR ARTISTS IN 2 LINES OF CODE
for a in artist.similar(names=['lady gaga']):
    print a.name
Madonna, Christina Aguilera, Britney Spears, Kylie Minogue, Katy Perry, Scissor Sisters, Rihanna, Beyoncé, Ashley Tisdale, Livvi Franc, La Roux, Paris Hilton, She Wants Revenge, The Pussycat Dolls, Marina and The Diamonds
6. Top recent news stories for Adele
adele = artist.Artist('Adele')
for news in adele.news:
    print news['date_posted'], news['name']

2012-02-06T17:37:00 Grammys: Who Should Win the Major Categories
2012-02-06T00:00:00 Noel Gallagher: Adele's Music Career Won't Last
2012-02-06T00:00:00 Noel Gallagher Admits He Feels Sorry For Adele
2012-02-06T00:00:00 Dave Grohl's Grammy pride
2012-02-06T00:00:00 British Artists Dominate 2011 Market: Adele, Jessie J
2012-02-06T00:00:00 Adele called 'too fat'
7. Song API
30 million songs
• Search
• Similar Songs
• Tempo
• Key & Mode
• Time Signature
• Beats
• Downbeats
• Segments
• Timbre
• Pitch
• Loudness
• Energy
• Danceability
• Speechiness
8. Track Analysis and Remix Summary
Song I/O
• Upload to analyze tracks
• Render audio and video
Song search
• Search for songs
Song analysis
• Tempo, Key, Mode, Time Signature
Song Hierarchy
• Sections, Bars, Beats, Tatums
Segments
• Timbre, Pitch, Loudness
Manipulations
• Rearranging, blending, time stretching, pitch shifting, video, looping
• Fade-ins, fade-outs, crossfades, find similar, sorting
It turns music into silly putty.
(Figure: auditory spectrogram with segment, pitch, and timbre features.)
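The Section > Bar > Beat > Tatum nesting described above can be sketched as plain data. This is an illustrative model only: the `Quantum` class and its fields are hypothetical stand-ins written in modern Python, not the actual Echo Nest remix objects.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Quantum:
    """One span of the analysis hierarchy (a section, bar, beat, or tatum)."""
    start: float                                  # onset time in seconds
    duration: float                               # length in seconds
    kids: List["Quantum"] = field(default_factory=list)

    def children(self):
        # Mirrors the children() accessor used in the remix examples later on:
        # a bar's children are its beats, a beat's children its tatums, etc.
        return self.kids

# A one-second bar containing two half-second beats:
beat1 = Quantum(0.0, 0.5)
beat2 = Quantum(0.5, 0.5)
bar = Quantum(0.0, 1.0, [beat1, beat2])
```

The same parent/child shape repeats at every level, which is what makes slide-35-style tricks like "take beat one of every bar" a one-liner.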
9. Song API example
Find the loudest songs by thrash artists
song/search?sort=loudness-desc&description=thrash
Find indie songs for jogging
song/search?min_tempo=120&style=indie&max_tempo=125
Find hottest songs by Lady Gaga
song/search?sort=hotttnesss-desc&artist=lady+gaga
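These query strings can be turned into full request URLs with nothing but the standard library. The sketch below targets the historical (now retired) Echo Nest v4 endpoint; `song_search_url` and `YOUR_KEY` are illustrative names, not part of any official client.

```python
from urllib.parse import urlencode

BASE = "http://developer.echonest.com/api/v4"

def song_search_url(api_key, **params):
    """Build a song/search request URL for the Echo Nest v4 API.

    api_key and every keyword argument are passed straight through
    as query-string parameters.
    """
    query = dict(params, api_key=api_key, format="json")
    return BASE + "/song/search?" + urlencode(query)

# The "loudest thrash songs" example from the slide:
url = song_search_url("YOUR_KEY", sort="loudness-desc", description="thrash")
```

Swapping in `min_tempo`/`max_tempo`/`style` keywords reproduces the jogging example the same way.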
10. Audio properties in a few lines of code
results = song.search(artist='Michael Jackson', title='billie jean')
if len(results) > 0:
    print 'tempo', results[0].audio_summary['tempo']
    print 'dance', results[0].audio_summary['danceability']
    print 'energy', results[0].audio_summary['energy']

tempo 117.128
dance 0.97
energy 0.47
11. More APIs!
• Taste Profiles for personalization
• Advanced Playlisting
• Song identification
Plus, client libraries for popular platforms:
Python Java Ruby iOS Android etc
12. ARTIST RADIO IN 2 LINES OF CODE
for song in playlist.static(type='artist-radio', artist='weezer'):
    print song.title, 'by', song.artist_name
Island In The Sun by Weezer
1979 by The Smashing Pumpkins
Walk by Foo Fighters
Dance, Dance by Fall Out Boy
Blast Off! by Rivers Cuomo
Oh Me, Oh My by Nerf Herder
Birdhouse in Your Soul by They Might Be Giants
Smells Like Teen Spirit by Nirvana
Alison by Elvis Costello
Girl, You'll Be a Woman Soon by Urge Overkill
Stacy's Mom by Fountains of Wayne
The Middle by Jimmy Eat World
Worry A Lot by The Like Young
1985 by Bowling for Soup
Do You Realize?? by The Flaming Lips
13. Our playlist engine powers the listening
experience for millions of music listeners
14. The Playlist API
• Fine grained control over:
• artist selection, variety
• hotttness, familiarity, location
• song selection
• Any musical attributes (e.g. tempo range, key)
• song ordering
• Artist or song attributes (e.g. loudness)
15. Some examples
• Play tracks by Weezer and Radiohead
playlist/static?&artist=weezer&artist=radiohead&results=20&type=artist
• Weezer artist radio
playlist/static?&artist=weezer&artist=radiohead&type=artist-radio
• Playlist of music by pop divas ordered by tempo
playlist/static?&description=pop&description=diva&type=artist-description&artist_min_familiarity=.9&sort=tempo-asc
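Note the repeated parameters in these examples (`artist=weezer&artist=radiohead`). In modern Python such query strings can be assembled with `urllib.parse.urlencode(..., doseq=True)`, which expands list values into repeated keys; `playlist_static_url` below is an illustrative helper, not part of any official client.

```python
from urllib.parse import urlencode

def playlist_static_url(api_key, **params):
    """Build a playlist/static URL for the (retired) Echo Nest v4 API.

    doseq=True turns list values into repeated query parameters,
    e.g. artist=['weezer', 'radiohead'] -> artist=weezer&artist=radiohead
    """
    params["api_key"] = api_key
    return ("http://developer.echonest.com/api/v4/playlist/static?"
            + urlencode(params, doseq=True))

# The "tracks by Weezer and Radiohead" example from the slide:
url = playlist_static_url("YOUR_KEY",
                          artist=["weezer", "radiohead"],
                          results=20, type="artist")
```

The same helper covers the artist-radio and pop-diva examples by changing `type` and adding `description`, `artist_min_familiarity`, or `sort` keywords.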
16. Audio Fingerprinter
• Identify songs based upon audio
• Fingerprinter executables and libraries for
Windows, Mac and Linux
• Song ID typically in less than a second per song
• Currently in beta
• More info at:
http://groups.google.com/group/enmfp
18. Open EMI
• Dozens of artist sandboxes
• Audio
• Video
• Images
• More ...
19. Content Available
Audio (inc metadata) Video Imagery Promo Tools Web Tools
Selection 2,000 tracks
Over 10,000 tracks
+ artwork
70 tracks
Web banners
41 albums 135 86 Image assets
27 Photosessions
26 Games
+ artwork (coming soon)
Screensavers
71 albums 180 26 Image assets Web banners
8 Photosessions
35 Games
+ artwork (coming soon)
24 albums 32 Logos
2 Photosessions
16
+ artwork (coming soon)
11 albums 49 Logos
4 Photosessions
9
+ artwork (coming soon)
13 albums 31 Logos
Photosession
12
+ artwork (coming soon)
14 albums 27 Logos
5 Photosessions
11
+ artwork (coming soon)
10 albums 23 Logo
Photosession
9
+ artwork (coming soon)
19
20. Get ready for Christmas!
Constrain song searches and playlists to songs that match a
given ‘song type’
Example: Justin Bieber Christmas Radio
http://developer.echonest.com/api/v4/playlist/static?api_key=key&art
song_type=christmas
Demo: http://static.echonest.com/demo/xmas.html
34. With remix you can chop sound into:
• Sections
• Bars
• Beats
• Tatums
• Segments
And then programmatically manipulate all of the bits and pieces.
35. slicing and dicing
Create a remix from beat one of every bar
bars = audiofile.analysis.bars
collect = []
for bar in bars:
    collect.append(bar.children()[0])
out = audio.getpieces(audiofile, collect)
out.encode(output_filename)
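The getpieces pattern on this slide can be sketched without any audio library: treat each bar as a list of per-beat sample chunks and concatenate the selected chunks. Everything here (`bars`, `getpieces`) is a stand-in for illustration, not the pyechonest implementation.

```python
# Stand-in for the remix workflow: each "bar" is a list of per-beat
# sample chunks, and getpieces simply concatenates the chosen chunks.
bars = [
    [[1, 2], [3, 4], [5, 6], [7, 8]],         # bar 1: four beats of samples
    [[9, 10], [11, 12], [13, 14], [15, 16]],  # bar 2: four beats of samples
]

# Beat one of every bar, as in the slide:
collect = [bar[0] for bar in bars]

def getpieces(pieces):
    """Concatenate the selected chunks into one output stream."""
    out = []
    for p in pieces:
        out.extend(p)
    return out

remix = getpieces(collect)  # -> [1, 2, 9, 10]
```

Reversing `bars` (or, on the next slide, the beat list) before collecting yields the beat-reversing variant with the same concatenation step.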
36. beat reversing
beats = audiofile.analysis.beats
collect = []
beats.reverse()
for beat in beats:
    collect.append(beat)
out = audio.getpieces(audiofile, collect)
out.encode(output_filename)
40. How can I get started?
Get a key & check out our api docs -
developer.echonest.com
Get a wrapper for your language - C,
iOS, Python, Java, Ruby, PHP, more
If you want to make music get Remix
from our GitHub: github.com/echonest/
Talk to us!
paul@echonest.com
AUDIO: “albums” = multi-track singles + different territory releases + clean/explicit versions
VIDEO: official videos (+ 30 sec clips of official) + EPKs + interviews + documentaries + teasers + clean/explicit versions of each where appropriate. Full videos and 30 sec clips are counted as separate assets, so the number of full videos is roughly half of the video assets listed.
IMAGERY: posters + print ads + photosessions + logos + wallpapers etc.
PROMO TOOLS: biographies + press releases