The document provides an overview of physical modeling of musical instruments on handheld mobile devices. It discusses the history of physical modeling synthesis techniques and commercial applications. It then demonstrates moForte's modeled guitar for mobile devices, which uses digital signal processing techniques like the Karplus-Strong and waveguide algorithms to model string vibration and effects. The guitar model can be calibrated to mimic different instruments and includes features like strumming, effects chains, and various articulations. Physical modeling on mobile devices continues to advance, allowing more expressive virtual instruments to be created with the increasing processing power of smartphones and tablets.
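The Karplus-Strong algorithm mentioned above is simple enough to sketch in a few lines: a delay line seeded with noise is repeatedly fed back through a gentle low-pass filter, producing a decaying plucked-string tone. This is a minimal illustrative sketch, not moForte's implementation; the function name and parameters are my own.

```python
import random

def karplus_strong(freq, sample_rate=44100, duration=1.0, decay=0.996):
    """Minimal Karplus-Strong plucked-string sketch.

    The delay-line length (sample_rate / freq) sets the pitch; each pass
    averages adjacent samples, damping high frequencies like a real string.
    """
    n = int(sample_rate / freq)                      # delay-line length
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # noise burst = "pluck"
    out = []
    for _ in range(int(sample_rate * duration)):
        out.append(buf[0])
        # two-point average acts as the loop's low-pass filter
        avg = decay * 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [avg]
    return out

samples = karplus_strong(440.0, duration=0.5)  # a decaying A4 "pluck"
```

A waveguide guitar model elaborates on this same loop with tuned filters for the bridge, body, and pick position.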
Builders and sales associates were interviewed to understand their perceptions of green building materials and energy efficiency. Regarding the product category, builders saw benefits like protection from weather and moisture but were unsure if it was truly "green". They wanted more information on its performance and energy savings. Providing an interactive tool like an energy calculator that demonstrates product category's benefits was suggested to help increase builders' confidence and ability to market it effectively to homeowners.
The EMS Musys system was the first digital sampler, developed in the late 1960s using two PDP-8 mini computers with 12k of memory. The Fairlight CMI, designed in 1979, was an early digital sampling synthesizer that used recorded sounds as starting points due to limitations of earlier synthesizers. The SP-1200 became iconic for hip hop due to allowing artists to construct full songs on a portable device.
Lê Trường An – Translator – Author – Marketer – specializes in SEO, social media, translation, and content-publishing projects. He also continuously updates his blog with topics on SEO, marketing, and more…
---
Content Creator Lê Trường An
Marketing Specialist – Author – Translator at letruongan.com
Marketing Specialist at BrainCoach
Content Marketing Specialist at FoogleSEO
Marketing – SEO – Content Marketing services
The document describes a student project that used a Texas Instruments DSP starter kit to implement digital signal processing techniques for real-time audio effects and a haptic beat detector device. Key aspects included designing digital filters in MATLAB to create effects like echo, reverb and chorus. A haptic motor controller connected to the DSP board detected beats in music and vibrated in time. The project provided hands-on experience with DSP concepts and their applications in areas like assistive technology. Evaluation showed the audio effects and beat detector worked as intended.
CrestaTV is the next step in the evolution of computing, adding Live Broadcast to the music, pictures, documents, and contacts we already carry with us.
This document discusses the history and technology behind different methods of audio recording, including mechanical, magnetic, optical, and digital formats. It covers early developments like the phonograph and gramophone, as well as modern technologies like vinyl record cutting lathes, magnetic tape recording using reel-to-reel and cassette tapes, optical discs like CDs that use lasers to read encoded data pits, and digital audio formats like DAT tapes and portable recorders that store audio digitally. Key advantages of digital formats are freedom from noise, error correction, high information density, and ability to compress data.
The first CPU chip, the Intel 4004, was released in 1971. It had a clock speed of 740 kHz and could execute up to 92,600 instructions per second. The first computer mouse was invented in 1963 by Douglas Engelbart. The first hard disk drive was the IBM Model 350, part of the IBM 305 RAMAC computer delivered in 1956, with a storage capacity of 4.4 MB. The first laser printer was invented by Gary Starkweather at Xerox in 1969.
Audio is important because it informs us and moves us in ways visuals can’t. Despite its importance, developing compelling audio can often be a confusing endeavor. We welcome those seeking to improve their fundamental comprehension of audio and challenge how they develop their audio moving forward. This event will demystify the art of broadcast audio by covering fundamental concepts for people looking to improve their setups and overall audio experience for their respective audiences.
The workshop is taught by Michel Henein, with space provided by DigiPen Institute of Technology. Along with the other senses, audio provides important context for human experience. Drawing on his knowledge, skills, and experience, Michel wants to challenge how our community understands audio and what it can do to take it to the next level. Come learn from someone who has spent his career bringing life to audio. *Please note: we are not yet sure whether we can provide note-taking supplies for attendees, so bring your own if you can.
BroadcasterU is a program from the Seattle Online Broadcasters Association that educates the local online broadcasting community on topics related to online broadcasting. This workshop will be taught by Michel Henein, whose film and TV experience includes a credit on the Academy Award-winning documentary “The Last Days.” He has worked on video game titles including Pixar’s Cars, MX Unleashed, and MX vs. ATV Unleashed, and co-developed the world’s first video game audio educational curriculum for Wwise, a leading audio engine for AAA games, for the Conservatory of Recording Arts and Sciences. He is currently VP of Product for VisiSonics Corporation, a supplier of 3D audio technology to leading electronics, gaming, and VR companies, and Chief Product Officer for VZR, Inc., a headphone technology company.
These slides were from the culminating presentation of my residency at Eyebeam. In it I discuss the experiments and prototypes I developed while exploring semi-modular synthesizer design. I cover my inspirations and references and the potential applications for the instrument I designed.
Video: https://youtu.be/mElsxW8DtGs
The synthesizer for A2 music tech students – music_hayes
The document discusses the history and evolution of synthesizers from early electronic instruments like the Theremin to modern digital synths. It covers important analog synths like the Minimoog and Prophet-5, early digital synths like the Yamaha DX7 and Roland D-50 that used new synthesis methods, and how sampling synths like the Korg M1 allowed realistic sounds. The document provides examples of classic synths and how they were used to shape popular music genres.
My amazing journey from mainframes to smartphones (CHM lecture, Aug 2014, final) – Dileep Bhandarkar
Disruptive technologies have driven dramatic changes in computing for decades, often in unacknowledged ways. In this talk, Dr. Dileep Bhandarkar puts these changes into perspective and shows how this series of disruptions set a course from the mainframe to today's smartphone, mobile, and cloud computing world.
Build an Analog Synthesizer with littleBits – Chad Mairn
Discover the wonderful world of littleBits! Learn the basics of sound, understand synthesizers and their history, and build a basic analog synthesizer that generates beeps, blips, and other fun electronic sounds. This workshop is hands-on and you will also learn how to control littleBits with an external MIDI controller and use littleBits to control Ableton Live and other Digital Audio Workstations (DAW).
LittleBits are magnetic modules that snap together to create electronic circuits without wiring. They can be used to build analog synthesizers and control music software. The presentation discusses the basics of sound, the history of synthesizers, and how to build a synthesizer and control a DAW using the LittleBits synth kit and MIDI module. The synth kit contains modules like oscillators, filters, envelopes and effects that can be combined using the magnetic connections to create sounds, which can then be controlled or triggered using the MIDI module to interface with software.
The document discusses the evolution of computer architectures from early technological achievements like the transistor and integrated circuit. It describes increasing transistor densities following Moore's Law. Future technologies will focus on increasing core counts while decreasing cycle times and voltages. Performance will come from parallelism rather than clock speed increases due to heat limitations. The document outlines challenges in scaling to exascale systems by 2018.
The document is a glossary assignment for a games design course requiring research and definitions of sound design and production terms. It contains a template with over 30 terms defined along with their relevance to the student's own production practice. Definitions are from online sources and include terms like foley artistry, sound libraries, file formats like .wav and .mp3, audio hardware and software like MIDI, samplers and plugins.
Real Time Drum Augmentation with Physical Modeling – Ben Eyes
This document discusses augmenting acoustic drums with physical modeling to create new sounds and performances. It summarizes previous research that used convolution or spectral processing to digitally process drum sounds. The author then describes his own project that uses a physical model of strings as a VST plugin to process drum sounds from a snare drum and rototoms in real time. An interview with the percussionist discusses the collaborative composition process and how playing with the system required experimenting with extended techniques. The author concludes that future work will involve developing their own drum models and exploring new interfaces like facial recognition to control sound parameters.
We were pioneers: early applications of dwn simulations_2 – Piero Belforte
The early applications (1970s) of a revolutionary electrical circuit simulation method (DWN) are presented, including device modelling and signal-integrity-driven design of high-speed digital modules. These modules were used to develop the prototypes of digital switching systems deployed in the Italian Telecom network in the 1970s.
Big Data Everywhere Chicago: High Performance Computing - Contributions Towar... – BigDataEverywhere
The document discusses the history and future of high performance computing (HPC). It outlines the key technologies and architectures that have enabled exponential increases in computational power over recent decades. These include vector processing, parallelization, GPUs, and interconnects like Infiniband. The document also examines emerging technologies like exascale computing and quantum computing that could further push the boundaries of HPC. Overall, the document argues that HPC will remain indispensable for scientific discovery and engineering innovation into the future.
Both analogue and digital recordings start by converting sound into an electrical analog signal using a microphone. Analogue recordings directly transfer this signal to tape or vinyl records, while digital recordings convert the analog signal into numeric code. After digitization, digital recordings can be copied to formats like CDs, hard drives, or streamed online. While both methods have advantages, analogue recordings are vulnerable to deterioration over time and have limited editing capabilities, while digital recordings have greater editing flexibility but risk data corruption and compatibility issues between software.
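The "numeric code" step described above is sampling plus quantization: each analog sample is mapped to the nearest of a finite set of levels. A minimal sketch of a uniform quantizer, with a hypothetical function name and assuming samples normalized to [-1, 1]:

```python
def quantize(signal, bits):
    """Uniform quantizer sketch: snap each sample in [-1, 1] to the
    nearest of 2**bits evenly spaced levels (the A/D conversion step)."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)          # spacing between adjacent levels
    return [round((s + 1.0) / step) * step - 1.0 for s in signal]

# 3-bit audio is audibly coarse; 16-bit (CD quality) error is inaudible
coarse = quantize([0.2, -0.7], bits=3)
fine = quantize([0.2, -0.7], bits=16)
```

The quantization error is bounded by half a step, which is why more bits per sample directly translate into a lower noise floor.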
Shaun Warburton produced a glossary of sound design and production terms. He researched definitions from online sources and described how each term relates to his own production practice. Some key terms he defined and related to his work include foley artistry, sound libraries, uncompressed audio file formats like .wav and .aiff, lossy compression formats like .mp3, limitations of early sound technology like Sound Processor Units, and digital audio recording systems like MIDI keyboards.
Digital resistance, East European demo art – Jari Jaanto
The Alternative Party presents a showcase of old computer hardware (and new software for them) from the Soviet Union and East European countries, showing a rare glimpse of the unique computing culture that is very different from the corporate-driven, western hardware offerings. A show of unique styles of digital art, influenced by demo art and local computer hobbyist cliques, with a distinct Eastern flavour.
Improvising Songwriting and Composition Within A Hybrid Modular Synthesis System – Hussein Boon
This paper discusses a semi-improvised compositional approach within a hybridised electroacoustic music context. It features a live presentation and discussion of a novel form of professional application to expand contemporary artistic practice.
A central component of this approach is a discussion of the Analogue Shift Register (ASR), including its various digital representations. Historically, the ASR emerged during the early 1970s; the first example, by Serge Tcherepnin, was described as a '.. sequential sample and hold module for producing arabesque-like forms in musical space'.
Contemporary realisations of the ASR can be found in devices such as Ornament and Crime's (O&C) CopierMaschine and the Turing Machine (Music Thing Modular/Tom Whitwell). The shift register, whether analogue or digital, can be a more engaging vehicle for a composing/writing practice than a sequencer for many practitioners, due to its self-generative capabilities. While it is a slightly more 'esoteric' device, the lines or patterns developed with these systems can be resampled and integrated into various types of composition. As a practical aid, the device can seed results applicable to any electroacoustic medium, whether for stage, recording studio, or live performance. Outputs can be managed with varying levels of granularity, and artists can produce innovative results when these systems are combined with knowledge of harmony, oscillator tuning, and CV quantisation, alongside the exploration of various generative algorithms. Assisted by these devices, the performer/composer can extend their practice to generate structurally complex pieces using a novel approach that has not previously been realised or substantially experimented with alongside contemporary music-making tools.
The presentation will demonstrate some approaches that allow for original work to be devised using a modular synthesiser as part of the compositional/songwriting process and will enable discussion of the relative merits of this novel form of professional application.
This document discusses real-time embedded acoustic DSP projects. It begins by describing the objective of creating audio special effects using DSP algorithms, then covers the technologies applied: a TMS320C6713 DSK for hardware and the Code Composer Studio software. The core of the document discusses the theory behind building blocks like comb filters, all-pass filters, and notch filters, which are combined to create effects such as echo, flanging, chorus, phasing, reverb, and tremolo. It concludes with considerations for real-time embedded DSP applications on the TMS320C6713 architecture.
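The feedback comb filter named above is the core of a digital echo: the output is the input plus a delayed, attenuated copy of the output itself. A minimal sketch (function name my own, offline rather than real-time, unlike the DSK projects described):

```python
def feedback_comb_echo(x, delay, gain):
    """Feedback comb filter: y[n] = x[n] + gain * y[n - delay].

    With a delay of tens of milliseconds this is an echo; shorter delays
    and several such filters in parallel form the skeleton of a reverb.
    """
    y = []
    for n, s in enumerate(x):
        fb = gain * y[n - delay] if n >= delay else 0.0
        y.append(s + fb)
    return y

# a single impulse produces a train of geometrically decaying echoes
impulse = [1.0] + [0.0] * 9
echoes = feedback_comb_echo(impulse, delay=3, gain=0.5)
# → [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

Keeping |gain| below 1 is essential: at or above 1 the feedback loop never decays and the output grows without bound.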
The document is a glossary of terms related to sound design and production for computer games created by Phillip Norris Wynne. It contains definitions for over 20 key terms sourced from online references and describes the relevance of each term to the author's own production practice. Some of the terms defined and summarized include Foley artistry, sound libraries, audio file formats like .wav and .mp3, lossy compression, audio limitations of hardware like sound processor units, and audio sampling concepts like bit depth and sample rate.
This document discusses managing sound data from digitized sound archives. It covers inventorying and cataloguing physical media like tapes, discs, and cassettes to make the digital sound files accessible. Topics include writing file specifications, assigning metadata, and ensuring files are stored securely and can be discovered. Proper storage conditions and handling of physical media are also reviewed to support long-term preservation of and access to the digitized content.
1. Physical Modeling of Musical Instruments on Handheld Mobile Devices
Pat Scandalis (CTO, acting CEO) gps@moforte.com
Dr. Julius O. Smith III (Founding Consultant)
Nick Porcaro (Chief Scientist)
moForte Inc.
moForte.com Technology Deck
3/17/2014
12/6/13 1
2. Physical Modeling Synthesis
• Methods in which a sound is generated using a mathematical model of the physical source of the sound.
• Any gestures that are used to interact with a real physical system can be mapped to parameters, yielding an interactive and expressive performance experience.
• Physical modeling is a collection of different techniques.
3. First, a Quick Demo!
DEMO: Modeled Guitar Features, "Purple Haze" (youTube)
4. Overview
• A brief history of physically modeled musical instruments, as well as some commercial products that have used this technology.
• A demonstration of what is currently possible on handheld mobile devices using the moForte Guitar.
• A brief overview of where the technology is heading.
5. Why Musical Physical Models on Handheld Mobile Devices?
• Handheld mobile computing devices are now ubiquitous.
• These devices are powerful, connected, and equipped with a variety of sensors.
• The pervasiveness of mobile, sensor-rich computing devices has created an opportunity to revisit parametrically controlled, physically modeled, virtual musical instruments.
6. Properties of Handheld Mobile Devices
• Ubiquitous
• Small
• Powerful
• Multi-touch screens
• Sensors: acceleration, compass, gyroscope, camera, gestures
• Connected to networks
• Socially connected
• Integrated payment systems
7. Brief (though not complete) History of Physical Modeling Synthesis
As well as a few commercial products using the technology
9. Daisy Bell (1961)
• Daisy Bell (MP3)
• Vocal part by Kelly and Lochbaum (1961)
• Musical accompaniment by Max Mathews
• Computed on an IBM 704
• Based on Russian speech-vowel data from Gunnar Fant's book
• Probably the first digital physical-modeling synthesis sound example by any method
• Inspired Arthur C. Clarke to adapt it for "2001: A Space Odyssey" as the HAL 9000's "first song"
10. Karplus-Strong (KS) Algorithm (1983)
• Discovered (1978) as "self-modifying wavetable synthesis"
• Wavetable is preferably initialized with random numbers
• Licensed to Mattel
• The first musical use of the algorithm was in the work "May All Your Children Be Acrobats" written in 1981 by David A. Jaffe. (MP3)
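The self-modifying wavetable idea fits in a few lines. Below is a minimal Python sketch of a Karplus-Strong pluck (not moForte's implementation; the buffer length, decay value, and two-point average are standard textbook choices):

```python
import random

def karplus_strong(frequency, sample_rate=44100, duration=1.0, decay=0.996):
    """Minimal Karplus-Strong pluck: a wavetable initialized with random
    numbers, repeatedly read out and smoothed by a two-point average."""
    n = int(sample_rate / frequency)  # delay-line length sets the pitch
    table = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(sample_rate * duration)):
        out.append(table[i % n])
        # the "self-modifying" step: average adjacent samples, feed back
        table[i % n] = decay * 0.5 * (table[i % n] + table[(i + 1) % n])
    return out

samples = karplus_strong(440.0, duration=0.5)  # half a second of A4
```

The averaging acts as a gentle lowpass inside the loop, so high partials die out first, much like a real plucked string.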
11. EKS Algorithm (Jaffe-Smith 1983)
• Musical Example: "Silicon Valley Breakdown" (Jaffe 1992) (MP3)
• Musical Example: BWV-1041 (used to intro the NeXT machine, 1988) (MP3)
12. Digital Waveguide Models (Smith 1985)
• Useful for efficient models of:
– Strings
– Bores
– Plane waves
– Conical waves
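For an ideal string, the waveguide view replaces the single wavetable with two delay lines carrying right- and left-going traveling waves, with inverting reflections at the two terminations. A minimal sketch (the loss factor and triangular pluck shape are illustrative assumptions):

```python
from collections import deque

def waveguide_string(frequency, sample_rate=44100, duration=0.5, loss=0.995):
    """Ideal-string digital waveguide: two traveling-wave delay lines with
    inverting, slightly lossy reflections at the bridge and the nut."""
    half = max(1, int(sample_rate / frequency / 2))  # each rail is half a period
    right = deque([0.0] * half, maxlen=half)
    left = deque([0.0] * half, maxlen=half)
    # pluck: split a triangular displacement between the two rails
    for i in range(half):
        shape = 0.5 * (1.0 - abs(2.0 * i / half - 1.0))
        right[i] += shape
        left[i] += shape
    out = []
    for _ in range(int(sample_rate * duration)):
        bridge = -loss * right[-1]   # wave arriving at the bridge, reflected
        nut = -loss * left[-1]       # wave arriving at the nut, reflected
        right.appendleft(nut)        # reflected wave travels back to the right
        left.appendleft(bridge)      # reflected wave travels back to the left
        out.append(right[0] + left[0])  # physical output = sum of both waves
    return out

out = waveguide_string(441.0)
```

The total loop length (two rails plus two inverting reflections) is one period, so the fundamental lands at the requested frequency.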
15. Commuted Synthesis Examples
• Electric guitar, different pickups and bodies (Sondius) (MP3)
• Mandolin (STK) (MP3)
• Classical Guitar (Mikael Laurson, Cumhur Erkut, and Vesa Välimäki) (MP3)
• Bass (Sondius) (MP3)
• Upright Bass (Sondius) (MP3)
• Cello (Sondius) (MP3)
• Piano (Sondius) (MP3)
• Harpsichord (Sondius) (MP3)
16. Yamaha VL Line (1994)
• Yamaha licensed "Digital Waveguide Synthesis" for use in its products, including the VL line (VL-1, VL-1m, VL-70m, EX-5, EX-7, chip sets, sound cards, soft-synth drivers)
• Shakuhachi: (MP3)
• Oboe and Bassoon: (MP3)
• Tenor Saxophone: (MP3)
18. "The Next Big Thing" (1994)
The Next Big Thing (2/94), The History of PM (9/94)
19. Stanford Sondius Project (1994-1997)
• Stanford OTL/CCRMA created the Sondius project to assist with commercializing physical modeling technologies.
• The result was a modeling tool known as SynthBuilder, and a set of models covering about two thirds of the General MIDI set.
• Many modeling techniques were used, including EKS, Waveguide, Commuted Synthesis, Coupled Mode Synthesis, and Virtual Analog.
20. SynthBuilder (Porcaro, et al) (1995)
• SynthBuilder was a user-extensible, object-oriented, NEXTSTEP Music Kit application for interactive real-time design and performance of synthesizer patches, especially physical models.
• Patches were represented by networks consisting of digital signal processing elements called unit generators and MIDI event elements called note filters and note generators.
21. The Frankenstein Box (1996)
• The Frankenstein box was an 8-DSP 56k compute farm built by Bill Putnam and Tim Stilson.
• There was also a single-card version known as the "Cocktail Frank".
• Used for running models developed with SynthBuilder.
• The distortion guitar ran on 6 DSPs, with an additional 2 DSPs used for outboard effects.
22. The Sondius Electric Guitar (1996)
• Pick model for different guitars/pickups (commuted synthesis, Scandalis)
• Feedback and distortion with amp distance (Sullivan)
• Wah-wah based on Cry Baby measurements (Putnam, Stilson)
• Reverb and flanger (Dattorro)
• Hybrid allpass delay line for pitchBend (Van Duyne, Jaffe, Scandalis)
• Performed using a 6-channel MIDI guitar controller.
• With no effects, 6 strings ran at 22k on a 72 MHz Motorola 56002 DSP.
• Waveguide Guitar Distortion, Amplifier Feedback (MP3)
23. Sondius Sound Examples (1996)
• Waveguide Flute Model (MP3)
• Waveguide Guitar Model, Different Pickups (MP3)
• Waveguide Guitar Distortion, Amplifier Feedback (MP3)
• Waveguide Guitar Model, Wah-wah (MP3)
• Waveguide Guitar Model, Jazz Guitar (ES-175) (MP3)
• Harpsichord Model (MP3)
• Tibetan Bell Model (MP3)
• Wind Chime Model (MP3)
• Tubular Bells Model (MP3)
• Percussion Ensemble (MP3)
• Bass (MP3)
• Upright Bass (MP3)
• Cello (MP3)
• Piano (MP3)
• Harpsichord (MP3)
• Virtual Analog (MP3)
24. Coupled Mode Synthesis (CMS) (Van Duyne) (1996)
• Modeling of percussion sounds
• Modal technique with coupling
• Tibetan Bell Model (MP3)
• Wind Chime Model (MP3)
• Tubular Bells Model (MP3)
• Percussion Ensemble (MP3)
25. Virtual Analog (Stilson-Smith) (1996)
• Alias-free digital synthesis of classic analog waveforms
• Digital implementation of the Moog VCF: four identical one-poles in series with a feedback loop.
• Sounds great! (MP3) (youTube)
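The structure described above, four identical one-pole lowpass sections with the output fed back to the input, can be sketched as follows. The coefficient formula and the one-sample feedback delay are common simplifications, not the exact Stilson-Smith design:

```python
import math

def moog_vcf(x, cutoff, resonance, sample_rate=44100.0):
    """Sketch of the Moog VCF structure: four identical one-pole lowpass
    sections in series, with the ladder output fed back (scaled by the
    resonance amount) and subtracted from the input."""
    # one-pole coefficient for the requested cutoff (a common approximation)
    g = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
    s = [0.0, 0.0, 0.0, 0.0]  # states of the four one-pole sections
    fb = 0.0                  # previous output, used for feedback
    y = []
    for sample in x:
        u = sample - resonance * fb   # global feedback around the ladder
        for i in range(4):            # four identical one-poles in series
            s[i] = s[i] + g * (u - s[i])
            u = s[i]
        fb = s[3]
        y.append(s[3])
    return y

# step response with no resonance settles to the input level
y = moog_vcf([1.0] * 2000, cutoff=1000.0, resonance=0.0)
```

Raising `resonance` toward 4 pushes the loop toward self-oscillation, which is exactly the behavior the analog circuit is loved for.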
26. Synthesis Tool Kit (STK) (1997)
• Synthesis Tool Kit (STK) by Perry Cook, Gary Scavone, et al., distributed by CCRMA.
• The Synthesis Toolkit (STK) is an open source API for real-time audio synthesis, with an emphasis on classes that facilitate the development of physical modeling synthesizers.
• Pluck example (MP3)
• STK Clarinet (MP3)
27. Seer Systems "Reality" (1997)
• Stanley Jungleib, Dave Smith (MIDI, Sequential Circuits)
• Ring-0 software MIDI synth; native signal processing.
• Offered a number of Sondius models.
28. Staccato SynthCore (1999)
• Staccato Systems spun out of Sondius in 1997 to commercialize physical modeling technologies.
• SynthCore was a ring-0 synthesis driver that supported both DLS (Downloadable Sounds) and Staccato's proprietary Downloadable Algorithms (DLAs). It was distributed in two forms.
• Packaged as a ring-0 "MIDI driver", SynthCore could replace the wavetable chip on a sound card, as a software-based XG-lite/DLS audio solution (SynthCore-OEM) (SigmaTel, ADI).
• Packaged as a DLL/COM service, SynthCore could be integrated into game titles so that games could make use of interactive audio algorithms (race car, car crashes, light sabers) (SynthCore-SDK) (Electronic Arts, Lucas Arts…).
29. SynthCore Game Models (2000)
• Jet (Stilson) (MP3)
• Race Car (Cascone, et al) (MP3)
30. SynthCore Wavetable Chip Replacement
• About half of the General MIDI set was implemented with physical models, though few existing MIDI scores could make use of the expression parameters.
• Staccato was purchased by Analog Devices in 2000. ADI combined Staccato's ring-0 software-based XG-lite/DLS MIDI synth with a low-cost AC97 codec and transformed the PC audio market from sound cards to built-in audio.
31. Faust-STK (2011)
• FAUST (Functional Audio Stream) is a synchronous functional programming language specifically designed for real-time signal processing and synthesis.
• The FAUST compiler translates DSP specifications into equivalent C++ programs, taking care of generating efficient code.
• The FAUST-STK is a set of virtual musical instruments written in the FAUST programming language, based on waveguide algorithms and on modal synthesis. Most of them were inspired by instruments implemented in the Synthesis ToolKit (STK) and the program SynthBuilder.
32. Smule Magic Fiddle (2010)
Smule | Magic Fiddle for iPad [St. Lawrence String Quartet] (youTube)
33. Compute for string models over the years
• NeXT Machine (1992)
– Motorola DSP56001, 20 MHz, 128k DRAM, 22k sample rate
• 6 plucks, or 2-4 guitar strings
• Frankenstein, Cocktail Frank (1996)
– Motorola DSP56301, 72 MHz, 128k DRAM, 22k sample rate
• 6 guitar strings, feedback and distortion
• Reverb, wah-wah, flange running on additional DSPs
• Staccato (1999)
– 500 MHz Pentium, native signal processing, 22k sample rate
– 6 strings, feedback and distortion used around 80% CPU
• iPhone 4S (2013)
– 800 MHz A5, 44k sample rate
– 6 strings, feedback and distortion use around 37% CPU
• iPad 2 (2013)
– 800 MHz A5, 44k sample rate
– 6 strings, feedback and distortion use around 37% CPU
37. The DSP Guitar Model
• Numerous extensions on EKS and Waveguide.
• Can be calibrated to sound like various guitars. Realized in Faust.
• Charts can access and control ~50 controllers.
• A selection of controllers:
– instrument (select a calibrated instrument)
– velocity
– pitchBend, pitchBendT60 (bending and bend-smoothing rate)
– t60 (overall decay time)
– brightness (overall spectral shape)
– harmonic (configure the model to generate harmonics)
– pinchHarmonic (pinch harmonics)
– pickPosition (play position on the string)
– apagado (palm muting)
DEMO: Different Guitars, Rock and Roll - Strum
38. The Effects Chain
• Chart Player, Guitar, Distortion, Compressor, Wah, Auto Wah, 4-band Parametric EQ, Phaser, Flanger, Reverb, Amplifier.
• Realized in Faust.
DEMO: Strumming Chart
39. The Performance Model
• Strumming and PowerChording Gestures.
• Slides
• Strum Separation Time
• Variances
• Strum Kernels
• Chart Player
40. Disrupting the Uncanny Valley
• We want the playing experience to be fun.
• Aiming toward "suspension of disbelief".
• Use modeling to get close to the real physical sound-generation experience.
• Sometimes "go over the top". It's expressive and fun!
• Use statistical variances to disrupt repetitive performance.
41. Controls With Statistical Variance
• velocity
• pickPosition
• brightness
• t60
• keyNum
• strumSeparationTime
• strumVariation (in auto strum mode)
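As an illustration of applying statistical variance to controls like those listed above, per-event jitter might look like this hypothetical sketch (the function names, ranges, and standard deviations are invented for the example, not moForte's values):

```python
import random

def vary(value, stddev, lo, hi):
    """Apply a small Gaussian variance to a control value, clamped to its
    legal range -- one way to disrupt repetitive playback."""
    return min(hi, max(lo, random.gauss(value, stddev)))

def humanized_strum(chord_midi_notes, velocity=96, separation_ms=12.0):
    """Hypothetical sketch: emit one event per string, with varied velocity,
    pickPosition, and inter-string separation time."""
    t = 0.0
    events = []
    for note in chord_midi_notes:
        events.append({
            "keyNum": note,
            "time_ms": t,
            "velocity": int(vary(velocity, 6, 1, 127)),
            "pickPosition": vary(0.15, 0.02, 0.02, 0.5),
        })
        t += vary(separation_ms, 2.0, 4.0, 30.0)  # strumSeparationTime
    return events

strum = humanized_strum([40, 45, 50, 55, 59, 64])  # six open strings
```

No two strums come out identical, yet every value stays inside its legal range, which is the point of clamped variance.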
DEMO: Strum Variations
42. Strum Kernels
• Small strumming sequences that model how guitar players strum.
• Separates the harmonic context from the musical presentation. Thus the same chord sequence can be performed with different strum kernels.
• A strum is a rhythmic event that is part of a strum kernel. Each strum can model direction, strings, velocity, pickPosition, t60, brightness, and strum separation time.
• Many types of expressive performance are possible: strumming, strum clamps, finger picking, comping.
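A strum kernel can be pictured as plain data plus a renderer that merges it with a chord. The schema below is a hypothetical illustration of the separation the slide describes (it is not moForte's chart XML; the field names follow the parameters listed above):

```python
# One rhythmic event per entry; the same kernel can play any chord.
KERNEL_DOWN_UP = [
    {"beat": 0.0, "direction": "down", "strings": (0, 1, 2, 3, 4, 5), "velocity": 100},
    {"beat": 0.5, "direction": "up",   "strings": (3, 4, 5),          "velocity": 70},
]

def render(kernel, chord, beats_per_bar=4):
    """Combine harmonic context (chord: string index -> keyNum) with a strum
    kernel, so one chord sequence can be played with different feels."""
    events = []
    for bar_beat in range(beats_per_bar):
        for strum in kernel:
            strings = strum["strings"]
            if strum["direction"] == "up":      # up-strums hit strings in reverse
                strings = tuple(reversed(strings))
            for s in strings:
                if s in chord:                  # only strings the chord frets
                    events.append((bar_beat + strum["beat"], s,
                                   chord[s], strum["velocity"]))
    return events

e_major = {0: 40, 1: 47, 2: 52, 3: 56, 4: 59, 5: 64}
events = render(KERNEL_DOWN_UP, e_major)
```

Swapping `KERNEL_DOWN_UP` for a finger-picking kernel changes the feel without touching the chord data, which is the separation the slide is after.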
DEMOS: Finger Picking, Stairway to Heaven, Rasgueado
43. What's Next: Modeling More Articulations
Currently Implemented Articulations
Apagado
Arpeggio strum
Bend
Bend by distressing the neck
Burn or destroy guitar
Feedback harmonics
Finger picking
Glissando
Hard dive with the whammy bar
Harmonic
Muted strum
Pinch harmonic
Play harmonics with tip of finger and thumb
Polyphonic bend
Polyphonic slide, Polyphonic slide + open strings
Scrape
Slide
Staccato
Steinberger trans-trem
Strum
Surf apagado
Surf quick slide up the neck
Tap time
Vibrato
Walk bass
Whammy bend
Whammy spring restore
Future Articulations
Bottleneck (portamento Slide)
Bowing
Bridge/neck short strings
ebowing
Finger Style (Eddie Van Halen)
Hammer, polyphonic hammer
Individual String Pitch Bend
Legato
Pluck, sharp or soft pick
Pop
Prepared string (masking tape)
Pull, polyphonic pull
Rasgueado
Reverb spring Bang.
Scrape+ (ala Black Dog)
Slap
Strum and body tap
Strum and string tap
Touching Ungrounded Cable
Trill
Trill up the neck into echo
Vibrato onset delay
Volume pedal swell
Volume pedal swell into delay device
44. moForte Guitar 2014
• R1.4 in the iTunes App Store
• Soft launch and user testing will continue until R1.8.
• Upcoming app split:
– moForte GuitarInator
– moForte Guitar
DEMO: Blue Swirl
46. When will it be available for Android?
• We plan to support Android by holiday 2014.
• We see Android as an important opportunity and key to meeting our target goals.
• The core DSP is implemented in Faust, which is emitted as C++. Faust now supports Android, and the core DSP is easily ported.
• We are still evaluating what strategy to take with the performance model (likely a C++ port) and the UI.
47. What is moForte's "Conduct and Express" metaphor?
• moForte's mission is to provide highly interactive, social applications that empower everyone to make and share musical and sonic experiences.
• moForte has developed a unique "conduct and express" performance metaphor that enables everyone to experience performing the guitar. The performance experience has been transformed into a small number of gestures:
– tap/hold, for electric lead "PowerChording"
– swiping, for strumming
– rotations and hold-swiping, for expression
• moForte Guitar makes it possible for everyone to experience strumming a guitar, and to experience what it's like to play feedback-distortion guitar.
48. Can users jam together across the internet? (1 of 2)
• moForte has investigated this area but is NOT currently working on creating a platform for jamming across the internet.
• Latency is a significant issue.
– See http://en.wikipedia.org/wiki/Latency_(audio)
• The shared performance experience is particularly sensitive to perceived latency. Within the MI (Musical Instrument) industry it's a rule of thumb that if key-to-sound latency is much larger than 11 ms, the performer will need to "play ahead", leading to a performance that is "loose", error-prone, and even frustrating.
• Audio latency facts:
– Audio latency in air at sea level/room temperature is ~1 ms/ft.
– Using the speed of light, the fastest round trip around the earth is 135 ms (vacuum) to 200 ms (fiber-optic cable).
– Real inter-network latencies can be much greater and more variable.
49. Can users jam together across the internet? (2 of 2)
• Some types of performances are possible:
– Slow performances
– Cascaded
– Side by side (one player after the other)
– Electrifying, tight duets, or real ensembles are less likely to work.
• For consumers, an experience like a band jamming across the internet is not likely to be a good experience.
• In Flamenco music the interaction between two players is referred to as Duende: "It comes from inside as a physical/emotional response to art. It is what gives you chills, makes you smile or cry as a bodily reaction to an artistic performance that is particularly expressive." These players are performing and syncing with around 3 ms of air latency. This is typical of many performance situations.
50. What is the latency?
• The largest source of latency (for iOS) appears to be between screen interaction and the guitar model. Note that the audio buffer latency is about 5 ms.
• We started at 180 ms screen-to-audio-out.
• We brought this down to 21-36 ms by replacing Apple's gesture handlers with a custom gesture handler. This makes sense: gesture handling requires analysis of a moderate amount of state to initiate an action.
• We have not yet measured MIDI/OSC-to-audio latency, but we believe that it will allow us to get close to our 11 ms goal.
• PowerStomp, which is audio-in/effects chain/audio-out, is around 11 ms.
51. What about wireless audio out of the device?
• We've looked at a number of wireless audio solutions. Most are intended for playback of recorded music and have significant latency, some as much as 1 second.
• We've not yet found a solution with reasonable latency.
• We've also looked at a number of "legacy" wireless FM transmitters. None of the ones we have tried have good audio performance.
• We may need to build our own technology in this area.
52. What about wireless synchronized performances (virtual orchestras)?
• We have been experimenting with the idea of a wireless conductor/performer.
• One device is the conductor and the source of time.
• Each device (performer) has its own part.
• The performers receive temporal corrections from the conductor using techniques similar to NTP.
• These temporal corrections can be very minimal data in the wireless network. We estimate that temporal corrections can be as infrequent as once every 30 seconds.
• This will enable a large number of devices in a wireless network to coordinate a performance.
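The NTP-like correction mentioned above boils down to estimating a clock offset from four timestamps per request/response exchange. A minimal sketch of the classic calculation (the timestamps in the example are made up):

```python
def clock_offset(t1, t2, t3, t4):
    """Classic NTP-style estimate from one exchange:
    t1 = performer sends request   (performer clock)
    t2 = conductor receives it     (conductor clock)
    t3 = conductor sends reply     (conductor clock)
    t4 = performer receives reply  (performer clock)
    Returns (offset, round_trip_delay): add `offset` to the performer's
    clock to align it with the conductor's."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: performer clock runs 0.25 s ahead; one-way network delay 0.05 s
off, rtt = clock_offset(t1=100.25, t2=100.05, t3=100.06, t4=100.36)
# off ≈ -0.25 (subtract a quarter second), rtt ≈ 0.10
```

Because network delay cancels out of the offset estimate when the path is symmetric, one exchange every 30 seconds can keep many devices in step.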
53. What about playing along with your music library?
• It's possible, but may not be a great user experience.
• Currently the screen-to-sound latency is a bit long (~21-36 ms) to make this a great user experience.
• Playing along with the music library may be possible via MIDI/OSC or even the moForte GuitarInator enclosure concept.
54. Can the app listen to your music library and automatically generate charts to play?
• We've been looking at various MIR (Music Information Retrieval) technologies to support this idea.
• There are a number of products on the market that try to do harmonic context recognition (the chords), with varying degrees of success.
– Capo, an assisted/manual transcription program used by music transcribers, has some support for recognizing chords using spectral techniques.
– A website, chordify.net, works to recognize the chords for a song using MIR techniques.
• This is an active area of research.
• We may partner with other companies that work in this area. The goal would be to get them to generate our chart XML based on MIR techniques.
55. What will the social network sharing look like?
56. Will moForte do Physical Models for games?
• At Staccato we did physical models for games: http://www.scandalis.com/Jarrah/PhysicalModels/index.html#Staccato
– We had adoption success (1997-2000): the race car and crashes in the EA NASCAR line of games, a light sabre for Lucas Arts.
– The monetization opportunity was not there. The studios wanted to pay as little as $5k/title for a buyout of the technology.
• In 1999 games were selling upwards of $50/seat. Today a game is a few dollars, and we don't think that there is a reasonable monetization opportunity.
57. Can you sense pressure/impulse with the touch screen?
• This would be useful for percussion and other instruments.
• We've experimented with using the accelerometer to extract a parameter that correlates with pressure. There are a number of challenges with this approach.
– On iOS devices the accelerometer appears to be under-sampled for properly identifying an impulse peak.
– The result is highly skewed by how rigidly the user is holding the device, and when the device is set down on a rigid surface (table), it does not work at all.
• We believe that there is a correlation between spot size and force. This would need to be sampled at a reasonable rate and integrated over an appropriate window.
– iOS has a non-public API to read spot size, but use of this API is known to be a reason for app rejection.
– We understand that Android provides access to spot size for a touch. We've not yet experimented with this.
• Search reveals that there are a number of efforts to implement a hardware solution.
58. Do you have backing tracks?
• We are planning to support backing tracks in a future release.
• Playing with a backing track involves some of the same latency issues that exist with playing along with your music library.
• We are developing an "auto-solo" technology that will mitigate most of these issues and allow even the enthusiast to play along with a backing track and sound like an amazing player.
59. How much of the CPU is moForte Guitar utilizing?
• We are currently running six strings and the effects chain.
• On an iPhone 4S or iPad 2 this uses about 70% of the CPU, with ~37% for the 6 guitar strings.
• Visualization graphics are running on the GPU.
• The compute opportunity gets better with time, and we plan to exploit that.
60. How accurate is the timing in moForte Guitar?
• In iOS, for audio we are using CoreAudio with 5 ms buffers.
• The sequencer is very accurate. In iOS we are using a CoreAnimation timer which is tied to the graphics refresh rate.
• We are using standard techniques to manage jitter (~2 ms on average).
61. Why even model a guitar? Don't samples sound great?
• Sampled guitars do sound great. But they are not interactive, and they can have a flat, repetitive playback experience.
• By modeling the guitar it's possible to make interactive features like feedback, harmonics, pick position, slides, brightness, and palm muting part of a performance.
• moForte has identified a list of around 70 guitar articulations that can be used by players. The physicality of the model makes it possible for these articulations to be used in performances.
Currently implement Articulations
Apagado
Arpeggio strum
Bend
Bend by distressing the neck
Burn or destroy guitar
Feedback harmonics
Finger picking
Glissando
Hard dive with the whammy bar
Harmonic
Muted strum
Pinch harmonic
Play harmonics with tip of finger and
thumb
Polyphonic bend
Polyphonic slide, Polyphonic slide +
open strings
Scrape
Slide
Staccato
Steinberger TransTrem
Strum
Surf apagado
Surf quick slide up the neck
Tap time
Vibrato
Walk bass
Whammy bend
Whammy spring restore
Future Articulations
Bottleneck (portamento Slide)
Bowing
Bridge/neck short strings
EBowing
Finger Style (Eddie Van Halen)
Hammer, polyphonic hammer
Individual String Pitch Bend
Legato
Pluck, sharp or soft pick
Pop
Prepared string (masking tape)
Pull, polyphonic pull
Rasqueado
Reverb spring bang
Scrape+ (ala Black Dog)
Slap
Strum and body tap
Strum and string tap
Touching Ungrounded Cable
Trill
Trill up the neck into echo
Vibrato onset delay
Volume pedal swell
Volume pedal swell into delay device
62. Do you model all oscillation modes of
the string (x, y, torsional), coupling,
and multi-stage decay?
• We are modeling one of the primary modes.
• We are looking at adding bridge coupling.
• As available compute increases we may add a
second primary mode as well as other
features.
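The overview of this deck mentions the Karplus-Strong and waveguide algorithms; a minimal single-delay-line pluck in that spirit illustrates what modeling "one primary mode" of the string means. All parameters here are illustrative, not moForte's:

```python
from collections import deque
import random

def karplus_strong(freq_hz: float, sample_rate: int = 44_100,
                   n_samples: int = 44_100, damping: float = 0.996) -> list:
    """Minimal plucked-string sketch: one delay line models one primary
    transverse mode of the string (illustrative, not moForte's model)."""
    period = int(sample_rate / freq_hz)  # delay-line length sets the pitch
    # A burst of noise stands in for the pluck excitation.
    line = deque(random.uniform(-1.0, 1.0) for _ in range(period))
    out = []
    for _ in range(n_samples):
        first = line.popleft()
        out.append(first)
        # Two-point average is the loop's low-pass "string loss" filter;
        # damping < 1 controls the decay time.
        line.append(damping * 0.5 * (first + line[0]))
    return out

random.seed(0)                      # reproducible pluck for the example
tone = karplus_strong(440.0, n_samples=2000)
```

A second delay line per string (the "second primary mode" the slide mentions) would model the other transverse polarization; bridge coupling would mix the strings' loop filters through a shared bridge model.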
63. What about acoustic guitars and all
the other chordophones?
• Yes we are working on
many different types of
electric and acoustic
chordophones.
• moForte is developing a
calibration process that
will allow us to
generate model data
for these different
instruments.
• These instruments will
be offered as in-app
purchases for moForte
Guitar.
64. When will moForte offer a Ukulele?
• We are working on modeling a ukulele along
with a number of other chordophones.
• These instruments will be offered as in-app
purchases for moForte Guitar.
• The ukulele is one of the most requested
instruments ;-)
65. Can I plug my real guitar into the
effects chain?
• moForte has been working on an in-app
upgrade to moForte Guitar called
PowerStomp that will allow a user to plug
a real instrument into the effects chain.
• PowerStomp can be combined with a
special audio in/out cable to connect the
guitar, device and amplifier. Also
PowerStomp supports the Airturn
next/previous pedal to step through a
chart of effects changes.
• We demoed PowerStomp at NAMM in
January.
• PowerStomp will likely ship in the spring
or summer.
66. What’s the plan for growing the
number of effects that are offered?
• moForte's monetization model includes selling
additional effects both for the model guitar and
for PowerStomp, the effects chain upgrade.
• There is a large body of open-source and BSD
effects processor algorithms to draw on. We
will likely re-implement these processors in
Faust.
• moForte has a list of effect units that it plans to
offer for sale in the near term. We expect this
list to grow to between 20 and 40 different types of
effect processors.
67. The distortion sounds great. What
about overdrive?
• Our distortion unit implements hard
distortion.
• As we expand our effects offerings we
will offer an amp/tube/overdrive
modeling unit.
• There is a body of open-source and BSD
algorithms in this area to draw on. We
will likely re-implement these algorithms
in Faust.
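The distinction the slide draws can be sketched as two waveshapers: hard distortion clamps the waveform abruptly, while tube-style overdrive saturates smoothly. The threshold and drive values below are illustrative, not moForte's:

```python
import math

def hard_clip(x: float, threshold: float = 0.5) -> float:
    """Hard distortion: clamp the waveform at a fixed threshold."""
    return max(-threshold, min(threshold, x))

def soft_clip(x: float, drive: float = 3.0) -> float:
    """Overdrive approximation: smooth saturation via tanh,
    normalized so an input of 1.0 maps to 1.0."""
    return math.tanh(drive * x) / math.tanh(drive)

# hard_clip flattens peaks abruptly (rich in harsh harmonics);
# soft_clip rounds them off gradually (warmer, tube-like character).
print(hard_clip(0.8))   # 0.5
print(soft_clip(0.8))   # ~0.99
```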
68. Tell me about the chart editor
• The Chart Editor is an advanced feature that allows
users to create their own charts.
• moForte's underlying chart representation is specified
as XML with an XSD for validation.
• The chart editor that creates chart XMLs is currently
designed for a phone size device.
• Over time we will provide an alternative more
expansive chart UI for tablet devices.
• We may also provide a browser based UI for chart
creation.
• We may also open our chart specification for 3rd
party apps to be able to create charts.
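moForte's chart schema is not public, so this sketch invents a plausible XML shape (all element and attribute names are hypothetical) purely to show how an XML chart representation with XSD-style validation rules might be parsed and sanity-checked:

```python
# Hypothetical chart XML: the real schema is defined by moForte's XSD;
# <chart>, <bar>, <chord> and their attributes are invented for illustration.
import xml.etree.ElementTree as ET

chart_xml = """\
<chart title="Example" tempo="120">
  <bar><chord name="Em" beats="4" strum="down"/></bar>
  <bar><chord name="G"  beats="2"/><chord name="D" beats="2"/></bar>
</chart>"""

root = ET.fromstring(chart_xml)
chords = [(c.get("name"), int(c.get("beats"))) for c in root.iter("chord")]

# A crude 4/4 sanity check of the kind an XSD plus app logic might enforce.
assert sum(beats for _, beats in chords) % 4 == 0

print(chords)  # [('Em', 4), ('G', 2), ('D', 2)]
```

An XML representation like this is also what would make the "open chart specification for 3rd-party apps" idea practical, since any tool that emits schema-valid XML could produce charts.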
69. Tell me about the chordTape UI
• The chordTape is the UI presentation of moForte
Guitar’s chart format.
• moForte started out with the concept that a score is a
simple list of chords that you strum.
• We quickly moved on to supporting lines (riffs) with
single note chords.
• moForte will soon make a transition to tablature as its
primary chart presentation method.
• Tablature is a very well known score presentation
method, used by millions of guitar players. There is a
large body of tablature literature that can be brought
into moForte Guitar.
70. Why would a guitar player be
interested in moForte Guitar?
• Guitar-inator is aimed at entertainment
(gamified tablature, guitar-accompanied
karaoke).
• moForte Guitar offers real utility to musicians
and guitar players in the form of:
– Real instrument performance
– Effects processor for real instruments
– Accompaniment
– Songwriting
71. Will you do plugins VST, Audio Units,
other audio plugin architectures?
• At the present time there are roughly 10 different
audio plug-in architectures and dozens of
different Digital Audio Workstations (DAWs).
• The task of qualifying and supporting a plugin for
these combinations is enormous.
• Many of the plug-in companies dedicate a large
number of resources to qualification and support.
• At the present time moForte does not plan to
market and sell plugins.
• However we may partner with a plugin company
to offer moForte Guitar.
72. How are you different from the Guitar
Hero & Rock Band line of games?
• The Guitar Hero and Rock Band line of games are rhythm games.
– The goal of game play is for the player to win points by tapping (and strumming) notes at the right time, based on
cues on the screen.
– The player is presented with a pre-recorded track.
– The player earns scores and feedback about the performance.
– In these games the virtual guitar does not appear to be organized like a real guitar. Thus game play does not
translate into a real learning experience.
– Because playback is a pre-recorded track, slowing down for learning mastery is difficult.
• In contrast, in moForte’s Guitar-inator:
– The goal is to learn the rhythm of the part so that the part can be played and expressed in a performance
visualizer.
– The player is NOT presented with a pre-recorded track. The user is actually playing the guitar part.
– If the user plays with the correct timing a tally is incremented to show the number of correct taps.
– The user can play the guitar and share that performance with friends.
– Guitar-inator is a gamification of guitar tablature and as such can be used to learn to play the song on a
real guitar.
– Playback can be slowed down for learning mastery.
• Note that moForte Guitar is not a game. It’s a set of performance and composing tools for
musicians and guitar players.
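The "correct tap" tally described above can be sketched as a tolerance-window match between the chart's expected beat times and the player's taps. The 60 ms window and the example times below are illustrative, not moForte's:

```python
def tally(expected_s: list, taps_s: list, tolerance_s: float = 0.060) -> int:
    """Count expected beats that have at least one player tap within
    the timing tolerance (a sketch, not moForte's scoring logic)."""
    hits = 0
    for t in expected_s:
        if any(abs(tap - t) <= tolerance_s for tap in taps_s):
            hits += 1
    return hits

expected = [0.0, 0.5, 1.0, 1.5]       # quarter-note beats at 120 BPM
taps     = [0.01, 0.48, 1.12, 1.50]   # third tap is 120 ms late

print(tally(expected, taps))          # 3
```

Slowing playback for learning mastery simply stretches the expected beat times, which is easy here because the part is synthesized live rather than played back from a recording.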
73. Will moForte provide guitar training
software?
• Training software is targeted to the market of aspiring
guitar players.
• This is a complex educational problem and requires a
significant body of training material to be authored and
proven.
• At the present time we are not approaching this market,
though we may license our technology to companies
who are working on this problem.
• We will, however, be providing the means for guitar
players (~20M in the US) to learn to play specific pieces
of music by single-stepping through tablature (~R1.8)
74. Thanks!
• Mary Albertson
• Chris Chafe
• John Chowning
• Perry Cook
• Jon Dattorro
• David Jaffe
• Joe Koepnick
• Fernando Lopez-Lezcano
• OTL
• Nick Porcaro
• Bill Putnam
• Gregory Pat Scandalis
• Julius Smith
• Tim Stilson
• Scott Van Duyne
• Yamaha
• CCRMA