1. AVAudioEngine is a framework that simplifies building real-time audio apps on iOS and macOS. It allows playing, recording, and processing audio using a node-based graph architecture.
2. Audio flows through a graph of connected audio nodes, forming active chains that establish audio processing threads. Common node types include sources, processors, and destinations.
3. The engine handles audio format management, buffer scheduling, and thread synchronization. It supports audio file and buffer playback, recording, effects processing, audio mixing, and MIDI playback through sampler instruments.
2. -MN Developer and Instructor
-Owner of TapHarmonic, LLC.
-Author of Learning AV Foundation
About...
Bob McCune
http://LearningAVFoundation.com
3. -What is AVAudioEngine?
- Goals and Capabilities
-Understanding AVAudioEngine
- Understanding Audio Nodes
- Playing and Recording Audio
- Audio Mixing and Effect Processing
- Working with MIDI and Samplers
Agenda
What will I learn?
4. AV Foundation Evolution
Humble Beginnings (iOS 7 / Mavericks)
[Framework stack diagram: MediaPlayer, AVKit, UIKit, and AppKit sit above AV Foundation, which builds on Core Animation, Core Video, Core Media, and the audio-only Core Audio.]
5. AV Foundation Evolution
Humble Beginnings (iOS 8 / Yosemite)
[Framework stack diagram: AVKit (on both platforms), UIKit, and AppKit sit above AV Foundation, now split into AVF Video and AVF Audio, which build on Core Animation, Core Video, Core Media, and Core Audio.]
7. -Objective-C API simplifying low-latency, real-time audio
-Features and Capabilities:
- Read and write audio files in all Core Audio supported formats
- Play and record audio using files and buffers
- Dynamically configure audio processing blocks
- Perform audio tap processing
- Perform stereo and 3D mixing of audio signals
- MIDI playback and control over sampler instruments
AVAudioEngine
Core Audio for Mortals
9. -Manages graphs of audio nodes
-Connects audio nodes into active chains
-Dynamically attach and reconfigure graph
-Start and stop the engine
The Engine
AVAudioEngine
11. -AVAudioEngine provides 3 implicit nodes:
- AVAudioInputNode: system input, cannot be created
- AVAudioOutputNode: system output, cannot be created
- AVAudioMixerNode: mixes multiple inputs to a single output
-Nodes are connected via their input and output busses
- Most nodes have one input and one output
- AVAudioMixerNode has multiple inputs and one output
- Busses have an associated audio format
Nodes
AVAudioNode
12. -Nodes are connected to form an active chain
- Source Node → Destination Node = Active Chain
- Establishes an active render thread
Node Connections
Establishing Active Chains
[Diagram: Source Node (Player) → Destination Node (Output)]
13. Node Connections
Establishing Active Chains
[Diagram: Source Node (Player) → Processing Node (Mixer) → Destination Node (Output)]
-Nodes are connected to form an active chain
- Source Node → Destination Node = Active Chain
- Establishes an active render thread
14. Node Connections
Establishing Active Chains
[Diagram: Source Node (Player) → Processing Node (Mixer) ✕ Destination Node (Output); with the link to the output broken, no active chain is formed]
-Nodes are connected to form an active chain
- Source Node → Destination Node = Active Chain
- Establishes an active render thread
15. 1. Create the engine
2. Create the nodes
3. Attach the nodes to the engine
4. Connect the nodes together
5. Start the engine
Basic Recipe
Configuring the Graph
16. Engine Setup
Basic Recipe
// 1. Create engine (example only, needs to be strong reference)
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
// 2. Create a player node
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
// 3. Attach node to the engine
[engine attachNode:player];
// 4. Connect player node to engine's main mixer
AVAudioMixerNode *mixer = engine.mainMixerNode;
[engine connect:player to:mixer format:[mixer outputFormatForBus:0]];
// 5. Start engine
NSError *error;
if (![engine startAndReturnError:&error]) {
// handle error
}
19. -Reads and writes files in all Core Audio supported formats
-Automatically decodes when reading, encodes when writing
- Does not support sample rate conversion
-File has both a file format and a processing format
- fileFormat: on-disk format
- processingFormat: uncompressed, in-memory format
- Both are instances of AVAudioFormat
Audio Files
AVAudioFile
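As a minimal sketch of the two-format idea (the `fileURL` variable is a placeholder for any Core Audio-supported file):

```objc
#import <AVFoundation/AVFoundation.h>

// Open an audio file for reading and compare its two formats.
NSError *error = nil;
AVAudioFile *file = [[AVAudioFile alloc] initForReading:fileURL error:&error];
if (!file) {
    NSLog(@"Unable to open file: %@", error);
} else {
    NSLog(@"On-disk format:    %@", file.fileFormat);       // e.g. AAC, MP3
    NSLog(@"Processing format: %@", file.processingFormat); // uncompressed PCM
}
```

Reads through the file always deliver samples in the processing format; the decode from the on-disk format happens automatically.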
20. -Provides a format descriptor for the digital audio samples
- Provides access to sample rate, channel count, interleaving, etc.
- Wrapper over Core Audio AudioStreamBasicDescription
-Core Audio uses a “Standard” format for both platforms
- Noninterleaved linear PCM, 32-bit floating point samples
- Canonical formats are deprecated!
-Additionally supports “Common” formats
- AVAudioCommonFormat: 16/32-bit integer, 32/64-bit floating point
Audio Formats
AVAudioFormat
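A standard-format descriptor can be created directly; the 44.1 kHz stereo values here are illustrative, not required:

```objc
#import <AVFoundation/AVFoundation.h>

// Create the "standard" format: deinterleaved, 32-bit float linear PCM.
AVAudioFormat *format =
    [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0
                                                   channels:2];
NSLog(@"sampleRate: %.0f, channels: %u, interleaved: %d",
      format.sampleRate, format.channelCount, format.isInterleaved);
```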
21. -Memory buffer for audio data in any Linear PCM format
- Format and buffer capacity defined upon creation
-Provides a wrapper over a Core Audio AudioBufferList
- audioBufferList and mutableAudioBufferList properties
-Sample data accessed using:
@property (nonatomic, readonly) float * const *floatChannelData;
@property (nonatomic, readonly) int16_t * const *int16ChannelData;
@property (nonatomic, readonly) int32_t * const *int32ChannelData;
Audio Buffers
AVAudioPCMBuffer
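A sketch of reading a (short) file into a buffer and touching the raw samples, assuming `file` is an AVAudioFile opened for reading as above:

```objc
// Size the buffer to hold the whole file, in the file's processing format.
AVAudioFrameCount capacity = (AVAudioFrameCount)file.length;
AVAudioPCMBuffer *buffer =
    [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat
                                  frameCapacity:capacity];
NSError *error = nil;
if ([file readIntoBuffer:buffer error:&error]) {
    // floatChannelData is non-NULL because the processing format is 32-bit
    // float PCM; each channel is a separate (deinterleaved) sample array.
    float *left = buffer.floatChannelData[0];
    NSLog(@"first sample: %f of %u frames", left[0], buffer.frameLength);
}
```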
22. -Pushes audio data onto the active render thread
-Schedule audio data from files and buffers
- Scheduled to play immediately or at a future time
- Future times specified with AVAudioTime
- Files
- Schedule file or file segment with completion callback
- Buffers
- Schedule multiple buffers with individual completion callbacks
- Schedule looping buffer
Player Nodes
AVAudioPlayerNode
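The scheduling calls above look like this in practice, assuming `player` is attached and connected as in the basic recipe, and `file`/`buffer` are the AVAudioFile and AVAudioPCMBuffer described earlier:

```objc
// Schedule a file to play immediately (atTime:nil), with a completion callback.
[player scheduleFile:file atTime:nil completionHandler:^{
    NSLog(@"file playback finished");
}];

// Schedule a buffer to loop indefinitely.
[player scheduleBuffer:buffer
                atTime:nil
               options:AVAudioPlayerNodeBufferLoops
     completionHandler:nil];

[player play]; // the engine must already be running
```

Note the completion handler fires when the data has been consumed by the player, not necessarily when it has finished sounding at the output.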
28. -Node that mixes multiple inputs into a single output
- Efficiently performs sample rate conversions
- Can upmix/downmix channel counts where needed
Audio Mixing
AVAudioMixerNode
[Diagram: Input 1 and Input 2 feeding a single AVAudioMixerNode]
29. -Group and process similar inputs
- Simplify and improve efficiency of similar audio processing
Audio Mixing
Using Submixes
[Diagram: Inputs 1 and 2 feed one AVAudioMixerNode, Inputs 3 and 4 feed a second; both submixers feed the main mixer AVAudioMixerNode]
30. -AVAudioMixing Protocol:
- Defines properties to be applied to an input bus of a mixer node
- Source and mixer nodes conform to this protocol
Audio Mixing
AVAudioMixing
[Diagram: two AVAudioPlayerNodes, each with its own volume and pan, feed input busses 0 and 1 of an AVAudioMixerNode, which connects to the AVAudioOutputNode]
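Because AVAudioPlayerNode conforms to AVAudioMixing, per-input mix settings are set directly on the players; `playerOne`/`playerTwo` here are placeholder names for two attached player nodes:

```objc
// Volume and pan set on a source node apply to its input bus
// on the downstream mixer.
playerOne.volume = 0.8f; // 0.0 (silent) to 1.0 (full)
playerOne.pan = -1.0f;   // hard left
playerTwo.volume = 0.5f;
playerTwo.pan = 1.0f;    // hard right
```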
32. -Node tap pulls data off the render thread
-Captures the output of a particular node
- Record data from microphone
- Record data from a pre-recorded or live audio mix
- Perform data visualization or analysis
-Can install one tap per output bus
-Dynamically install and remove taps
-Audio data returned in a block callback
Node Taps
Pulling Data
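A sketch of tapping the main mixer's output; the buffer size is a request rather than a guarantee, and the block runs on an internal audio thread, not the main thread:

```objc
AVAudioMixerNode *mixer = engine.mainMixerNode;
[mixer installTapOnBus:0
            bufferSize:4096
                format:[mixer outputFormatForBus:0]
                 block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    // Analyze, visualize, or write `buffer` (e.g. via AVAudioFile) here.
    NSLog(@"tapped %u frames at %@", buffer.frameLength, when);
}];

// Later, when finished:
[mixer removeTapOnBus:0];
```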
35. Effect Nodes
Digital Signal Processing
-There are two main categories of effects available:
- AVAudioUnitEffect: performs real-time audio processing
- AVAudioUnitTimeEffect: performs non-real-time audio processing
AVAudioUnitEffect: AVAudioUnitDelay, AVAudioUnitEQ, AVAudioUnitDistortion, AVAudioUnitReverb
AVAudioUnitTimeEffect: AVAudioUnitTimePitch, AVAudioUnitVarispeed
36. -Delays original signal by delay time and mixes with original
-Configuration parameters:
- delayTime: The delay time of the input signal (up to 2 seconds)
- feedback: Amount of output fed back into delay line
- lowPassCutoff: Frequency above which high frequencies are rolled off
- wetDryMix: The blend of wet/dry signals (0% to 100%)
Delay
AVAudioUnitDelay
37. -Multi-band equalizer and filter unit
-Configuration parameters:
- bands: Array of AVAudioUnitEQFilterParameters objects
- globalGain: Overall gain adjustment applied to the input signal (-96 to 24 dB)
Equalization
AVAudioUnitEQ
38. -Used to define the EQ parameters to be applied
- Retrieved from the AVAudioUnitEQ object’s bands property
-Configuration parameters:
- filterType: Parametric, Low/High Pass, Band Pass, Low/High Shelf
- frequency: The center frequency or cutoff frequency
- bandwidth: The width around the main frequency in octaves
- gain: The gain adjustment (boost or cut) to the frequency
- bypass: The bypass state
Equalization (Continued)
AVAudioUnitEQFilterParameters
39. -Multi-stage distortion effect of original signal
-Configuration presets:
- loadFactoryPreset:(AVAudioUnitDistortionPreset)preset
- DrumsBitBrush, MultiBrokenSpeaker, SpeechWaves, etc
-Configuration parameters:
- preGain: Gain applied to the signal before distortion (-80 dB to 20 dB)
- wetDryMix: The blend of wet/dry signals (0% to 100%)
Distortion
AVAudioUnitDistortion
40. -Simulates the reflective qualities of a particular environment
-Configuration presets:
- loadFactoryPreset:(AVAudioUnitReverbPreset)preset
- Small Room, Large Hall, Cathedral, etc.
-Configuration parameters:
- wetDryMix: The blend of wet/dry signals (0% to 100%)
Reverb
AVAudioUnitReverb
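Putting the effect units to work means inserting them into the node chain. A sketch of a player → delay → reverb → main mixer chain, assuming `engine` and `player` are set up as in the basic recipe (the parameter values are illustrative):

```objc
AVAudioUnitDelay *delay = [[AVAudioUnitDelay alloc] init];
delay.delayTime = 0.5;   // seconds (max 2.0)
delay.feedback = 40.0;   // percent fed back into the delay line
delay.wetDryMix = 30.0;  // percent wet

AVAudioUnitReverb *reverb = [[AVAudioUnitReverb alloc] init];
[reverb loadFactoryPreset:AVAudioUnitReverbPresetLargeHall];
reverb.wetDryMix = 40.0;

// Effect units are nodes: attach them, then splice them into the chain.
[engine attachNode:delay];
[engine attachNode:reverb];

AVAudioMixerNode *mixer = engine.mainMixerNode;
AVAudioFormat *format = [mixer outputFormatForBus:0];
[engine connect:player to:delay format:format];
[engine connect:delay to:reverb format:format];
[engine connect:reverb to:mixer format:format];
```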
43. -Musical Instrument Digital Interface
-Specification defining:
- Communication protocol for controlling electronic instruments
- Hardware cables and connectors
- Standard MIDI file format
-Extensions:
- General MIDI (GM)
- Downloadable Sounds (DLS)
MIDI
What is MIDI?
http://www.midi.org
44. -High-quality sampler instrument to play sampled sounds
-Loads samples in EXS, SF2, or DLS formats
- Can additionally load an arbitrary array of sample data
-Responds to all standard MIDI messages
- Note on/off, controller messages, pitch bend, etc.
-Great solution for a live performance instrument
- What about playing sequences?
Sampling Instrument
AVAudioUnitSampler
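A sketch of loading a SoundFont instrument and triggering a note; `sfURL` and the program/bank numbers are placeholders, and `engine` is assumed to be running:

```objc
AVAudioUnitSampler *sampler = [[AVAudioUnitSampler alloc] init];
[engine attachNode:sampler];
[engine connect:sampler to:engine.mainMixerNode format:nil];

NSError *error = nil;
[sampler loadSoundBankInstrumentAtURL:sfURL
                              program:0
                              bankMSB:kAUSampler_DefaultMelodicBankMSB
                              bankLSB:kAUSampler_DefaultBankLSB
                                error:&error];

// Middle C at moderate velocity on MIDI channel 0.
[sampler startNote:60 withVelocity:64 onChannel:0];
// ...later:
[sampler stopNote:60 onChannel:0];
```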
45. -AVAudioEngine has a musicSequence property
- AudioToolbox MusicSequence type
-Use the MusicPlayer and MusicSequence APIs
- Attach the MusicSequence to the engine to play through Sampler
MusicSequence
Core MIDI Type
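The sequence-playback path above can be sketched with the AudioToolbox C APIs; `midiURL` is a placeholder, and the sampler is assumed attached and connected as on the previous slide:

```objc
// Load a Standard MIDI File into a MusicSequence.
MusicSequence sequence = NULL;
NewMusicSequence(&sequence);
MusicSequenceFileLoad(sequence, (__bridge CFURLRef)midiURL,
                      kMusicSequenceFile_MIDIType, 0);

// Associating the sequence with the engine routes its tracks
// through the attached sampler node(s).
engine.musicSequence = sequence;

// Drive playback with a MusicPlayer.
MusicPlayer musicPlayer = NULL;
NewMusicPlayer(&musicPlayer);
MusicPlayerSetSequence(musicPlayer, sequence);
MusicPlayerPreroll(musicPlayer);
MusicPlayerStart(musicPlayer);
```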
47. -Powerful new addition to AV Foundation
- Simplifies low-latency, realtime audio
- Great solution for music, audio, and gaming applications
- Core Audio for Mortals
-Enables you to build advanced audio applications
- Read and write audio files
- Play and record audio using files and buffers
- Add DSP effects: Reverb, Delays, etc.
- Perform stereo and 3D mixing of audio signals
- MIDI playback and control over sampler instruments
Summary
AVAudioEngine