The document summarizes Chris Adamson's presentation on Core Audio in iOS 6. It discusses the key components of Core Audio including engines like Audio Units that process audio streams, and helpers that deal with formats, file I/O, and session management. It provides examples of using Audio Units to process and render audio in a pull model, and creating an AURemoteIO unit to handle playback and capture on iOS.
Core Audio, the only media framework available since day one of the public iPhone SDK, offers extremely low latency and powerful access to the device's audio processing system... assuming you can handle what's renowned as one of the hardest APIs on the platform. In iOS 5, Core Audio gets even better, with great new features that had previously been burdensome, if not impossible, to develop on your own. Once the iOS 5 NDA drops, the shiny new bits will be available to all, and this talk will be one of your first chances to learn how they work. Attendees will learn the basics of Core Audio -- the engine APIs that process sound (Audio Queue, Audio Units, and OpenAL) and the helper APIs that get samples into and out of them -- and then look where iOS 5 fills in some of the holes that have existed up to now.
Capturing Stills, Sounds, and Scenes with AV Foundation, by Chris Adamson
AV Foundation -- introduced in iOS 4, ported to Lion, and enhanced further in iOS 5 -- delivers a comprehensive framework for audio and video capture and playback. The capture functionality is so good, it's now the preferred option for still photography applications. In this session, we'll focus squarely on AV Foundation as a media capture framework. Attendees will learn:
* How to get the most out of the device for still photography, by using AV Foundation to access the flash, white-balance, and image resolution.
* How to capture audio and video to the file system
* How to process incoming audio and video capture buffers in memory, to create real-time effects or pick out interesting parts of the scene on the fly
iOS Media APIs (MobiDevDay Detroit, May 2013), by Chris Adamson
The document discusses various iOS media APIs for playing, capturing, editing, and exporting audio and video content in iOS applications. It provides an overview of key frameworks like AV Foundation, Core Media, and Core Animation and describes how to perform common media tasks like playing music and videos, capturing video, editing/mixing audio and video, and exporting edited content. Code examples are provided to demonstrate how to use APIs like AVAudioPlayer, AVPlayer, AVCaptureSession, AVMutableComposition, and AVVideoComposition to accomplish these tasks.
Get On The Audiobus (CocoaConf Atlanta, November 2013), by Chris Adamson
Audiobus is an iOS app that allows other apps to work together as an audio-processing toolchain: play your MIDI keyboard into one app, run it through filters in other apps, and mix it in a third. All in real-time, foreground or background. That such a thing is possible on the locked down iOS platform is remarkable enough, but what's even more remarkable is that hundreds of audio apps have added Audiobus support in the few months since its debut, including Apple's own GarageBand. In this session, we'll take a look at the Audiobus SDK and see how to create inputs, outputs, and filters that can be managed by the Audiobus app to process audio in collaboration with other apps on the device.
The document discusses next generation audio capabilities for game consoles like the PlayStation 3. It describes how audio has evolved from just a few channels on older systems to "unlimited" channels enabled by software on next gen hardware. This allows for much more complex routing of audio streams and effects processing. It also discusses how next gen audio requires new skills for musicians and encourages closer collaboration between audio, programming, and design teams. New tools are needed to manage the increased flexibility and complexity of next gen audio engines.
Public version of the slideshow I used during my presentation about adaptive music in video games and other interactive media at #UXMonday event, organized by http://asociaceux.cz in Prague, March 2, 2015.
Core Audio in iOS 6 (CocoaConf DC, March 2013), by Chris Adamson
The document summarizes Chris Adamson's presentation on Core Audio in iOS 6 given at CocoaConf DC on March 23, 2013. It discusses the key components of Core Audio including engines like Audio Units and Audio Queue for processing audio, and helpers for file I/O, format conversion and managing audio sessions on iOS. It provides examples of using the AURemoteIO audio unit for playback and capture, connecting audio units like effects, and new features in iOS 6 like the AUNewTimePitch effect unit for changing pitch and playback rate independently.
The document discusses intelligent interfaces for Android applications. It defines intelligent interfaces as unique, beautiful interfaces that are also consistent and easy to use across different devices. It provides principles for Android interfaces, such as following Android conventions and using density-independent pixels for responsiveness. The document offers tips for interface elements like icons, action bars, notifications and handling orientations. It emphasizes inspiration from other applications and references for Android interface guidelines.
The document discusses strategies for creating effective content for blogs and video. It suggests focusing on targeted sections by day of the week, posting daily, covering a variety of topics of interest to visitors, leveraging user-generated content, and expanding reach through guest blogging and leveraging multiple social media platforms and apps. It also provides tips for creating DIY video content through planning and using proper equipment and software. The overall message is that high-quality, regularly-posted content across different formats and platforms is key to success.
Zookeeper is a centralized service that helps coordinate distributed systems by providing configuration information, naming services, synchronization, and group services. It allows distributed applications to organize nodes and configuration in a filesystem-like structure and provides services like leader election, heartbeats, and atomic updates.
The document provides definitions for various audio and sound design terms. It includes definitions for terms like Foley artistry, sound libraries, audio file formats like .wav and .mp3, audio limitations like RAM, and audio recording systems like analog, digital, Mini Disc, and compact disc. For each term, it provides a short definition from an online source as well as how the term relates to the author's own sound production practice.
A talk that takes you through the fundamentals of concepting, producing, deploying, and distributing a killer podcast. Talk originally given as part of The Firehose Project Lightning Talks, where I am currently a developer apprentice. Video of the talk can be found here: http://community.thefirehoseproject.com/2015/08/20/eric-andrade-creating-a-killer-podcast.html
The Next-Gen Dynamic Sound System of Killzone Shadow Fall, by Guerrilla
We'll describe our new audio run-time and toolset that was built specifically for Killzone Shadow Fall on PlayStation 4. However, the ideas used are widely applicable and the focus is on integration with the game engine and fast iteration, with special attention to shortcuts to get your creative spark translated into in-game sounds as quickly as possible. We will demonstrate our implementation of these demands, which is a next-gen sound system that was designed to combine artistic freedom with high run-time performance. It should be interesting to both creative as well as technical minds who are looking for inspiration on what to expect from a modern sound design environment. To emphasize the performance advantage, we will show the point of view of both the sound designer and the programmer simultaneously. We will use examples starting with simple sounds and build up to increasingly more complex dynamic ones to illustrate the benefits of this unique approach.
Image and Music: Processing plus Pure Data with the libpd library, by Peter Kirn
Make Your Own Free Tools with Processing, Pure Data
Support slides from a talk to CrashSpace, Los Angeles, the debut workshop on using this Pure Data library for Processing
This document discusses combining the iOS and Arduino platforms. It describes the attractive qualities of iOS like its user interface and sensors, and capabilities of Arduino like interacting with the physical world through sensors and actuators. The document suggests that combining the two platforms through hardware connections and app development could enable new types of applications like controlling robots, appliances and other devices through an iPhone or iPad interface. Examples mentioned include power monitoring, sensor data collection and physical computing projects.
Whatever Happened to Visual Novel Anime? (AWA/Youmacon 2018), by Chris Adamson
Ten years ago, visual novels were one of the most prominent sources for anime adaptations. Today, the flood has slowed to a trickle, and Steins;Gate 0 seems like the last gasp of a dying breed. This panel looks at what went right and wrong with VN anime, and whether they might ever make a comeback.
Whatever Happened to Visual Novel Anime? (JAFAX 2018), by Chris Adamson
Not long ago, adaptations of visual novels were a major inspiration for anime, with a dozen or more shows a year based on VNs. These include classics like Clannad, Fate/Stay Night, and Higurashi. Today, the flood has slowed to a trickle, with only six VN anime in 2017, all of them commercial flops. This panel will track the rise and fall of VN adaptations, the traits that make them good and bad for anime, and whether this year's Steins;Gate 0 represents a new hope for the genre or a last gasp.
Media Frameworks Versus Swift (Swift by Northwest, October 2017), by Chris Adamson
As much as we love Swift for developing our apps, playgrounds, and even on the server, there are some things for which Swift is not a good match. The media frameworks on iOS are a good example of this. Dropping into Core Audio can twist your Swift code so badly it’s hardly readable anymore. And there are parts of AV Foundation where using Swift is literally not allowed.
In this talk, we’ll show off some of these tricky scenarios, see what we can do to make things better, and think about what this means for the Swift language and its future prospects.
Fall Premieres: Media Frameworks in iOS 11, macOS 10.13, and tvOS 11 (CocoaCo...), by Chris Adamson
What’s Apple planning for its media frameworks in the next 12 months? What’s it doing with Apple TV, or the HTTP Live Streaming standard? We won’t know until the curtain drops on WWDC! In this talk, we’ll amass everything audio- and video-related that gets announced throughout the week, combine it with the solid base of frameworks already present in the Apple platforms, and figure out from there what we’re going to be playing with in 2018.
CocoaConf Chicago 2017: Media Frameworks and Swift: This Is Fine, by Chris Adamson
Swift is great for writing iOS and Mac apps, and its creators also mean for it to be used as a systems programming language. However, certain traits about Swift make it officially off-limits for use in some audio/video-processing scenarios. What's the deal, is it not fast enough or what? We'll look at what media apps can and can't do in Swift, and what you're supposed to do instead. We'll also look at strategies for knowing what responsibilities to dole out to Swift and to C, and how to make those parts of your code play nicely with each other.
(This is a longer version of a talk previously presented at Forward Swift 2017)
Forward Swift 2017: Media Frameworks and Swift: This Is Fine, by Chris Adamson
1. The document discusses using Swift with various media frameworks like AV Foundation and Audio Toolbox that are commonly used for audio and video processing but were designed for Objective-C.
2. It notes challenges like the frameworks using C APIs, working with real-time constraints, and capturing objects from Swift that could cause performance issues.
3. The author recommends strategies like using AV Foundation if possible, learning to balance Swift and C idioms, and aiming for idiomatic Swift rather than Swift that looks like translated Objective-C.
Firebase: Totally Not Parse All Over Again (Unless It Is) (CocoaConf San Jose...), by Chris Adamson
With Facebook shutting down Parse, everybody knows to never again depend on a third party for their backend solution, right? Sure, and after you spend six months trying to write your own syncing service, how's that working? In 2016, Google has added a ton of features to Firebase, their popular backend-as-a-service solution. Firebase's primary offering is a realtime database in the cloud that syncs changes to and from multiple concurrent users, and their Swift-friendly iOS SDK makes it ideal for mobile use. In this session, you'll learn how to set up a Firebase backend and build an iOS app around it.
Building A Streaming Apple TV App (CocoaConf San Jose, Nov 2016), by Chris Adamson
This document discusses building a streaming video app for Apple TV. It covers topics like codecs, containers, livestreaming, adaptive streaming using HTTP Live Streaming (HLS), creating HLS streams with tools like mediafilesegmenter, and securing streams. HLS breaks video into small file segments delivered over HTTP, making streaming scalable and suitable for mobile. Variant playlists allow encoding at multiple bitrates to adapt to network conditions.
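The adaptive-streaming mechanism described above comes down to a variant (master) playlist: a plain-text M3U8 file that lists the same content encoded at several bitrates, letting the player switch streams as network conditions change. A minimal sketch, using the standard `#EXT-X-STREAM-INF` tags from the HLS specification; the URLs, bandwidths, and resolutions here are hypothetical placeholders, not output from any particular tool:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/prog_index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/prog_index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/prog_index.m3u8
```

Each referenced `prog_index.m3u8` would be a media playlist listing the short file segments for that bitrate, such as those produced by a segmenting tool like mediafilesegmenter.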
Firebase: Totally Not Parse All Over Again (Unless It Is), by Chris Adamson
With Facebook shutting down Parse, everybody knows to never again depend on a third party for their backend solution, right? Sure, and after you spend six months trying to write your own syncing service, how's that working? In 2016, Google has added a ton of features to Firebase, their popular backend-as-a-service solution. Firebase's primary offering is a realtime database in the cloud that syncs changes to and from multiple concurrent users, and their Swift-friendly iOS SDK makes it ideal for mobile use. In this session, you'll learn how to set up a Firebase backend and build an iOS app around it.
Building A Streaming Apple TV App (CocoaConf DC, Sept 2016), by Chris Adamson
Apple TV offers a friendly SDK, full of familiar view controllers and Foundation classes, with everything an iOS developer needs to develop their own streaming channel. Except for… you know… the streaming part. In this session, we'll look at how Apple's HTTP Live Streaming video works -- from flat files or live sources -- and how to get it from your computer to a streaming server and then to an Apple TV. We'll also look at common challenges for building streaming channel apps, like serving metadata, protecting content, and supporting single sign-on.
Video Killed the Rolex Star (CocoaConf San Jose, November 2015), by Chris Adamson
[updated from previous version to include Watch Connectivity, screenshots of WKInterfaceMovie]
watchOS 2.0 brings media functionality to Apple Watch, offering audio and video playback and audio capture. But lest you plan on writing Logic or Final Cut for the watch: what's available on the wrist has its limits, and you hit them quickly. In this session, we'll see what the WKInterfaceController offers us for miniature mobile media, and how we can get the benefits of AV Foundation and Core Audio by moving our movies, songs, and podcasts back and forth between the watch and the iPhone.
Video Killed the Rolex Star (CocoaConf Columbus, July 2015), by Chris Adamson
watchOS 2.0 brings media functionality to Apple Watch, offering audio and video playback and audio capture. But lest you plan on writing Logic or Final Cut for the watch: what's available on the wrist has its limits, and you hit them quickly. In this session, we'll see what the WKInterfaceController offers us for miniature mobile media, and how we can get the benefits of AV Foundation and Core Audio by moving our movies, songs, and podcasts back and forth between the watch and the iPhone.
Revenge of the 80s: Cut/Copy/Paste, Undo/Redo, and More Big Hits (CocoaConf C...), by Chris Adamson
When the first 128K Macs landed in 1984, it was the first time many of us could undo a mistake with just a keystroke, or exchange data between documents or applications with cut/copy/paste and the system clipboard. Fast forward 30 years and we all use this stuff… but do you know how to actually implement it? Especially on iOS, these everyday features are surprisingly absent from many developers' toolchests. In this session, we'll flash back to the era of Reagan, Rubik's Cubes, and Return of the Jedi, to see how these hot hits of the early '80s are represented in modern-day Cocoa.
More Related Content
Similar to Core Audio in iOS 6 (CocoaConf Raleigh, Dec. '12)
Core Audio in iOS 6 (CocoaConf DC, March 2013)Chris Adamson
The document summarizes Chris Adamson's presentation on Core Audio in iOS 6 given at CocoaConf DC on March 23, 2013. It discusses the key components of Core Audio including engines like Audio Units and Audio Queue for processing audio, and helpers for file I/O, format conversion and managing audio sessions on iOS. It provides examples of using the AURemoteIO audio unit for playback and capture, connecting audio units like effects, and new features in iOS 6 like the AUNewTimePitch effect unit for changing pitch and playback rate independently.
The document discusses intelligent interfaces for Android applications. It defines intelligent interfaces as unique, beautiful interfaces that are also consistent and easy to use across different devices. It provides principles for Android interfaces, such as following Android conventions and using density-independent pixels for responsiveness. The document offers tips for interface elements like icons, action bars, notifications and handling orientations. It emphasizes inspiration from other applications and references for Android interface guidelines.
The document discusses strategies for creating effective content for blogs and video. It suggests focusing on targeted sections by day of the week, posting daily, covering a variety of topics of interest to visitors, leveraging user-generated content, and expanding reach through guest blogging and leveraging multiple social media platforms and apps. It also provides tips for creating DIY video content through planning and using proper equipment and software. The overall message is that high-quality, regularly-posted content across different formats and platforms is key to success.
Zookeeper is a centralized service that helps coordinate distributed systems by providing configuration information, naming services, synchronization, and group services. It allows distributed applications to organize nodes and configuration in a filesystem-like structure and provides services like leader election, heartbeats, and atomic updates.
The document provides definitions for various audio and sound design terms. It includes definitions for terms like Foley artistry, sound libraries, audio file formats like .wav and .mp3, audio limitations like RAM, and audio recording systems like analog, digital, Mini Disc, and compact disc. For each term, it provides a short definition from an online source as well as how the term relates to the author's own sound production practice.
A talk that takes you through the fundamentals of concepting, producing, deploying, and distributing a killer podcast. Talk originally given as part of The Firehose Project Lightning Talks, where I am currently a developer apprentice. Video of the talk can be found here: http://community.thefirehoseproject.com/2015/08/20/eric-andrade-creating-a-killer-podcast.html
The Next-Gen Dynamic Sound System of Killzone Shadow FallGuerrilla
We'll describe our new audio run-time and toolset that was built specifically for Killzone Shadow Fall on PlayStation 4. However, the ideas used are widely applicable and the focus is on integration with the game engine and fast iteration, with special attention to shortcuts to get your creative spark translated into in-game sounds as quickly as possible. We will demonstrate our implementation of these demands, which is a next-gen sound system that was designed to combine artistic freedom with high run-time performance. It should be interesting to both creative as well as technical minds who are looking for inspiration on what to expect from a modern sound design environment. To emphasize the performance advantage, we will show the point of view of both the sound designer and the programmer simultaneously. We will use examples starting with simple sounds and build up to increasingly more complex dynamic ones to illustrate the benefits of this unique approach.
Image and Music: Processing plus Pure Data with libpd libraryPETER KIRN
Make Your Own Free Tools with Processing, Pure Data
Support slides from a talk to CrashSpace, Los Angeles, the debut workshop on using this Pure Data library for Processing
This document discusses combining the iOS and Arduino platforms. It describes the attractive qualities of iOS like its user interface and sensors, and capabilities of Arduino like interacting with the physical world through sensors and actuators. The document suggests that combining the two platforms through hardware connections and app development could enable new types of applications like controlling robots, appliances and other devices through an iPhone or iPad interface. Examples mentioned include power monitoring, sensor data collection and physical computing projects.
Similar to Core Audio in iOS 6 (CocoaConf Raleigh, Dec. '12) (9)
Whatever Happened to Visual Novel Anime? (AWA/Youmacon 2018)Chris Adamson
Ten years ago, visual novels were one of the most prominent sources for anime adaptations. Today, the flood has slowed to a trickle, and Steins;Gate 0 seems like the last gasp of a dying breed. This panel looks at what went right and wrong with VN anime, and whether they might ever make a comeback.
Whatever Happened to Visual Novel Anime? (JAFAX 2018)Chris Adamson
Not long ago, adaptations of visual novels were a major inspiration for anime, with a dozen or more shows a year based on VNs. These include classics like Clannad, Fate/Stay Night, and Higurashi. Today, the flood has slowed to a trickle, with only six VN anime in 2017, all of them commercial flops. This panel will track the rise and fall of VN adaptations, the traits that make them good and bad for anime, and whether this year's Steins;Gate 0 represents a new hope for the genre or a last gasp.
Media Frameworks Versus Swift (Swift by Northwest, October 2017)Chris Adamson
As much as we love Swift for developing our apps, playgrounds, and even on the server, there are some things for which Swift is not a good match. The media frameworks on iOS are a good example of this. Dropping into Core Audio can twist your Swift code so badly it’s hardly readable anymore. And there are parts of AV Foundation where using Swift is literally not allowed.
In this talk, we’ll show off some of these tricky scenarios, see what we can do to make things better, and think about what this means for the Swift language and its future prospects.
Fall Premieres: Media Frameworks in iOS 11, macOS 10.13, and tvOS 11 (CocoaCo...Chris Adamson
What’s Apple planning for its media frameworks in the next 12 months? What’s it doing with Apple TV, or the HTTP Live Streaming standard? We won’t know until the curtain drops on WWDC! In this talk, we’ll amass everything audio- and video-related that gets announced throughout the week, combine it with the solid base of frameworks already present in the Apple platforms, and figure out from there what we’re going to be playing with in 2018.
CocoaConf Chicago 2017: Media Frameworks and Swift: This Is FineChris Adamson
Swift is great for writing iOS and Mac apps, and its creators also mean for it to be used as a systems programming language. However, certain traits about Swift make it officially off-limits for use in some audio/video-processing scenarios. What's the deal, is it not fast enough or what? We'll look at what media apps can and can't do in Swift, and what you're supposed to do instead. We'll also look at strategies for knowing what responsibilities to dole out to Swift and to C, and how to make those parts of your code play nicely with each other.
(This is a longer version of a talk previously presented at Forward Swift 2017)
Forward Swift 2017: Media Frameworks and Swift: This Is FineChris Adamson
1. The document discusses using Swift with various media frameworks like AV Foundation and Audio Toolbox that are commonly used for audio and video processing but were designed for Objective-C.
2. It notes challenges like the frameworks using C APIs, working with real-time constraints, and capturing objects from Swift that could cause performance issues.
3. The author recommends strategies like using AV Foundation if possible, learning to balance Swift and C idioms, and aiming for idiomatic Swift rather than Swift that looks like translated Objective-C.
Firebase: Totally Not Parse All Over Again (Unless It Is) (CocoaConf San Jose...Chris Adamson
With Facebook shutting down Parse, everybody knows to never again depend on a third party for their backend solution, right? Sure, and after you spend six months trying to write your own syncing service, how's that working? In 2016, Google has added a ton of features to Firebase, their popular backend-as-a-service solution. Firebase's primary offering is a realtime database in the cloud that syncs changes to and from multiple concurrent users, and their Swift-friendly iOS SDK makes it ideal for mobile use. In this session, you'll learn how to set up a Firebase backend and build an iOS app around it.
Building A Streaming Apple TV App (CocoaConf San Jose, Nov 2016)Chris Adamson
This document discusses building a streaming video app for Apple TV. It covers topics like codecs, containers, livestreaming, adaptive streaming using HTTP Live Streaming (HLS), creating HLS streams with tools like mediafilesegmenter, and securing streams. HLS breaks video into small file segments delivered over HTTP, making streaming scalable and suitable for mobile. Variant playlists allow encoding at multiple bitrates to adapt to network conditions.
Firebase: Totally Not Parse All Over Again (Unless It Is)Chris Adamson
With Facebook shutting down Parse, everybody knows to never again depend on a third party for their backend solution, right? Sure, and after you spend six months trying to write your own syncing service, how's that working? In 2016, Google has added a ton of features to Firebase, their popular backend-as-a-service solution. Firebase's primary offering is a realtime database in the cloud that syncs changes to and from multiple concurrent users, and their Swift-friendly iOS SDK makes it ideal for mobile use. In this session, you'll learn how to set up a Firebase backend and build an iOS app around it.
Building A Streaming Apple TV App (CocoaConf DC, Sept 2016)Chris Adamson
Apple TV offers a friendly SDK, full of familiar view controllers and Foundation classes, with everything an iOS developer needs to develop their own streaming channel. Except for… you know… the streaming part. In this session, we'll look at how Apple's HTTP Live Streaming video works -- from flat files or live sources -- and how to get it from your computer to a streaming server and then to an Apple TV. We'll also look at common challenges for building streaming channel apps, like serving metadata, protecting content, and supporting single sign-on.
Video Killed the Rolex Star (CocoaConf San Jose, November, 2015)Chris Adamson
[updated from previous version to include Watch Connectivity, screenshots of WKInterfaceMovie]
watchOS 2.0 brings media functionality to Apple Watch, offering audio and video playback and audio capture. But lest you plan on writing Logic or Final Cut for the watch: what's available on the wrist has its limits, and you hit them quickly. In this session, we'll see what the WKInterfaceController offers us for miniature mobile media, and how we can get the benefits of AV Foundation and Core Audio by moving our movies, songs, and podcasts back and forth between the watch and the iPhone.
Video Killed the Rolex Star (CocoaConf Columbus, July 2015)Chris Adamson
watchOS 2.0 brings media functionality to Apple Watch, offering audio and video playback and audio capture. But lest you plan on writing Logic or Final Cut for the watch: what's available on the wrist has its limits, and you hit them quickly. In this session, we'll see what the WKInterfaceController offers us for miniature mobile media, and how we can get the benefits of AV Foundation and Core Audio by moving our movies, songs, and podcasts back and forth between the watch and the iPhone.
Revenge of the 80s: Cut/Copy/Paste, Undo/Redo, and More Big Hits (CocoaConf C...Chris Adamson
When the first 128K Macs landed in 1984, it was the first time many of us could undo a mistake with just a keystroke, or exchange data between documents or applications with cut/copy/paste and the system clipboard. Fast forward 30 years and we all use this stuff… but do you know how to actually implement it? Especially on iOS, these everyday features are surprisingly absent from many developers' toolchests. In this session, we'll flash back to the era of Reagan, Rubik's Cubes, and Return of the Jedi, to see how these hot hits of the early '80s are represented in modern-day Cocoa.
Core Image: The Most Fun API You're Not Using, CocoaConf Atlanta, December 2014Chris Adamson
Graphics on iOS and OS X isn't just about stroking shapes and paths in Core Graphics and trying to figure out OpenGL. The Core Image framework gives you access to about 100 built-in filters, providing everything from photographic effects and color manipulation to face-finding and QR Code generation. It can leverage the power of the GPU to provide performance fast enough to perform complex effects work on real-time video capture. But even if you're not writing the next Final Cut Pro or Photoshop, it's easy to call in Core Image for simple tasks, like putting a blur in part of your UI for transitions or privacy reasons. In this session, we'll explore the many ways Core Image can make your app sizzle.
Stupid Video Tricks, CocoaConf Seattle 2014Chris Adamson
AV Foundation makes it reasonably straightforward to capture video from the camera and edit together a nice family video. This session is not about that stuff. This session is about the nooks and crannies where AV Foundation exposes what's behind the curtain. Instead of letting AVPlayer read our video files, we can grab the samples ourselves and mess with them. AVCaptureVideoPreviewLayer, meet the CGAffineTransform. And instead of dutifully passing our captured video frames to the preview layer and an output file, how about if we instead run them through a series of Core Image filters? Record your own screen? Oh yeah, we can AVAssetWriter that. With a few pointers, a little experimentation, and a healthy disregard for safe coding practices, Core Media and Core Video let you get away with some neat stuff.
Stupid Video Tricks, CocoaConf Las VegasChris Adamson
The document discusses various techniques for manipulating and processing video and audio using AV Foundation frameworks in iOS and Mac OS X. It begins with an overview of AV Foundation and describes common tasks like playback, capture, and editing. It then demonstrates tricks like animating AVPlayerLayers and recording the screen. The document dives deeper into techniques for reading and manipulating subtitle, audio, and video tracks using Core Media, Core Audio, Core Video, and Core Image frameworks. It provides code samples for applying filters to video in real-time and writing modified data back out.
Core Image: The Most Fun API You're Not Using (CocoaConf Columbus 2014)Chris Adamson
Graphics on iOS and OS X isn't just about stroking shapes and paths in Core Graphics and trying to figure out OpenGL. The Core Image framework gives you access to about 100 built-in filters, providing everything from photographic effects and color manipulation to face-finding and QR Code generation. It can leverage the power of the GPU to provide performance fast enough to perform complex effects work on real-time video capture. But even if you're not writing the next Final Cut Pro or Photoshop, it's easy to call in Core Image for simple tasks, like putting a blur in part of your UI for transitions or privacy reasons. In this session, we'll explore the many ways Core Image can make your app sizzle.
Stupid Video Tricks (CocoaConf DC, March 2014)Chris Adamson
The document discusses various techniques for working with time-based media like video and audio using the AV Foundation framework in iOS and macOS. It provides an overview of common tasks like playback, capture, and editing using classes like AVPlayer, AVCaptureSession, and AVComposition. It then demonstrates more advanced tricks like animating an AVPlayerLayer and processing video frames in real-time using Core Image filters. The document recommends exploring other related frameworks like Core Audio, Core Media, Video Toolbox, and Core Video for additional functionality and performance.
AV Foundation makes it reasonably straightforward to capture video from the camera and edit together a nice family video. This session is not about that stuff. This session is about the nooks and crannies where AV Foundation exposes what's behind the curtain. Instead of letting AVPlayer read our video files, we can grab the samples ourselves and mess with them. AVCaptureVideoPreviewLayer, meet the CGAffineTransform. And instead of dutifully passing our captured video frames to the preview layer and an output file, how about if we instead run them through a series of Core Image filters? Record your own screen? Oh yeah, we can AVAssetWriter that. With a few pointers, a little experimentation, and a healthy disregard for safe coding practices, Core Media and Core Video let you get away with some neat stuff.
This document provides an introduction and overview of the Roku SDK. It discusses the basics of developing Roku channels using the BrightScript programming language and built-in component library. It covers setting up the development environment, the file structure of Roku channels, common screens and objects, debugging, and preparing and loading content like audio and video. The document concludes with next steps like following Roku's design guidelines and publishing channels privately or to the public Roku channel store.
Core Audio in iOS 6 (CocoaConf Raleigh, Dec. '12)
1. Core Audio in iOS 6
Chris Adamson • @invalidname
CocoaConf Raleigh
December 1, 2012
Sides and code available on my blog:
http://www.subfurther.com/blog
Sunday, December 2, 12
7. Legitimate copies!
• Amazon (paper or Kindle)
• Barnes & Noble (paper or Nook)
• Apple (iBooks)
• Direct from InformIT (paper, eBook [.epub
+ .mobi + .pdf], or Bundle)
• 35% off with code COREAUDIO3174
8. What You’ll Learn
• What Core Audio does and doesn’t do
• When to use and not use it
• What’s new in Core Audio for iOS 6
10. "Simple things should be simple, complex things should be possible."
–Alan Kay
12. Simple things should be simple (AV Foundation, Media Player); complex things should be possible (Core Audio).
13. Core Audio
• Low-level C framework for processing
audio
• Capture, play-out, real-time or off-line
processing
• The “complex things should be possible”
part of audio on OS X and iOS
14. Chris’ CA Taxonomy
• Engines: process streams of audio
• Capture, play-out, mixing, effects
processing
• Helpers: deal with formats, encodings, etc.
• File I/O, stream I/O, format conversion,
iOS “session” management
15. Helpers: Audio File
• Read from / write to multiple audio file
types (.aiff, .wav, .caf, .m4a, .mp3) in a
content-agnostic way
• Get metadata (data format, duration,
iTunes/ID3 info)
16. Helpers: Audio File
Stream
• Read audio from non-random-access
source like a network stream
• Discover encoding and encapsulation on
the fly, then deliver audio packets to client
application
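To make the shape of that bytes-in, packets-out flow concrete, here is a toy parser in plain C. The framing (a one-byte length prefix) and all names (`parse_stream`, `PacketProc`, `count_packet`) are invented for illustration; real Audio File Stream Services handles actual MP3/AAC framing and metadata:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy framing: each "packet" is [1-byte length][payload]. The shape
   matches Audio File Stream Services: feed it bytes as they arrive,
   and it delivers complete packets to a client callback. */
typedef void (*PacketProc)(const uint8_t *payload, size_t length, void *refCon);

static size_t parse_stream(const uint8_t *bytes, size_t count,
                           PacketProc onPacket, void *refCon) {
    size_t consumed = 0;
    while (consumed < count) {
        uint8_t len = bytes[consumed];
        if (consumed + 1 + len > count) break;   /* incomplete packet: wait for more data */
        onPacket(bytes + consumed + 1, len, refCon);
        consumed += 1 + (size_t)len;
    }
    return consumed;   /* caller re-buffers whatever wasn't consumed */
}

/* Demo callback: just counts the packets it is handed. */
static void count_packet(const uint8_t *payload, size_t length, void *refCon) {
    (void)payload; (void)length;
    (*(int *)refCon)++;
}
```

The key property, as with the real API, is that a partial packet at the end of one network buffer stays buffered until the next buffer completes it.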
17. Helpers: Converters
• Convert buffers of audio to and from
different encodings
• One side must be in an uncompressed
format (i.e., Linear PCM)
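As a sketch of the uncompressed leg of such a conversion, here is the canonical 16-bit-int-to-float scaling in plain C (no Core Audio dependency; the function name is mine):

```c
#include <stdint.h>
#include <stddef.h>

/* Convert 16-bit signed Linear PCM to float in [-1.0, 1.0), the kind
   of step a converter performs when one side must be uncompressed. */
static void int16_to_float(const int16_t *in, float *out, size_t count) {
    for (size_t i = 0; i < count; i++) {
        out[i] = (float)in[i] / 32768.0f;
    }
}
```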
18. Helpers: ExtAudioFile
• Combine file I/O and format conversion
• Read a compressed file into PCM buffers
• Write PCM buffers into a compressed file
19. Helpers: Audio Session
• iOS-only API to negotiate use of audio
resources with the rest of the system
• Determine whether your app mixes with
other apps’ audio, honors ring/silent
switch, can play in background, etc.
• Gets notified of audio interruptions
• See also AVAudioSession
20. Engines: Audio Units
• Low-latency (~10ms) processing of
capture/play-out audio data
• Effects, mixing, etc.
• Connect units manually or via an AUGraph
• Much more on this topic momentarily…
21. Engines: Audio Queue
• Convenience API for recording or play-out,
built atop audio units
• Rather than processing on-demand and on
Core Audio’s thread, your callback provides
or receives buffers of audio (at whatever size
is convenient to you)
• Higher latency, naturally
• Supports compressed formats (MP3, AAC)
22. Engines: Open AL
• API for 3D spatialized audio, implemented
atop audio units
• Set a source’s properties (x/y/z
coordinates, orientation, audio buffer, etc.),
OpenAL renders what it sounds like to the
listener from that location
23. Engines and Helpers
• Engines: Audio Units, Audio Queue, Open AL
• Helpers: Audio File, Audio File Stream, Audio Converter, ExtAudioFile, Audio Session
33. AURemoteIO
• Output unit used for play-out, capture
• A Core Audio thread repeatedly and
automatically calls AudioUnitRender()
• Must set EnableIO property to explicitly
enable capture and/or play-out
• Capture requires setting appropriate
AudioSession category
47. The problem with effect units
• Audio Units available since iPhone OS 2.0
prefer int formats
• Effect units arrived with iOS 5 (arm7 era)
and only work with float format
• Have to set the AUEffect unit’s format on
AURemoteIO
49. AUNewTimePitch
• New in iOS 6!
• Allows you to change pitch independent of
time, or time independent of pitch
• How do you use it?
50. AUTimePitch
AudioComponentDescription effectcd = {0};
effectcd.componentType = kAudioUnitType_FormatConverter;
effectcd.componentSubType = kAudioUnitSubType_NewTimePitch;
effectcd.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode effectNode;
CheckError(AUGraphAddNode(self.auGraph,
                          &effectcd,
                          &effectNode),
           "couldn't get effect node [time/pitch]");
Notice the type is AUFormatConverter, not AUEffect
52. AUNewTimePitch
parameters
• Rate: kNewTimePitchParam_Rate takes a
Float32 rate from 1/32 speed to 32x
speed.
• Use powers of 2: 1/32, 1/16, …, 2, 4, 8…
• Pitch: kNewTimePitchParam_Pitch takes
a Float32 representing cents, meaning
1/100 of a musical semitone
53. Pitch shifting
• Pitch can vary, time does not
• Suitable for real-time sources, such as audio
capture
55. Rate shifting
• Rate can vary, pitch does not
• Think of 1.5x and 2x speed modes in
Podcasts app
• Not suitable for real-time sources, as data
will be consumed faster. Files work well.
• Sources must be able to map time
systems with
kAudioUnitProperty_InputSamplesInOutput
57. AUSplitter
• New in iOS 6!
• Splits one input into multiple outputs (diagram: AUSplitter feeding two AUSomethingElse units)
58. AUMatrixMixer
• New in iOS 6!
• Mixes any number of inputs to any number of outputs (diagram: several AUSomethingElse units feeding AUMatrixMixer, which feeds several more)
59. Audio Queues
(and the APIs that help them)
60. AudioQueue
• Easier than AURemoteIO - provide data
when you want to, less time pressure, can
accept or provide compressed formats
(MP3, AAC)
• Recording queue - receive buffers of
captured audio in a callback
• Play-out queue - enqueue buffers of audio
to play, optionally refill in a callback
62. Common AQ scenarios
• File player - Read from file and “prime”
queue buffers, start queue, when called
back with used buffer, refill from next part
of file
• Synthesis - Maintain state in your own
code, write raw samples into buffers during
callbacks
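The file-player refill pattern can be modeled without Core Audio at all. Everything below (`FakeFile`, `refill`, `KBUF_SAMPLES`) is a stand-in for the file reads that prime and refill AudioQueueBuffers in the callback:

```c
#include <stddef.h>
#include <string.h>

#define KBUF_SAMPLES 4   /* capacity of one "queue buffer" */

/* Stand-in for an audio file: a flat sample array plus a read cursor. */
typedef struct {
    const short *samples;
    size_t length;
    size_t cursor;
} FakeFile;

/* "Refill callback": copy the next chunk of the file into a buffer, the
   way an output queue's callback refills a used AudioQueueBuffer.
   Returns the number of samples written (0 = end of file). */
static size_t refill(FakeFile *file, short *buffer, size_t capacity) {
    size_t remaining = file->length - file->cursor;
    size_t n = remaining < capacity ? remaining : capacity;
    memcpy(buffer, file->samples + file->cursor, n * sizeof(short));
    file->cursor += n;
    return n;
}
```

In the real API you would prime two or three buffers before `AudioQueueStart()`, then perform exactly this copy each time the queue hands a used buffer back.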
63. Web Radio
• Thursday class’ third project
• Use Audio File Stream Services to pick out
audio data from a network stream
• Enqueue these packets as new AQ buffers
• Dispose used buffers in callback
65-67. Parsing web radio
• NSURLConnection delivers NSData buffers, containing audio and framing info. We pass it to Audio File Stream Services.
• Audio File Stream Services calls us back with parsed packets of audio data.
• We create an AudioQueueBuffer with those packets and enqueue it for play-out.
68. A complex thing!
• What if we want to see that data after it’s
been decoded to PCM and is about to be
played?
• e.g., spectrum analysis, effects, visualizers
• AudioQueue design is “fire-and-forget”
69. AudioQueue Tap!
http://www.last.fm/music/Spinal+Tap
70. AudioQueueProcessingTap
• Set as a property on the Audio Queue
• Calls back to your function with decoded
(PCM) audio data
• Three types: pre- or post- effects (that the
AQ performs), or siphon. First two can
modify the data.
• Only documentation is in AudioQueue.h
71. Creating an AQ Tap
// create the tap
UInt32 maxFrames = 0;
AudioStreamBasicDescription tapFormat = {0};
AudioQueueProcessingTapRef tapRef;
CheckError(AudioQueueProcessingTapNew(audioQueue,
                                      tapProc,
                                      (__bridge void *)(player),
                                      kAudioQueueProcessingTap_PreEffects,
                                      &maxFrames,
                                      &tapFormat,
                                      &tapRef),
           "couldn't create AQ tap");
Notice that you receive maxFrames and tapFormat. These do not appear to be settable.
77. AudioUnitRender()
• Last argument is an AudioBufferList, whose
AudioBuffer members have mData pointers
• If mData != NULL, audio unit does its
thing with those samples
• If mData == NULL, the audio unit pulls from whatever it's connected to
• So we just call with AudioBufferList ioData
we got from tap callback, right?
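The NULL-means-pull convention can be mimicked in a toy model (plain C; `toy_render` and `demo_source` are illustrative names, not Core Audio API):

```c
#include <stddef.h>

/* Upstream source callback type, analogous to a render callback. */
typedef const float *(*SourceProc)(void *refCon, size_t frames);

/* Toy model of AudioUnitRender()'s convention: with a non-NULL data
   pointer the "unit" uses the caller's samples; with NULL it pulls
   from whatever it is connected to upstream. */
static const float *toy_render(float *data, SourceProc source,
                               void *refCon, size_t frames) {
    if (data != NULL) {
        return data;               /* caller supplied the samples */
    }
    return source(refCon, frames); /* NULL: pull from upstream */
}

/* A trivial upstream source: hands back the buffer passed as refCon. */
static const float *demo_source(void *refCon, size_t frames) {
    (void)frames;
    return (const float *)refCon;
}
```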
78. Psych!
• AQ tap provides data as signed ints
• Effect units only work with floating point
• We need to do an on-the-spot format
conversion
79. invalidname's convert-and-effect recipe

OSStatus converterInputRenderCallback (void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData) {
    CCFWebRadioPlayer *player = (__bridge CCFWebRadioPlayer*) inRefCon;
    // read from buffer
    ioData->mBuffers[0].mData = player.preRenderData;
    return noErr;
}

AUConverter → AUEffect → AUConverter → AUGenericOutput
Note: red arrows are float format, yellow arrows are int
80. How it works
• AUGraph: AUConverter → AUEffect →
AUConverter → AUGenericOutput
• Top AUConverter is connected to a render
callback function
81. The trick!
• Copy mData pointer to a state variable and
NULL it in ioData
• Call AudioUnitRender() on the output unit.
The NULL makes it pull from the graph.
• Top of the graph pulls on render callback,
which gives it back the mData we copied
off.
82. Yes, really
This is the rest of tapProc()
// copy off the ioData so the graph can read from it
// in render callback
player.preRenderData = ioData->mBuffers[0].mData;
ioData->mBuffers[0].mData = NULL;

OSStatus renderErr = noErr;
AudioUnitRenderActionFlags actionFlags = 0;
renderErr = AudioUnitRender(player.genericOutputUnit,
                            &actionFlags,
                            player.renderTimeStamp,
                            0,
                            inNumberFrames,
                            ioData);
NSLog (@"AudioUnitRender, renderErr = %ld", renderErr);
}
87. Multi-Route
• Ordinarily, one input or output is active: earpiece, speaker, headphones, dock-connected device
• “Last in wins”
• With AV Session “multi-route” category,
you can use several at once
• WWDC 2012 session 505
88. Utility classes moved again
• C++ utilities, including the CARingBuffer
• < Xcode 4.3, installed into /Developer
• Xcode 4.3-4.4, optional download from
developer.apple.com
• ≥ Xcode 4.5, sample code project “Core
Audio Utility Classes”
89. Takeaways
• Core Audio fundamentals never change
• New stuff is added as properties, typedefs,
enums, etc.
• Watch the SDK API diffs document to find
the new stuff
• Hope you like header files and
experimentation
90. Q&A
• Slides will be posted to slideshare.net/
invalidname
• Code will be linked from there and my blog
• Watch CocoaConf RDU glassboard,
@invalidname on Twitter/ADN, or [Time
code]; blog for announcement
• Thanks!