Some of my favourite bits of AVFoundation. Topics include capture, composition, a custom player and scrubber interface, synchronized CAAnimations, and real-time VFX.
AV Foundation moves to center stage as the essential media framework on the device, offering support for playing, capturing, and even editing audio and video. Borrowing some of the core ideas from the Mac's QuickTime, while adding many new concepts of its own, AV Foundation offers extraordinary capabilities for application programmers. This talk will offer a high-level overview of what's in AV Foundation, and a taste of what it can do.
Slides for my Master Video session at Renaissance 2014. This session provided a high-level overview of some of AV Foundation's video playback and editing capabilities.
The demo app for this talk can be found at:
https://github.com/tapharmonic/AVFoundationEditor
Composing and Editing Media with AV Foundation, by Bob McCune
The document discusses using Apple's AV Foundation framework to work with audiovisual media on iOS and OS X, including playing, editing, and composing media through techniques like adding transitions between video clips, adjusting audio levels over time, and layering visual elements. It provides an overview of core AV Foundation concepts like assets, tracks, and playback before demonstrating examples of common media tasks.
Building Modern Audio Apps with AVAudioEngine, by Bob McCune
1. AVAudioEngine is an AVFoundation API that simplifies building real-time audio apps on iOS and macOS. It allows playing, recording, and processing audio using a node-based graph architecture.
2. Audio flows through a graph of connected audio nodes, forming active chains that establish audio processing threads. Common node types include sources, processors, and destinations.
3. The engine handles audio format management, buffer scheduling, and thread synchronization. It supports audio file and buffer playback, recording, effects processing, audio mixing, and MIDI playback through sampler instruments.
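To make the node-graph idea concrete, here is a minimal Objective-C sketch (mine, not from the deck) that chains a player node through a reverb effect into the engine's main mixer; the resource name loop.caf is a placeholder:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioUnitReverb *reverb = [[AVAudioUnitReverb alloc] init];
[reverb loadFactoryPreset:AVAudioUnitReverbPresetMediumHall];
reverb.wetDryMix = 40.0;

// Nodes must be attached to the engine before they can be connected
[engine attachNode:playerNode];
[engine attachNode:reverb];

// "loop.caf" is a hypothetical bundled audio file
NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"loop" withExtension:@"caf"];
AVAudioFile *file = [[AVAudioFile alloc] initForReading:fileURL error:NULL];

// player -> reverb -> main mixer forms an active chain
[engine connect:playerNode to:reverb format:file.processingFormat];
[engine connect:reverb to:engine.mainMixerNode format:file.processingFormat];

NSError *error = nil;
if ([engine startAndReturnError:&error]) {
    [playerNode scheduleFile:file atTime:nil completionHandler:nil];
    [playerNode play];
}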
This document summarizes Chris Adamson's presentation on mastering media with AV Foundation. The presentation covered the fundamentals of digital media, including analog vs digital formats. It then discussed the iOS media frameworks, focusing on AV Foundation. It provided an overview of key AV Foundation classes for playback, capture, editing and advanced features. It also briefly covered HTTP Live Streaming and included demos of basic playback and recording functionality using AV Foundation.
The document summarizes some of the key techniques used in the Dubsmash iOS app for video creation and editing. It discusses using AVFoundation to handle video capture, stitching together multiple video parts while synchronizing them to audio, and rendering additional layers like text on top of the video. It notes some common pitfalls like crashes from doing AV work on background threads and unhelpful errors from AVFoundation.
Stupid Video Tricks (CocoaConf DC, March 2014), by Chris Adamson
The document discusses various techniques for working with time-based media like video and audio using the AV Foundation framework in iOS and macOS. It provides an overview of common tasks like playback, capture, and editing using classes like AVPlayer, AVCaptureSession, and AVComposition. It then demonstrates more advanced tricks like animating an AVPlayerLayer and processing video frames in real-time using Core Image filters. The document recommends exploring other related frameworks like Core Audio, Core Media, Video Toolbox, and Core Video for additional functionality and performance.
AV Foundation makes it reasonably straightforward to capture video from the camera and edit together a nice family video. This session is not about that stuff. This session is about the nooks and crannies where AV Foundation exposes what's behind the curtain. Instead of letting AVPlayer read our video files, we can grab the samples ourselves and mess with them. AVCaptureVideoPreviewLayer, meet the CGAffineTransform. And instead of dutifully passing our captured video frames to the preview layer and an output file, how about if we instead run them through a series of Core Image filters? Record your own screen? Oh yeah, we can AVAssetWriter that. With a few pointers, a little experimentation, and a healthy disregard for safe coding practices, Core Media and Core Video let you get away with some neat stuff.
Stupid Video Tricks (CocoaConf Seattle 2014), by Chris Adamson
AV Foundation makes it reasonably straightforward to capture video from the camera and edit together a nice family video. This session is not about that stuff. This session is about the nooks and crannies where AV Foundation exposes what's behind the curtain. Instead of letting AVPlayer read our video files, we can grab the samples ourselves and mess with them. AVCaptureVideoPreviewLayer, meet the CGAffineTransform. And instead of dutifully passing our captured video frames to the preview layer and an output file, how about if we instead run them through a series of Core Image filters? Record your own screen? Oh yeah, we can AVAssetWriter that. With a few pointers, a little experimentation, and a healthy disregard for safe coding practices, Core Media and Core Video let you get away with some neat stuff.
Stupid Video Tricks (CocoaConf Las Vegas), by Chris Adamson
The document discusses various techniques for manipulating and processing video and audio using AV Foundation frameworks in iOS and Mac OS X. It begins with an overview of AV Foundation and describes common tasks like playback, capture, and editing. It then demonstrates tricks like animating AVPlayerLayers and recording the screen. The document dives deeper into techniques for reading and manipulating subtitle, audio, and video tracks using Core Media, Core Audio, Core Video, and Core Image frameworks. It provides code samples for applying filters to video in real-time and writing modified data back out.
Video Killed the Rolex Star (CocoaConf San Jose, November 2015), by Chris Adamson
[updated from previous version to include Watch Connectivity, screenshots of WKInterfaceMovie]
watchOS 2.0 brings media functionality to Apple Watch, offering audio and video playback and audio capture. But lest you plan on writing Logic or Final Cut for the watch: what's available on the wrist has its limits, and you hit them quickly. In this session, we'll see what the WKInterfaceController offers us for miniature mobile media, and how we can get the benefits of AV Foundation and Core Audio by moving our movies, songs, and podcasts back and forth between the watch and the iPhone.
Video Killed the Rolex Star (CocoaConf Columbus, July 2015), by Chris Adamson
watchOS 2.0 brings media functionality to Apple Watch, offering audio and video playback and audio capture. But lest you plan on writing Logic or Final Cut for the watch: what's available on the wrist has its limits, and you hit them quickly. In this session, we'll see what the WKInterfaceController offers us for miniature mobile media, and how we can get the benefits of AV Foundation and Core Audio by moving our movies, songs, and podcasts back and forth between the watch and the iPhone.
This document provides an introduction and overview of the Roku SDK. It discusses the basics of developing Roku channels using the BrightScript programming language and built-in component library. It covers setting up the development environment, the file structure of Roku channels, common screens and objects, debugging, and preparing and loading content like audio and video. The document concludes with next steps like following Roku's design guidelines and publishing channels privately or to the public Roku channel store.
Building A Streaming Apple TV App (CocoaConf San Jose, Nov 2016), by Chris Adamson
This document discusses building a streaming video app for Apple TV. It covers topics like codecs, containers, livestreaming, adaptive streaming using HTTP Live Streaming (HLS), creating HLS streams with tools like mediafilesegmenter, and securing streams. HLS breaks video into small file segments delivered over HTTP, making streaming scalable and suitable for mobile. Variant playlists allow encoding at multiple bitrates to adapt to network conditions.
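As an illustration of variant playlists (URLs and bitrates hypothetical), a master .m3u8 simply lists one stream per encoding, tagged with its bandwidth, and the client switches between them as network conditions change:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8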
Get On The Audiobus (CocoaConf Atlanta, November 2013), by Chris Adamson
Audiobus is an iOS app that allows other apps to work together as an audio-processing toolchain: play your MIDI keyboard into one app, run it through filters in other apps, and mix it in a third. All in real-time, foreground or background. That such a thing is possible on the locked down iOS platform is remarkable enough, but what's even more remarkable is that hundreds of audio apps have added Audiobus support in the few months since its debut, including Apple's own GarageBand. In this session, we'll take a look at the Audiobus SDK and see how to create inputs, outputs, and filters that can be managed by the Audiobus app to process audio in collaboration with other apps on the device.
The document discusses general bare-metal provisioning frameworks in OpenStack. It provides an overview of why bare-metal provisioning is useful, the history of bare-metal support in OpenStack releases from Essex to Grizzly, and how the bare-metal provisioning framework works including the bare-metal driver, power manager, instance type specifications, and scheduler. It also compares provisioning of virtual machines versus bare-metal machines.
1. Belvedere is a platform that aims to standardize environments from development to production by using the same OS image everywhere, convention-based configuration, and moving environment-specific configurations to environment variables.
2. Key aspects of Belvedere include using a single OS image built once then transformed for different environments, moving configurations to environment variables populated before app startup, and using short CNAMEs that resolve differently in each environment.
3. Benefits are finding problems earlier, familiarizing developers with production-like systems, and promoting images between environments easily with minimal manual steps.
iOS Media APIs (MobiDevDay Detroit, May 2013), by Chris Adamson
The document discusses various iOS media APIs for playing, capturing, editing, and exporting audio and video content in iOS applications. It provides an overview of key frameworks like AV Foundation, Core Media, and Core Animation and describes how to perform common media tasks like playing music and videos, capturing video, editing/mixing audio and video, and exporting edited content. Code examples are provided to demonstrate how to use APIs like AVAudioPlayer, AVPlayer, AVCaptureSession, AVMutableComposition, and AVVideoComposition to accomplish these tasks.
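As a taste of the playback API surface mentioned above, a minimal Objective-C sketch (movieURL stands in for a real media URL, and the code assumes it runs inside a view controller):

// AVPlayer has no UI of its own; an AVPlayerLayer renders its video
AVPlayer *player = [AVPlayer playerWithURL:movieURL];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];
[player play];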
This document discusses using Ironic to deploy Windows on bare metal servers. Ironic is OpenStack's bare metal provisioning service that allows physical servers to be provisioned like virtual machines. The document outlines how Ironic handles disk-based Windows images differently than partition-based Linux images. It also describes the deployment process for Windows, which involves PXE booting to instruct the server to boot from its local disk where the Windows image has been written, unlike Linux which uses PXE to provide the kernel and ramdisk for deployment. The document lists several areas of work around improving Windows support in Ironic and related projects like TripleO.
The document describes TOFU (Tofu on the Fly), a system for dynamically generating and caching image thumbnails on Amazon S3. It works by installing an Apache module that generates thumbnails using ImageMagick when images are requested, then caches them on S3 for future requests. This avoids hitting application servers for every image and provides scalability. The document discusses the TOFU architecture, implementation details, performance testing results, and strategies for integrating with CDNs like Akamai to improve performance.
Ceph Day Tokyo - Bit-Isle's 3 years footprint with Ceph, by Ceph Community
Bit-isle has been using Ceph storage with OpenStack for 3 years, starting with a proof of concept in 2013. They have three Ceph environments - a development environment using OpenStack Havana and Ceph Dumpling, a staging environment using OpenStack Juno and Ceph Giant, and a production customer environment using OpenStack Kilo and Ceph Hammer. They chose Ceph because it provides high performance scalable storage without the need for expensive dedicated storage appliances or many storage engineers. Their initial POC was successful and showed Ceph could provide fault tolerance and cooperate well with OpenStack.
This document discusses wrapping Ruby gems into RPM packages for Fedora. It explains that RPM packages provide consistency, maintenance advantages, and security features compared to standalone gems. The gem2rpm tool can help automate wrapping gems to respect OS and gem conventions. The document also provides an overview of RPM packaging basics like the spec file structure and macros. It notes guidelines for naming Ruby gem RPM packages and that source packages typically package gems unchanged from Rubygems.
Ceph Day Seoul - Delivering Cost Effective, High Performance Ceph cluster, by Ceph Community
Jack Zhang is a Senior Enterprise Architect at Intel Corp. This document discusses Ceph storage configurations using Intel SSDs and discusses benchmark results. Tuning Ceph for all-flash storage can significantly improve performance, with up to 16x better random read performance and 7.6x better random write performance achieved. Using SSDs instead of HDDs provides much higher performance, needing 58x fewer drives for the same write performance and 175x fewer for the same read performance. The document also outlines several suggested Ceph storage node configurations using different ratios of SSDs and HDDs.
Using cobbler in a not so small environment 1.77, by chhorn
- cobbler basics
- why cobbler was chosen at a company
- how enterprise-requirements were met
- surrounding infrastructure (monitoring etc.)
- on community interaction
QConSP 2015 - Dicas de Performance para Aplicações Web, by Fabio Akita
Before thinking "let's rewrite everything in the trendy fastest language and everything will work out," stop. For almost all web applications, first check whether you follow this minimal checklist of nine tips. You will see that most teams don't meet even this minimum first, and they should.
Advanced AV Foundation (CocoaConf, Aug '11), by Chris Adamson
The iOS version of iMovie uses the AV Foundation framework, and indications are that Final Cut Pro X will be using the Mac OS X version of AVF. And if AV Foundation is powerful enough to provide the core functionality of Final Cut, it must have some great stuff going on, right? In this session, we'll dig into the more powerful (and more challenging) APIs in AV Foundation, including reading and writing raw samples, performing live processing of incoming data at capture time, and advanced editing features like mixing audio and video tracks and adding Core Animation-based titles.
This document introduces the GPUImage framework, an open source iOS library for GPU-based image and video processing. It provides advantages over Core Image such as improved performance for real-time previews through OpenGL ES shader code and greater customization abilities. The framework uses frame buffer objects and textures to process video frames on the GPU, applying filters through shader programs before saving results back to the CPU. It employs Grand Central Dispatch for multi-threaded processing across CPU and GPU.
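A minimal GPUImage filter chain, as a sketch based on the framework's documented usage (not taken from this deck; assumes a view controller context):

// camera -> sepia filter -> on-screen view
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];

[camera addTarget:sepia];
[sepia addTarget:filteredView];
[camera startCameraCapture];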
Protocols can be used to refactor code with similar implementations in different classes by defining common behaviors and properties in a protocol. The document describes refactoring code that loads album cover images by defining a LocalItemImageProvider protocol with methods to get the image-loading approach, URL string, image, and callback. Classes then implement the protocol, and a UIImageView category loads images by calling the protocol methods, simplifying the code and improving testability.
This document discusses Spring on Kubernetes and containerization best practices. It provides an overview of Spring Boot 2.2 updates, how to build container images following best practices, memory configuration for containers, using testcontainers for testing, and an introduction to Spring Cloud Kubernetes for building portable apps on Kubernetes.
This document summarizes CoreOS deployment at Carnival using Docker containers and services. It discusses using Drone for continuous integration testing, tagging and pushing successful images to a private Docker repository. When changes are made on the master branch, the updated image is automatically deployed to staging. Production deployments involve tagging the image, updating an etcd version value, and having a watcher service restart impacted services when the version changes. Dabus observes systemd notifications and sends Slack alerts about service restarts.
The document provides an overview of iOS development basics including the iOS ecosystem, development tools like Xcode and Instruments, Objective-C language syntax, UI elements, memory management, and connecting to network resources. It covers setting up an iOS developer account, provisioning profiles, and submitting apps to the App Store. Key classes for networking like NSURL, NSURLRequest, and NSURLConnection are introduced along with using delegates and data sources. Parsing JSON and XML is also briefly discussed.
This document provides information about Node.js, Express, and using Node.js with databases like MySQL. It describes Node.js as a JavaScript web framework that is fast and small. It explains that Express is a web application framework built on Node.js and Connect. It provides instructions for installing Express and a quick start guide. It also lists features of Express like routing, views, and sessions. Finally, it discusses hosting Node.js applications on platforms like Heroku and connecting Node.js to MySQL.
audio, video and canvas in HTML5 - standards>next Manchester 29.09.2010, by Patrick Lauke
Part II of the standards-next.org workshop on HTML5 with Bruce Lawson, concentrating on audio, video and canvas (plus hints of additional HTML5 API niceness)
This document provides an overview of mobile development and the iOS ecosystem. It discusses that mobile apps require UI optimization and a mission statement. It also covers Xcode, Objective-C, memory management, UIKit, MapKit, and annotations for displaying locations on maps. The document recommends designing mobile apps differently than desktop apps and following Apple's human interface guidelines.
HTML5 APIs - native multimedia support and beyond - University of Leeds 05.05..., by Patrick Lauke
This document provides an overview of various HTML5 APIs for multimedia, including native <video> and <audio> elements, the <canvas> element for scriptable graphics, and geolocation APIs. It discusses key considerations around supporting different media formats in <video> and <audio> and controlling media playback via JavaScript. The document also briefly introduces other HTML5 APIs for offline applications, local storage, and databases. It emphasizes the importance of feature detection over browser sniffing for progressive enhancement.
Docker for Developers: Dev, Test, Deploy @ BucksCo Devops at MeetMe HQ, by Erica Windisch
The document discusses Docker's platform and ecosystem, which has grown significantly over 19 months to include over 640 contributors, 2.75 million downloads, and extensive community support and documentation. It also outlines the key components of Docker's platform, including the Docker Engine for building, shipping, and running containers, and Docker Hub for sharing images. Finally, it provides examples of how to use Docker to build, run, and manage applications and services across infrastructure.
Video here: http://youtu.be/eeGvMkicAv4
Xamarin.iOS enables us to write native applications that take full advantage of iOS's large number of libraries - from the user interface to motion processing, graphics, audio, cameras, sensors, networking... This incredibly rich software platform runs all day long in the pockets of hundreds of millions of people. When you couple it to .NET, you have a programmer's dream environment.
So let's learn to program iOS! In a little over an hour we will:
• Browse through iOS's APIs to find interesting bits of functionality
• Explore the architecture of UIKit - the user interface framework for iOS
• Use Xamarin Studio to write and debug applications
With this introduction you will have enough knowledge to write your first application that can use all the richness of iOS and all your favorite code written in .NET. Oh, and it will run on beautiful devices too.
HTML5 is all the rage with the cool kids, and although there’s a lot of focus on the new language, there’s plenty for web app developers with new JavaScript APIs both in the HTML5 spec and separated out as their own W3C specifications. This session will take you through demos and code and show off some of the outright crazy bleeding edge demos that are being produced today using the new JavaScript APIs. But it’s not all pie in the sky – plenty is useful today, some even in Internet Explorer!
A brief rollerskate along HTML5 multimedia beach, in which we pop into the soda shop of subtitling and the ice-cream parlour of synchronised media, before we incongruously pop into the igloo of JavaScript access to the camera (because I pulled in slides from another presentation after we discussed it in an earlier session).
Capture, record, clip, embed and play, search: video from newbie to ninja, by Vito Flavio Lorusso
This document provides an overview of building a video streaming solution using Azure Media Services. It discusses the key components involved including:
1. Creating Media Services and Storage accounts
2. Uploading videos as assets and encoding them
3. Generating thumbnails, subtitles and adaptive bitrate manifests
4. Creating a streaming endpoint and getting streaming URLs
5. Integrating with a web app using the Azure Media Player
The document also briefly covers integrating with Azure Search to enable video search functionality on the web app. It provides code samples for common tasks like uploading, encoding, and playing videos using Media Services and searching using Azure Search.
Lessons from Driverless AI going to Production, by Sri Ambati
Driverless AI can run on various cloud platforms and on-premises servers. It supports Linux environments with CUDA GPUs. The document provides step-by-step instructions for setting up Driverless AI on an IBM Power P9 system, including installing prerequisites, running experiments through a web interface, and automating training with Python. It also addresses common customer questions about installation, deployment, and productionizing Driverless AI models and pipelines.
This slide deck explains what "adaptive streaming" is. It begins by showing how content (media) is prepared to meet adaptive-streaming needs, then discusses yapi.js, the web player used by KKTV, a VOD service in Taiwan.
Bob McWhirter is a JBoss Fellow and Chief Architect of Middleware Cloud Computing. He founded The Codehaus, Drools, and TorqueBox. The document discusses BoxGrinder, a tool that can create virtual machine appliances from definition files in order to simplify deploying software to infrastructure platforms like Amazon EC2 or VMware. It describes how BoxGrinder supports both "baking" and "frying" approaches to creating VMs and walks through an example of using BoxGrinder to build a JBoss application server appliance.
3. AVFoundation
• Mid-level Objective-C framework for playing, recording and editing time-based media
• Available on iOS 4.0+ and Mac OS X 10.7+
[Stack diagram: Media Player and UIKit sit above AV Foundation, which in turn builds on Core Audio, Core Media and Core Animation]
5. A Brief History
AVFoundation features by iOS version:
• iOS 2.2: AVAudioPlayer
• iOS 3.0: AVAudioRecorder, AVAudioSession
• iOS 4.0: Capture, playback and editing
• iOS 4.1: Read/write sample buffers, queue player
• iOS 5.0: OpenGL ES compatibility, AirPlay
6. New in iOS 6.0
• Real-time access to video buffers
• Face tracking during capture
• Better support for encrypted streams
• Advanced synchronization features
7. AVAsset (abstract base class)
• AVURLAsset: local or remote
• AVComposition
• AVMutableComposition
[Diagram: an AVAsset contains AVAssetTracks (e.g., one audio and one video track), each of which is made up of AVAssetTrackSegments]
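As a sketch of how the composition classes above fit together (assetA and assetB stand in for already-loaded AVAssets with at least one video track each), appending two clips back-to-back might look like:

AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

NSError *error = nil;
// Insert all of clip A at the start of the composition...
AVAssetTrack *sourceTrackA =
    [[assetA tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetA.duration)
                    ofTrack:sourceTrackA
                     atTime:kCMTimeZero
                      error:&error];

// ...then append clip B immediately after it
AVAssetTrack *sourceTrackB =
    [[assetB tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetB.duration)
                    ofTrack:sourceTrackB
                     atTime:assetA.duration
                      error:&error];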
8. @protocol AVAsynchronousKeyValueLoading
• Handler invoked on arbitrary thread; dispatch_async to main queue

- (void)loadValuesAsynchronouslyForKeys:(NSArray *)keys
                      completionHandler:(void (^)(void))handler;

- (AVKeyValueStatus)statusOfValueForKey:(NSString *)key
                                  error:(NSError **)outError;
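A typical call site might look like the following sketch (movieURL is a placeholder), loading the keys a player will need and hopping to the main queue before touching UI:

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
NSArray *keys = @[@"tracks", @"duration"];
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"duration" error:&error];
    // The completion handler runs on an arbitrary background thread
    dispatch_async(dispatch_get_main_queue(), ^{
        if (status == AVKeyValueStatusLoaded) {
            NSLog(@"duration = %.2f s", CMTimeGetSeconds(asset.duration));
        }
    });
}];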
10. CMTime
• C struct representing a rational number
• numerator: value, denominator: timescale
• time in seconds = value / timescale
• Flags: valid, positive/negative infinity, has been rounded
• A timescale of 600 conveniently represents 24, 25 and 30 fps exactly
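For example, using the value/timescale definitions above:

CMTime oneSecond  = CMTimeMake(600, 600);           // 600/600 = 1.0 second
CMTime oneFrame24 = CMTimeMake(25, 600);            // one 24 fps frame = 25/600 s
CMTime twoSeconds = CMTimeMakeWithSeconds(2.0, 600);
CMTime total      = CMTimeAdd(oneSecond, twoSeconds);
Float64 seconds   = CMTimeGetSeconds(total);        // 3.0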
24. AVSynchronizedLayer
• Confers timing state upon sublayers
• Timing synced with AVPlayerItem instance
• +synchronizedLayerWithPlayerItem:
• When creating CAAnimations:
• Use AVCoreAnimationBeginTimeAtZero
• -setRemovedOnCompletion:NO
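Putting those rules together, a sketch (playerItem and titleLayer are assumed to exist already) of a title fade-in pinned to item time:

AVSynchronizedLayer *syncLayer =
    [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
[syncLayer addSublayer:titleLayer];
[self.view.layer addSublayer:syncLayer];

CABasicAnimation *fadeIn = [CABasicAnimation animationWithKeyPath:@"opacity"];
fadeIn.fromValue = @0.0;
fadeIn.toValue   = @1.0;
fadeIn.duration  = 2.0;
// beginTime 0.0 would mean "now"; this constant means the item's time zero
fadeIn.beginTime = AVCoreAnimationBeginTimeAtZero;
// keep the finished animation around so it replays correctly after seeks
fadeIn.removedOnCompletion = NO;
fadeIn.fillMode = kCAFillModeForwards;
[titleLayer addAnimation:fadeIn forKey:@"fadeIn"];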
30. AVPlayerItemVideoOutput
• Access pixel buffers during playback
• Request wakeup, poll w/ display link
[Flowchart: from the entry point, request a media-data-will-change notification and leave the CADisplayLink paused; when the "media data will change" notification fires, set the display link running; on each tick, ask whether a buffer exists for the current time; if NO, keep polling; if YES, process & display the buffer]
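A sketch of that polling loop (self.videoOutput, self.displayLink, and playerItem are assumed to be set up by the surrounding controller):

// one-time setup
NSDictionary *attributes =
    @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
self.videoOutput =
    [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attributes];
[playerItem addOutput:self.videoOutput];

self.displayLink =
    [CADisplayLink displayLinkWithTarget:self
                                selector:@selector(displayLinkDidFire:)];
[self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                       forMode:NSDefaultRunLoopMode];

// called once per screen refresh
- (void)displayLinkDidFire:(CADisplayLink *)sender {
    CMTime itemTime = [self.videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([self.videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer =
            [self.videoOutput copyPixelBufferForItemTime:itemTime
                                      itemTimeForDisplay:NULL];
        if (pixelBuffer != NULL) {
            // process & display (e.g., run it through a Core Image filter)
            CVBufferRelease(pixelBuffer);
        }
    }
}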