This document summarizes a lecture from Stanford's CS193p course (Fall 2017-18) on developing applications for iOS, covering Core Motion, UIImagePickerController, and Core Image/Vision. Specifically, it discusses:
1) How to use Core Motion to access motion sensors such as the accelerometer, gyroscope, and magnetometer to detect device movement and position.
2) How to use UIImagePickerController to take pictures and videos with the camera or select images from the photo library, and how to access the resulting media data.
3) How Core Image can be used to efficiently apply any of a couple hundred built-in filters to images.
CS193p
Fall 2017-18
Today
Core Motion
Detecting the position and movement of the device
Demo: Gravity-driven Playing Card
Camera
How to take pictures in your app
Demo: Taking a picture to be our background image in EmojiArt
Core Motion
API to access motion sensing hardware on your device
Primary inputs: Accelerometer, Gyro, Magnetometer
Not all devices have all inputs (e.g. only later model devices have a gyro)
Class used to get this input is CMMotionManager
Use only one instance per application (else performance hit)
It is a “global resource,” so getting one via a class method somewhere is okay
Usage
1. Check to see what hardware is available
2. Start the sampling going and poll the motion manager for the latest sample it has
... or ...
1. Check to see what hardware is available
2. Set the rate at which you want data to be reported from the hardware
3. Register a closure (and a queue to run it on) to call each time a sample is taken
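Both usage patterns can be sketched as follows (a minimal illustration, not from the slides; it uses the Swift 4-era names such as isAccelerometerAvailable and startGyroUpdates(to:withHandler:)):

```swift
import CoreMotion

let motionManager = CMMotionManager()          // use only one per application

// Pattern 1: start sampling, then poll from code that already runs periodically
if motionManager.isAccelerometerAvailable {
    motionManager.startAccelerometerUpdates()
    // later, e.g. inside an animation frame callback:
    if let sample = motionManager.accelerometerData {
        print(sample.acceleration.x, sample.acceleration.y, sample.acceleration.z)
    }
}

// Pattern 2: set a rate and register a closure (and a queue to run it on)
if motionManager.isGyroAvailable {
    motionManager.gyroUpdateInterval = 1.0 / 60.0      // 60 samples per second
    motionManager.startGyroUpdates(to: .main) { gyroData, error in
        if let rate = gyroData?.rotationRate {
            print(rate.x, rate.y, rate.z)              // radians/s
        }
    }
}
```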
Core Motion
Checking availability of hardware sensors
var {accelerometer,gyro,magnetometer,deviceMotion}Available: Bool
The “device motion” is a combination of all available (accelerometer, magnetometer, gyro).
We’ll talk more about that in a couple of slides.
Starting the hardware sensors collecting data
You only need to do this if you are going to poll for data.
Generally used when some architecture in your app is already periodic (e.g. animation frames).
func start{Accelerometer,Gyro,Magnetometer,DeviceMotion}Updates()
Is the hardware currently collecting data?
var {accelerometer,gyro,magnetometer,deviceMotion}Active: Bool
Stop the hardware collecting data
It is a performance hit to be collecting data, so stop during times you don’t need the data.
func stop{Accelerometer,Gyro,Magnetometer,DeviceMotion}Updates()
Core Motion
Checking the data (from existing periodic mechanism)
var accelerometerData: CMAccelerometerData?
CMAccelerometerData object provides var acceleration: CMAcceleration
struct CMAcceleration {
var x: Double // in g (9.8 m/s/s)
var y: Double // in g
var z: Double // in g
}
This raw data includes acceleration due to gravity
So, if the device were laid flat, z would be 1.0 and x and y would be 0.0
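Polling that raw data might look like this (an illustrative sketch; the 0.1 tolerance is an arbitrary choice, and sampling must have been started first):

```swift
import CoreMotion

let manager = CMMotionManager()
manager.startAccelerometerUpdates()
// later, from some existing periodic mechanism:
if let a = manager.accelerometerData?.acceleration {
    // gravity is included in the raw data, so a device lying flat reads |z| near 1
    let roughlyFlat = abs(a.x) < 0.1 && abs(a.y) < 0.1 && abs(abs(a.z) - 1.0) < 0.1
    print("roughly flat:", roughlyFlat)
}
```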
Core Motion
Checking the data (from existing periodic mechanism)
var gyroData: CMGyroData?
CMGyroData object provides var rotationRate: CMRotationRate
struct CMRotationRate {
var x: Double // in radians/s
var y: Double // in radians/s
var z: Double // in radians/s
}
Sign of the rotation data follows right hand rule
The data above will be biased
Core Motion
Checking the data (from existing periodic mechanism)
var magnetometerData: CMMagnetometerData?
CMMagnetometerData object provides var magneticField: CMMagneticField
struct CMMagneticField {
var x: Double // in microteslas
var y: Double // in microteslas
var z: Double // in microteslas
}
The data above will be biased
CMDeviceMotion
CMDeviceMotion is a “combined” motion data source
It uses information from all the hardware to improve the data from each.
var deviceMotion: CMDeviceMotion?
Acceleration Data in CMDeviceMotion
var gravity: CMAcceleration
var userAcceleration: CMAcceleration // gravity factored out using gyro
Other information in CMDeviceMotion
var rotationRate: CMRotationRate // bias removed from raw data using accelerometer
var attitude: CMAttitude // device’s attitude (orientation) in 3D space
class CMAttitude: NSObject { // roll, pitch and yaw are in radians
var roll: Double // around longitudinal axis passing through top/bottom
var pitch: Double // around lateral axis passing through sides
var yaw: Double // around axis with origin at CofG and ⊥ to screen directed down
}
var heading: Double // in degrees, where 0 is north (true or magnetic depending on frame)
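Polling the combined source looks much like polling the raw sensors (a sketch, not from the slides):

```swift
import CoreMotion

let manager = CMMotionManager()
if manager.isDeviceMotionAvailable {
    manager.startDeviceMotionUpdates()
    // later, from some existing periodic mechanism:
    if let motion = manager.deviceMotion {
        print("user accel z:", motion.userAcceleration.z)   // gravity factored out
        print("roll:", motion.attitude.roll,
              "pitch:", motion.attitude.pitch,
              "yaw:", motion.attitude.yaw)                   // radians
    }
}
```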
CMDeviceMotion
Reference Frame
Magnetometer use in CMDeviceMotion can be controlled by setting its reference frame.
Specify this when calling startDeviceMotionUpdates.
xArbitraryZVertical // the default, does not use magnetometer
xArbitraryCorrectedZVertical // uses magnetometer (if available) to correct yaw over time
xMagnetic/TrueNorthZVertical // uses magnetometer for device position/heading in world
These last two may require the user to calibrate the magnetometer.
And for TrueNorth, location information (e.g. GPS/Wifi/Cellular) will also be required.
North frames are necessary for apps that use things like Augmented Reality.
To get heading, for example, you must use a MagneticNorth or TrueNorth reference frame.
Always check to make sure the reference frame you want is available on the device …
static func availableAttitudeReferenceFrames() -> CMAttitudeReferenceFrame
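Checking and then requesting a frame might look like this (illustrative; it falls back to the default frame when true north is unavailable):

```swift
import CoreMotion

let frames = CMMotionManager.availableAttitudeReferenceFrames()
let frame: CMAttitudeReferenceFrame =
    frames.contains(.xTrueNorthZVertical) ? .xTrueNorthZVertical : .xArbitraryZVertical

let manager = CMMotionManager()
manager.startDeviceMotionUpdates(using: frame, to: .main) { motion, error in
    if let motion = motion {
        print("heading:", motion.heading)   // meaningful only with a north frame
    }
}
```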
Core Motion
Registering a block to receive Accelerometer data
func startAccelerometerUpdates(to queue: OperationQueue,
withHandler: CMAccelerometerHandler)
typealias CMAccelerometerHandler = (CMAccelerometerData?, Error?) -> Void
queue can be an OperationQueue() you create or OperationQueue.main (or .current)
Registering a block to receive Gyro data
func startGyroUpdates(to queue: OperationQueue,
withHandler: CMGyroHandler)
typealias CMGyroHandler = (CMGyroData?, Error?) -> Void
Registering a block to receive Magnetometer data
func startMagnetometerUpdates(to queue: OperationQueue,
withHandler: CMMagnetometerHandler)
typealias CMMagnetometerHandler = (CMMagnetometerData?, Error?) -> Void
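Putting this together for the accelerometer on a background queue (a sketch; remember to hop back to the main queue before touching UI):

```swift
import CoreMotion

let manager = CMMotionManager()
let queue = OperationQueue()                  // handlers run off the main queue here
manager.accelerometerUpdateInterval = 0.1     // seconds between samples
manager.startAccelerometerUpdates(to: queue) { data, error in
    guard let data = data, error == nil else { return }
    DispatchQueue.main.async {
        print("g:", data.acceleration.x, data.acceleration.y, data.acceleration.z)
    }
}
```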
Core Motion
Registering a block to receive DeviceMotion data
func startDeviceMotionUpdates(using: CMAttitudeReferenceFrame,
queue: OperationQueue,
withHandler: (CMDeviceMotion?, Error?) -> Void)
queue can be an OperationQueue() you create or OperationQueue.main (or .current)
Errors … CMErrorDeviceRequiresMovement
CMErrorTrueNorthNotAvailable
CMErrorMotionActivityNotAvailable
CMErrorMotionActivityNotAuthorized
Core Motion
Setting the rate at which your block gets executed
var accelerometerUpdateInterval: TimeInterval
var gyroUpdateInterval: TimeInterval
var magnetometerUpdateInterval: TimeInterval
var deviceMotionUpdateInterval: TimeInterval
It is okay to add multiple handler blocks
Even though you are only allowed one CMMotionManager
However, each of the blocks will receive the data at the same rate (as set above)
(Multiple objects are allowed to poll at the same time as well, of course.)
Accelerometer Over Time
Historical Accelerometer Data
Sometimes you don’t need to look at the accelerometer in real time.
You just want to know what happened over a period of time in the past.
For example, if you want an idea of the user’s physical movement pattern.
The class CMSensorRecorder can record (at 50hz) and then play back accelerometer data.
Not all devices are capable of this (iPhone 7 and later, Apple Watch).
isAccelerometerRecordingAvailable() -> Bool // whether this device can record
Start recording data …
func recordAccelerometer(forDuration: TimeInterval) // keep this short for performance
Retrieving the recorded data …
func accelerometerData(from: Date, to: Date) -> CMSensorDataList // 3 day max
You enumerate over the CMAccelerometerData objects in a CMSensorDataList with for in …
for dataPoint: CMRecordedAccelerometerData in sensorDataList { . . . }
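A sketch of recording and playback (the Sequence extension is a common workaround so that for-in works over CMSensorDataList in Swift; on some SDK versions it may not be needed):

```swift
import CoreMotion

extension CMSensorDataList: Sequence {
    public func makeIterator() -> NSFastEnumerationIterator {
        return NSFastEnumerationIterator(self)
    }
}

if CMSensorRecorder.isAccelerometerRecordingAvailable() {
    let recorder = CMSensorRecorder()
    recorder.recordAccelerometer(forDuration: 20 * 60)    // record 20 minutes

    // ... some time later (within 3 days) ...
    let now = Date()
    if let list = recorder.accelerometerData(from: now.addingTimeInterval(-20 * 60), to: now) {
        for case let sample as CMRecordedAccelerometerData in list {
            print(sample.startDate, sample.acceleration.z)
        }
    }
}
```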
Activity Monitoring
Rough estimate of what the user is doing
For example, stationary, walking, running, automotive, or cycling.
You track this with a CMMotionActivityManager (not a CMMotionManager!).
func startActivityUpdates(to: OperationQueue, withHandler: (CMMotionActivity?) -> Void)
CMMotionActivity is one of the above activities.
You can also query historical activity with …
func queryActivityStarting(from: Date, to: Date, to: OperationQueue, withHandler: …)
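A minimal sketch of live activity monitoring (CMMotionActivity exposes each activity as a Bool; more than one can be true at once):

```swift
import CoreMotion

let activityManager = CMMotionActivityManager()
if CMMotionActivityManager.isActivityAvailable() {
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        if activity.automotive       { print("driving") }
        else if activity.cycling     { print("cycling") }
        else if activity.running     { print("running") }
        else if activity.walking     { print("walking") }
        else if activity.stationary  { print("stationary") }
    }
}
```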
Pedometer
Pedometer
Getting the user’s “step” information only makes sense over time.
Create a CMPedometer and then send it the message …
func startUpdates(from: Date, withHandler: (CMPedometerData?, Error?) -> Void)
The from Date is allowed to be in the past (but only last 7 days are recorded).
Your handler will be called periodically with the struct CMPedometerData which has …
startDate and endDate of the data
numberOfSteps, distance, averageActivePace, and currentPace during the time
also floorsAscended and floorsDescended
Altimeter
Get relative altitude changes.
func startRelativeAltitudeUpdates(to: OperationQueue, withHandler: (CMAltitudeData?, Error?) -> Void)
CMAltitudeData has both change in altitude in meters and raw atmospheric pressure data.
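Both might be used like this (a sketch; distance and the altimeter values are NSNumbers, and the handlers arrive asynchronously):

```swift
import CoreMotion

let pedometer = CMPedometer()
if CMPedometer.isStepCountingAvailable() {
    // "from" may be in the past (only about the last 7 days are recorded)
    pedometer.startUpdates(from: Date().addingTimeInterval(-60 * 60)) { data, error in
        if let data = data {
            print("steps:", data.numberOfSteps, "distance (m):", data.distance ?? 0)
        }
    }
}

let altimeter = CMAltimeter()
if CMAltimeter.isRelativeAltitudeAvailable() {
    altimeter.startRelativeAltitudeUpdates(to: .main) { data, error in
        if let data = data {
            print("altitude change (m):", data.relativeAltitude,
                  "pressure (kPa):", data.pressure)
        }
    }
}
```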
Core Motion
Checking the authorization status of hardware sensors
Some information is considered “private” to the user (e.g. fitness data).
Specifically CMPedometer, CMSensorRecorder, CMMotionActivityManager and CMAltimeter.
iOS will automatically ask the user (once) for permission to access this information.
You can find out what the status is at any time with this static func on each of these.
static func authorizationStatus() -> CMAuthorizationStatus
enum CMAuthorizationStatus {
case notDetermined // user has not yet been asked
case restricted // fitness data access disabled in Settings
case denied // user has explicitly denied your app access
case authorized // ready to go!
}
Lack of authorization may also show up as an error when you request data.
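For example, checking the pedometer's status before relying on it (the same static func exists on the other three classes):

```swift
import CoreMotion

switch CMPedometer.authorizationStatus() {
case .notDetermined: print("user has not been asked yet")
case .restricted:    print("fitness data access disabled in Settings")
case .denied:        print("user explicitly denied access")
case .authorized:    print("ready to go")
@unknown default:    break
}
```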
UIImagePickerController
Modal view controller to get media from camera or photo library
i.e., you put it up with present(_:animated:completion:)
Usage
1. Create it & set its delegate (it can’t do anything without its delegate)
2. Configure it (source, kind of media, user editability, etc.)
3. Present it
4. Respond to delegate methods when user is done/cancels picking the media
What the user can do depends on the platform
Almost all devices have cameras, but some can record video and some cannot
You can only offer camera or photo library on iPad (not both together at the same time)
As with all device-dependent API, we want to start by checking what’s available …
static func isSourceTypeAvailable(sourceType: UIImagePickerControllerSourceType) -> Bool
Source type is .photoLibrary or .camera or .savedPhotosAlbum (camera roll)
UIImagePickerController
But don’t forget that not every source type can give video
So, you then want to check ...
static func availableMediaTypes(for: UIImagePickerControllerSourceType) -> [String]?
Depending on device, will return one or more of these ...
kUTTypeImage // pretty much all sources provide this, hardly even worth checking for
kUTTypeLivePhoto // must also say kUTTypeImage for this one to work
kUTTypeMovie // audio and video together, only some sources provide this
You can get even more specific about cameras
(Though usually this is not necessary)
static func isCameraDeviceAvailable(_ cameraDevice: UIImagePickerControllerCameraDevice) -> Bool
where the camera device is .rear or .front
There are other camera-specific interrogations too, for example …
static func isFlashAvailableForCameraDevice(UIImagePickerControllerCameraDevice) -> Bool
These are declared in the MobileCoreServices framework.
import MobileCoreServices
UIImagePickerController
Set the source and media type you want in the picker
Example setup of a picker for capturing video (kUTTypeMovie) …
(From here out, UIImagePickerController will be abbreviated UIIPC for space reasons.)
let picker = UIImagePickerController()
let mediaTypeMovie = kUTTypeMovie as String
picker.delegate = self // self must implement UINavigationControllerDelegate too
if UIIPC.isSourceTypeAvailable(.camera) {
picker.sourceType = .camera
if let availableTypes = UIIPC.availableMediaTypes(for: .camera) {
if availableTypes.contains(mediaTypeMovie) {
picker.mediaTypes = [mediaTypeMovie]
// proceed to put the picker up
}
}
}
UIImagePickerController
Editability
var allowsEditing: Bool
If true, then the user will have opportunity to edit the image/video inside the picker.
When your delegate is notified that the user is done, you’ll get both raw and edited versions.
Limiting Video Capture
var videoQuality: UIImagePickerControllerQualityType
.typeMedium // default
.typeHigh
.type640x480
.typeLow
.typeIFrame1280x720 // native on some devices
.typeIFrame960x540 // native on some devices
var videoMaximumDuration: TimeInterval // defaults to 10 minutes
UIImagePickerController
Present the picker
present(picker, animated: true, completion: nil)
On iPad, if you are not offering Camera (just photo library), you must present with popover.
If you are offering the Camera on iPad, then full-screen is preferred.
Remember: on iPad, it’s Camera OR Photo Library (not both at the same time).
Delegate will be notified when user is done
func imagePickerController(UIIPC, didFinishPickingMediaWithInfo info: [String:Any]) {
// extract image/movie data/metadata here from info, more on the next slide
picker.presentingViewController?.dismiss(animated: true) { }
}
Also dismiss it when cancel happens
func imagePickerControllerDidCancel(UIIPC) {
picker.presentingViewController?.dismiss(animated: true) { }
}
UIImagePickerController
What is in that info dictionary?
UIImagePickerControllerMediaType // kUTTypeImage, kUTTypeMovie
UIImagePickerControllerOriginalImage // UIImage
UIImagePickerControllerEditedImage // UIImage
UIImagePickerControllerImageURL // URL (in a temp location, so move it to keep it)
UIImagePickerControllerCropRect // CGRect (in an NSValue)
UIImagePickerControllerMediaMetadata // Dictionary of info about the image
UIImagePickerControllerLivePhoto // a PHLivePhoto
UIImagePickerControllerPHAsset // a PHAsset (see PHPhotoLibrary)
UIImagePickerControllerMediaURL // URL of the video if kUTTypeMovie
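A delegate sketch that pulls the most useful things out of info (using the Swift 4-era string keys shown above; newer SDKs wrap them in UIImagePickerController.InfoKey):

```swift
import UIKit

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : Any]) {
    // prefer the edited image when allowsEditing was true, else the original
    if let image = (info[UIImagePickerControllerEditedImage]
                 ?? info[UIImagePickerControllerOriginalImage]) as? UIImage {
        print("picked image of size", image.size)
    } else if let movieURL = info[UIImagePickerControllerMediaURL] as? URL {
        print("captured movie at", movieURL)
    }
    picker.presentingViewController?.dismiss(animated: true)
}
```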
UIImagePickerController
Saving taken images or video into the device’s photo library
You can save to the user’s Camera Roll …
func UIImageWriteToSavedPhotosAlbum(
_ image: UIImage,
_ target: Any?, // the object to send selector to when finished writing
_ selector: Selector?, // selector to send to target when finished writing
_ context: UnsafeMutableRawPointer? // passed to the selector
)
It’s a bummer that this isn’t closure-based, but it is what it is.
This is a very simple and convenient way to do this.
But this only makes sense if the user only occasionally would want to save an image.
Otherwise, you’ll want to integrate with the Photos application: checkout PHPhotoLibrary.
Of course, you could also save the image into your own sandbox.
You’d do that if the captured images only make sense inside your own app.
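A sketch of the target/selector dance (the selector must have exactly this shape, or you’ll crash at runtime):

```swift
import UIKit

class PhotoSaver: NSObject {
    func save(_ image: UIImage) {
        UIImageWriteToSavedPhotosAlbum(image, self,
            #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
    }
    @objc func image(_ image: UIImage,
                     didFinishSavingWithError error: Error?,
                     contextInfo: UnsafeRawPointer) {
        print(error == nil ? "saved" : "failed: \(String(describing: error))")
    }
}
```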
UIImagePickerController
In general, much more sophisticated media capture is available
This UIImagePickerController API is pretty simple, but more powerful API exists.
Check out both PHPhotoLibrary and AVCaptureDevice.
UIImagePickerController
Overlay View
var cameraOverlayView: UIView
Be sure to set this view’s frame properly.
Camera is always full screen, so use UIScreen.main’s bounds property.
If you use the built-in controls at the bottom, you might want your view to be smaller.
Hiding the normal camera controls (at the bottom)
var showsCameraControls: Bool
Will leave a blank area at the bottom of the screen (camera’s aspect 4:3, not same as screen’s).
With no controls, you’ll need an overlay view with a “take picture” (at least) button.
That button should send takePicture() (or startVideoCapture()) to the picker.
Don’t forget to dismiss when you are done taking pictures.
You can zoom or translate the image while capturing
var cameraViewTransform: CGAffineTransform
For example, you might want to scale the image up to full screen (some of it will get clipped).
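An illustrative overlay setup (names like overlay and shutter are made up for the example):

```swift
import UIKit

let picker = UIImagePickerController()
picker.sourceType = .camera
picker.showsCameraControls = false

let overlay = UIView(frame: UIScreen.main.bounds)   // camera is always full screen
let shutter = UIButton(type: .system)
shutter.setTitle("Take Picture", for: .normal)
shutter.frame = CGRect(x: 0, y: overlay.bounds.height - 80,
                       width: overlay.bounds.width, height: 80)
// wire shutter up (e.g. with addTarget) to call picker.takePicture()
overlay.addSubview(shutter)
picker.cameraOverlayView = overlay

// scale the 4:3 camera image up toward full screen (some of it gets clipped)
picker.cameraViewTransform = CGAffineTransform(scaleX: 1.3, y: 1.3)
```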
Core Image and Vision
Processing Images
Core Image is a powerful and efficient framework for applying filters to your images.
Has a couple of hundred filters to choose from (blur, depth, comparison, colors, smoothing, etc.).
Vision framework provides powerful feature detection in images (e.g. faces, barcodes, etc.).
Core Image also has some feature detection API.
Check out Core Image and Vision in the documentation.
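As a small taste of Core Image, here is a sketch that blurs a UIImage with the built-in CIGaussianBlur filter:

```swift
import UIKit
import CoreImage

func blurred(_ image: UIImage, radius: Double) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()                 // expensive to create; reuse in real code
    guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```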