Image Processing and Computer Vision in iPhone and iPad

Mini-course presented at WVC 2011, Curitiba (Brazil), May 22, 2011.

    Presentation Transcript

    • Image Processing and Computer Vision in iPhone and iPad. Oge Marques, PhD (omarques@fau.edu). VII Workshop de Visão Computacional (WVC), Curitiba, PR - Brazil, May 2011
    • Take-home message •  Mobile image processing and computer vision applications are coming of age. •  This mini-course offers suggestions, ideas, and resources for developing iPhone and iPad apps in this field.
    • Disclaimer •  I’m a teacher, researcher, graduate advisor, author, … •  … not a developer
    • Background and Motivation: Two sides of the coin •  The maturity and popularity of image processing and computer vision techniques and algorithms •  The unprecedented success of the iPhone, iPod Touch, and (more recently) the iPad
    • Motivation •  Rich capabilities of iPhone for image and video processing •  Apple support for image and multimedia: frameworks, libraries, etc. •  Third-party support for iPhone-based development: open APIs, OpenCV, etc. •  Success stories and ever-growing market
    • iPhone 4 technical specifications: 1 GHz ARM Cortex-A8 processor, PowerVR SGX535 GPU, Apple A4 chipset
    • iPhone photo apps •  Huge growth in mobile apps, including image-rich iPhone apps: –  200+ photography-related apps available in iTunes store –  Entire sites for reviews, discussions, etc. –  Subcategories include: •  Camera enhancements •  Image editing and processing (yes, Photoshop on the phone!) •  Image sharing (Instagram: 1 M users in less than 2 months!) •  Image printing, wireless transfer, etc.
    • Smart phone market
    • Smart phone market Source: http://www.cellular-news.com/story/48647.php?s=h
    • Smart phone market •  According to the American Academy of Cosmetic Dentistry: –  5.1 billion people have a cell phone –  4.2 billion people have a toothbrush Source: http://www.softcity.com/contribution/AO4UTMwADN/view/more-cell-phone-owners-than-toothbrush-owners
    • Outline •  Part I – Fundamentals of iOS development – Core concepts, techniques, and tools •  Part II – OpenCV and iOS – Functionality and portability issues •  Part III – Examples and case studies – Commercial apps and students’ projects
    • Part I: Fundamentals of iOS development
    • Brief survey •  Have worked with Object Oriented Programming? •  Have installed the iOS SDK? •  Have developed apps for iPhone / iPod touch / iPad? •  Have submitted apps to the App Store? •  Have developed image processing / computer vision software? •  Have worked with OpenCV?
    • Checklist •  What you’ll need: –  Intel-based Macintosh running Snow Leopard (OS X 10.6.5 or later) –  Sign up to become a registered iOS developer •  Apple requires this step before you’re allowed to download the iOS SDK and many other goodies from the iOS Dev Center –  Xcode 4 and the latest (4.3) version of the iOS SDK –  iOS Simulator –  iPhone, iPod Touch, or iPad (optional)
    • Welcome to the jungle •  Only one active app •  Only one (fixed-size) window •  Limited access (“sandbox”) •  Limited response time •  Limited screen size –  640 × 960: retina display devices (iPhone 4, 4th-gen iPod touch) –  320 × 480: older devices –  1024 × 768: iPad •  Limited system resources •  No garbage collection •  No (physical) keyboard or mouse
    • App Bundle •  Your code •  Any frameworks you’re linking to •  Nib files (interface builder) •  Resources (images, sound, etc.) •  PLIST files (app settings)
    • Tools and documentation •  Xcode •  Interface Builder •  iOS Simulator •  Instruments •  Apple documentation
    • Xcode 4: what you need to know •  Workspace window •  How to open an existing project •  How to navigate through the project’s contents •  How to organize views, subwindows, etc. •  How to use the debugging options •  How to create a new project •  How to edit targets and build settings •  Taking advantage of code completion and other sources of immediate help •  How to find help and integrate with Apple documentation
    • Xcode 4 •  Workspace window
    • Interface Builder •  Workspace window •  How to create a new NIB file •  How to add views and objects to the UI •  How to make connections between IB objects and source code
    • Interface Builder •  Workspace window
    • iOS Simulator •  How it integrates with Xcode •  What it can do –  Simulate different devices and generations (iPhone before and after retina display, iPad) –  Basic device behavior (home button, etc.) –  Selected built-in apps (Safari, Contacts, etc.) –  Simulate rotation, low memory warnings, etc. •  What it cannot do –  Provide a good measure of app performance on the actual device –  Simulate camera, accelerometer, etc.
    • Instruments •  Very rich tool (integrated with Xcode) for app performance and behavior evaluation
    • Example •  Hello, World!
    • iOS technology layers (diagram). Our primary interest: the layer providing 2D/3D graphics, image, audio, and video; most iOS courses, books, etc. concentrate on the other layers.
    • Object Oriented Programming •  Class vs. Instance •  Methods •  Instance Variables (properties)
    • 3 Pillars of OOP •  Encapsulation - hide the details from the outside world •  Polymorphism - different objects, same interface •  Inheritance - acquire the features of the parent and add onto its capabilities
    • Inheritance in Cocoa Touch
    • What is Objective-C? •  Superset of C –  You can mix ObjC and C, or even ObjC and C++ –  Syntax a little different from C –  Strongly or weakly typed (your choice)
    • Additions to C •  Types –  Anonymous object –  Class –  Selectors •  Class definition •  Sending messages
    • File extensions •  Objective-C is a superset of ANSI C and supports the same basic syntax as C. •  Objective-C files use these extensions: .h (header files), .m (Objective-C source files), .mm (Objective-C++ source files). Based on “Learning Objective-C: A Primer”
    • #import vs. #include •  When you want to include header files in your source code, you typically use a #import directive. •  This is like #include, except that it makes sure that the same file is never included more than once. Based on “Learning Objective-C: A Primer”
    • Classes & Instances •  A class is... –  a blueprint to creating an instance –  an object too (type Class )
    • Classes & Objects •  Classes declare state and behavior for a type •  An instance of a class maintains its state through instance variables •  Behavior is implemented through methods •  Instance variables (iVars) are typically hidden –  Usually accessible via getter and setter methods, or the handy property
    • Classes in ObjC •  Specification of a class in ObjC: –  Interface: class declaration, instance variables (ivars), and methods associated with the class. Usually a .h file. –  Implementation: actual code for the methods of the class. Usually a .m file. Based on “Learning Objective-C: A Primer”
    • Class declaration Based on “Learning Objective-C: A Primer”
    • Header File •  Example:
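    The header example on this slide appears only as an image. A minimal sketch of what such a header conventionally looks like, using the Car class from the later slides (the exact ivars and methods here are assumptions):

          // Car.h - hypothetical sketch, not the slide's original code
          #import <Foundation/Foundation.h>

          @interface Car : NSObject {
              double speed_;              // instance variable (ivar)
          }

          @property (nonatomic, assign) double speed;

          - (void)accelerate;
          - (id)initWithPassengers:(int)passengers;

          @end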
    • Implementation File •  All implementation goes between @implementation and @end •  Use #pragma mark to group related blocks of code together
    • Strong and weak types •  ObjC supports both strong and weak typing for variables containing objects. –  Strongly typed variables include the class name in the variable type declaration. –  Weakly typed variables use the type id for the object instead. Typically used for things such as collection classes, where the exact type of the objects in a collection may be unknown. Based on “Learning Objective-C: A Primer”
    • Instances, methods, and messaging •  A class in ObjC can declare two types of methods: –  instance methods: execution is scoped to a particular instance of the class. •  Before you call an instance method, you must first create an instance of the class. –  class methods: do not require you to create an instance. •  These methods must be called on the class. Based on “Learning Objective-C: A Primer”
    • Instances, methods, and messaging •  Method declaration syntax Based on “Learning Objective-C: A Primer”
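    The declaration-syntax figure is not reproduced here. As a stand-in, the anatomy of a typical instance method declaration, using the insertObject:atIndex: method referenced on the following slides:

          // -        method type ("-" = instance method, "+" = class method)
          // (void)   return type
          // insertObject: and atIndex: are the named parts of the signature;
          // each is followed by its parameter type and name
          - (void)insertObject:(id)anObject atIndex:(NSUInteger)index;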
    • Instances, methods, and messaging •  When you want to call a method, you do so by messaging an object. •  A message is the method signature, along with the parameter information the method needs. •  All messages you send to an object are dispatched dynamically, thus facilitating the polymorphic behavior of Objective-C classes. Based on “Learning Objective-C: A Primer”
    • Messaging •  When you call a method, it’s referred to as sending that message to the receiving object. •  Message expression: [receiver method:argument]; •  Example: [myCar accelerate]; will send the accelerate message to the object myCar
    • Message (cont.) •  Named arguments [myCar turnOnRadio]; [myCar turnOnRadioWithStation:93.1]; [myCar turnOnRadioWithStation:93.1 volume:0.8]; •  Definitions of these methods: - (void)turnOnRadio; - (void)turnOnRadioWithStation:(double)station; - (void)turnOnRadioWithStation:(double)station volume:(double)volume;
    • Instances, methods, and messaging •  Messaging example: –  To send the insertObject:atIndex: message to an object in the myArray variable, you would use the following syntax: [myArray insertObject:anObject atIndex:0]; Based on “Learning Objective-C: A Primer”
    • Instances, methods, and messaging •  Messages can be nested. –  Example: •  If you had another object called myAppObject that had methods for accessing the array object and the object to insert into the array, you could write the preceding example to look something like the following: [[myAppObject theArray] insertObject:[myAppObject objectToInsert] atIndex:0]; Based on “Learning Objective-C: A Primer”
    • Instances, methods, and messaging •  ObjC also provides a dot syntax for invoking accessor/mutator (getter/setter) methods. •  Using dot syntax, you could rewrite the previous example as: [myAppObject.theArray insertObject: [myAppObject objectToInsert] atIndex:0]; Based on “Learning Objective-C: A Primer”
    • Instances, methods, and messaging •  You can also use dot syntax for assignment. •  Example: myAppObject.theArray = aNewArray; is simply a different syntax for writing [myAppObject setTheArray:aNewArray]; Based on “Learning Objective-C: A Primer”
    • Instances, methods, and messaging •  To use the dot syntax, your getter and setter methods must have a specific method signature: - (NSString*)myString; //getter - (void)setMyString:(NSString*)aString //setter
    • Instances, methods, and messaging •  Class methods –  Typically used as factory methods to create new instances of the class or for accessing some piece of shared information associated with the class. –  The syntax for a class method declaration is identical to that of an instance method, with one exception. •  Instead of using a minus sign for the method type identifier, you use a plus (+) sign. Based on “Learning Objective-C: A Primer”
    • Instances, methods, and messaging •  Class methods –  Example: In this case, the array method is a class method on the NSArray class - and inherited by NSMutableArray - that allocates and initializes a new instance of the class and returns it to your code.
          NSMutableArray *myArray = nil; // nil is essentially the same as NULL
          // Create a new array and assign it to the myArray variable.
          myArray = [NSMutableArray array];
      Based on “Learning Objective-C: A Primer”
    • Properties •  Properties are a shortcut for creating getter/ setter methods •  Can still use dot notation because getter/setter methods are being used under the hood double speed = myCar.speed; •  Attributes of the property are declared with the property itself: – read-only/read-write access – Memory management policy
    • Properties (.h file)
          @interface Car : NSObject {
              double speed_;
          }
          @property (nonatomic, assign) double speed;
          @end
    • Properties (.m file) •  @synthesize - creates the getter/setter methods for you based on declaration •  @dynamic - allows you to create the getter/ setter methods
    • Properties (.m file) •  Example:
          @implementation Car
          @synthesize speed = speed_;
          // rest of your class implementation
          @end
    • Read-only / read-write
          @property double speed; // read-write by default
          @property (readonly) double speed;
    • Initializing Instances •  2 steps: –  Allocate –  Initialize •  Example: Car *myCar = [[Car alloc] init]; –  the myCar variable is a pointer, but that’s all you need to know about pointers •  Some classes have additional initialization methods: Car *myCar = [[Car alloc] initWithPassengers:2];
    • Types •  id - a reference to any object –  Car *myCar; (static typing) –  id myCar; (notice no *)(dynamic typing) •  NSObject vs. id –  id is a placeholder for anything –  NSObject is the root class –  No compile-time warning when sending messages to id –  Messages sent to an NSObject will be checked
    • nil •  nil represents nothing or null if (myCar == nil) { do something } if (!myCar) { do something } Car *myCar = nil; •  Messages CAN be sent to nil: Car *myCar = nil; [myCar accelerate];
    • BOOL • Assigned values of YES or NO BOOL isCarOn = NO; if (isCarOn == YES) if (isCarOn) if (!isCarOn) isCarOn = 1;
    • The Class object •  An object that gives details about a given class (i.e., name, inheritance, etc.)
          Class mustangClass = [Mustang class];
          [mustangClass isSubclassOfClass:[Car class]];
      •  Introspection - asking an object about itself
          if ([myObject isKindOfClass:[Car class]]) {
              // do something, since we know it's a Car
              Car *myCar = (Car*)myObject;
          }
    • Identity & Equality •  Identity - if two pointers point to the same memory if (objectA == objectB) { //objectA and objectB are the same instance } •  Equality - if contents are semantically equal if ([objectA isEqual:objectB]) { //different instances, but conceptually equal }
    • NSObject •  Root class •  Built-in implementation that all sub-classes inherit –  Memory management –  Introspection –  Object equality
    • NSObject’s description method •  NSObject defines the description method: - (NSString*)description; –  By default, returns memory location and type –  You can override this to show more meaningful info
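    A minimal sketch of such an override, reusing the hypothetical Car class from the earlier slides:

          // In Car.m
          - (NSString *)description {
              return [NSString stringWithFormat:@"Car going %.1f mph", self.speed];
          }

          // NSLog(@"%@", myCar); now prints the custom description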
    • NSString •  Abstracts unicode characters •  Always use this instead of char* (unless you absolutely need to) –  If you’re invoking a C function that requires a char*, you can get one from an NSString through the UTF8String method
          NSString *myString = @"Hello, World!";
    • NSString (cont.) •  Formatting strings
          NSString *name = @"Bob";
          NSString *hello = [NSString stringWithFormat:@"Hello, %@", name];
      –  uses standard printf formatting •  Logging strings
          NSLog(@"%@", [myObject description]);
          NSLog(@"Hello, %@", name);
      •  Lots of other useful methods on NSString (see documentation)
    • NSString (cont.) •  NSString is immutable •  @"hello" is an NSString –  e.g., if a method exists on a Logger class: - (void)logMessage:(NSString*)message; –  calling it with a string:
          Logger *logger = [[Logger alloc] init];
          [logger logMessage:@"Hello, World!"];
      –  No need to instantiate an NSString object, just pass the string in directly
    • Collections •  NSArray - an ordered collection of objects •  NSDictionary - collection of key-value pairs •  NSSet - unordered collection of unique objects –  These are all immutable collections. •  Mutable versions: NSMutableArray, NSMutableDictionary, NSMutableSet –  Careful with exposing these as properties
    • NSNumber •  A wrapper class that abstracts the concept of a number (int, double, float, etc). NSNumber *myInt = [NSNumber numberWithInt:10]; NSNumber *myDouble = [NSNumber numberWithDouble: 5.2]; int a = [myInt intValue]; double b = [myDouble doubleValue]; •  Useful when you need to store a number in a dictionary (because dictionaries only accept NSObjects).
    • Example •  Car
    • Memory management in ObjC •  Reference counting –  Every object has a retain count (how many other objects are interested in it) •  As long as retain count > 0, object is alive •  When retain count drops to 0, memory is freed
    • Reference Counting •  [myObject retain] will increment the retain count •  [myObject release] will decrement the retain count •  Once retain count = 0, object is no longer available; no turning back.
    • Reference Counting •  Calling alloc sets retain count to 1: Car *myCar = [[Car alloc] init]; –  Retain count on myCar = 1
    • Dealloc •  The dealloc method deallocates memory for an object •  dealloc is never explicitly called by your code!!! NSObject handles this call for you. •  In your dealloc method, you release any objects you have a reference to
    • Dealloc You override dealloc when you need to clean up (release) your own iVars:
          - (void)dealloc {
              // clean up any objects referenced
              [super dealloc];
          }

          - (void)dealloc {
              [name_ release]; // clean up any objects referenced
              [super dealloc];
          }
    • Ownership You will typically use properties, which generate getter/setter methods automatically. Internally, this is what they look like (or how you would code it if you did it yourself):
          - (void)setCar:(Car*)car {
              [car_ autorelease];
              car_ = [car retain];
          }

          - (void)setName:(NSString*)name {
              [name_ autorelease];
              name_ = [name copy];
          }
    • Autorelease •  Solves the problem where you need to return a newly created object before releasing it. •  Example:
          + (NSString*)fullNameWithFirstName:(NSString*)first lastName:(NSString*)last {
              NSString *full = [[NSString alloc] initWithFormat:@"%@ %@", first, last];
              [full autorelease];
              return full;
          }
    • Autorelease •  Use the same release rules as before •  Even if retain count goes to 0, object will stick around for a while longer •  System will automatically get rid of it –  Q: What manages auto-released objects? –  A: Auto release pools
    • Autorelease Pools •  You typically don’t create them yourself (except for threading cases) •  When an autorelease pool is “drained,” any object that was sent the message “autorelease” will get released •  On the main thread, this is done automatically –  At the end of each run loop execution •  Avoid autoreleasing if possible. Better to alloc/release (frees memory right away).
    • Rules & Conventions •  When do you release an object? •  Methods that contain the words alloc or copy return a retained object - you must release it •  Any other method is returning an autoreleased object. •  You should follow this convention when defining your own methods!
    • Design Patterns •  A design pattern is usually defined as “a template for a design that solves a general, recurring problem in a particular context” •  Use them! (don’t go against them) •  Design patterns used often in iOS development: –  Model-View-Controller (MVC) –  Delegate –  Target-Action
    • MVC •  Model - The entity that models the data •  View - The user interface •  Controller - Manages the connection between view and model
    • MVC - Model •  The model should only be concerned with the data. •  Nothing to do with visual representation •  Can be persisted •  Models should be reusable
    • MVC - View •  The presentation of the data (or model) •  Allows user to interact with the data •  Does not store data, except for caching •  Easily reusable
    • MVC - Controller •  Brings the Model and View together •  Receives events from View, and acts on the Model •  Updates the View when the Model changes •  This is where app logic is typically written
    • Delegation •  Delegation is a design pattern in which one object in a program acts on behalf of, or in coordination with, another object. •  The delegating object keeps a reference to the other object (the delegate) and at the appropriate time sends a message to it. •  The message informs the delegate of an event that the delegating object is about to handle or has just handled. •  The delegate may respond to the message by updating the appearance or state of itself or other objects in the application, and in some cases it can return a value that affects how an impending event is handled.
    • Delegation •  The main value of delegation: it allows you to easily customize the behavior of several objects in one central object. •  This concept is closely related to the concept of a callback function, used, for example, in Ajax-based web development. –  A callback is a function that is triggered when an event occurs. –  Since we usually don’t know when the event will occur, we set a callback that acts as a listener for that event. –  In Cocoa Touch, callbacks are implemented using delegation. •  Most iOS applications will include delegates for some of their classes.
    • Delegation •  The concept of delegation is closely related to the Objective-C construct known as protocol. •  A protocol is simply a list of method declarations, unattached to a class definition. •  Protocols define an interface that other objects are responsible for implementing. •  When you implement the methods of a protocol in one of your classes, your class is said to conform to that protocol. •  Protocols are used to specify the interface for delegate objects.
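    A minimal sketch of a protocol used for delegation, again built around the hypothetical Car class (the CarDelegate protocol and its methods are invented for illustration):

          @class Car;

          @protocol CarDelegate <NSObject>
          - (void)carDidStart:(Car *)car;                      // required by default
          @optional
          - (void)car:(Car *)car didChangeSpeed:(double)speed;
          @end

          @interface Car : NSObject {
              id <CarDelegate> delegate_;
          }
          // Delegates are conventionally assigned, not retained (avoids retain cycles)
          @property (nonatomic, assign) id <CarDelegate> delegate;
          @end

          // Inside Car, when the event occurs:
          //   if ([delegate_ respondsToSelector:@selector(carDidStart:)])
          //       [delegate_ carDidStart:self];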
    • Target-Action •  Allows you to handle actions or events generated by a UI control •  Some events: –  touchDown –  touchUpInside –  valueChanged
    • Target-Action •  3 things are defined: –  Target: myObject –  Action: @selector(convertFtoC:) –  Event: UIControlEventTouchUpInside
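    Wired up in code rather than in Interface Builder, those three pieces come together through UIControl's addTarget:action:forControlEvents: method. A sketch, assuming a UIButton *button and a target myObject that implements the action:

          [button addTarget:myObject
                     action:@selector(convertFtoC:)
           forControlEvents:UIControlEventTouchUpInside];

          // myObject must implement the action method:
          // - (void)convertFtoC:(id)sender { ... }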
    • Application event loop
    • MVC in Cocoa Touch •  Create your View components using Interface Builder –  Sometimes you’ll create / modify your interface from code. –  You might also subclass existing views and controls. •  Model = Objective-C classes designed to hold your application’s data, or a data model using Core Data •  Controller = typically composed of classes that you create and that are specific to your app. –  Controllers can be completely custom classes (NSObject subclasses), but more often, they will be subclasses of one of several existing generic controller classes from the UIKit framework, such as UIViewController. –  By subclassing one of these existing classes, you will get a lot of functionality for free and won’t need to spend time recoding the wheel, so to speak.
    • Outlets •  Outlets –  Our controller class can refer to objects in the nib file by using a special kind of instance variable called an outlet. –  Think of an outlet as a pointer that points to an object within the nib. •  Example: suppose you created a text label in Interface Builder and wanted to change the label’s text from within your code. By declaring an outlet and connecting that outlet to the label object, you could use the outlet from within your code to change the text displayed by the label.
    • Actions •  Actions –  Going in the opposite direction, interface objects in our nib file can be set up to trigger special methods in our controller class. –  These special methods are known as action methods. •  Example: you can tell Interface Builder that when the user touches up (pulls a finger off the screen) within a button, a specific action method within your code should be called.
    • Outlets •  More on Outlets –  Outlets are instance variables that are declared using the keyword IBOutlet. •  A declaration of an outlet in your controller’s header file might look like this: @property (nonatomic, retain) IBOutlet UIButton *myButton;
    • Outlets •  More on Outlets –  IBOutlet does absolutely nothing as far as the compiler is concerned. •  Its sole purpose is to act as a hint to tell Interface Builder that this is an instance variable that we’re going to connect to an object in a nib file. •  Any instance variable that you create and want to connect to an object in a nib file must be preceded by the IBOutlet keyword.
    • Outlets •  Recent changes –  Before iOS 4: IBOutlet UIButton *myButton; –  Since then, Apple’s sample code has been moving toward placing the IBOutlet keyword in the property declaration: @property (nonatomic, retain) IBOutlet UIButton *myButton; –  Both mechanisms are supported.
    • Actions •  Actions are methods that are part of your controller class. –  They are also declared with a special keyword, IBAction, which tells Interface Builder that this method is an action and can be triggered by a control. –  Typically, the declaration for an action method will look like this: - (IBAction)doSomething:(id)sender; –  The actual name of the method can be anything you want, but it must have a return type of IBAction, which is the same as declaring a return type of void.
    • Actions •  If you don’t need to know which control called your method, you can also define action methods without a sender parameter: - (IBAction)doSomething;
    • Example •  Button Fun –  Creating the View Controller •  ButtonFun_ViewController.h
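    The header itself appears only as an image on the slide. A plausible sketch of ButtonFun_ViewController.h, with the outlet and action names invented for illustration:

          // ButtonFun_ViewController.h - hypothetical sketch
          #import <UIKit/UIKit.h>

          @interface ButtonFun_ViewController : UIViewController {
              UILabel *statusLabel;
          }

          @property (nonatomic, retain) IBOutlet UILabel *statusLabel;

          - (IBAction)buttonPressed:(id)sender;

          @end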
    • Example ButtonFun_ViewController.m
    • Example ButtonFun_ViewController.m
    • Example •  Button Fun –  Creating the App Delegate Button_FunAppDelegate.h
    • Example •  Button Fun –  The UIApplicationDelegate protocol
    • Example Button_FunAppDelegate.m
    • Example Button_FunAppDelegate.m
    • Example •  Button Fun –  Editing MainWindow.xib
    • Example •  Button Fun –  Editing ButtonFun_ViewController.xib
    • Example •  Button Fun –  Screenshots
    • Another Example •  SayMyName –  Screenshots
    • UI design aspects •  The iOS development environment adheres to human interface guidelines, strategies and best practices. –  See http://tinyurl.com/3yj7b5y •  Each iPhone or iPad app consists of familiar UI elements, organized in a somewhat predictable way, and associated with equally predictable actions. •  UI elements can be selected, configured, and customized interactively using Interface Builder. –  They can also be created and manipulated programmatically.
    • A guided tour of UI elements
    • Example •  Control Fun 
    • Camera, photo library, image manipulation •  The iPhone, iPod touch, and second-generation iPad each have one or two built-in cameras and a built-in application called Photos that allows you to manage the device’s photo and video library. •  Due to the “sandboxed” nature of iOS, an application cannot have direct access to photographs or any other data that live outside its own sandbox. –  This limitation has been circumvented by exposing the camera and the media library to other applications by way of an image picker mechanism.
    • Camera, photo library, image manipulation •  The main classes that you need to understand in order to develop basic applications involving images, camera, and photo library for the iPhone are: –  UIImageView –  UIImagePickerController •  (and its associated protocol, UIImagePickerControllerDelegate).
    • UIImageView •  An image view object provides a view-based container for displaying either a single image or for animating a series of images. •  New image view objects are configured to disregard user events by default. –  If you want to handle events in a custom subclass of UIImageView, you must explicitly change the value of the userInteractionEnabled property to YES after initializing the object. •  When a UIImageView object displays one of its images, the actual behavior is based on the properties of the image and the view. –  If either the image’s leftCapWidth or topCapHeight property is non-zero, the image is stretched according to the values in those properties. –  Otherwise, the image is scaled, sized to fit, or positioned in the image view according to the contentMode property of the view.
    • UIImageView •  Apple recommends that you use images that are all the same size. –  If the images are different sizes, each will be adjusted to fit separately based on that mode. •  All images associated with a UIImageView object should use the same scale. –  If your application uses images with different scales, they may render incorrectly.
    • UIImagePickerController •  The UIImagePickerController class provides basic, customizable user interfaces for taking pictures and movies and for giving the user some simple editing capability for newly-captured media. •  The role and appearance of an image picker controller depend on the source type you assign to it before you present it: –  A sourceType of UIImagePickerControllerSourceTypeCamera provides a user interface for taking a new picture or movie (on devices that support media capture). –  A sourceType of UIImagePickerControllerSourceTypePhotoLibrary or UIImagePickerControllerSourceTypeSavedPhotosAlbum provides a user interface for choosing among saved pictures and movies.
    • UIImagePickerControllerDelegate •  The UIImagePickerControllerDelegate protocol defines methods that your delegate object must implement to interact with the image picker interface. •  The methods of this protocol notify your delegate when the user either picks an image or movie, or cancels the picker operation. •  The delegate methods are responsible for dismissing the picker when the operation completes.
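    A minimal, era-appropriate sketch of the picker and its delegate methods (manual reference counting and the pre-iOS 5 modal-presentation API; the imageView outlet is an assumption):

          // In a view controller that adopts UIImagePickerControllerDelegate
          // and UINavigationControllerDelegate
          - (IBAction)pickImage:(id)sender {
              UIImagePickerController *picker = [[UIImagePickerController alloc] init];
              if ([UIImagePickerController isSourceTypeAvailable:
                      UIImagePickerControllerSourceTypeCamera]) {
                  picker.sourceType = UIImagePickerControllerSourceTypeCamera;
              } else {
                  picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
              }
              picker.delegate = self;
              [self presentModalViewController:picker animated:YES];
              [picker release];
          }

          - (void)imagePickerController:(UIImagePickerController *)picker
                  didFinishPickingMediaWithInfo:(NSDictionary *)info {
              UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
              imageView.image = image;   // assumed UIImageView outlet
              [self dismissModalViewControllerAnimated:YES];
          }

          - (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
              [self dismissModalViewControllerAnimated:YES];
          }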
    • The UIImage class •  A UIImage object is a high-level way to display image data. •  Images can be created from files, from Quartz image objects, or from raw image data received by the application. •  The UIImage class also offers several options for drawing images to the current graphics context using different blend modes and opacity values. •  The UIImage class supports the most popular image file formats.
    • The UIImage class •  Recommendations and warnings: –  Image objects are immutable, i.e., you cannot change their properties after creation. •  This means that you generally specify an image’s properties at initialization time or rely on the image’s metadata to provide the property value. •  Because image objects are immutable, they also do not provide direct access to their underlying image data. –  In low-memory situations, image data may be purged from a UIImage object to free up memory on the system. •  This purging behavior affects only the image data stored internally by the UIImage object and not the object itself. –  Avoid creating UIImage objects that are greater than 1024 × 1024 in size.
    • Example •  Camera –  Illustrates the image picker mechanism
    • Example •  Camera
    • The Media Layer •  The Media Layer of the iOS architecture contains the graphics, audio, and video technologies that provide multimedia support on a mobile device. •  The simplest (and most efficient) way to create an application is to use pre-rendered images together with the standard views and controls of the UIKit framework and let the system do the drawing. –  However, there may be situations where additional technologies are needed to manage the application’s graphical content.
    • The Media Layer •  Core Graphics framework: contains the interfaces for the Quartz 2D drawing API, which handles native 2D vector- and image-based rendering. •  Core Animation (part of the Quartz Core framework): provides advanced support for animating views and other content. •  OpenGL ES framework: provides support for 2D and 3D rendering using hardware-accelerated interfaces.
    • The Media Layer •  Core Text framework: contains a set of simple, high-performance C-based interfaces for laying out text and handling fonts. •  Image I/O: provides interfaces for reading and writing most image formats and associated image metadata. –  This framework makes use of the Core Graphics data types and functions and supports all of the standard image types available in iOS. •  Assets Library framework: provides a query-based interface for retrieving photos and videos from the user’s device.
    • Quartz 2D •  Advanced, two-dimensional drawing engine available for iOS application development. •  Provides low-level, lightweight 2D rendering with unmatched output fidelity regardless of display or printing device. •  Resolution- and device-independent. •  The Quartz 2D application programming interface (API) is easy to use and provides access to powerful features. –  The Quartz 2D API is part of the Core Graphics framework, so you may see Quartz referred to as Core Graphics or, simply, CG. •  In iOS, Quartz 2D works with all available graphics and animation technologies, such as Core Animation, OpenGL ES, and the UIKit classes.
    • Quartz 2D •  The painter’s model
    • Quartz 2D •  Graphics context –  A graphics context represents a drawing destination. •  It contains drawing parameters and all device-specific information that the drawing system needs to perform any subsequent drawing commands. –  A graphics context defines basic drawing attributes such as the colors to use when drawing, the clipping area, line width and style information, font information, compositing options, etc. –  A graphics context is represented in code by the data type CGContextRef. •  After you obtain a graphics context, you can use Quartz 2D functions to draw to the context, perform operations (such as translations) on the context, and change graphics state parameters, such as line width and fill color.
    • Quartz 2D •  Bitmap graphics context –  Allows you to paint RGB colors, CMYK colors, or grayscale into a bitmap. •  A bitmap is a rectangular array (or raster) of pixels, each pixel representing a point in an image. •  Bitmap images are also called sampled images. –  A bitmap graphics context accepts a pointer to a memory buffer that contains storage space for the bitmap. •  When you paint into the bitmap graphics context, the buffer is updated. •  After you release the graphics context, you have a fully updated bitmap in the pixel format you specify.
    • Creating a bitmap graphics context
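    The slide's code appears only as an image. A minimal sketch of creating a 32-bit RGBA bitmap context (the dimensions are placeholders):

          size_t width  = 256;
          size_t height = 256;
          CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
          CGContextRef context = CGBitmapContextCreate(
              NULL,           // let Quartz allocate the pixel buffer
              width, height,
              8,              // bits per component
              width * 4,      // bytes per row (4 bytes per RGBA pixel)
              colorSpace,
              kCGImageAlphaPremultipliedLast);
          CGColorSpaceRelease(colorSpace);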
    • Drawing to a bitmap graphics context
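    Again as a sketch, painting into the context created above and snapshotting the result into a UIImage:

          CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);        // opaque red
          CGContextFillRect(context, CGRectMake(0, 0, width, height));  // buffer is updated
          CGImageRef cgImage = CGBitmapContextCreateImage(context);     // copy out the bitmap
          UIImage *result = [UIImage imageWithCGImage:cgImage];
          CGImageRelease(cgImage);
          CGContextRelease(context);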
    • Bitmap images and image masks •  A bitmap image is an array of pixels, where each pixel represents a single point in the image. •  Bitmap images are restricted to rectangular shapes, but with the use of the alpha component, they can appear to take on a variety of shapes and can be rotated and clipped. •  Each sample in a bitmap contains one or more color components in a specified color space, plus one additional component that specifies the alpha value to indicate transparency. –  Each component can be from 1 to as many as 32 bits.
    • Bitmap images and image masks •  When you create and work with Quartz images (which use the CGImageRef data type), you will notice that some Quartz image-creation functions require that you specify all this information, while other functions require a subset of this information. –  What you provide depends on the encoding used for the bitmap data, and whether the bitmap represents an image or an image mask.
    • Functions for creating images
    • Image masks •  An image mask is a bitmap that specifies an area to paint, but not the color. •  A Quartz bitmap image mask is used the same way an artist uses a silkscreen. •  A bitmap image mask determines how color is transferred, not which colors are used. •  Each sample value in the image mask specifies the amount that the current fill color is masked at a specific location. •  The sample value specifies the opacity of the mask. –  Larger values represent greater opacity and specify locations where Quartz paints less color.
    • Image masks •  The function CGImageMaskCreate creates a Quartz image mask from bitmap image information that you supply.
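    A minimal sketch of its use with raw 8-bit grayscale samples (maskBytes, the dimensions, and the destination context are assumptions):

          // maskBytes: width*height raw 8-bit grayscale samples supplied by the caller
          CGDataProviderRef provider =
              CGDataProviderCreateWithData(NULL, maskBytes, width * height, NULL);
          CGImageRef mask = CGImageMaskCreate(
              width, height,
              8,        // bits per component
              8,        // bits per pixel (one grayscale sample)
              width,    // bytes per row
              provider, NULL, false);
          CGContextClipToMask(context, CGRectMake(0, 0, width, height), mask);
          // subsequent painting in this context is filtered through the mask
          CGImageRelease(mask);
          CGDataProviderRelease(provider);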
    • Image I/O •  The Image I/O programming interface allows applications to read and write most image file formats. –  Originally part of the Core Graphics framework, Image I/O now resides in its own framework to allow developers to use it independently of Core Graphics (Quartz 2D). •  Image I/O provides the definitive way to access image data because it is highly efficient, allows easy access to metadata, and provides color management. •  The Image I/O framework provides opaque data types for reading image data from a source (CGImageSourceRef) and writing image data to a destination (CGImageDestinationRef). •  The Image I/O framework understands most of the common image file formats.
    • iPad •  Different considerations must be made when building an iPad app •  Because of the larger screen, users expect a more immersive experience •  You do NOT want to simply auto-size your iPhone app to stretch out to the new dimensions –  This is a horrible waste of space, and makes for a poor UX
    • iPad •  Some new controls –  UISplitViewController –  Popovers •  Other differences from iPhone –  Due to the physical design of the iPad, it is much easier to turn the device to a different orientation. Therefore, it is assumed that iPad apps support all orientations •  This is not the case with iPhone
    • UISplitViewController
    • Popovers
    • iPad •  Two ways to build for iPad: –  Create an iPad-only project •  All code will be specific for iPad •  It is more difficult to reuse code this way •  Good if you only plan on releasing an iPad app –  Create a universal app •  Will support both iPhone and iPad in one download •  Allows you to easily share code
    • Building an iPad-only project •  Use the same concepts as you would with an iPhone app –  MVC –  Delegation –  Etc. •  The only real difference is how views are laid out on the screen (be sure to use the screen real estate appropriately).
    • Building a universal app •  Still use the same patterns from iPhone development •  The project will consist of 2 different versions of each of your views –  If your views are built in NIB files, then you would have 2 NIB files for each view: one for iPhone, and another for iPad –  If you are subclassing UIView for any of your views, you will have 2 subclasses: one for iPhone, and another for iPad
    • Building a universal app •  Your project will consist of 2 versions of each of your view controllers –  The problem here is that the view controllers typically contain business logic (business rules, logical flow of the app, etc.) –  You do not want to duplicate business logic –  The answer: •  Create a base class for each view controller •  This base class contains shared code (anything that is reusable between iPhone/iPad) •  You then create 2 subclasses of this base class: one for iPhone, and another for iPad •  These subclasses contain the details that only pertain to that form.
    • Building a universal app •  Creating subclasses is fine, but how does the app know which one to create at runtime? –  It all starts from the main NIB file –  In a universal app, there are 2 main NIB files: one for iPhone, and another for iPad –  The PLIST configuration file settings indicate which NIB gets loaded for a specific form factor –  Your job: Customize these NIB files so that it starts loading the proper classes –  This creates a chain reaction
    • Building a universal app •  What about just having an IF statement that checks for the type of device?
          if (device == iPad)
              // do something for iPad
          else
              // do something for iPhone
      •  NO!!! This is bad! –  Conceptually this may work (and it would), however the logic in your view controllers becomes cluttered with IF-ELSE statements –  Very difficult to maintain •  Apple’s recommended method: create subclasses instead of IF-ELSE logic
    • Part II: OpenCV and iOS
    • OpenCV •  OpenCV (Open Source Computer Vision) is a library of programming functions for real- time computer vision. •  OpenCV is released under a BSD license; it is free for both academic and commercial use. •  Goal: to provide a simple-to-use computer vision infrastructure that helps people build fairly sophisticated vision applications quickly. •  The library has 2000+ optimized algorithms. –  It is used around the world, has >2M downloads and >40K people in the user group.
    • OpenCV •  5 main components: 1.  CV: basic image processing and higher-level computer vision algorithms 2.  ML: machine learning algorithms 3.  HighGUI: I/O routines and functions for storing and loading video and images 4.  CXCore: basic data structures and content upon which the three components above rely 5.  CvAux: defunct areas + experimental algorithms; not well-documented.
    • OpenCV •  Image representation: IplImage –  Basic structure used to encode grayscale, color, or four-channel (RGB + alpha) images. •  Moreover, each channel may contain any of several types of integer or floating-point numbers, which makes for a very flexible representation. •  For practical purposes, we can say that an IplImage is derived from another class, CvMat, which can be thought of as inherited from a third, abstract base class, CvArr.
    • OpenCV •  Image processing and computer vision: –  Filters –  Morphological operators –  Geometric transforms –  Edge detection –  Hough transform –  DFT, DCT –  Histogram-based matching –  Template matching –  Freeman codes –  Shape matching –  Foreground-background segmentation –  Inpainting –  Corner and interest point detection –  Optical flow –  Object tracking in video sequences –  Extensive support for stereo imaging –  etc. •  Machine learning: –  K-means clustering algorithm –  Naive Bayes classifier –  Binary decision trees –  Boosting algorithms, e.g., AdaBoost –  Viola-Jones classifier and its application to face detection –  Expectation maximization (EM) clustering –  K-nearest neighbors (KNN) classifier –  Support vector machine (SVM) –  etc.
    • OpenCV and iOS •  Since OpenCV is not available as a bundled library in Apple’s iOS framework, we must compile and bring in the OpenCV libraries and headers ourselves. •  Step-by-step setup described in detail at http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en –  Follow the instructions and look for possible updates. •  Once you have completed the setup process, you should have access to both compiled versions of OpenCV: –  one used when building our applications to test on the iOS Simulator; –  one needed for final deployment to the actual device.
    • OpenCV and iOS •  When working with OpenCV in the iOS SDK: 1.  Convert the UIImage object to an IplImage structure; 2.  Apply OpenCV library function(s); 3.  Convert IplImage back to a UIImage before attempting to display it on the screen.
    • OpenCV and iOS •  Code to convert from UIImage to IplImage –  Note that IplImage also needs to be released by using cvReleaseImage.
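    The conversion code shown on the slide is not reproduced in the transcript. A sketch of the commonly used approach from the article referenced above, which draws the CGImage into the IplImage's pixel buffer:

          // Produces a 4-channel (RGBA) IplImage; the caller must cvReleaseImage it.
          - (IplImage *)createIplImageFromUIImage:(UIImage *)image {
              CGImageRef imageRef = image.CGImage;
              CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
              IplImage *iplImage = cvCreateImage(
                  cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
              CGContextRef context = CGBitmapContextCreate(
                  iplImage->imageData,
                  iplImage->width, iplImage->height,
                  iplImage->depth,       // 8 bits per component
                  iplImage->widthStep,   // bytes per row
                  colorSpace,
                  kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
              CGContextDrawImage(context,
                  CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
              CGContextRelease(context);
              CGColorSpaceRelease(colorSpace);
              return iplImage;
          }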
    • OpenCV and iOS •  Code to convert from IplImage to UIImage
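    The reverse direction, again as a sketch of the commonly used approach (it assumes the IplImage holds displayable data in a channel order Quartz can interpret):

          - (UIImage *)UIImageFromIplImage:(IplImage *)image {
              NSData *data = [NSData dataWithBytes:image->imageData
                                            length:image->imageSize];
              CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
              CGDataProviderRef provider =
                  CGDataProviderCreateWithCFData((CFDataRef)data);
              CGImageRef imageRef = CGImageCreate(
                  image->width, image->height,
                  image->depth,                       // bits per component
                  image->depth * image->nChannels,    // bits per pixel
                  image->widthStep,                   // bytes per row
                  colorSpace,
                  kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                  provider, NULL, false, kCGRenderingIntentDefault);
              UIImage *result = [UIImage imageWithCGImage:imageRef];
              CGImageRelease(imageRef);
              CGDataProviderRelease(provider);
              CGColorSpaceRelease(colorSpace);
              return result;
          }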
    • OpenCV and iOS •  Edge detection code snippet
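    The snippet itself is not reproduced in the transcript. A minimal cvCanny sketch operating on an IplImage obtained as above (the thresholds are placeholders):

          // srcImage: 4-channel IplImage converted from a UIImage
          IplImage *gray  = cvCreateImage(cvGetSize(srcImage), IPL_DEPTH_8U, 1);
          IplImage *edges = cvCreateImage(cvGetSize(srcImage), IPL_DEPTH_8U, 1);
          cvCvtColor(srcImage, gray, CV_RGBA2GRAY);   // collapse to one channel
          cvCanny(gray, edges, 50, 200, 3);           // low/high thresholds, aperture size
          // ... convert edges back to a UIImage for display ...
          cvReleaseImage(&gray);
          cvReleaseImage(&edges);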
    • OpenCV and iOS •  Example: OpenCV sample app –  Based on code from Yoshimasa Niwa –  Demonstrates two functions: •  Edge detection •  Face detection
    • OpenCV and iOS •  Main methods –  opencvFaceDetect: iOS-compatible wrapper around the OpenCV library function that implements the Haar classifier (cvHaarDetectObjects). –  opencvEdgeDetect: iOS-compatible wrapper around the OpenCV library function that implements the Canny edge detector (cvCanny).
    • OpenCV and iOS DEMO
    • Part III: Examples and case studies
    • OpenGL ES •  OpenGL is “the most widely adopted graphics standard in the computer graphics and gaming industry”. •  The OpenGL ES framework (OpenGLES.framework) provides tools for drawing 2D and 3D content. •  It’s a C-based framework that works closely with the device hardware to provide high frame rates for full-screen game-style applications.
    • OpenGL ES •  Example: GLImageProcessing sample app –  Demonstrates how to implement simple image processing filters (Brightness, Contrast, Saturation, Hue rotation, Sharpness) using OpenGL ES 1.1. –  Also shows how to create simple procedural button icons using CoreGraphics. –  By looking at the code you’ll see how to set up an OpenGL ES view and use it for applying a filter to a texture. •  The application creates a texture from an image loaded from disk. –  Use the slider to control the current filter. •  Only a single filter is applied at a time. •  Result cannot be saved.
    • OpenGL ES •  Screenshots
    • OpenGL ES DEMO
    • Image tagging using the IQEngines API •  IQEngines is the brain behind the oMoby iPhone app
    • Image tagging using the IQEngines API
    • The IQEngines API •  Query API: allows the user to send an image to IQ Engines’ server to be processed by IQ Engines’ image labeling engine. –  It expects an HTTP callback as one of its arguments; as soon as the images are labeled, the results will be posted to that URL. •  Update API: a long polling request that returns the results for one or more images as soon as they have been labeled. •  Training API: allows you to upload images to the IQ Engines’ object search database (alpha trials, by request only). •  Result API: returns the labels for a specific image if it has been successfully processed, or informs that the image is still being processed. •  Crowdsource API: “coming soon”.
    • The IQEngines API demo app •  Screenshots
    • The IQEngines API demo app •  XML-formatted response
    • The IQEngines API demo app DEMO
    • Other third-party APIs •  Successful photo-based iPhone app developers are starting to release APIs for third-party developers. •  Recent examples: –  PicPlz has made their API (+ examples) available earlier this year –  Instagram appears to have followed suit a few days later
    • iTranslatAR •  By Julie Carmigniani [Florida Atlantic University] •  An app for translating pictures of text into any language supported by Google Translate. •  The picture’s text is converted into a string using the Tesseract OCR software (now available at Google Code)
    • iTranslatAR •  The app currently has three views in a tab controller: –  a view for translating text through pictures; –  one for translating text through typing; and –  one for changing the target language.
    • iTranslatAR •  Screenshots
    • iTranslatAR DEMO
    • iTranslatAR •  Will be extended to include (near) real-time camera support inspired by Word Lens and augmented reality (the AR in iTranslatAR) capabilities.
    • An integrated app: Photastic •  By Asif Rahman  [Florida Atlantic University]
    • An integrated app: Photastic DEMO
    • Concluding thoughts •  Mobile image processing, image search, and computer vision-based apps have a promising future. •  There is a great need for good solutions to specific problems. •  I hope this mini-course has provided a good starting point and many useful pointers. •  I look forward to working with some of you!
    • Thank You •  Questions? •  For additional information: omarques@fau.edu