Building a Native Camera Access Library - Part III - Transcript.pdf
Building Native Camera Access - Part III
The iOS port is a steeper climb, since unlike the Android version nothing here maps almost directly to existing native code. Still, one of the advantages of iOS programming is the cleaner underlying API, which often simplifies common use cases. Because of this I chose to skip 3rd party libraries and implement the functionality of Camera Kit directly on top of the native iOS APIs.
#import <Foundation/Foundation.h>
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
}
-(void)start;
-(void)stop;
-(void)setVideoBitRate:(int)param;
-(int)getPreviewWidth;
-(BOOL)isStarted;
-(void)setMethod:(int)param;
-(void*)getView;
-(void)setPermissions:(int)param;
-(int)getFacing;
-(void)setZoom:(float)param;
-(int)toggleFacing;
-(int)getCaptureWidth;
-(float)getHorizontalViewingAngle;
-(void)setJpegQuality:(int)param;
-(void)stopVideo;
-(BOOL)isFacingBack;
-(int)getFlash;
-(void)captureImage;
-(int)getPreviewHeight;
-(void)captureVideoFile:(NSString*)param;
-(void)setLockVideoAspectRatio:(BOOL)param;
-(void)setFocus:(int)param;
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
When we use Generate Native Stubs, the iOS stubs include two files: an .h file and an .m file. Let's review the .h file first; I'll look at what was generated in both before we begin.
The first line is a standard Objective-C import statement that brings in the basic Apple iOS API. Imports in Objective-C behave more like C includes than Java imports.
Next comes the class definition for the native interface. Notice that NSObject, the common Objective-C base class, is the superclass here.
The method signatures should be pretty readable, as they map directly to their Java equivalents.
getView maps to a peer component on the Java side; from this side we'll return a UIView instance.
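A plausible sketch of what that method ends up looking like (this is my guess at its shape based on the variables introduced later, not necessarily the transcript's actual code):

```objc
// Hypothetical sketch: the stub declares getView as returning void*,
// so we hand back the UIView pointer and the Codename One VM wraps it
// as a PeerComponent on the Java side. Bridging/retain details are
// glossed over here.
-(void*)getView {
    return (__bridge void*)container;
}
```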
#import <AVFoundation/AVFoundation.h>
const int FACING_BACK = 0;
const int FACING_FRONT = 1;
const int FLASH_OFF = 0;
const int FLASH_ON = 1;
const int FLASH_AUTO = 2;
const int FLASH_TORCH = 3;
const int FOCUS_OFF = 0;
const int FOCUS_CONTINUOUS = 1;
const int FOCUS_TAP = 2;
const int FOCUS_TAP_WITH_MARKER = 3;
const int METHOD_STANDARD = 0;
const int METHOD_STILL = 1;
const int VIDEO_QUALITY_480P = 0;
const int VIDEO_QUALITY_720P = 1;
const int VIDEO_QUALITY_1080P = 2;
const int VIDEO_QUALITY_2160P = 3;
const int VIDEO_QUALITY_HIGHEST = 4;
const int VIDEO_QUALITY_LOWEST = 5;
const int VIDEO_QUALITY_QVGA = 6;
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
We'll start by bringing in the constants from the Java Constants interface. In the Java implementation we could ignore their values on the native side because the underlying Android implementation already used the exact same values; we don't have that privilege here. I'll also add an import of the native AVFoundation framework, which is the iOS API for audio and video.
The constants are copied directly from the Java code with the public static final portion replaced by const. This will make coding the rest easier.
BOOL firstTimeCameraKitLaunch = YES;
@implementation com_codename1_camerakit_impl_CameraNativeAccessImpl
-(void)start{
if(firstTimeCameraKitLaunch) {
direction = FACING_BACK;
flash = FLASH_OFF;
focus = FOCUS_CONTINUOUS;
method = METHOD_STANDARD;
videoQuality = VIDEO_QUALITY_480P;
previewLayer = nil;
device = nil;
photoOutput = nil;
captureSession = nil;
stillImageOutput = nil;
firstTimeCameraKitLaunch = NO;
zoom = 1;
[self lazyInit];
} else {
dispatch_sync(dispatch_get_main_queue(), ^{
[captureSession startRunning];
});
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
So now that we have a sense of the scope, let's start implementing the important methods one by one. The natural place to start is the start method.
For this to work we first need to define the firstTimeCameraKitLaunch variable.
The first time start is invoked we initialize the state variables to the same default values used in the Android version.
The lazyInit call initializes the camera the first time around; notice that self is the Objective-C equivalent of Java's this.
dispatch_sync is the iOS equivalent of callSeriallyAndWait: we want the block to execute on the native iOS main thread, and we want to wait until it's finished.
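Codename One invokes native interface methods off the iOS main thread, which is why dispatch_sync is safe here; its fire-and-forget sibling dispatch_async maps to callSerially. A minimal sketch of the distinction:

```objc
// Sketch of the two dispatch flavors. This assumes we are NOT already
// on the main queue; calling dispatch_sync targeting the main queue
// from the main thread itself would deadlock.
dispatch_sync(dispatch_get_main_queue(), ^{
    // blocks the caller until done, like callSeriallyAndWait
    NSLog(@"runs before the caller continues");
});
dispatch_async(dispatch_get_main_queue(), ^{
    // returns immediately, like callSerially
    NSLog(@"runs at some later point on the main queue");
});
```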
The capture session is stopped in the stop call, so if this isn't the first time around we need to restart it.
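For symmetry, a plausible sketch of the matching stop method (my guess at its shape, not the transcript's actual code):

```objc
// Hypothetical sketch of stop: relinquish the camera so other apps can
// use it, mirroring the restart branch of start above.
-(void)stop {
    dispatch_sync(dispatch_get_main_queue(), ^{
        [captureSession stopRunning];
    });
}
```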
#import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Before we proceed to the lazyInit method, let's look at the variables we added to the header file.
direction is the direction the camera is facing: front or back.
flash tracks whether the flash is on, off or in auto-flash mode.
focus determines whether focusing is based on a tapped point or is automatic.
method allows for several modes of capture; I didn't implement this for now.
videoQuality is one of a set of constants indicating the resolution for recorded video.
zoom is the current camera zoom value.
authorized stores YES if the app was given permission to access the camera.
Since some callbacks for video and photo might be similar, capturingVideo indicates which of the two I'm currently capturing.
container is the actual UI element we'll see on the screen; a UIView is the iOS parallel to a Codename One Component. I'll discuss this soon.
Notice that I imported the CameraKitView class here. It’s a class I added and I’ll cover it soon…
device is the native capture device representing the camera. A different device instance is used when we flip between the back and front cameras.
captureSession encapsulates the capture process; we acquire access to the camera by starting a session and relinquish it in stop.
previewLayer is where the video from the camera is drawn. It's a CALayer, a graphics surface we can assign to a UIView; I'll discuss this when covering CameraKitView.
movieOutput is responsible for capturing a movie and saving it to a file.
The last two entries, photoOutput and stillImageOutput, handle photos: the former works on iOS 10 and newer, the latter on older devices and OS versions.
That's a lot to digest, but we are just getting started…
-(void)lazyInit {
dispatch_sync(dispatch_get_main_queue(), ^{
container = [[CameraKitView alloc] init];
switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
case AVAuthorizationStatusNotDetermined:
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
completionHandler:^( BOOL granted ) {
if ( ! granted ) {
authorized = NO;
return;
}
authorized = YES;
[self lazyInitPostAuthorization];
}];
break;
case AVAuthorizationStatusDenied:
case AVAuthorizationStatusRestricted:
authorized = NO;
break;
case AVAuthorizationStatusAuthorized:
authorized = YES;
[self lazyInitPostAuthorization];
break;
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Let's move right into the lazyInit method. When I started on the code I thought the camera would initialize lazily, but that didn't fit the rest of the API, so I abandoned that approach and initialized in start. I didn't bother changing the name since it isn't user visible anyway.
The content of the dispatch_sync block runs on the native iOS thread synchronously; the method won't return until that code has finished.
The first line of the block is the Objective-C equivalent of new CameraKitView(): we allocate the object and invoke its init method, which acts roughly like a constructor.
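To make the new Object() parallel concrete, here is the same pattern on a familiar Foundation class (NSMutableArray is used purely for illustration):

```objc
#import <Foundation/Foundation.h>

// Java:        ArrayList<String> list = new ArrayList<>();
// Objective-C: alloc reserves memory for the object, init (or a variant
//              such as initWithCapacity:) plays the constructor's role.
NSMutableArray *list  = [[NSMutableArray alloc] init];
NSMutableArray *sized = [[NSMutableArray alloc] initWithCapacity:10];
```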
We then ask AVCaptureDevice whether we have permission to use the media device; this can result in one of four outcomes.
Not determined means we need to ask for permission
So we ask, which prompts the user with a permission dialog they can accept or reject.
If they accept, we move to the second phase of initialization in lazyInitPostAuthorization.
If permission was denied in some way there isn't much we can do... The user will see a blank view
If authorization was already granted previously we move on. Notice that for this to work we need the ios.NSCameraUsageDescription build hint I discussed before; without it, permission is denied automatically.
-(void)lazyInitPostAuthorization {
if ([AVCaptureDeviceDiscoverySession class]) {
if(direction == FACING_FRONT) {
device = [AVCaptureDevice
defaultDeviceWithDeviceType:
AVCaptureDeviceTypeBuiltInWideAngleCamera
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionFront];
} else {
device = [AVCaptureDevice
defaultDeviceWithDeviceType:
AVCaptureDeviceTypeBuiltInWideAngleCamera
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionBack];
}
} else {
if(direction == FACING_FRONT) {
for(AVCaptureDevice* d in [AVCaptureDevice devices]) {
if(d.position == AVCaptureDevicePositionFront) {
device = d;
break;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Let's move on to the lazyInitPostAuthorization method. This is a complex method, so I'll divide it into two parts for simplicity. The first part deals with detecting the "device", meaning picking the right camera.
The first line checks whether a specific class exists: iOS 10 deprecated one API and introduced a new one, so if the new API's class doesn't exist we fall back to the old API.
If we reach this block we are on iOS 10 or newer.
You will notice that getting a device on iOS 10 is a single method, the only difference between the two branches being the value of the position argument. Notice that Objective-C method invocations use argument names as part of the invocation.
Also notice that I refer to Objective-C messages as methods. There is a difference between the two, but it's not something you need to understand as a casual Objective-C user.
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionBack];
}
} else {
if(direction == FACING_FRONT) {
for(AVCaptureDevice* d in [AVCaptureDevice devices]) {
if(d.position == AVCaptureDevicePositionFront) {
device = d;
break;
}
}
} else {
for(AVCaptureDevice* d in [AVCaptureDevice devices]) {
if(d.position == AVCaptureDevicePositionBack) {
device = d;
break;
}
}
}
}
// ... common device code ...
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
This code runs on a device with an OS prior to iOS 10; here we loop over all the devices within AVCaptureDevice.
If a device is in the right position we update the device value and exit the loop
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput
deviceInputWithDevice:device error:&error];
captureSession = [[AVCaptureSession alloc] init];
[captureSession addInput:input];
previewLayer = [AVCaptureVideoPreviewLayer
layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[container setLayer:previewLayer];
[container.layer addSublayer:previewLayer];
[self updateFlash];
[self updateZoom];
[self updateFocus];
[self updateVideoQuality];
[captureSession startRunning];
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The bottom portion of the method is common to iOS 10+ and older versions.
Objective-C methods often accept a pointer to an error variable, which they assign in case of an error. I didn't check for the error here, which I should.
The input object is created from the device; we need it to start a session and don't need it after that at this time.
We allocate a new capture session and add the input to it.
The preview layer shows the session's video in our view. It's a CALayer, which we can't add directly to the screen.
setLayer is a method I added to CameraKitView; I'll discuss that when covering CameraKitView.
This is how you show a CALayer within a UIView
The update methods let us share common code between the initialization code and the setter methods: a call to setFlash, for example, triggers updateFlash internally. I'll cover all four methods soon.
The final line starts the capture session. This seems like a lot, and it is a lot: we just went through the "heavy lifting" portion of the code, and as you can see it might not be trivial but it isn't hard. I didn't know half of these methods when I started out, but that's the great thing about being a programmer in this day and age: we can google it.