Building a Native Camera Access Library - Part V - Transcript.pdf
1. Building Native Camera Access - Part V
In this final part we'll finish the iOS portion of the implementation.
2. -(void*)getView{
return container;
}
-(int)getFlash{
return flash;
}
-(int)getFacing{
return direction;
}
-(BOOL)isFacingFront{
return direction == FACING_FRONT;
}
-(BOOL)isFacingBack{
return direction == FACING_BACK;
}
-(int)getPreviewWidth{
return (int) previewLayer.frame.size.width;
}
-(int)getPreviewHeight{
return (int) previewLayer.frame.size.height;
}
-(BOOL)isSupported{
return YES;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
There are some methods that I've skipped entirely, so I won't discuss them here, but as in the Android port there are a lot of simple methods that just delegate
onwards. The getters are the simplest, as they don't even need to enter the native main thread.
This returns the camera view, which should be initialized once start() is invoked.
3. -(void*)getView{
return container;
}
-(int)getFlash{
return flash;
}
-(int)getFacing{
return direction;
}
-(BOOL)isFacingFront{
return direction == FACING_FRONT;
}
-(BOOL)isFacingBack{
return direction == FACING_BACK;
}
-(int)getPreviewWidth{
return (int) previewLayer.frame.size.width;
}
-(int)getPreviewHeight{
return (int) previewLayer.frame.size.height;
}
-(BOOL)isSupported{
return YES;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
This is a standard method in every native interface. Everything in between is trivial.
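Since the transcript only shows the Objective-C side, here is a hedged sketch of what the Java-facing interface behind these getters might look like. The method names are inferred from the Objective-C signatures; the real CameraNativeAccess interface in the library (and the FACING_* constant values) may differ.

```java
// Hypothetical reconstruction of the Java side of the native interface.
// Names mirror the Objective-C methods; the actual library may differ.
interface CameraNativeAccess {
    int getFlash();
    int getFacing();
    boolean isFacingFront();
    boolean isFacingBack();
    int getPreviewWidth();
    int getPreviewHeight();
    boolean isSupported();
}

public class CameraNativeAccessSketch {
    // Assumed values for FACING_FRONT / FACING_BACK — illustrative only.
    static final int FACING_FRONT = 0;
    static final int FACING_BACK = 1;

    public static void main(String[] args) {
        // A stub standing in for the generated native proxy on iOS.
        CameraNativeAccess cam = new CameraNativeAccess() {
            int direction = FACING_BACK;
            public int getFlash() { return 0; }
            public int getFacing() { return direction; }
            public boolean isFacingFront() { return direction == FACING_FRONT; }
            public boolean isFacingBack() { return direction == FACING_BACK; }
            public int getPreviewWidth() { return 320; }
            public int getPreviewHeight() { return 240; }
            public boolean isSupported() { return true; }
        };
        System.out.println(cam.isSupported() && cam.isFacingBack());
    }
}
```

On iOS the `getView` method would additionally return a peer component wrapping the native `container` view; it's omitted here since it has no plain-Java equivalent.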
4. -(void)setZoom:(float)param{
if(zoom != param) {
zoom = param;
dispatch_async(dispatch_get_main_queue(), ^{
[self updateZoom];
});
}
}
-(void)setFocus:(int)param{
if(focus != param) {
focus = param;
dispatch_async(dispatch_get_main_queue(), ^{
[self updateFocus];
});
}
}
-(void)setFlash:(int)param{
// same code...
}
-(void)setFacing:(int)param{
// same code...
}
-(void)setVideoQuality:(int)param{
// same code...
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The setters are a bit more verbose, but not by much. They are all practically identical, so I'll only focus on the first one.
I don't want to call update if the value didn't actually change.
5. -(void)setZoom:(float)param{
if(zoom != param) {
zoom = param;
dispatch_async(dispatch_get_main_queue(), ^{
[self updateZoom];
});
}
}
-(void)setFocus:(int)param{
if(focus != param) {
focus = param;
dispatch_async(dispatch_get_main_queue(), ^{
[self updateFocus];
});
}
}
-(void)setFlash:(int)param{
// same code...
}
-(void)setFacing:(int)param{
// same code...
}
-(void)setVideoQuality:(int)param{
// same code...
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
We update on the main thread by invoking the update methods from before. Notice that I use the async call. As a rule of thumb, always use the async call unless you MUST
use the sync call: it's faster and has a lower chance of deadlocking. In this case I don't need the action to happen within this millisecond, so async works just fine.
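The change-guard plus async-dispatch pattern can be sketched in Java, using a single-threaded executor as a stand-in for iOS's main dispatch queue (GCD itself isn't available here; this is an analogy, not the library's code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncSetterSketch {
    // Single-threaded executor playing the role of dispatch_get_main_queue().
    private static final ExecutorService MAIN_QUEUE = Executors.newSingleThreadExecutor();
    private static volatile float zoom = 1.0f;

    static void setZoom(float param) {
        if (zoom != param) {                 // skip the dispatch when nothing changed
            zoom = param;
            // Async hand-off: the caller returns immediately, like dispatch_async.
            MAIN_QUEUE.execute(AsyncSetterSketch::updateZoom);
        }
    }

    static void updateZoom() {
        System.out.println("zoom updated to " + zoom);
    }

    public static void main(String[] args) throws InterruptedException {
        setZoom(2.0f);
        setZoom(2.0f);   // no-op: value unchanged, updateZoom isn't queued again
        MAIN_QUEUE.shutdown();
        MAIN_QUEUE.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

The deadlock risk the narration mentions is the synchronous variant: if code already running on the queue's thread blocks waiting for another task on that same queue, it waits forever. The async hand-off never blocks the caller, which is why it's the safer default.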
6. -(float)getVerticalViewingAngle{
__block float fov = 0;
dispatch_sync(dispatch_get_main_queue(), ^{
fov = device.activeFormat.videoFieldOfView / 16.0 * 9;
});
return fov;
}
-(float)getHorizontalViewingAngle{
__block float fov = 0;
dispatch_sync(dispatch_get_main_queue(), ^{
fov = device.activeFormat.videoFieldOfView;
});
return fov;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
I also have two special getters that need to run on the main thread. Normally this wouldn't be a problem, but here we need to return a value, which is a bit more
challenging.
The __block keyword marks the variable as one we can modify from within the block (the lambda expression).
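Java has the same problem in a different costume: a lambda can't assign to a captured local, so the equivalent of `__block` is a mutable holder, and the equivalent of `dispatch_sync` is blocking until the other thread finishes. A minimal sketch (the FOV value is made up for illustration):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SyncGetterSketch {
    static float getVerticalViewingAngle() throws InterruptedException {
        // Plays the role of "__block float fov = 0;" — a slot the lambda may write.
        AtomicReference<Float> fov = new AtomicReference<>(0f);
        // Stand-in for the main-queue block querying the device's field of view.
        Thread mainQueue = new Thread(() -> fov.set(60f / 16f * 9f));
        mainQueue.start();
        mainQueue.join();   // the dispatch_sync equivalent: wait before returning
        return fov.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(getVerticalViewingAngle());
    }
}
```

Drop the `join()` and you get exactly the bug the next slide warns about: the method returns before the value is assigned.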
7. -(float)getVerticalViewingAngle{
__block float fov = 0;
dispatch_sync(dispatch_get_main_queue(), ^{
fov = device.activeFormat.videoFieldOfView / 16.0 * 9;
});
return fov;
}
-(float)getHorizontalViewingAngle{
__block float fov = 0;
dispatch_sync(dispatch_get_main_queue(), ^{
fov = device.activeFormat.videoFieldOfView;
});
return fov;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Here is a case where we MUST use dispatch_sync. If we used async, the return statement would execute before the block assigns the value.
Now all that is left is the actual capture methods!
8. -(void)captureImage{
dispatch_async(dispatch_get_main_queue(), ^{
capturingVideo = NO;
if([AVCapturePhotoOutput class]) {
if(photoOutput == nil) {
photoOutput = [[AVCapturePhotoOutput alloc] init];
}
AVCapturePhotoSettings* settings =
[AVCapturePhotoSettings photoSettings];
[photoOutput capturePhotoWithSettings:settings
delegate:self];
} else {
// ... Code for iOS 9 compatibility
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The capture methods are a bit more complicated. Surprisingly, capturing a still image is a bit harder than capturing video. Since the captureImage method is a bit long
I've split it into two parts, as before. This is a similar case of iOS 10 introducing a new API.
All capture methods must run on the native UI thread as they deal with the native UI. Failing to do this leads to weird crashes.
9. -(void)captureImage{
dispatch_async(dispatch_get_main_queue(), ^{
capturingVideo = NO;
if([AVCapturePhotoOutput class]) {
if(photoOutput == nil) {
photoOutput = [[AVCapturePhotoOutput alloc] init];
}
AVCapturePhotoSettings* settings =
[AVCapturePhotoSettings photoSettings];
[photoOutput capturePhotoWithSettings:settings
delegate:self];
} else {
// ... Code for iOS 9 compatibility
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
This is the new iOS 10+ API; if it isn't available we fall back to the iOS 9 compatible code.
10. -(void)captureImage{
dispatch_async(dispatch_get_main_queue(), ^{
capturingVideo = NO;
if([AVCapturePhotoOutput class]) {
if(photoOutput == nil) {
photoOutput = [[AVCapturePhotoOutput alloc] init];
}
AVCapturePhotoSettings* settings =
[AVCapturePhotoSettings photoSettings];
[photoOutput capturePhotoWithSettings:settings
delegate:self];
} else {
// ... Code for iOS 9 compatibility
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Up to this point everything is pretty trivial. A delegate is a special concept in iOS, similar to Java's interfaces. Passing self here means the current class implements the delegate.
11. @interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject<
AVCapturePhotoCaptureDelegate,
AVCaptureFileOutputRecordingDelegate> {
// everything here is unchanged
}
// everything here is unchanged
@end
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Before I go over the iOS 9 code within this method, let's look at the delegate... In order to implement the delegate we need to make a small change to the header file. The
delegates are declared in a syntax that's reminiscent of Java's generics syntax. While I'm here, I also added the delegate needed for video recording, so that's
two delegates.
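The Java-interface analogy can be made concrete. The header line `NSObject<AVCapturePhotoCaptureDelegate, AVCaptureFileOutputRecordingDelegate>` reads roughly like one class implementing two callback interfaces; the interface and method names below are simplified stand-ins, not Apple's actual delegate signatures:

```java
// Simplified stand-ins for the two AVFoundation delegate protocols.
interface PhotoCaptureDelegate { void onPhoto(byte[] jpeg); }
interface RecordingDelegate { void onRecordingFinished(String fileUrl); }

// One class implementing both, just as the Impl class adopts both protocols.
public class DelegateSketch implements PhotoCaptureDelegate, RecordingDelegate {
    public void onPhoto(byte[] jpeg) {
        System.out.println("photo: " + jpeg.length + " bytes");
    }
    public void onRecordingFinished(String fileUrl) {
        System.out.println("video: " + fileUrl);
    }

    public static void main(String[] args) {
        // 'self' gets handed to the capture APIs as both kinds of delegate.
        DelegateSketch self = new DelegateSketch();
        self.onPhoto(new byte[3]);
        self.onRecordingFinished("file:///tmp/temp.mov");
    }
}
```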
18. #include "com_codename1_camerakit_impl_CameraCallbacks.h"
extern JAVA_OBJECT nsDataToByteArr(NSData *data);
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
For the code to compile we need to add these declarations below the #import statements. The VM generates C code, so we pull it in with #include.
The nsDataToByteArr method comes from the iOS port. I could have included the whole header, but there is no need in this case. It makes converting an
NSData to a Java byte array much simpler.
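On the other side of that conversion sits a plain Java method. This is a hypothetical sketch of the Java class behind the generated C symbol `com_codename1_camerakit_impl_CameraCallbacks_onImage___byte_1ARRAY` (the name encodes class, method, and parameter type); the real CameraCallbacks implementation in the library may look different:

```java
public class CameraCallbacksSketch {
    // In the real library this would hand the JPEG bytes on to the UI layer.
    static void onImage(byte[] jpeg) {
        System.out.println("received image, " + jpeg.length + " bytes");
    }

    public static void main(String[] args) {
        // The native side converts the NSData to a Java byte[] (nsDataToByteArr)
        // and then invokes this method through the generated C function.
        onImage(new byte[] {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF});
    }
}
```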
19. if(stillImageOutput == nil) {
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc]
initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
}
[captureSession addOutput:stillImageOutput];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection){
break;
}
}
[stillImageOutput
captureStillImageAsynchronouslyFromConnection:videoConnection
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Now that all this is out of the way, let's go back to the captureImage method and the iOS 9 compatibility block. This is effectively code I got from Stack Overflow.
I needed this since most samples are for iOS 10+ now and ignore the legacy API, but I still have quite a few iOS 9 devices that can't upgrade to 10, so I'm assuming there is
still some market for compatibility.
Most of this code is boilerplate for detecting the camera; no wonder it was reworked.
20. for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection){
break;
}
}
[stillImageOutput
captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer,
NSError *error) {
NSData *d = [AVCaptureStillImageOutput
jpegStillImageNSDataRepresentation:imageSampleBuffer];
JAVA_OBJECT byteArray = nsDataToByteArr(d);
com_codename1_camerakit_impl_CameraCallbacks_onImage___byte_1ARRAY(
getThreadLocalData(), byteArray);
[captureSession removeOutput:stillImageOutput];
}];
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The capture image process is asynchronous and invokes the lambda expression below to process the resulting image
21. for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection){
break;
}
}
[stillImageOutput
captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer,
NSError *error) {
NSData *d = [AVCaptureStillImageOutput
jpegStillImageNSDataRepresentation:imageSampleBuffer];
JAVA_OBJECT byteArray = nsDataToByteArr(d);
com_codename1_camerakit_impl_CameraCallbacks_onImage___byte_1ARRAY(
getThreadLocalData(), byteArray);
[captureSession removeOutput:stillImageOutput];
}];
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The capture image process is asynchronous and invokes the lambda expression below to process the resulting image. The rest of the code should be very familiar, as it's
almost identical to the iOS 10 version. We just call back into Java.
With this, image capture should now work on both old and new devices. All the basics should work, with the exception of video!
22. -(void)captureVideo{
NSURL *furl = [NSURL fileURLWithPath:[NSTemporaryDirectory()
stringByAppendingPathComponent:@"temp.mov"]];
[self captureVideoFile:[furl absoluteString]];
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Capturing video is, surprisingly, easier than capturing still images. First, the simplest method is captureVideo. The no-argument version of this method saves to a
temporary file. We invoke the version that accepts a file with a "temp.mov" path.
23. -(void)captureVideoFile:(NSString*)param{
dispatch_async(dispatch_get_main_queue(), ^{
if(movieOutput != nil) {
[captureSession removeOutput:movieOutput];
[movieOutput stopRecording];
[movieOutput release];
}
capturingVideo = YES;
movieOutput = [[AVCaptureMovieFileOutput alloc] init];
[captureSession addOutput:movieOutput];
[movieOutput startRecordingToOutputFileURL:
[NSURL URLWithString:param] recordingDelegate:self];
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The version that accepts a path isn't much harder.
The AVCaptureMovieFileOutput class represents the recording process; we initialize it lazily.
24. -(void)captureVideoFile:(NSString*)param{
dispatch_async(dispatch_get_main_queue(), ^{
if(movieOutput != nil) {
[captureSession removeOutput:movieOutput];
[movieOutput stopRecording];
[movieOutput release];
}
capturingVideo = YES;
movieOutput = [[AVCaptureMovieFileOutput alloc] init];
[captureSession addOutput:movieOutput];
[movieOutput startRecordingToOutputFileURL:
[NSURL URLWithString:param] recordingDelegate:self];
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
We bind the output value to the capture session
25. -(void)captureVideoFile:(NSString*)param{
dispatch_async(dispatch_get_main_queue(), ^{
if(movieOutput != nil) {
[captureSession removeOutput:movieOutput];
[movieOutput stopRecording];
[movieOutput release];
}
capturingVideo = YES;
movieOutput = [[AVCaptureMovieFileOutput alloc] init];
[captureSession addOutput:movieOutput];
[movieOutput startRecordingToOutputFileURL:
[NSURL URLWithString:param] recordingDelegate:self];
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Now we can provide a URL from the file argument and start recording; again we use `self` as the delegate.
26. - (void)captureOutput:(AVCaptureFileOutput *)output
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
fromConnections:(NSArray<AVCaptureConnection *> *)connections
error:(NSError *)error {
if(capturingVideo && outputFileURL != nil) {
NSString* url = [outputFileURL absoluteString];
struct ThreadLocalData* d = getThreadLocalData();
com_codename1_camerakit_impl_CameraCallbacks_onVideo___java_lang_String
(d, fromNSString(d, url));
return;
}
if(error != nil) {
struct ThreadLocalData* d = getThreadLocalData();
com_codename1_camerakit_impl_CameraCallbacks_onError___java_lang_String_java_lang_String_java_lang_String
(d, nil, nil, nil);
return;
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Which naturally leads us to the delegate method. Most of this should be very familiar now that we've gone through the image capture code.
27. - (void)captureOutput:(AVCaptureFileOutput *)output
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
fromConnections:(NSArray<AVCaptureConnection *> *)connections
error:(NSError *)error {
if(capturingVideo && outputFileURL != nil) {
NSString* url = [outputFileURL absoluteString];
struct ThreadLocalData* d = getThreadLocalData();
com_codename1_camerakit_impl_CameraCallbacks_onVideo___java_lang_String
(d, fromNSString(d, url));
return;
}
if(error != nil) {
struct ThreadLocalData* d = getThreadLocalData();
com_codename1_camerakit_impl_CameraCallbacks_onError___java_lang_String_java_lang_String_java_lang_String
(d, nil, nil, nil);
return;
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The Java callback accepts a String path, which we produce from the NSURL's NSString representation using the fromNSString API call.
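The success/error branching of that delegate method can be sketched in Java; `fromNSString` simply becomes an ordinary Java String here, and the callback names are illustrative stand-ins rather than the library's exact API:

```java
public class RecordingCallbackSketch {
    static void onVideo(String url) { System.out.println("video ready: " + url); }
    static void onError(String type, String message, String detail) {
        System.out.println("recording failed");
    }

    // Mirrors captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:
    static void didFinishRecording(boolean capturingVideo, String outputFileUrl,
                                   Exception error) {
        if (capturingVideo && outputFileUrl != null) {
            onVideo(outputFileUrl);     // success: hand the file URL back to Java
            return;
        }
        if (error != null) {
            onError(null, null, null);  // the native code passes three nils here
        }
    }

    public static void main(String[] args) {
        didFinishRecording(true, "file:///tmp/temp.mov", null);
        didFinishRecording(false, null, new Exception("disk full"));
    }
}
```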
28. -(void)stopVideo{
dispatch_async(dispatch_get_main_queue(), ^{
[movieOutput stopRecording];
[captureSession removeOutput:movieOutput];
[movieOutput release];
movieOutput = nil;
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The last remaining piece is the stopVideo method. This is a trivial method if you've been keeping up: we just stop the recording and clean up. And with that last bit of
code we are DONE!
I hope you found this useful. There is a lot of code to go through, but most of it is damn trivial once you look it over. You can create native interfaces that do just about
anything if you have the patience to debug and google the native APIs.