Building a Native Camera Access Library - Part III - Transcript
1. Building Native Camera Access - Part III
The iOS port is a steep climb, unlike the Android version, which maps almost directly onto the native code.
Still, one of the advantages of iOS programming is the cleaner underlying API, which often simplifies common use cases. Because of this I chose to skip 3rd party libraries and try to implement the functionality of Camera Kit directly on the native iOS APIs.
2. #import <Foundation/Foundation.h>
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
}
-(void)start;
-(void)stop;
-(void)setVideoBitRate:(int)param;
-(int)getPreviewWidth;
-(BOOL)isStarted;
-(void)setMethod:(int)param;
-(void*)getView;
-(void)setPermissions:(int)param;
-(int)getFacing;
-(void)setZoom:(float)param;
-(int)toggleFacing;
-(int)getCaptureWidth;
-(float)getHorizontalViewingAngle;
-(void)setJpegQuality:(int)param;
-(void)stopVideo;
-(BOOL)isFacingBack;
-(int)getFlash;
-(void)captureImage;
-(int)getPreviewHeight;
-(void)captureVideoFile:(NSString*)param;
-(void)setLockVideoAspectRatio:(BOOL)param;
-(void)setFocus:(int)param;
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
When we use Generate Native Stubs, the iOS stubs include two files: an .h and an .m file. Let's review the .h file first; I'll look at what was generated in both before we begin.
This is a standard Objective-C import statement that adds the basic Apple iOS API. Imports in Objective-C are more like C includes than Java imports.
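For reference, the generated .m counterpart is essentially the same method list with empty bodies that we then fill in. Roughly like this (a sketch, not the verbatim generated file):
#import "com_codename1_camerakit_impl_CameraNativeAccessImpl.h"
@implementation com_codename1_camerakit_impl_CameraNativeAccessImpl
-(void)start{
    // generated empty, we implement it below
}
-(void)stop{
}
// ...one empty stub per method declared in the header...
@end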
3. #import <Foundation/Foundation.h>
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
}
-(void)start;
-(void)stop;
-(void)setVideoBitRate:(int)param;
-(int)getPreviewWidth;
-(BOOL)isStarted;
-(void)setMethod:(int)param;
-(void*)getView;
-(void)setPermissions:(int)param;
-(int)getFacing;
-(void)setZoom:(float)param;
-(int)toggleFacing;
-(int)getCaptureWidth;
-(float)getHorizontalViewingAngle;
-(void)setJpegQuality:(int)param;
-(void)stopVideo;
-(BOOL)isFacingBack;
-(int)getFlash;
-(void)captureImage;
-(int)getPreviewHeight;
-(void)captureVideoFile:(NSString*)param;
-(void)setLockVideoAspectRatio:(BOOL)param;
-(void)setFocus:(int)param;
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
This is the class definition for the native interface. Notice that NSObject is the common base class here.
4. #import <Foundation/Foundation.h>
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
}
-(void)start;
-(void)stop;
-(void)setVideoBitRate:(int)param;
-(int)getPreviewWidth;
-(BOOL)isStarted;
-(void)setMethod:(int)param;
-(void*)getView;
-(void)setPermissions:(int)param;
-(int)getFacing;
-(void)setZoom:(float)param;
-(int)toggleFacing;
-(int)getCaptureWidth;
-(float)getHorizontalViewingAngle;
-(void)setJpegQuality:(int)param;
-(void)stopVideo;
-(BOOL)isFacingBack;
-(int)getFlash;
-(void)captureImage;
-(int)getPreviewHeight;
-(void)captureVideoFile:(NSString*)param;
-(void)setLockVideoAspectRatio:(BOOL)param;
-(void)setFocus:(int)param;
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
The method signatures should be pretty readable, as they map directly to the Java equivalents.
5. #import <Foundation/Foundation.h>
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
}
-(void)start;
-(void)stop;
-(void)setVideoBitRate:(int)param;
-(int)getPreviewWidth;
-(BOOL)isStarted;
-(void)setMethod:(int)param;
-(void*)getView;
-(void)setPermissions:(int)param;
-(int)getFacing;
-(void)setZoom:(float)param;
-(int)toggleFacing;
-(int)getCaptureWidth;
-(float)getHorizontalViewingAngle;
-(void)setJpegQuality:(int)param;
-(void)stopVideo;
-(BOOL)isFacingBack;
-(int)getFlash;
-(void)captureImage;
-(int)getPreviewHeight;
-(void)captureVideoFile:(NSString*)param;
-(void)setLockVideoAspectRatio:(BOOL)param;
-(void)setFocus:(int)param;
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
getView maps to a peer component on the Java side; from this side we'll return a UIView instance.
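A minimal sketch of how getView can satisfy that contract once the view exists (container is the CameraKitView instance variable introduced later in this part):
-(void*)getView {
    // The void* return type maps to PeerComponent on the Java side;
    // we hand back the UIView so Codename One can embed it in a Form.
    // (Use a __bridge cast instead if the file is compiled with ARC.)
    return (void*)container;
}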
11. #import <AVFoundation/AVFoundation.h>
const int FACING_BACK = 0;
const int FACING_FRONT = 1;
const int FLASH_OFF = 0;
const int FLASH_ON = 1;
const int FLASH_AUTO = 2;
const int FLASH_TORCH = 3;
const int FOCUS_OFF = 0;
const int FOCUS_CONTINUOUS = 1;
const int FOCUS_TAP = 2;
const int FOCUS_TAP_WITH_MARKER = 3;
const int METHOD_STANDARD = 0;
const int METHOD_STILL = 1;
const int VIDEO_QUALITY_480P = 0;
const int VIDEO_QUALITY_720P = 1;
const int VIDEO_QUALITY_1080P = 2;
const int VIDEO_QUALITY_2160P = 3;
const int VIDEO_QUALITY_HIGHEST = 4;
const int VIDEO_QUALITY_LOWEST = 5;
const int VIDEO_QUALITY_QVGA = 6;
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
We'll start by bringing in the constants from the Java Constants interface. In the Java implementation we could ignore their values on the native side because the underlying implementation already used the exact same values; we don't have that privilege here. I'll also add an import for AVFoundation (Audio Video Foundation), which is the native iOS API for media.
These constants are copied directly from the Java code, with the public static final portion replaced by const. This will make coding the rest easier.
12. BOOL firstTimeCameraKitLaunch = YES;
@implementation com_codename1_camerakit_impl_CameraNativeAccessImpl
-(void)start{
if(firstTimeCameraKitLaunch) {
direction = FACING_BACK;
flash = FLASH_OFF;
focus = FOCUS_CONTINUOUS;
method = METHOD_STANDARD;
videoQuality = VIDEO_QUALITY_480P;
previewLayer = nil;
device = nil;
photoOutput = nil;
captureSession = nil;
stillImageOutput = nil;
firstTimeCameraKitLaunch = NO;
zoom = 1;
[self lazyInit];
} else {
dispatch_sync(dispatch_get_main_queue(), ^{
[captureSession startRunning];
});
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
So now that we have a sense of the scope, let's start implementing the important methods one by one. The natural place to start is the start method.
For this to work we first need to define the firstTimeCameraKitLaunch variable.
13. BOOL firstTimeCameraKitLaunch = YES;
@implementation com_codename1_camerakit_impl_CameraNativeAccessImpl
-(void)start{
if(firstTimeCameraKitLaunch) {
direction = FACING_BACK;
flash = FLASH_OFF;
focus = FOCUS_CONTINUOUS;
method = METHOD_STANDARD;
videoQuality = VIDEO_QUALITY_480P;
previewLayer = nil;
device = nil;
photoOutput = nil;
captureSession = nil;
stillImageOutput = nil;
firstTimeCameraKitLaunch = NO;
zoom = 1;
[self lazyInit];
} else {
dispatch_sync(dispatch_get_main_queue(), ^{
[captureSession startRunning];
});
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The first time start is invoked we initialize the various state fields to the same default values used in the Android version.
14. BOOL firstTimeCameraKitLaunch = YES;
@implementation com_codename1_camerakit_impl_CameraNativeAccessImpl
-(void)start{
if(firstTimeCameraKitLaunch) {
direction = FACING_BACK;
flash = FLASH_OFF;
focus = FOCUS_CONTINUOUS;
method = METHOD_STANDARD;
videoQuality = VIDEO_QUALITY_480P;
previewLayer = nil;
device = nil;
photoOutput = nil;
captureSession = nil;
stillImageOutput = nil;
firstTimeCameraKitLaunch = NO;
zoom = 1;
[self lazyInit];
} else {
dispatch_sync(dispatch_get_main_queue(), ^{
[captureSession startRunning];
});
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
This method initializes the camera view the first time around. Notice that self is the equivalent of this.
15. BOOL firstTimeCameraKitLaunch = YES;
@implementation com_codename1_camerakit_impl_CameraNativeAccessImpl
-(void)start{
if(firstTimeCameraKitLaunch) {
direction = FACING_BACK;
flash = FLASH_OFF;
focus = FOCUS_CONTINUOUS;
method = METHOD_STANDARD;
videoQuality = VIDEO_QUALITY_480P;
previewLayer = nil;
device = nil;
photoOutput = nil;
captureSession = nil;
stillImageOutput = nil;
firstTimeCameraKitLaunch = NO;
zoom = 1;
[self lazyInit];
} else {
dispatch_sync(dispatch_get_main_queue(), ^{
[captureSession startRunning];
});
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
dispatch_sync is the iOS equivalent of callSeriallyAndWait: we want the block below to execute on the native iOS thread, and we want to wait until it's finished.
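To make the analogy concrete, here's the difference between the two dispatch variants (a standalone sketch; note that dispatch_sync onto the main queue deadlocks if the caller is already on the main thread, much like callSeriallyAndWait shouldn't be invoked from the EDT):
// Fire and forget: returns immediately, the block runs later on the main thread
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main thread, caller didn't wait");
});
// Blocks the calling thread until the block completes on the main thread
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"caller waits for this to finish");
});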
16. BOOL firstTimeCameraKitLaunch = YES;
@implementation com_codename1_camerakit_impl_CameraNativeAccessImpl
-(void)start{
if(firstTimeCameraKitLaunch) {
direction = FACING_BACK;
flash = FLASH_OFF;
focus = FOCUS_CONTINUOUS;
method = METHOD_STANDARD;
videoQuality = VIDEO_QUALITY_480P;
previewLayer = nil;
device = nil;
photoOutput = nil;
captureSession = nil;
stillImageOutput = nil;
firstTimeCameraKitLaunch = NO;
zoom = 1;
[self lazyInit];
} else {
dispatch_sync(dispatch_get_main_queue(), ^{
[captureSession startRunning];
});
}
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The capture session is stopped in the stop call, so if this isn't the first time around we need to restart the capture session.
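The matching stop method isn't shown here; a plausible sketch of it, assuming it simply mirrors the else branch of start:
-(void)stop {
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Relinquish the camera; a later start call resumes it with startRunning
        [captureSession stopRunning];
    });
}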
17. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Before we proceed to the lazyInit method, let's look at the variables we added to the header file.
The direction the camera is facing: front or back.
18. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Whether the flash is on, off, or in auto-flash mode.
19. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Focus can be based on a tapped point or fully automatic.
20. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Allows for several modes of capture; I didn't implement this for now.
21. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
A set of constants indicating the resolution for recorded video
22. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Current camera zoom value
23. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
I store YES here if the app was given permission to access the camera
24. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Since some callbacks for video and photo might be similar, I set this flag to indicate what I'm currently capturing.
25. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
This is the actual UI element we will see on the screen. A UIView is the iOS parallel to Component; I'll discuss this soon.
26. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
Notice that I imported the CameraKitView class here. It’s a class I added and I’ll cover it soon…
27. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
This is the native capture device representing the camera. A different device instance is used when we flip between the back & front cameras
28. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
A session encapsulates the capture process; we need to acquire access to the camera with a session and relinquish it in stop.
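The standard AVFoundation pattern for acquiring the camera through a session looks roughly like this (a sketch of the stock API usage, not the library's exact code):
captureSession = [[AVCaptureSession alloc] init];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                    error:&error];
if (input != nil && [captureSession canAddInput:input]) {
    [captureSession addInput:input];  // wire the camera into the session
}
[captureSession startRunning];        // begin streaming frames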
29. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
The preview layer is where the video from the camera is drawn. This is a CALayer, which is a graphics surface we can assign to a UIView; I'll discuss this when covering CameraKitView.
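Attaching a preview layer to a view follows the stock AVFoundation pattern; a sketch:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill the view, cropping as needed
previewLayer.frame = container.bounds;
[container.layer addSublayer:previewLayer]; // every UIView is backed by a CALayer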
30. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
This class is responsible for capturing a movie and saving it to a file
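Stock usage of AVCaptureMovieFileOutput looks roughly like this (a sketch; it assumes self conforms to AVCaptureFileOutputRecordingDelegate, whose callback is where we'd fire the Java-side video event):
movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
// Later, in captureVideoFile: record into the requested file
NSURL *outputUrl = [NSURL fileURLWithPath:param];
[movieOutput startRecordingToOutputFileURL:outputUrl recordingDelegate:self];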
31. #import "CameraKitView.h"
@interface com_codename1_camerakit_impl_CameraNativeAccessImpl :
NSObject {
int direction;
int flash;
int focus;
int method;
int videoQuality;
float zoom;
BOOL authorized;
BOOL capturingVideo;
CameraKitView* container;
AVCaptureDevice* device;
AVCaptureSession* captureSession;
AVCaptureVideoPreviewLayer* previewLayer;
AVCaptureMovieFileOutput* movieOutput;
AVCapturePhotoOutput* photoOutput;
AVCaptureStillImageOutput* stillImageOutput;
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.h
The last two entries handle photos: the former works on iOS 10 and newer devices, the latter on older devices and OS versions.
That's a lot to digest, but we are just getting started…
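Since AVCapturePhotoOutput only exists on iOS 10 and newer, picking between the two at runtime can use a class-availability check, the same trick the code uses later with AVCaptureDeviceDiscoverySession; a sketch:
if ([AVCapturePhotoOutput class]) {
    // iOS 10+
    photoOutput = [[AVCapturePhotoOutput alloc] init];
    [captureSession addOutput:photoOutput];
} else {
    // older devices/OS versions
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    [captureSession addOutput:stillImageOutput];
}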
32. -(void)lazyInit {
dispatch_sync(dispatch_get_main_queue(), ^{
container = [[CameraKitView alloc] init];
switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
case AVAuthorizationStatusNotDetermined:
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
completionHandler:^( BOOL granted ) {
if ( ! granted ) {
authorized = NO;
return;
}
authorized = YES;
[self lazyInitPostAuthorization];
}];
break;
case AVAuthorizationStatusDenied:
case AVAuthorizationStatusRestricted:
authorized = NO;
break;
case AVAuthorizationStatusAuthorized:
authorized = YES;
[self lazyInitPostAuthorization];
break;
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Let's move right into the lazyInit method. When I started with the code I thought the camera would initialize lazily, but that didn't fit the rest of the API, so I abandoned that approach and initialized on start. I didn't bother changing the name since it isn't user visible anyway.
The content of the following block runs on the native iOS thread synchronously; the method won't return until the code is finished.
33. -(void)lazyInit {
dispatch_sync(dispatch_get_main_queue(), ^{
container = [[CameraKitView alloc] init];
switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
case AVAuthorizationStatusNotDetermined:
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
completionHandler:^( BOOL granted ) {
if ( ! granted ) {
authorized = NO;
return;
}
authorized = YES;
[self lazyInitPostAuthorization];
}];
break;
case AVAuthorizationStatusDenied:
case AVAuthorizationStatusRestricted:
authorized = NO;
break;
case AVAuthorizationStatusAuthorized:
authorized = YES;
[self lazyInitPostAuthorization];
break;
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
This is the Objective-C equivalent of a new expression in Java: we allocate the object and invoke its init method, which is sort of a constructor.
34. -(void)lazyInit {
dispatch_sync(dispatch_get_main_queue(), ^{
container = [[CameraKitView alloc] init];
switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
case AVAuthorizationStatusNotDetermined:
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
completionHandler:^( BOOL granted ) {
if ( ! granted ) {
authorized = NO;
return;
}
authorized = YES;
[self lazyInitPostAuthorization];
}];
break;
case AVAuthorizationStatusDenied:
case AVAuthorizationStatusRestricted:
authorized = NO;
break;
case AVAuthorizationStatusAuthorized:
authorized = YES;
[self lazyInitPostAuthorization];
break;
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
We're asking AVCaptureDevice whether we have permission to use the media device; this can result in one of four outcomes.
35. -(void)lazyInit {
dispatch_sync(dispatch_get_main_queue(), ^{
container = [[CameraKitView alloc] init];
switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
case AVAuthorizationStatusNotDetermined:
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
completionHandler:^( BOOL granted ) {
if ( ! granted ) {
authorized = NO;
return;
}
authorized = YES;
[self lazyInitPostAuthorization];
}];
break;
case AVAuthorizationStatusDenied:
case AVAuthorizationStatusRestricted:
authorized = NO;
break;
case AVAuthorizationStatusAuthorized:
authorized = YES;
[self lazyInitPostAuthorization];
break;
}
});
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
Not determined means we need to ask for permission
So we ask, which prompts the user with a permission dialog that they can accept or reject.
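One subtlety worth knowing: Apple documents that this completion handler may be invoked on an arbitrary dispatch queue. A more defensive variant (a sketch, not the library's actual code) would hop back to the main queue before doing any UI-related work:

[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                         completionHandler:^(BOOL granted) {
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread, safe for UIKit work from here on
        authorized = granted;
        if (granted) {
            [self lazyInitPostAuthorization];
        }
    });
}];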
If the user accepted, we move on to the second phase of initialization in lazyInitPostAuthorization.
If permission was denied or restricted there isn't much we can do... the user will just see a blank view.
If authorization was already granted previously we move right on. Notice that for this to work we need the ios.NSCameraUsageDescription build hint I discussed before;
without that build hint permission is denied automatically.
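If you set build hints through codenameone_settings.properties, the entry would look roughly like this (the description text is of course yours to choose):

codename1.arg.ios.NSCameraUsageDescription=This app uses the camera to take photos and video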
-(void)lazyInitPostAuthorization {
    if ([AVCaptureDeviceDiscoverySession class]) {
        if (direction == FACING_FRONT) {
            device = [AVCaptureDevice
                defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                                  mediaType:AVMediaTypeVideo
                                   position:AVCaptureDevicePositionFront];
        } else {
            device = [AVCaptureDevice
                defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                                  mediaType:AVMediaTypeVideo
                                   position:AVCaptureDevicePositionBack];
        }
    } else {
        if (direction == FACING_FRONT) {
            for (AVCaptureDevice* d in [AVCaptureDevice devices]) {
                if (d.position == AVCaptureDevicePositionFront) {
                    device = d;
                    break;
                }
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
This brings us to the lazyInitPostAuthorization method shown above. It's a complex method, so I'll divide it into two parts for simplicity. The first part deals with detecting
the "device", meaning picking the right camera.
The first if checks whether a specific class exists: iOS 10 deprecated an API and introduced a new one, and if the new API isn't available we fall back to the old one.
If we reach this block we're running on iOS 10 or newer.
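As an aside, checking whether a class resolves to nil is the classic weak-linking idiom; with a recent enough Xcode the same intent can be expressed more directly (a sketch of the alternative, not what this library uses):

if (@available(iOS 10.0, *)) {
    // Use AVCaptureDevice defaultDeviceWithDeviceType:mediaType:position:
} else {
    // Fall back to iterating over [AVCaptureDevice devices]
}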
You'll notice that getting a device on iOS 10 is a single method call, with the only difference between the two branches being the position argument value. Objective-C
method invocations use the argument names as part of the invocation itself.
Strictly speaking I've been referring to Objective-C messages as methods. There is a difference between the two, but it's not something you need to understand as a casual
Objective-C user.
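As a rough mapping for Java developers (the Java rendition in the comment is illustrative, not a real API):

// Objective-C: the argument labels are part of the method name; the full
// selector is defaultDeviceWithDeviceType:mediaType:position:
AVCaptureDevice *d = [AVCaptureDevice
    defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                      mediaType:AVMediaTypeVideo
                       position:AVCaptureDevicePositionFront];
// Conceptual Java equivalent: AVCaptureDevice d = AVCaptureDevice.defaultDevice(type, mediaType, position);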
            device = [AVCaptureDevice
                defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                                  mediaType:AVMediaTypeVideo
                                   position:AVCaptureDevicePositionBack];
        }
    } else {
        if (direction == FACING_FRONT) {
            for (AVCaptureDevice* d in [AVCaptureDevice devices]) {
                if (d.position == AVCaptureDevicePositionFront) {
                    device = d;
                    break;
                }
            }
        } else {
            for (AVCaptureDevice* d in [AVCaptureDevice devices]) {
                if (d.position == AVCaptureDevicePositionBack) {
                    device = d;
                    break;
                }
            }
        }
    }
    // ... common device code ...
}
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
This code runs on devices with an OS older than iOS 10; here we loop over all the devices reported by AVCaptureDevice.
If a device is in the right position (front or back facing) we store it in the device variable and exit the loop.
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput
        deviceInputWithDevice:device error:&error];
    captureSession = [[AVCaptureSession alloc] init];
    [captureSession addInput:input];
    previewLayer = [AVCaptureVideoPreviewLayer
        layerWithSession:captureSession];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [container setLayer:previewLayer];
    [container.layer addSublayer:previewLayer];
    [self updateFlash];
    [self updateZoom];
    [self updateFocus];
    [self updateVideoQuality];
    [captureSession startRunning];
com_codename1_camerakit_impl_CameraNativeAccessImpl.m
The bottom portion of the method is common to iOS 10+ and earlier versions.
Objective-C APIs often accept a pointer to an error variable which they assign when something goes wrong. I didn't check the error here, which I really should.
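A more defensive version of that call would look something like this sketch (the logging is my addition):

NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput
    deviceInputWithDevice:device error:&error];
if (input == nil) {
    // The call failed and the error describes why, e.g. the
    // camera is unavailable or claimed by another session
    NSLog(@"Failed to create the camera input: %@", error);
    return;
}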
The input object is created from the device; we need it to start the session and, at this point, have no further use for it afterwards.
We allocate a new capture session and add the input to it.
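In production code it's also customary to ask the session whether it will accept the input before adding it; a hedged variant of these two lines:

captureSession = [[AVCaptureSession alloc] init];
if ([captureSession canAddInput:input]) {
    [captureSession addInput:input];
} else {
    NSLog(@"The capture session rejected the camera input");
}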
The preview layer shows the session's live video in our view. It's a CALayer, which we can't add to the screen directly.
setLayer is a method I added to CameraKitView; I'll discuss it when covering that class.
This is how you show a CALayer within a UIView: you add it as a sublayer of the view's backing layer.
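Keep in mind that sublayers don't participate in autoresizing, so someone has to keep the layer's frame in sync with the view. I assume that's part of what setLayer and CameraKitView handle; a common pattern looks like this hypothetical view:

#import <UIKit/UIKit.h>

// Hypothetical view that keeps a preview layer sized to its bounds
@interface PreviewHostView : UIView
@property (nonatomic, strong) CALayer *previewLayer;
@end

@implementation PreviewHostView
-(void)layoutSubviews {
    [super layoutSubviews];
    // Resize the sublayer manually whenever the view is laid out
    self.previewLayer.frame = self.bounds;
}
@end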
These update methods let us share common code between initialization and the setter methods: a call to setFlash will trigger updateFlash internally. I'll cover all four
methods soon.
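The pattern is roughly the following sketch (the library's actual field and setter signature may differ):

// The setter stores the new state and reuses the same apply logic
// that lazyInitPostAuthorization runs during startup
-(void)setFlash:(int)mode {
    flash = mode;
    [self updateFlash];
}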
The final line starts the capture session. This seems like a lot, and it is a lot. We've now been through the "heavy lifting" portion of the code, and as you can see it might not
be trivial, but it isn't hard. I didn't know half of these methods when I started out, but that's the great thing about being a programmer in this day and age: we can google it.