CameraX is a new Jetpack camera library that provides a simpler way to access camera features on Android. It consolidates the camera APIs and works on older Android versions back to Lollipop. CameraX includes use cases such as camera preview, image analysis, and image capture. It can also be used with ML Kit for on-device image labeling in just a few lines of code. While still in alpha/beta, CameraX and ML Kit make integrating machine learning into camera apps on Android much easier.
How to develop a mobile app in 8 weeks - PAUG Meetup 24-01-2023, Nicolas HAAN
Last autumn, we had the opportunity to develop a new app for one of our clients, starting from scratch.
The goal? Build a minimal application to put into the hands of dozens of beta testers, in 8 weeks and with 2 developers. Starting from a blank page, we were able to use the latest advances in the Android stack without being constrained by legacy code.
Whether you are a beginner or an experienced developer, you will leave this talk with our key lessons on architecture, as well as on the libraries and tips that make an application easier to maintain and more stable. As a bonus, we will answer the question: "Is a full-Compose app actually cool?"
ML Kit is the SDK that makes machine learning features more straightforward for mobile developers to incorporate into their apps. Since ML Kit is a core ingredient of Firebase, its robust ML APIs can be implemented seamlessly in both Android and iOS apps, just like any other Firebase feature such as Analytics or Crashlytics.
The flexibility ML Kit gives mobile developers to use on-device or cloud-based ML APIs, depending on the use case, is quite significant. In this session we will take a deep dive into the different ready-to-use APIs of ML Kit, which encapsulate the features of Mobile Vision, the Google Cloud Vision API, TensorFlow Lite, and the Neural Networks API.
Until now, camera development on Android has been very painful. Although the Camera2 API solved some of the problems of the original Camera API, writing camera features was still full of difficulties. The recently launched Jetpack CameraX support library aims to make camera app development easier by providing a consistent, easy-to-use API that works on devices running Lollipop (API 21) or above. In this talk, we will review the main use cases of the CameraX API: preview, image analysis, and image capture. We will also explore device-specific extensions such as portrait, HDR, night, and beauty modes.
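To make the preview and image-analysis use cases above concrete, here is a minimal Kotlin sketch, assuming the androidx.camera dependencies, a granted CAMERA permission, and a PreviewView in the layout (the function name startCamera is illustrative, not from the talk):

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

fun startCamera(activity: AppCompatActivity, previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()

        // Use case 1: preview, rendered into a PreviewView from the layout.
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }

        // Use case 2: image analysis, receiving one frame at a time.
        val analysis = ImageAnalysis.Builder()
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build().also { ia ->
                ia.setAnalyzer(ContextCompat.getMainExecutor(activity)) { imageProxy ->
                    // Run ML Kit or custom processing on imageProxy here.
                    imageProxy.close() // must close to receive the next frame
                }
            }

        // Bind both use cases to the activity's lifecycle.
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            activity, CameraSelector.DEFAULT_BACK_CAMERA, preview, analysis
        )
    }, ContextCompat.getMainExecutor(activity))
}
```

The lifecycle binding is what makes CameraX simpler than Camera2: the library opens and releases the camera as the activity starts and stops, with no manual state machine.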
Create an image classifier with Azure Custom Vision .NET SDK - Luis Beltran
Azure Custom Vision allows you to build an image classifier that adjusts to your needs without needing an AI/ML background.
In this session we will learn about Custom Vision and how you can train an image classification model using its .NET SDK. A published model can then be exported to a variety of formats, such as TensorFlow and Core ML, which can be integrated into Android and iOS mobile apps to add object recognition capabilities.
In this session, the Custom Vision service will be explored and demonstrated with: a) a trained model using the COIL-100 image dataset and the Custom Vision .NET SDK. b) A mobile image classifier app which makes use of the model in a) in both online and offline scenarios
Slides from a talk in a seminar series I gave at WebRatio in January 2014.
I implemented many of the best practices and advice from this presentation in a generic app template, available here: https://github.com/iivanoo/cordovaboilerplate
Get to know the Wikitude SDK in detail. Learn what the Wikitude SDK offers and how it is structured, and see the advantages of the Native API vs. the JavaScript API.
Augmented World Expo (AWE) is back for its seventh year in our largest conference and expo featuring technologies giving us superpowers: augmented reality (AR), virtual reality (VR) and wearable tech. Join over 4,000 attendees from all over the world including a mix of CEOs, CTOs, designers, developers, creative agencies, futurists, analysts, investors, and top press in a fantastic opportunity to learn, inspire, partner, and experience first hand the most exciting industry of our times. See more at http://AugmentedWorldExpo.com
Developing an object detector solution with Azure Custom Vision .NET SDK - Luis Beltran
Azure Custom Vision is a cognitive service that lets you build, deploy, and improve image classifiers that adapt to your needs, without a background in advanced AI techniques.
One of Custom Vision functionalities is Object Detection, which both identifies a target element in a picture and returns its location (coordinates) in the image. This is particularly useful in scenarios where there are several objects but only one of them is relevant.
In this presentation, the Custom Vision service will be described, with a focus on how to deliver an object detection model using the .NET SDK. This model can be accessed by applications either online (via the Prediction API) or offline (by exporting it to a platform such as TensorFlow), and it will be demonstrated in a mobile application as well.
Have you ever worked on a website that displayed data dynamically? Are you tired of putting HTML markup inside JavaScript?
Chances are you’ve looked into using what’s known as an MV* framework. These kinds of frameworks aim to reduce the effort of binding JavaScript and JSON data to the user’s screen.
Learn more about 3 of the most popular frameworks being talked about in the developer community: Knockout (by a Microsoft employee), Angular (by Google), and Ember (by a Rails and jQuery contributor).
We’ll compare and contrast these frameworks and how they ranked against our criteria for building a client-side-heavy application that updates its data in real time.
Infinum Android Talks #14 - Data binding to the rescue... or not (?) by Krist... - Infinum
We're checking out the new data binding library announced at the last Google I/O. We'll go in depth on data binding: goals, benefits, and drawbacks. Less code should mean fewer bugs - in theory.
Exploring Google (Cloud) APIs with Python & JavaScript - wesley chun
Half-hour tech talk given at user groups or technical conferences, introducing developers to integrating with Google (Cloud) APIs from Python or JavaScript.
ABSTRACT
Want to integrate Google technologies into the web and mobile apps that you build? Google has various open source libraries and developer tools that help you do exactly that. Users who have run into roadblocks like authentication, or who found our APIs confusing or challenging, are welcome to come and make these non-issues moving forward. Learn how to leverage the power of Google technologies in the next apps you build!
Building an Android app with Jetpack Compose and Firebase - Marina Coelho
This presentation covers how to build an Android app with Jetpack Compose and Firebase products, using a real To-do application as an example. The topics are:
- Quick overview of what Compose and Firebase are;
- Setting up MVVM architecture in a Compose application;
- Handling async methods provided by Firebase in a Compose application;
- Using Firebase Authentication to manage users;
- Using Firestore to store data remotely.
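On the async-handling point above, one common pattern is to wrap Firebase's Task-based calls in coroutines from a ViewModel, so Compose can simply observe state. A minimal sketch, assuming the firebase-auth-ktx and kotlinx-coroutines-play-services dependencies (SignInViewModel and its state values are illustrative names, not from the talk):

```kotlin
import androidx.compose.runtime.mutableStateOf
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import com.google.firebase.auth.ktx.auth
import com.google.firebase.ktx.Firebase
import kotlinx.coroutines.launch
import kotlinx.coroutines.tasks.await

class SignInViewModel : ViewModel() {
    // Compose reads this state and recomposes when it changes.
    val uiState = mutableStateOf("signed out")

    fun signIn(email: String, password: String) {
        viewModelScope.launch {
            uiState.value = "loading"
            try {
                // await() suspends until the Task completes, replacing
                // nested addOnSuccessListener/addOnFailureListener callbacks.
                val result = Firebase.auth
                    .signInWithEmailAndPassword(email, password)
                    .await()
                uiState.value = "signed in as ${result.user?.email}"
            } catch (e: Exception) {
                uiState.value = "error: ${e.message}"
            }
        }
    }
}
```

Launching in viewModelScope keeps the in-flight call tied to the ViewModel's lifetime, so a recomposition or configuration change does not restart it.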
A brief description and short example of Android Data Binding from version 6 and above, along with a brief description of Butterknife (by Jake Wharton).
Philipp Nagele (CTO, Wikitude) - An Insider Deep-Dive into the Wikitude SDK - AugmentedWorldExpo
Philipp Nagele (CTO, Wikitude GmbH) gives an Insider Deep-Dive into the Wikitude SDK
An introduction to the many options of the Wikitude SDK, with a deeper look into advanced features like the Plugins API and how to combine third-party libraries with the Wikitude SDK. We will look into the general architecture of the SDK and deep-dive into a few outstanding (and maybe not so well-known) features of the SDK.
To avoid tightly coupling layers to each other, we tend to adopt architectures for our apps. This talk covers a simple layered architecture, focusing on the data and repository layers, which sit below the UI-facing layers such as the ViewModel.
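As a sketch of that layering (all names here - UserRepository, UserApi, UserDao - are illustrative, not from the talk): the repository hides the data sources behind one entry point that the ViewModel consumes, so the UI layer never talks to the network or database directly.

```kotlin
data class User(val id: String, val name: String)

// Data sources: a remote API and a local cache, each behind an interface
// so the repository can be tested with fakes.
interface UserApi { suspend fun fetchUser(id: String): User }
interface UserDao {
    suspend fun load(id: String): User?
    suspend fun save(user: User)
}

// The repository is the single entry point the ViewModel depends on;
// it decides between cache and network, so the UI layer never does.
class UserRepository(private val api: UserApi, private val dao: UserDao) {
    suspend fun getUser(id: String): User {
        dao.load(id)?.let { return it }                 // cache hit
        return api.fetchUser(id).also { dao.save(it) }  // fetch and cache
    }
}
```

Because every boundary is an interface, each layer only knows the one directly beneath it, which is exactly the coupling the talk sets out to avoid.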
This slide deck covers several topics released this year during the 11 Weeks of Android, such as App Startup, Hilt, Navigation, and DataStore.
It not only gives an overview of each but also shows simple use cases.