Introducing iOS Core ML
chunpinglai0211@gmail.com

2019/08/22
What is Core ML?
• Apple framework, supported on

• iOS 11.0+

• macOS 10.13+

• Core ML provides a unified representation for all machine learning models.

• Official website
How does Core ML work?
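At its core, the workflow is: bundle a `.mlmodel`, let Xcode generate a Swift class for it, then run predictions. A minimal sketch, assuming a bundled image classifier named `MobileNetV2` (Xcode generates this class from the `.mlmodel` file):

```swift
import CoreML
import Vision

// Classify a CGImage with a bundled Core ML model via Vision.
// "MobileNetV2" is the Xcode-generated class for the model in this sketch.
func classify(_ image: CGImage) throws {
    let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Top label and its confidence in [0, 1].
        print("\(top.identifier): \(top.confidence)")
    }

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```

Vision handles image scaling and color conversion to match the model's input, which is why it is the usual front end for image models.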
Core ML Version
• Core ML 1 supports iOS 11+

• Core ML 2 supports iOS 12+

• Covers 88% of users

• Core ML 3 supports iOS 13+ (Beta)

• On-device Training

• Support for 100+ model layer types
Vision.framework
Vision
• Apple framework, iOS 11.0+

• Functions 

• Rectangle

• Face

• Barcode

• Text 

• Horizon

• Official website
Vision - Functions
Text
Rectangle
Horizon
Vision - Face
• Rectangles

- Finds faces within an image.

• Landmarks

- Finds facial features (such as the
leftEye, rightEye and nose) in an image.

• Implementation sample
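The rectangles and landmarks functions above can be exercised with a single request. A hedged sketch (a landmarks request also reports each face's bounding rectangle):

```swift
import Vision

// Detect faces and their landmarks (leftEye, rightEye, nose, ...) in a CGImage.
func detectFaces(in image: CGImage) throws {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is normalized to [0, 1], origin at bottom-left.
            print("face at \(face.boundingBox)")
            if let leftEye = face.landmarks?.leftEye {
                print("left eye points: \(leftEye.pointCount)")
            }
        }
    }
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```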
Models
Core ML Resources
• Turi Create (Apple)

• Build your own custom machine learning models with Turi Create. This open-source Python library supports model training on macOS and Linux.

• TensorFlow

• tf-coreml: TensorFlow (TF) to CoreML converter.

• Core ML Tools

• Convert existing models to .mlmodel format from popular machine learning tools
including Keras, Caffe, scikit-learn, libsvm, and XGBoost.

• ONNX, Apache MXNet, IBM Watson Services…etc.

• Official website
Apple provides Models
• Apple provides models

• DepthPrediction, MobileNetV2, YOLOv3, DeeplabV3 …etc.
Create ML
• Functions

• Build models 

• Train models

• Preview test data results

• Official website
Create ML - Usage
• Usage

• Playground

• Mac App (New)

- macOS Catalina or later.
Create ML - Training type
• Image

• Text (Natural language)

• Sound (Beta)

• Activity (Beta) 

- Classifies activities based on motion sensor data

• Official website
Core ML Usage
Image Classifier
• Use Playground

• Prediction Results

• Label name

• Confidence
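Training an image classifier in a Playground takes only a few lines of Create ML. A sketch, assuming training images are organized in folders named after their labels (e.g. `Training/cat`, `Training/dog`); the paths are placeholders:

```swift
import CreateML
import Foundation

// Train an image classifier from labeled directories (macOS Playground).
let trainingDir = URL(fileURLWithPath: "/path/to/Training")
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Inspect accuracy, then export the trained .mlmodel for use in an app.
print("training accuracy:",
      (1.0 - classifier.trainingMetrics.classificationError) * 100, "%")
try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))
```

Predictions on the exported model then return a label name plus a confidence, as listed above.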
Recognizing Objects in Live Capture
• Identify objects in real-time video. 

• iOS12+

• Vision.framework

• Sample code

• Use VNCoreMLRequest to recognize a custom model

• Results

• VNRecognizedObjectObservation

- boundingBox

- identifier (Classification label)

- confidence (The level of confidence in the observation's accuracy, normalized to [0, 1.0].)
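A sketch of the request setup; `ObjectDetector` is a placeholder name for your own Xcode-generated model class, not a real API:

```swift
import CoreML
import Vision

// Build a Vision request that runs a custom object-detection model
// and reports VNRecognizedObjectObservation results.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let model = try VNCoreMLModel(
        for: ObjectDetector(configuration: MLModelConfiguration()).model)
    return VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation]
        else { return }
        for object in results {
            // Each observation carries ranked classification labels.
            guard let best = object.labels.first else { continue }
            print(best.identifier, best.confidence, object.boundingBox)
        }
    }
}

// In captureOutput(_:didOutput:from:), feed each camera frame to the request:
// try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
//     .perform([detectionRequest])
```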
Tracking the User’s Face in Real Time
• Detect and track faces from the selfie-cam feed in real time.

• iOS12+

• Vision.framework

• Sample code

• Use

• VNDetectFaceRectanglesRequest

• VNDetectFaceLandmarksRequest 

• Results

• VNFaceObservation

- landmarks

- boundingBox
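For video, the per-frame detection is usually driven by a `VNSequenceRequestHandler`, which keeps state between frames. A sketch (the `.leftMirrored` orientation is an assumption for a front-camera feed):

```swift
import Vision
import CoreVideo

// Run face-landmark detection on each frame of the selfie-camera feed.
let sequenceHandler = VNSequenceRequestHandler()

func processFrame(_ pixelBuffer: CVPixelBuffer) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let face = (request.results as? [VNFaceObservation])?.first
        else { return }
        // VNFaceObservation exposes both boundingBox and landmarks.
        print("face:", face.boundingBox,
              "has landmarks:", face.landmarks != nil)
    }
    try? sequenceHandler.perform([request], on: pixelBuffer,
                                 orientation: .leftMirrored)
}
```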
Tracking Multiple Objects or Rectangles in Video
• Apply Vision algorithms to track objects or rectangles throughout a video.

• iOS12+

• Vision.framework

• Sample code

• Use

• VNTrackObjectRequest

• VNTrackRectangleRequest
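Tracking works by seeding a request with an initial observation and feeding each frame's result back in. A minimal sketch for a single object:

```swift
import Vision
import CoreVideo

// Track one previously detected object across video frames.
final class ObjectTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation

    // Seed with an observation from an earlier detection request,
    // or with VNDetectedObjectObservation(boundingBox:) for a manual rect.
    init(initial: VNDetectedObjectObservation) {
        lastObservation = initial
    }

    func track(in pixelBuffer: CVPixelBuffer) {
        let request = VNTrackObjectRequest(
            detectedObjectObservation: lastObservation) { [weak self] req, _ in
            guard let tracked = req.results?.first
                    as? VNDetectedObjectObservation else { return }
            // Feed this frame's result into the next frame's request.
            self?.lastObservation = tracked
            print("tracked box:", tracked.boundingBox)
        }
        request.trackingLevel = .accurate
        try? sequenceHandler.perform([request], on: pixelBuffer)
    }
}
```

`VNTrackRectangleRequest` follows the same pattern, seeded with a `VNRectangleObservation` instead.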
Image Segmentation
DeepLab
Deeplab MobileNetv2
• Sample code from GitHub

• Test Results
DeeplabV3.mlmodel
• Uses an Apple-provided model, iOS 12+

• DeepLab is a state-of-the-art deep learning model for semantic image segmentation, where the goal is to assign semantic labels (e.g., person, dog, cat and so on) to every pixel in the input image.

★Inputs

• Image (Color 513x513)

★Outputs

• Array (Int32 513x513) 

- Array of integers of the same size as the input image, where each value represents the class of the corresponding pixel.

- e.g. [0,0,0, …, 8,8,8, …, 0,0,0] (truncated)
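Reading that output means walking the 513×513 `MLMultiArray`. A sketch; the class index 15 ("person") follows the PASCAL VOC label set DeepLab is trained on, but verify indices against your downloaded model:

```swift
import CoreML

// Build a boolean "person" mask from DeepLabV3's Int32 segmentation output.
func personMask(from output: MLMultiArray) -> [[Bool]] {
    let height = output.shape[0].intValue   // 513
    let width  = output.shape[1].intValue   // 513
    var mask = [[Bool]](repeating: [Bool](repeating: false, count: width),
                        count: height)
    for y in 0..<height {
        for x in 0..<width {
            // Class 15 is "person" in the PASCAL VOC label set (assumption).
            let label = output[[y, x] as [NSNumber]].int32Value
            mask[y][x] = (label == 15)
        }
    }
    return mask
}
```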
Update model
How to update models?
1. Download the .mlmodel file from a URL (Official website)

(1) The model definition file (.mlmodel) must be on the device before it's compiled.

(2) Compile. (The model has the same capabilities as a model bundled with the app.)

2. MLUpdateTask
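Steps (1) and (2) can be sketched as follows; the Application Support destination is a choice, not a requirement:

```swift
import CoreML
import Foundation

// Compile a downloaded .mlmodel into a runnable .mlmodelc and load it.
func loadDownloadedModel(at modelURL: URL) throws -> MLModel {
    // compileModel(at:) writes the compiled model to a temporary location;
    // move it somewhere permanent so it can be reused across launches.
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let permanentURL = try FileManager.default
        .url(for: .applicationSupportDirectory, in: .userDomainMask,
             appropriateFor: nil, create: true)
        .appendingPathComponent(compiledURL.lastPathComponent)
    _ = try? FileManager.default.replaceItemAt(permanentURL,
                                               withItemAt: compiledURL)
    return try MLModel(contentsOf: permanentURL)
}
```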
MLUpdateTask
• MLUpdateTask - A task that updates a model with additional training data.

• iOS13+ 

• Run in background

• On-device training

• Personalize

• Privacy

• No server

• Official website, implementation sample
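A minimal sketch of the update flow, assuming an updatable model already compiled on device and an `MLBatchProvider` built from user-supplied examples:

```swift
import CoreML

// Retrain an updatable model on-device with additional training data (iOS 13+).
func update(modelAt compiledURL: URL,
            with trainingData: MLBatchProvider,
            completion: @escaping (MLModel) -> Void) throws {
    let task = try MLUpdateTask(
        forModelAt: compiledURL,
        trainingData: trainingData,
        configuration: nil,
        completionHandler: { context in
            // Persist the personalized model so the update survives relaunch.
            try? context.model.write(to: compiledURL)
            completion(context.model)
        })
    task.resume()
}
```

All of this runs on the device; the training data never leaves it, which is what gives the privacy and no-server properties above.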
MLUpdateTask Sample
• WWDC 2019 demo: personalization via on-device training
MLParameterKey
• Tune model training with MLParameterKey

• .epochs

• .learningRate

• .eps

• .miniBatchSize

• .momentum

• .beta1

• .beta2
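These keys are set on the configuration passed to the update task. A sketch (the values are illustrative; only keys the model declares as updatable parameters are accepted):

```swift
import CoreML

// Override training hyperparameters before starting an MLUpdateTask.
let config = MLModelConfiguration()
config.parameters = [
    .epochs:        5,       // number of passes over the training data
    .learningRate:  0.001,
    .miniBatchSize: 8
]
// Pass `config` as the configuration argument of
// MLUpdateTask(forModelAt:trainingData:configuration:completionHandler:).
```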
Reference
• Turi Create (Python package)

• Turi Create Intro 

https://www.appcoda.com.tw/coreml-turi-create/

• Turi Create Deploying to Core ML 

https://apple.github.io/turicreate/docs/userguide/object_detection/export-coreml.html

• Core ML Tools (Python package): converter tools for Core ML

https://www.appcoda.com.tw/core-ml-tools-conversion/

• Custom layer - Convert Your Network to Core ML

https://developer.apple.com/documentation/coreml/core_ml_api/creating_a_custom_layer

• What’s new in Core ML 3

https://heartbeat.fritz.ai/whats-new-in-core-ml-3-d108d352e50a
End
