This document provides guidance on creating a workout tracking app for the Apple Watch using HealthKit. It outlines how to use the HealthKit framework to access fitness data, start and stop workout sessions, save workout data, and handle common issues like debugging on device and infrequent data updates. The main steps are initializing HealthKit, requesting authorization, querying for data during workout sessions, and saving workout objects on completion. Challenges discussed include debugging directly on the Apple Watch, keeping the app in the foreground during workouts, and managing battery usage.
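The flow described above can be sketched in Swift for watchOS. This is a minimal illustration, not code from the document; the quantity types and activity type chosen here are assumptions.

```swift
import HealthKit

let healthStore = HKHealthStore()

func startWorkout() {
    // 1. Request authorization for the types the app will read and write.
    let typesToShare: Set = [HKObjectType.workoutType()]
    let typesToRead: Set = [
        HKObjectType.quantityType(forIdentifier: .heartRate)!,
        HKObjectType.quantityType(forIdentifier: .activeEnergyBurned)!
    ]
    healthStore.requestAuthorization(toShare: typesToShare, read: typesToRead) { success, _ in
        guard success else { return }

        // 2. Configure and start a workout session.
        let configuration = HKWorkoutConfiguration()
        configuration.activityType = .running
        configuration.locationType = .outdoor

        do {
            let session = try HKWorkoutSession(healthStore: healthStore,
                                               configuration: configuration)
            session.startActivity(with: Date())
            // 3. While the session runs, query for live samples
            //    (e.g. with HKAnchoredObjectQuery), then end the session
            //    and save an HKWorkout object on completion.
        } catch {
            // Handle session configuration errors.
        }
    }
}
```

Running a workout session is also what keeps the app active in the foreground on the watch, which relates to the foreground and battery challenges mentioned above.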
The document describes a hackathon project called "Company Scouter" that calculates the overall "power level" of a company based on the aggregated Klout scores of its members. The iOS app allows users to input a company name and see the resulting power level calculated from data retrieved via APIs from Crunchbase and Klout. It also notes that four open source repositories were created during the hackathon to support the app.
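The aggregation the app performs could look like the following sketch. The `Member` type and `powerLevel(of:)` function are hypothetical names for illustration, not from the project.

```swift
// Hypothetical model of a company member with a Klout score (0-100).
struct Member {
    let name: String
    let kloutScore: Double
}

// Aggregate the members' Klout scores into one company-wide power level.
func powerLevel(of members: [Member]) -> Double {
    members.reduce(0) { $0 + $1.kloutScore }
}

let team = [Member(name: "Alice", kloutScore: 62),
            Member(name: "Bob", kloutScore: 48)]
print(powerLevel(of: team))  // 110.0
```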
Core ML 3 was announced at WWDC with new features for the Core ML API. The Core ML framework allows importing machine learning models in the .mlmodel format, including updates to the supported model types and operations defined in its protobuf files. Core ML Tools was also updated, with over 3500 new models and operations supported for conversion between Core ML and other frameworks such as TensorFlow and PyTorch.
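Loading an imported model at runtime is a one-liner with the Core ML API. In this sketch, "Model.mlmodelc" is a placeholder name for a compiled model bundled with the app.

```swift
import CoreML

// Locate the compiled model (.mlmodelc) in the app bundle.
guard let url = Bundle.main.url(forResource: "Model", withExtension: "mlmodelc") else {
    fatalError("Model not found in bundle")
}

do {
    let model = try MLModel(contentsOf: url)
    // Inspect the inputs the model expects.
    print(model.modelDescription.inputDescriptionsByName.keys)
} catch {
    print("Failed to load model: \(error)")
}
```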
1. The presenter compared the graphics rendering performance of Metal to UIImageView to learn about GPU usage.
2. Metal initially appeared 10-20x faster than UIImageView for rendering images, but after the measurement code was analyzed and corrected, Metal was found to be slower.
3. Two key problems were identified with the Metal implementation: processing on the CPU was blocking the GPU, and texture loading was a bottleneck.
4. Optimizations including combining operations, caching textures, and ensuring resources were in GPU memory improved the Metal performance.
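The texture-caching optimization mentioned above can be sketched as follows: load each texture once, keep it in GPU-private storage, and reuse it on later frames. The class name and structure are illustrative, not from the talk.

```swift
import Metal
import MetalKit

// Cache loaded textures so repeated frames skip the texture-loading bottleneck.
final class TextureCache {
    private let loader: MTKTextureLoader
    private var cache: [URL: MTLTexture] = [:]

    init(device: MTLDevice) {
        loader = MTKTextureLoader(device: device)
    }

    func texture(for url: URL) throws -> MTLTexture {
        if let cached = cache[url] { return cached }  // cache hit: no reload
        let texture = try loader.newTexture(URL: url, options: [
            // Keep the texture in GPU-only memory so the render pass
            // does not stall on CPU-accessible storage.
            .textureStorageMode: MTLStorageMode.private.rawValue
        ])
        cache[url] = texture
        return texture
    }
}
```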
This document discusses implementing deep learning on iOS using various frameworks. It provides an overview of Metal Performance Shaders (MPSCNN), Accelerate (BNNS), Core ML, and Vision. It then details the three-step process for implementing a deep learning model with MPSCNN: 1) create the model, 2) implement the network, and 3) perform inference. Examples of logo detection and improved performance are shown. Core ML and Vision provide easier implementations compared to MPSCNN, which requires Metal knowledge. BNNS may be better for small networks due to reduced CPU-GPU communication costs.
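The easier Vision/Core ML path mentioned above can be sketched like this. `LogoClassifier` stands in for a model class generated by Xcode from an .mlmodel file; it is an assumed name, not from the talk.

```swift
import Vision
import CoreML

func detectLogo(in image: CGImage) throws {
    // Wrap the generated Core ML model for use with Vision.
    let coreMLModel = try LogoClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    // Vision handles scaling and color conversion before inference.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

Compared with MPSCNN, no Metal command queues or textures are managed by hand here, which is the trade-off the document describes.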
The document discusses client-side deep learning and introduces MPSCNN, a library that allows running convolutional neural networks on iOS devices using Metal Performance Shaders. MPSCNN can import trained models from frameworks like TensorFlow and run them to perform tasks like object detection on images at 60 frames per second. Client-side deep learning could enable new mobile applications in areas like self-driving cars, AI assistants, and cancer detection by taking advantage of on-device processing power.
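MPSCNN layers are encoded like any other Metal Performance Shaders kernel. The sketch below shows that encoding pattern with a simple image filter standing in for a convolution layer; the texture setup is assumed to exist elsewhere.

```swift
import Metal
import MetalPerformanceShaders

func blur(source: MTLTexture, destination: MTLTexture,
          device: MTLDevice, queue: MTLCommandQueue) {
    // Not every GPU supports Metal Performance Shaders.
    guard MPSSupportsMTLDevice(device),
          let commandBuffer = queue.makeCommandBuffer() else { return }

    // An MPSCNNConvolution would be encoded the same way as this kernel.
    let kernel = MPSImageGaussianBlur(device: device, sigma: 2.0)
    kernel.encode(commandBuffer: commandBuffer,
                  sourceTexture: source,
                  destinationTexture: destination)

    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()  // for illustration; avoid blocking in production
}
```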
I gave this talk in Jerusalem, Israel, and Palestine in 2016, with the following schedule:
- July 25, 2016 Azrieli College, Jerusalem
- July 26, 2016 Google Campus - Tel Aviv, Israel
- July 27, 2016 SigmaLabs - Tel Aviv, Israel
- July 28, 2016 Birzeit University - Palestine
These events were hosted by the Embassy of Japan in Israel.
[Description]
While introducing Japanese technology products such as WHILL, Moff, and BONX, whose applications Mr. Tsutsumi was involved in developing, he will talk about how BLE, a key technology of IoT, is used in those products.
Practical Core Bluetooth in IoT & Wearable projects @ AltConf 2016
Shuichi Tsutsumi
In recent years, "IoT" and "wearable" have become buzzwords, so you might be interested in building hardware products. But learning how to develop electronic circuits, mechanical systems, or embedded systems from scratch is difficult.
However, iOS developers can contribute to hardware product projects with knowledge of Core Bluetooth / Bluetooth Low Energy (BLE), even if they are not familiar with the hardware layer.
In this session, you can learn the basics of Core Bluetooth / BLE (what it is, why we use it, and how it works) and practical knowledge for building apps for hardware products (how to design the apps, how to test without actual hardware prototypes, troubleshooting tips, and how the apps are reviewed by Apple), which I learned through actual IoT/wearable projects.
Thanks to the real-world examples, this session should be interesting and understandable even if you are not familiar with, or have no particular interest in, Core Bluetooth.
Practical Core Bluetooth in IoT & Wearable projects @ UIKonf 2016
Shuichi Tsutsumi
In recent years, "IoT" and "wearable" have become buzzwords, so many people might be interested in building hardware products. But learning how to develop electronic circuits, mechanical systems, or embedded systems from scratch is difficult. However, iOS developers can contribute to hardware product projects with knowledge of Core Bluetooth / Bluetooth Low Energy (BTLE), even if they are not familiar with the hardware layer. In this session, he will introduce BTLE, show simple examples of Core Bluetooth, and share knowledge from his experience developing more than 10 apps for IoT and wearable products.
What is Bluetooth Low Energy? Why use it?
Simple examples of how to communicate using Core Bluetooth
Which parts were my responsibility in the projects? Communicating with the firmware engineers.
Designing GATT
Designing the behavior of the app in background
Limitations in the background: what is possible and what is not?
State Preservation and Restoration
Developing without hardware prototypes
BTLE module developer kits
Prototyping tools
Build emulator apps
Troubleshooting
Debugging tools
Common cases: can't find, can't connect, can't send or receive data
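The central-role basics outlined above (scan, connect, discover services) can be sketched with Core Bluetooth as follows. The service UUID here is a placeholder (the standard Heart Rate service), not one from the projects in the talk.

```swift
import CoreBluetooth

final class Central: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    private var manager: CBCentralManager!
    private var peripheral: CBPeripheral?
    private let serviceUUID = CBUUID(string: "180D")  // placeholder: Heart Rate service

    override init() {
        super.init()
        manager = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        // Scanning is only allowed once the radio is powered on.
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [serviceUUID], options: nil)
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        self.peripheral = peripheral  // keep a strong reference or the connection drops
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.delegate = self
        // Next: discover characteristics, then read or subscribe to notifications.
        peripheral.discoverServices([serviceUUID])
    }
}
```

Forgetting the strong reference to the discovered peripheral is a classic cause of the "can't connect" case listed above.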