Android Things is Google's latest foray into the Internet of Things. Android Things lets you build professional, mass-market products on a trusted platform, without previous knowledge of embedded system design. IoT devices need testing too.
We will talk about how to architect your Android Things applications to enable testing and explore best practices to keep your codebase clean and your IoT devices solid.
The second half of the talk will be a deeper dive into controlling Android Things peripherals. We'll explain what user drivers are, how they work, how to use them, and most importantly: how you can create and test a new driver from scratch that can be used from Android Things to interact with new peripherals.
3. Cameras
Gateways
HVAC Control
Smart Meters
Point of Sale
Inventory Control
Interactive Ads
Vending Machines
Security Systems
Smart Doorbells
Routers
Energy Monitors
Asset Tracking
Fleet Management
Driver Assist
Predictive Service
Ideal for powerful, intelligent devices on the edge that need to be secure.
10. Peripheral I/O
Connect your Android Things device to external hardware components (usually called peripherals)
Various protocols can be used (depending on the hardware):
General-Purpose Input/Output (GPIO)
Pulse Width Modulation (PWM)
Inter-Integrated Circuit (I2C)
Universal Asynchronous Receiver-Transmitter (UART)
Serial Peripheral Interface (SPI)
11. Peripheral I/O
(Same as the previous slide, plus:)
Native Peripheral Input/Output (NPIO)
41. Input peripherals / Sensors
“A sensor detects or measures a physical property and responds to it.”
42. Output peripherals / Actuators
“An actuator is responsible for moving or controlling a mechanism or system, or presenting data to the outside world.”
54. Sensor
“A sensor detects or measures a physical property and responds to it.”
Model: business logic
55. Actuator
“An actuator is responsible for moving or controlling a mechanism or system, or presenting data to the outside world.”
View: user interface
Android Things is an extension of the Android platform for IoT and embedded devices. It joins the family of Mobile, Wear, TV, and Auto to bring Android to new form factors. Android Things addresses the following challenges:
Developer resources are often sunk into building a stable software stack with the necessary features.
Security is hard, and the scale of IoT makes it critical to get right from the beginning.
Scaling to production volumes can be cost prohibitive for small- to mid-sized companies.
With Android Things, developers can leverage the best-in-class multimedia, connectivity, and rich UI to simplify and accelerate the development of IoT applications.
Target IoT Applications
Android Things is ideal for powerful devices with intelligence at the edge that need to securely connect to cloud services.
Good examples are devices that locally aggregate data from multiple sensors or other edge devices for further processing or display.
Other ideal applications are those that require image or audio processing at the edge, and upload the collected or processed information to a cloud service for analytics or machine learning.
Easy and Secure Deployment
Images are built and signed by Google through the IoT Developer Console.
Google manages the framework and hardware integration layers. Developers only need to provide their apps, drivers, and configuration files.
Devices receive security updates automatically, even if the device manufacturer doesn't provide an updated image or abandons the device.
Devices are protected against corrupted image downloads through verified boot. This blocks the device from booting into an unknown state with an image whose signature and contents cannot be verified. Rollback protection is provided by the A/B update mechanism, which guarantees the system always has a known good state to boot into.
Scaling to Production
Android Things hardware uses a System-on-Module (SoM) architecture. The system is designed around a core computing module that contains the CPU, memory, networking and other core components in a very small package. The SoM is attached to a larger breakout board during prototyping and development to connect I/O.
When moving into production, the breakout board is replaced with a board customized for the application. This reduces costs and simplifies hardware development because the complex hardware design is encapsulated in the SoM. Carrier boards with low-speed I/O are much less expensive to produce in low volumes.
The Google-managed software layers create a stable Board Support Package (BSP) that developer apps can rely on. This layer of separation makes your code portable to other supported hardware platforms if the needs of your design change.
This is the software stack for a traditional Android mobile device.
- Kernel and libraries are primarily focused on enabling hardware driver support.
- Application framework provides a rich services API for apps.
- Applications provide user-facing features for general use cases.
Android Things removes most of the user-facing applications and visual framework components.
The UI toolkit portions of the framework remain available to apps for use but it’s up to you as a developer to decide whether you use it or not.
That means displays are optional in Android Things, so you have to consider alternate UIs for the user to interact with your IoT app, such as voice commands or some sort of remote controller.
Even without a display, activities are still the primary component of an Android Things app. This is because the framework delivers all input events to the foreground activity, which has focus. These events cannot be received through any other app component, including a service.
Optional displays also means some APIs are disabled or have modified behavior. Most APIs that include showing a dialog or view to the user, such as authentication and sign-on, are not available. This includes Android system notifications and runtime granted permissions. Requested app permissions are granted at install-time and runtime permission checks will always return true for permissions listed in the app manifest.
------
Android Things supports the same UI toolkit available in other Android form factors. Applications presenting a graphical UI have full control over the display; the system status bar and navigation buttons are not present.
Consider alternate forms of user input for your app. Speech recognition, game controllers, and sensors are all great examples of mechanisms to provide user interaction without a traditional touch display. The Android SDK already provides support for all of these.
Now that we've seen an overview of Android Things, let's look at the technical side. We'll start with the list of peripheral input/output protocols that Android Things supports.
Android Things has Java APIs to communicate with peripherals using GPIO, PWM, SPI, I2C, and UART.
From a user perspective it doesn't matter which protocol is used, but it will affect you as a developer, even though some of the things I'll explain are implementation details that the Android Things APIs abstract away for you.
We'll discuss this in more detail later on.
Since Android Things Developer Preview 2 (released last February), all of these protocols can also be accessed from the NDK. This means developers can now write apps using native code (C/C++) and use those APIs to communicate with peripherals over the protocols mentioned above.
GPIO: this is the simplest form of peripheral communication. You can use it to read from a peripheral (input) and write to a peripheral (output).
Each physical pin represents either an input or an output in your code,
and it can take only two values: high and low (or 1 and 0).
Examples of GPIO are:
PIR (movement) sensor (when movement is detected, the GPIO pin reads high, or 1)
Button (when the user presses it, the pin reads high)
LEDs (writing high, or 1, lights up the LED, and it stays lit until we write low)
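The single-pin, two-value model above can be sketched in plain Java. `GpioPin` here is a hypothetical stand-in, not the Android Things `Gpio` class, but the real API follows the same idea: configure a direction, then read or write a boolean level.

```java
/** A single GPIO pin: either an input or an output, carrying high/low. */
class GpioPin {
    enum Direction { IN, OUT }

    private final Direction direction;
    private boolean level; // true = high (1), false = low (0)

    GpioPin(Direction direction) { this.direction = direction; }

    /** Read the current level of an input pin (e.g. a button or PIR sensor). */
    boolean getValue() {
        if (direction != Direction.IN) throw new IllegalStateException("not an input pin");
        return level;
    }

    /** Drive an output pin high or low (e.g. to light or dim an LED). */
    void setValue(boolean high) {
        if (direction != Direction.OUT) throw new IllegalStateException("not an output pin");
        level = high;
    }

    /** Current line level, regardless of direction. */
    boolean isHigh() { return level; }

    /** Test hook standing in for the physical world changing an input's level. */
    void simulateExternalLevel(boolean high) { level = high; }
}
```

Mirroring a button press onto an LED is then one read and one write, which is all GPIO ever gives you.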
PWM is similar to GPIO in that it uses one physical pin, but the difference is that PWM is output-only (we can only send data to a peripheral).
Using PWM lets you control more complex devices that can take a wider range of values, rather than just 1 or 0.
The way it works is that you set a frequency, which defines how often the pulse repeats, and then set the duty cycle, which is the width of the pulse within each frequency window; the duty cycle is the value we write to our peripheral device.
Single-colour LED strip (using PWM to set a duty cycle of 25% would set the brightness of all LEDs to 25%)
Buzzer (set the pitch of the buzzer)
Door lock (control a servo motor via PWM to lock/unlock the door)
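The frequency/duty-cycle relationship is simple arithmetic, and writing it down makes the two numbers concrete. `PwmSignal` is an illustrative class, not a framework API: a 50 Hz signal, for example, has a 20 ms window, and a 25% duty cycle holds the pin high for 5 ms of it.

```java
/** The two numbers that define a PWM signal (illustrative, not a framework API). */
class PwmSignal {
    private final double frequencyHz;      // how often the pulse repeats
    private final double dutyCyclePercent; // share of each window the pin is high

    PwmSignal(double frequencyHz, double dutyCyclePercent) {
        this.frequencyHz = frequencyHz;
        this.dutyCyclePercent = dutyCyclePercent;
    }

    /** Length of one full pulse window, in milliseconds. */
    double periodMs() { return 1000.0 / frequencyHz; }

    /** How long the pin stays high within each window, in milliseconds. */
    double pulseWidthMs() { return periodMs() * dutyCyclePercent / 100.0; }
}
```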
SPI is a more complicated protocol than the previous two.
SPI uses master-slave communication, which means the master (in this case the Android Things device) communicates with one or more slaves (each of them being a peripheral).
It needs at least two lines: data (which can be read, write, or read-write) and clock, and it can also use multiple chip-select lines depending on the number of slaves.
The protocol itself (how data is sent) will differ depending on the peripheral you use.
RGB LED strip: differs from the single-colour strip because we can address each LED individually and set a different colour and brightness for each LED
LCD display (set each pixel on/off)
Read values from the various sensors of a weather station (e.g. a barometer for pressure); we'd need a chip-select line for each of them
I2C: at first glance it may resemble SPI because it also needs two pins (data and clock), uses master-slave communication with one or more slaves, and can be used to read and write.
It differs from SPI in that each slave is identified by a different address instead of requiring a new line per slave. I2C also consists of structured data frames (like the one on screen). A frame includes a byte of data sent to the slave, and the slave has to acknowledge that it received the data correctly. This acknowledgement is built into I2C, so you don't have to worry about it.
Read data from sensors such as:
a temperature sensor
an electronic compass sensor
Smart alarm clock:
write data to an LCD display to show the time
read data from the accelerometer to know whether the user has interacted with it
Each of these would be accessed using a different address.
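How a master picks out one slave is worth making concrete: every transfer starts with the slave's 7-bit address plus a read/write bit. The Android Things `I2cDevice` API hides this, but it explains why adding a peripheral means choosing a free address rather than wiring a new select line. `I2cFrame` below is an illustrative sketch, not a framework class.

```java
/** Sketch of how an I2C master addresses a slave (illustrative, not an API). */
class I2cFrame {
    static final boolean READ = true;
    static final boolean WRITE = false;

    /** First byte on the wire: the 7-bit slave address shifted left, R/W in bit 0. */
    static int addressByte(int sevenBitAddress, boolean read) {
        if (sevenBitAddress < 0 || sevenBitAddress > 0x7F) {
            throw new IllegalArgumentException("address must fit in 7 bits");
        }
        return (sevenBitAddress << 1) | (read ? 1 : 0);
    }
}
```

So a sensor at address 0x68 is written to as 0xD0 and read from as 0xD1; the two on-wire bytes differ only in the R/W bit.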
The last one is UART. While SPI and I2C are both synchronous interfaces (they both need a clock line to synchronise), UART is asynchronous: there is no shared clock line between the two devices.
Typical implementations of UART use two pins: one for data in (read) and one for data out (write).
Unlike SPI and I2C, UART doesn't support multiple slaves, and the data is wrapped in frames that are simpler than I2C's. There's also no notion of master and slave.
GPS module (read GPS data)
XBee radio (send and receive data) and similar radios
Simple receipt printers used in restaurants and shops (send data to print, read printer status)
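Because there is no shared clock, each byte is wrapped in a start bit (low) and a stop bit (high) so the receiver can find byte boundaries on its own. A sketch of the common 8N1 framing (8 data bits, no parity, 1 stop bit, least-significant bit first); `UartFrame` is illustrative, not an Android Things class:

```java
/** Sketch of an 8N1 UART frame (illustrative; real UART hardware does this for you). */
class UartFrame {
    /** Serialize one data byte into the 10 line levels of an 8N1 frame. */
    static boolean[] frame8n1(int dataByte) {
        boolean[] bits = new boolean[10];
        bits[0] = false;                              // start bit: line pulled low
        for (int i = 0; i < 8; i++) {
            bits[1 + i] = ((dataByte >> i) & 1) == 1; // data bits, LSB first
        }
        bits[9] = true;                               // stop bit: line back high
        return bits;
    }
}
```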
Now that we know which protocols Android Things supports, let's see an example of how to make an LED in a strip blink every second using the SPI protocol.
Start with an empty activity
The first thing we'll need is an instance of PeripheralManagerService, the component that lets us check how many buses of each type our development board has and lets us open them.
Next, use one of the openXXX methods, in this case openSpiDevice, and pass in the device name (which will vary depending on the board we're using).
Note that every operation that touches Peripheral I/O can throw an IOException, so we have to catch and handle them appropriately.
For every open call, you'll want a matching close() call in onDestroy to make sure the port is not left open. If you do leave the port open and restart the app, you won't be able to reopen it until you power-cycle the development board.
Now we can configure the SPI port. This depends on the requirements of the peripheral we're communicating with, so you'll need to find the vendor documentation and understand what's necessary.
MODE0 = clock signal idles low, data is transferred on the leading clock edge
Before we start talking to the LED strip, we'll create a Handler that uses the main thread's looper for communicating with the peripheral.
This makes communication asynchronous while remaining on the main thread (using the Android message queue).
As a best practice, we start communicating with the peripheral in onStart and stop in onStop, to make sure we don't leave it running when it's not needed.
The last bit is writing data to the SPI bus to toggle the LED on and off every second.
We use a Runnable for that and toggle between 0 and 123 (a byte that represents some RGB colour).
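The toggle step can be separated from the hardware so the logic is easy to follow without a device. `SpiPort` here is a hypothetical single-method stand-in for the Android Things `SpiDevice`, and 123 is the byte the walkthrough uses for "some RGB colour"; on-device you would post the Runnable on a Handler with a one-second delay instead of calling `run()` directly.

```java
import java.io.IOException;

/** Hypothetical stand-in for the write half of Android Things' SpiDevice. */
interface SpiPort {
    void write(byte[] buffer, int length) throws IOException;
}

/** Alternates between "off" (0) and a colour byte on every run. */
class LedBlinker implements Runnable {
    static final byte COLOR = 123;

    private final SpiPort port;
    private boolean on;

    LedBlinker(SpiPort port) { this.port = port; }

    @Override
    public void run() {
        on = !on;
        byte[] frame = { on ? COLOR : 0 };
        try {
            port.write(frame, frame.length);
        } catch (IOException e) {
            // Peripheral I/O can always fail; a real app would log and recover.
            throw new RuntimeException(e);
        }
        // On-device: re-post this runnable on a Handler with a 1000 ms delay.
    }
}
```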
Now, you may say that's a lot of code just to get an LED blinking, and that's why Google has created the concept of drivers.
There are two types of drivers:
Driver Libraries
Input drivers
Driver Libraries are reusable components that developers can create in order to make reading from and writing to a peripheral easy peasy.
You can think of driver libraries as a thin layer of abstraction so that you don’t need to worry about the details of how to configure protocol X to use with a peripheral. The same way you use a library for DB usage in a mobile app, you’d use a driver library for talking to a peripheral in an IoT app.
You can find examples of drivers on GitHub (in a repo by Google, but also from other devs), and they can be bundled and distributed like any other library (jcenter/Maven Central).
This is an example of a driver library that simplifies using a WS2801 LED strip over SPI: you just pass in the SPI bus name for the board you're using and it takes care of everything else.
Compare two lines of code with the 30+ we wrote using Peripheral I/O. Plus, this driver is reusable, so you don't need to rewrite that code later.
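The "thin layer of abstraction" idea can be sketched like this (illustrative names, not the real contrib-driver API): the driver encapsulates the peripheral-specific setup and teardown, so callers get the two-line experience.

```java
/** The bus operations the driver needs, hidden behind an interface. */
interface Spi {
    void configure(int frequencyHz, int mode);
    void write(byte[] data);
    void close();
}

/** Hypothetical WS2801-style driver: setup and teardown are encapsulated. */
class Ws2801LikeDriver implements AutoCloseable {
    private final Spi spi;

    Ws2801LikeDriver(Spi spi) {
        this.spi = spi;
        // Peripheral-specific configuration the caller never has to know about.
        spi.configure(1_000_000, 0);
    }

    /** High-level call: show one colour byte on the strip. */
    void showColor(byte colorByte) {
        spi.write(new byte[] { colorByte });
    }

    @Override
    public void close() {
        spi.close();
    }
}
```

Making the driver `AutoCloseable` also bakes in the open/close pairing from earlier, since try-with-resources guarantees the port is released.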
Input drivers, on the other hand, are components that read data from a peripheral device and can be registered with the system.
The framework then forwards the events your input driver reads to the activity running in the foreground.
This means an input driver can't be an Activity; it has to be a Service.
Types of input drivers Android Things supports:
GPS
HID (human interface device) - touchpad, mouse, joystick, etc
Sensor - pressure, temperature, accelerometer, etc.
In this example, we have a GPS module, and FooService, which has been registered as an input driver, reads events from it.
These events are forwarded to the Android event pipeline, and BarActivity, which is listening for location events, receives them and uses them to update an LCD display.
So FooService is a separate component that, say, I wrote, and it can be used by BarActivity, which is part of Paul's app; it's all transparent because it's handled by the framework.
Here's a code example of how to register your Service as an input driver. In this case we register as a keyboard (type button).
We define the possible keycodes that our device can send,
then use the UserDriverManager to register as an input driver.
Finally, we emit a 'key down' event for the letter T. This event will now be sent to the foreground activity.
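Conceptually, the registration-and-forwarding flow looks like this. `EventPipeline` is a toy model of what `UserDriverManager` and the Android input pipeline do for you; it only illustrates that a driver emits events and whoever currently has focus receives them.

```java
/** Toy model of the framework's input pipeline (illustrative names only). */
class EventPipeline {
    interface KeyListener {
        void onKey(int keyCode);
    }

    private KeyListener foreground;

    /** The framework tracks which activity has focus; we model it as one listener. */
    void setForeground(KeyListener listener) { foreground = listener; }

    /** What a registered input driver calls when its hardware produces an event. */
    void emit(int keyCode) {
        if (foreground != null) foreground.onKey(keyCode);
    }
}
```

The driver (FooService in the example) only ever calls `emit`; it never knows which activity (BarActivity) ends up handling the event.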
We've talked about Peripheral I/O and driver creation.
We'll now talk about best practices in IoT.
Peripherals come in different forms: active peripherals do their own thing once they have power, while passive peripherals wait for something to activate them.
There are many hardware examples, like buttons, switches, motion, light or sound detectors, metal detectors, temperature gauges, solar panels, GPS chips, accelerometers, LEDs, screens, speakers, vibrators, lasers, segment displays, Bluetooth WiFi or RFID transmitters, electric motors and many more.
These fall into two categories:
Input peripherals that consume information from the outside world
Output peripherals that produce information for the outside world.
An input peripheral can also be called a sensor. A sensor detects or measures a physical property and responds to it. This means a sensor measures a particular element in the outside world and feeds its knowledge back into the system.
This sounds quite like a description of a temperature gauge, a motion detector or a button.
Another name for an output peripheral is an actuator. An actuator is responsible for moving or controlling a mechanism or system, or presenting data to the outside world. This means an actuator moves or acts to change the outside world with knowledge we’ve shared from the system.
This sounds quite like a description of a speaker, LED screen or electric motor.
An example of this naming:
We have a motion detector (a sensor)
We have a buzzer speaker (an actuator)
Structured naming conventions increase clarity when you're reading code and searching for classes.
They also help communication within your team: because everyone understands what an actuator or a sensor is, everyone is on the same page during discussions.
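Applied to the two examples above, the convention might look like this (illustrative stubs; the marker interfaces and class names are assumptions, not a prescribed API):

```java
/** Marker interfaces for the two peripheral categories. */
interface Sensor { }    // input: measures something in the outside world
interface Actuator { }  // output: acts on the outside world

/** An input peripheral, named as a sensor. */
class MotionDetectorSensor implements Sensor {
    boolean motionDetected() { return false; } // stubbed hardware read
}

/** An output peripheral, named as an actuator. */
class BuzzerActuator implements Actuator {
    private boolean buzzing;
    void buzz() { buzzing = true; }
    void silence() { buzzing = false; }
    boolean isBuzzing() { return buzzing; }
}
```

Searching for `*Sensor` or `*Actuator` then finds every input or output peripheral in the codebase at once.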
Now that we've talked about class naming, let's look at class organisation for IoT Android apps.
In your codebase, a simple but naive way to structure your architecture is to group your peripherals by type.
This would mean having all the activities in one place, all the models in another, and so on. It would allow you to quickly find the code for each type of class.
Or alternatively having all the GPIO peripherals in one place and all the I2C peripherals in another.
It would allow you to quickly find the code for each peripheral once you know the driver protocol.
This is handy for fixing all GPIO peripherals at once, but do you ever need to do that? It’s pretty rare.
This type of grouping is “tidy” but it comes with other disadvantages.
For example, if you added a speaker peripheral and then later wanted to update it, you may not recall that it uses I2C. This protocol driven structure would make it hard to look that up. However you will know the speaker makes beeping noises and is turned on in reaction to some user input. Let’s look at another way to group peripherals.
Alternatively, structure your architecture to group peripherals by the feature that uses them.
This takes slightly more thought and does mean you have to speak to other people in your business,
but in the long run, when returning to the code later, it is much more intuitive.
Sensors in the model, Actuators in the view
Model View Presenter: you separate the business logic and rules from the user interface and display logic.
You then have something in the middle that coordinates between the two.
If we think back to our naming of Sensors & Actuators, sensors are all about input and gaining knowledge, there is nothing visual about a sensor.
Therefore we can condemn our sensors to the model of our MVP.
With the same idea Actuators are all about output, sharing something, moving, reacting - this is user interface and therefore belongs in our MVP View.
Explain benefits of understanding sensors in the model and actuators in the view.
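A minimal sketch of that mapping, with illustrative names: a motion reading flows sensor (model) to presenter (business rule) to actuator (view), and the presenter is the only piece that knows about both sides.

```java
/** Model: holds sensor state, knows nothing about output. */
class MotionModel {
    private boolean motion;
    void onSensorReading(boolean detected) { motion = detected; } // sensor input
    boolean isMotionDetected() { return motion; }
}

/** View: the actuator side, knows nothing about sensors or rules. */
interface AlarmView {
    void soundBuzzer();   // actuator output
    void silenceBuzzer();
}

/** Presenter: the business rule sits between sensor and actuator. */
class AlarmPresenter {
    private final MotionModel model;
    private final AlarmView view;

    AlarmPresenter(MotionModel model, AlarmView view) {
        this.model = model;
        this.view = view;
    }

    void onMotion(boolean detected) {
        model.onSensorReading(detected);
        if (model.isMotionDetected()) {
            view.soundBuzzer();
        } else {
            view.silenceBuzzer();
        }
    }
}
```

Because the buzzer is behind an interface, the rule "motion sounds the alarm" can be tested with a fake view and no hardware at all.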
Now that we've seen best practices for coding an IoT application, let's look at some best practices for testing one. We'll discuss driver testing, but this can be applied to testing other components as well.
Since a driver has to use the Peripheral I/O classes, we have to test the driver's behaviour.
This means we'll need to mock framework classes and verify that when we call a certain method on our driver, the right method gets called on the mock object.
One best practice is to extract as many collaborators as you can. These will be reusable and will simplify testing of the driver, as they can be tested separately.
In this first example we're testing the top layer, which is the actuator.
Imagine we have a device that has to flash a red light when there's an error so that the user understands something went wrong; let's test that.
First, the Ws2801 object is the driver.
Our test will verify that when we call showError, the actuator calls the `write` method on the mock driver object with Color.RED.
The layer below is the Ws2801 driver itself:
In this case we have to mock the framework class SpiDevice.
Our first test ensures the device is configured with the right frequency.
The second test checks that a call to the driver's write method is forwarded to the SPI device's write method.
Now imagine we had some code in the Ws2801 class responsible for returning a byte array containing a colour's RGB components before sending it to the SPI bus. If we extract that code into a method, getOrderedRgbBytes, it can be moved to a collaborator (or, in this case, a static method) that is easy to test by asserting on outputs rather than relying on mocks. This also means we no longer need to test this logic in the driver.
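A sketch of that extraction (the method name follows the talk; the RGB byte ordering here is an assumption about the strip): a pure static function tested with plain input/output assertions, no mocks required.

```java
/** Colour-to-bytes logic pulled out of the driver into a testable collaborator. */
class ColorBytes {
    /** Split a packed 24-bit colour into the R, G, B bytes sent over SPI. */
    static byte[] getOrderedRgbBytes(int color) {
        return new byte[] {
            (byte) ((color >> 16) & 0xFF), // red
            (byte) ((color >> 8) & 0xFF),  // green
            (byte) (color & 0xFF),         // blue
        };
    }
}
```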
Now that you know the best practices as well as the technical details, you're set to start writing your first IoT app. But before that, we wanted to share some tips and tricks we think will help you develop your projects faster and better.
Match every openXXX method call with a call to the close() method. This makes sure you don't leave a port open when redeploying an Android Things app. If a port/bus stays open, it can't be reopened until you restart the development board, which will slow you down.
Currently Android Things only supports remote ADB, meaning you have to adb connect to the device manually before you can debug or run anything. Most routers use dynamic IP addresses that expire after a certain amount of time, so you'd have to keep checking what the device's IP address is.
To avoid that, there are a few things you can do; we suggest:
Set a static IP address in your router for that device
If you're on the same network and your router supports it, the Android Things device can be reached via the hostname Android.local
This will save you a lot of time too.
Because of the lack of display and input devices on your Android Things board, if your app crashes you'll have to redeploy the app or use adb shell am to restart it. I always try to have that command ready in my terminal, just in case.
Android Things applications declare in their manifest that they want to be launched on device startup. If you have two or more of these there is a clash, and the system selects the first app found by default.
When developing, you will keep installing a lot of debug apps and run into this problem. We wrote a script that uninstalls all of them for you so you can start again; much faster than doing it manually.
All development boards have different pin names, which means that if you ever switch boards you have to change your code. Also, as a heads-up about the tutorials out there: some of them have classes that detect the board you're using and select the pin names, and others don't.
Threading comes in three forms: none (using the main thread), single threading with async messages, and background threading. The choice is yours, but it's important that you consider it rather than blindly doing what the examples show.
Thanks! We hope this talk gave you some insight into how Android Things works and how to start writing your first IoT apps on solid foundations.
We've discussed lots of topics. Questions now? Otherwise, find us on Twitter.