Hello, I'm Lance Nanek from New Frontier Nomads, your lean mobile cofounders! I was asked to speak on native development on Google Glass. I'm showing the Google Glass screen up on the projector today. So what’s up there is what is floating in front of my vision.
Native development refers to writing what are basically Android apps, as opposed to using Google Glass's web API, the Mirror API. Google announced at Google I/O that it will eventually make a GDK available based on the Android SDK. In the meantime, you can watch the YouTube video of the Google I/O session to learn how to put Android apps on your Google Glass right away via debug mode for development.
Android apps are usually written in Java, although Android NDK apps written in C, and apps using NDK libraries such as OpenCV, work fine as well. Here's an example of accessing system sensors. This highlights a difference between the Mirror API and native development. The closest the Mirror API lets you come to this is getting a location every ten minutes and updating the HTML cards in the user's timeline. Real-time animation is impossible there and requires native development.
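On Glass you read sensors through the standard Android SensorManager, and the raw values are noisy, so a common pattern before animating anything is to smooth them with a simple low-pass filter. Here is a minimal sketch of that filter in plain Java; the SensorEventListener wiring that would feed it is omitted, and the smoothing constant is an assumption you would tune:

```java
/** Exponential low-pass filter for noisy sensor values, e.g. fed from
 *  SensorEventListener.onSensorChanged() with event.values (omitted here). */
public class SensorSmoother {
    private final float alpha;   // 0..1, lower = smoother but laggier
    private float[] state;       // last smoothed reading, null until seeded

    public SensorSmoother(float alpha) { this.alpha = alpha; }

    /** Blend a new raw reading into the running estimate. */
    public float[] update(float[] raw) {
        if (state == null) state = raw.clone();  // seed with first sample
        for (int i = 0; i < raw.length; i++) {
            state[i] += alpha * (raw[i] - state[i]);
        }
        return state.clone();
    }

    public static void main(String[] args) {
        // 0.15f is an assumed tuning value, not anything from the Glass APIs
        SensorSmoother s = new SensorSmoother(0.15f);
        float[] out = null;
        // Simulate a noisy accelerometer axis oscillating around 9.81 m/s^2
        for (int i = 0; i < 200; i++) {
            float noise = (i % 2 == 0) ? 0.5f : -0.5f;
            out = s.update(new float[] { 9.81f + noise });
        }
        System.out.println(out[0]);  // settles near 9.81
    }
}
```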
This is an example of accessing the camera data via the OpenCV library.
The data can be manipulated to give users a zoom view according to touchpad adjustments.
Or zoom inset.
Or more advanced algorithms like edge finding.
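The zoom views above boil down to cropping a centered sub-rectangle of each camera frame and scaling it back up; with OpenCV you would take frame.submat() over that rectangle and resize it to the full frame. A minimal sketch of just the rectangle math in plain Java, with the OpenCV frame plumbing omitted and the maximum zoom an assumption:

```java
/** Computes a centered crop rectangle for digital zoom. With OpenCV the
 *  next step would be frame.submat(y, y + h, x, x + w) followed by a
 *  resize back to the full frame size (not shown here). */
public class DigitalZoom {
    /** @param zoom 1.0 = no zoom; values are clamped to [1, 8] (assumed max).
     *  @return {x, y, width, height} of the crop in frame coordinates */
    public static int[] cropRect(int frameW, int frameH, float zoom) {
        float z = Math.max(1f, Math.min(8f, zoom));
        int w = Math.round(frameW / z);
        int h = Math.round(frameH / z);
        int x = (frameW - w) / 2;   // center the crop horizontally
        int y = (frameH - h) / 2;   // and vertically
        return new int[] { x, y, w, h };
    }

    public static void main(String[] args) {
        int[] r = cropRect(640, 480, 2f);   // 2x zoom on a VGA frame
        System.out.printf("x=%d y=%d w=%d h=%d%n", r[0], r[1], r[2], r[3]);
        // x=160 y=120 w=320 h=240
    }
}
```

Mapping touchpad swipes to the zoom parameter is then just accumulating deltas into that clamped value.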
The Google I/O session teaches you how to install the standard Android Launcher and Settings. You can start it by tapping on the settings card on your Google Glass.
The Launcher starts.
And you can then run the full Android Settings app and other apps.
The Settings app is useful for pairing Bluetooth keyboards and touchpads, logging into WiFi that requires a username and password, and setting the screen to turn off after a longer inactivity period.
Google has also published some examples and instructions on their Glass Developer site. This is what turning on debug mode looks like; it's needed before you can use Android SDK tools like adb to install apps.
Google has several samples on their site. There is a compass that displays the direction you are facing and rotates through the directions as you turn around.
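The compass sample reads the orientation sensor and rotates a strip of direction labels as you turn. The core of such a display is just mapping the azimuth angle to a label; here is a minimal sketch of that mapping in plain Java, with the SensorManager wiring omitted:

```java
/** Maps a compass azimuth (degrees, 0 = north) to one of eight
 *  cardinal/intercardinal labels, as a compass card display would. */
public class Compass {
    private static final String[] DIRS =
        { "N", "NE", "E", "SE", "S", "SW", "W", "NW" };

    public static String toCardinal(float azimuthDegrees) {
        // Normalize into [0, 360), then pick the nearest 45-degree sector
        float a = ((azimuthDegrees % 360f) + 360f) % 360f;
        int sector = Math.round(a / 45f) % 8;
        return DIRS[sector];
    }

    public static void main(String[] args) {
        System.out.println(toCardinal(10f));   // N
        System.out.println(toCardinal(92f));   // E
        System.out.println(toCardinal(-45f));  // NW
    }
}
```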
A level that shows you the Glass’s orientation vs. gravity. Here is a screenshot of being pretty level.
And here is tilting my head.
There’s also a waveform display, illustrating audio processing. You could use this to write a karaoke app that always tells you whether you are on key and which direction to pitch your voice!
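A rough "too high / too low" karaoke hint doesn't need full pitch detection; counting zero crossings in the microphone buffer gets you a usable frequency estimate for a clean sung tone. A minimal sketch in plain Java, with the android.media.AudioRecord capture code omitted as an assumption:

```java
/** Rough pitch estimate from a PCM buffer by counting zero crossings.
 *  On Glass the samples would come from android.media.AudioRecord (omitted).
 *  Good enough for "too high / too low" feedback on a clean tone. */
public class PitchEstimator {
    public static double estimateHz(short[] pcm, int sampleRate) {
        int crossings = 0;
        for (int i = 1; i < pcm.length; i++) {
            if ((pcm[i - 1] >= 0) != (pcm[i] >= 0)) crossings++;
        }
        // Each full cycle of a tone produces two zero crossings
        return crossings * sampleRate / (2.0 * pcm.length);
    }

    public static void main(String[] args) {
        int rate = 44100;
        short[] tone = new short[rate];  // one second of A4 = 440 Hz
        for (int i = 0; i < tone.length; i++) {
            tone[i] = (short) (30000 * Math.sin(2 * Math.PI * 440 * i / rate));
        }
        System.out.println(estimateHz(tone, rate));  // close to 440
    }
}
```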
I have many samples on my own site as well. For example, this one scrolls an image as you look left and right.
Here it is panned over with a head movement. Entering commands into Glass is even more difficult than typing on a mobile phone. Just as we had to redo our web apps with less input, fewer screens, and simpler flows when we moved from web to mobile, so too does Glass require new methods that account for user actions and context without much input.
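The image-panning demo tracks the change in head azimuth between sensor updates and converts it into a pixel offset. The subtle part is wrap-around at the 0/360 boundary, so a quick turn across north doesn't fling the image the long way round. A minimal sketch in plain Java; the sensor wiring and the pixels-per-degree constant are assumptions:

```java
/** Converts successive head azimuth readings into a horizontal
 *  scroll offset, handling wrap-around at the 0/360 boundary. */
public class HeadPanner {
    private final float pixelsPerDegree;
    private Float lastAzimuth;   // null until the first reading arrives
    private float offsetPx;

    public HeadPanner(float pixelsPerDegree) {
        this.pixelsPerDegree = pixelsPerDegree;
    }

    /** Normalize an angle difference into [-180, 180). */
    static float angleDelta(float from, float to) {
        float d = (to - from) % 360f;
        if (d >= 180f) d -= 360f;
        if (d < -180f) d += 360f;
        return d;
    }

    /** Feed a new azimuth (degrees); returns the accumulated pixel offset. */
    public float onAzimuth(float azimuth) {
        if (lastAzimuth != null) {
            offsetPx += angleDelta(lastAzimuth, azimuth) * pixelsPerDegree;
        }
        lastAzimuth = azimuth;
        return offsetPx;
    }

    public static void main(String[] args) {
        HeadPanner p = new HeadPanner(10f);   // assumed 10 px per degree
        p.onAzimuth(355f);
        System.out.println(p.onAzimuth(5f));  // +100.0 px, not -3500.0
    }
}
```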
Here is a more heavyweight example. This is the patient schedule from a large electronic medical records system as seen by a doctor. The doctor can have it show up whenever he looks up or start it like an app. It shows what patient is in what room with what complaint and status. Scrolling is handled by looking up and down. With this the doctor knows exactly when patients are waiting or when he can go to lunch. He knows what they checked in for, if they have unfinished paperwork, and who has seen them already by the color of the bar on the left. Giving doctors these abilities really feels great and it should help patient safety and satisfaction as well.
Currently there is no Google Maps library on Glass. OpenStreetMap, an open-source alternative, works fine, however.
One of Google’s Glass Evangelists also published an example of pulling a web map down regularly based on the user’s location. Again, this is the sort of live animation that doesn’t work well with the Mirror API, although the Mirror API does have some basic support for triggering directions to addresses and displaying single static maps and routes.
Some developers have used the power of native apps to start customizing the Glass UI itself. This is an example of a lock screen that shows up whenever Glass is started and needs the right motions on the touchpad to bypass.
Similarly there is a system broadcast sent on winking that can be used to trigger things like taking pictures with the right app installed.
Winky triggers the built-in Glass calibration for this gesture, then lets you take pictures with a wink. Hands-free interaction like this is very big for Glass, and we're predicting many situations where it will be invaluable, such as operating rooms, where surgeons should limit touches after sterilizing their hands, and other busy professional settings.
While there is no distribution method or launcher yet for native Glass apps, Launchy is a popular open-source launcher used for now. It replaces the settings action on your Glass with an app chooser that can also run the original settings.
There is official support for unlocking the Google Glass bootloader and modifying the system software. This tends not to work very well, though, and can just lock up the device.
The same genius famous for unlocking iPhones and getting software out for them has an easy rooting method, however. It still worked for me as of Glass system version XE9.
Once you have root, even more comprehensive hacks are available. This example from MHacks accessed the eye-facing sensor, which is not available through the Android APIs.
They used it to turn the screen red and make the device rumble whenever the user fell asleep while driving.
They also used the accelerometer to detect crashes, dial 911, and send video.
Another handy trick after rooting is running a VNC server to remote control your Glass.
We’re also seeing some cutting edge development happening by compiling against the Glass system images. This allows doing things like adding new voice commands to the system menu shown after the OK Glass prompt, reusing the Bluetooth connection between the Glass and the paired phone, and inserting cards into the user’s timeline from a locally running app.
Here’s an example of the glasspay app triggering from a new voice command they added to the OK Glass menu using this technique.
They also use live bar code scanning, another feature not yet available via the Mirror API.
Here is an example of scanning a bar code in Crystal Shopper, a price comparison app.
Code is read.
And results are looked up via network call to a product API like Amazon.
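Before spending a network call on the product lookup, a scanner app typically validates the code's check digit locally so misreads get rescanned instead of queried. Here is the standard EAN-13 checksum, the format on most retail products, in plain Java; the camera/scanning layer and the product API call are omitted:

```java
/** Validates the check digit of an EAN-13 barcode before a
 *  price lookup is attempted (scanning and network code omitted). */
public class Ean13 {
    public static boolean isValid(String code) {
        if (code == null || !code.matches("\\d{13}")) return false;
        int sum = 0;
        for (int i = 0; i < 12; i++) {
            int digit = code.charAt(i) - '0';
            // Odd positions (1st, 3rd, ...) weigh 1; even positions weigh 3
            sum += (i % 2 == 0) ? digit : digit * 3;
        }
        int check = (10 - sum % 10) % 10;
        return check == code.charAt(12) - '0';
    }

    public static void main(String[] args) {
        System.out.println(isValid("4006381333931"));  // true
        System.out.println(isValid("4006381333932"));  // false, bad check digit
    }
}
```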
Thanks for attending and I hope this overview helped you with your own development efforts. The new ground being broken writing native apps for Glass is really exciting and there are many use cases that can already be fulfilled before Google publishes a GDK and app boutique. Industry and health care uses, for example, can have the device pre-provisioned and loaded with appropriate software. Several other Android wearable devices are sampling now as well, such as the Optinvent ORA and Recon JET.
Native Development on Google Glass presentation at GMIC
-Lance Nanek, New Frontier Nomads