Making Sense of Sensors
My presentation from Future of Mobile 2010

  • Who am I? Who is FP? What do we do? Who are our customers? LM, Orange, BBC, Nokia, Dennis. Topic: not HTML5 vs native, but sensors, mobile phones, and our attitudes towards personal computing. Most of the time I’m presenting work we’ve done, our approach to building apps, that kind of thing. This one is more far-reaching.
  • The mental models that we have of ourselves are quite outdated. Here in the West, much of our model is based on Cartesian duality - the idea that mind and body are separate things. In effect, there are little people inside our heads, driving our bodies. The homunculus fallacy is another expression of this idea. Now, I’d say that this mental model matters, because it affects the way that we see the world, and how we relate to technology. Let me give you an example: the Japanese are a culture that’s quite obviously comfortable with, and enthused by, robotics. And I’ve heard it said that one explanation for this is the Japanese religion, Shinto - which ascribes a spirit to every object and which, despite its post-war decline, is still a big influence on Japanese culture. There’s an implicit mental model in which objects that aren’t living are, in some sense, alive. But in our culture, the model of mind/body dualism is prevalent. And yet it’s perfectly possible for us to adopt completely different views of ourselves. In astronomy, Copernicus and Galileo shifted us away from a world-view which had us at the centre of the universe. Or consider biology, where evolutionary theory has moved us from placing the cerebral human being at the top of the food chain... to a view which sees our intelligence as a winning strategy adopted by our genes to spread themselves. The thing we feel most distinguishes us from the rest of the animal kingdom is just a useful by-product of a survival strategy.
  • Someone once described the personal computer as a “bicycle for the mind” - they’re tools, aren’t they, tools to enhance and support our intellects? And we seem to have built these tools to resemble us, in some ways; maybe because we built them in our own image, or maybe because we’ve adopted the computer as a metaphor for the brain - in much the same way that some Victorians built their models of consciousness around the railway, or the way that we’ve adopted networks as a useful model for understanding consciousness more recently. Actually, my favourite mental model is one from the 1900s, in which a neuroscientist called Charles Sherrington likened the brain to an “enchanted loom”. http://www.istockphoto.com/stock-photo-5427239-red-bicycle-leaning-against-wall-on-italian-street.php?st=bd2b8c9
  • I digress. The von Neumann architecture is the basis of most modern computing - here it is, it’s quite straightforward. And look at the emphasis on processing and memory; input and output - that’s, err, the entire universe outside the computer - are a little bit on the side.
  • This blueprint is increasingly irrelevant, and here's why. Tear down a modern mobile and you can see how unimportant processing is. It's that blue highlighted bit - a tiny part of the physical item, compared to the screen, battery and camera. And it's not just physical size. Look at the bill of materials for an iPhone. I always thought the display was one of the most expensive bits - nope. Processor? Nope. Not even the RAM and storage - the sensors, bundled together, cost more than any of the rest. The kinds of uses we put these devices to are driven by our prior expectations of computing, which are in part driven by the mental models we have of ourselves: brain driving body. We don't think of mobiles as primarily sensors; we think of them as things that process stuff, with this I/O tacked onto the edges. And when we start out with a fundamentally broken blueprint for what these things are, we can’t be blamed for not seeing all the opportunities lurking in them. ---- http://www.isuppli.com/Teardowns/News/Pages/iPhone-4-Carries-Bill-of-Materials-of-187-51-According-to-iSuppli.aspx
  • Let me be clear what I mean by sensors: anything that reaches out into the environment and measures something. That’s a broad definition. Obviously a modern smartphone has a microphone, to record sound. And let's presume a touch-screen. Many have a GPS; physical keys might count too. Anything that can measure the strength of a radio signal might count - so that'd be Bluetooth, Wi-Fi, 3G. Don't forget the camera - or increasingly, cameras. Then there are the less obvious ones: my Galaxy S2 has acceleration, magnetometer and orientation sensors; light, proximity and a gyroscope; a gravity sensor, plus linear acceleration and rotation vector sensors. Oh, and soon NFC. And don't forget the second-order uses you can put these to. From Wi-Fi or network cell ID you can derive a physical location. From Bluetooth, you can uniquely identify someone (well, their phone). --- http://www.flickr.com/photos/jesse_sneed/2383953694/sizes/m/in/photostream/
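That second-order point is worth a concrete illustration. No platform hands you a "distance to transmitter" reading, but a rough one can be derived from raw signal strength using the standard log-distance path-loss model. A minimal sketch in Python - the reference RSSI-at-one-metre and the path-loss exponent here are illustrative calibration values, not figures from any real handset:

```python
def rssi_to_distance(rssi_dbm, ref_rssi_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate distance (metres) to a transmitter from a signal strength reading.

    Uses the log-distance path-loss model: ref_rssi_dbm is the expected
    reading at 1 metre (a per-device calibration value, assumed here);
    the path-loss exponent is ~2 in free space, higher indoors.
    """
    return 10 ** ((ref_rssi_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

In free space the exponent is about 2; indoors, with walls and bodies in the way, it climbs - which is exactly the "real world isn't a sterile lab" problem that comes up later.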
  • The mainstream use of sensors by operating systems tends to be quite subtle. The iPhone uses ambient light, proximity and orientation sensors to influence the display in three ways: 1. to move between landscape and portrait mode; 2. to deactivate the touchscreen when the phone is next to your face; and 3. to save battery power by adjusting the brightness of the display according to ambient light. These uses are so unintrusive as to be nearly unnoticeable, and when you do notice them for the first time they’re slightly magical. I think there might be another reason for this subtlety: to many of us who grew up with the digital and physical well-partitioned, it feels strange to have them linked too overtly. Have you used any of those apps which insist you shake your phone to do something? Gimmicky at first, a bit annoying after a while, because it’s so forced and unnatural. How many of us spend our days naturally doing this?
  • The hardware has gotten smaller over the last 50 years, but our fingers have obstinately stayed the same size. So the bandwidth, if you like, between screen and finger starts to become a limiting factor in our communication with our software. If we are to communicate more expressively with our machines, we will need to look beyond touch-screens. In the last 5 years we’ve moved interfaces from indirect manipulation (command lines) to direct manipulation (pressing stuff that screams “press me”) - so what’s next? You can already see voice gaining prominence, through services like Siri, Nuance, and various bits of work from Google. Machines are listening to us and building a better understanding of what we’re saying - a problem which stumped scientists and language experts for decades. The approach many people are taking now to voice understanding is a statistical one, involving the harvesting of huge amounts of data: doing it brute-force, or rather accepting that there’s an intellectual dignity in applying brute force with great smarts. Maybe voice gives us a good model to work with when it comes to analysing sensor data, and improving the bandwidth between ourselves and our devices. Academic literature: lots of work around deriving context automatically (EXAMPLE). It’s hard. http://techcrunch.com/2010/08/12/googles-hugo-barra-25-of-android-queries-are-voice-based/ http://www.istockphoto.com/stock-photo-16451072-social-media-apps-on-apple-iphone.php?st=f614edd
  • One of the things we always found, when we were building J2ME apps, was that little-used APIs or features were the ones where you'd most often find variance between implementations on different devices. It's the complement to "many eyes make all bugs shallow" isn't it, that problems lurk where there's least attention. This is where sensor support sits nowadays. There's lots of variance in the ways that different platforms let you use them - let alone in the components themselves. I don't want to go into a lot of code today, but it's worth looking at the differences between different platforms. You can see the philosophies of their architects play out... http://www.flickr.com/photos/sycamoremoonstudios/3106804710/sizes/o/in/photostream/
  • With iOS, Apple have done what they do elsewhere: broken out some key use cases which are really high-value, packaged them up nicely, and made them easily available. Look at the kind of controlled use cases you have access to: the proximity sensor is there, device orientation - but there are others you can't see. There’s a light sensor in the device, for instance - it’s there, and the operating system uses it. Can you use it? No. And Apple control these APIs. Anyone remember the Google search app? http://googlemobile.blogspot.com/2008/11/google-mobile-app-for-iphone-now-with.html http://www.flickr.com/photos/sriramtallapragada/3589622647/sizes/o/in/photostream/ http://www.flickr.com/photos/photographerglen/5921485502/sizes/o/in/photostream/
  • So easy: start the app, lift the phone to your ear, it records your voice and searches. This was one of the few places I’ve seen that kind of OS-level magic occur in a third-party app. There was a furore around this at the time. It was really good - but it used a private, undocumented API, proximityStateChanged. Any app can query the sensor and ask “is there something nearby?”; proximityStateChanged lets an app get alerted when that changes - a subtle difference. You’re not supposed to have it; you can only find it by digging around Apple binaries. People noticed this and made a bit of a fuss - reasonably, since it looked like Google were getting special access to it (this was back in the days when they were pals). Try doing it now. You can't. The feature is there - just as I'm sure there's a way to get to the light sensor - but you can't have it. Not even if you're Google.
  • Google, of course, are all open. Open, open, open - for certain values of open. And their approach demonstrates this - there’s a Service to find out what sensors are available, and you can pull in readings from any of them. Their API currently supports all sorts: light, magnetic fields, pressure, even temperature. It’s all there for you to play with, and to me it feels quite raw: browsing the API documentation caused the same floods of testosterone throughout my body that compiling a kernel used to give me when I was in my 20s.
  • And it’s probably not surprising that much of the really esoteric stuff involving third party hardware ties into Android, as a result. This is the ADK Dash Kit. It supports RFID, Range Finder, Servo, Relay, Temperature and humidity sensors, and something called “Human Existence”. It’s in no way sinister for us to be building robots with sensors like that.
  • And as for access to these kinds of things from web applications: the W3C has a working group, the Device APIs Working Group, which has been looking at this since 2009, but their Generic Sensor API, whilst on the roadmap, is listed as exploratory work - roughly equivalent to being on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”. I couldn’t find any actual implementations. So for now, if you want to access sensors from a mobile web application then your best bet is to wrap it up in PhoneGap, which has explicit support for accelerometer data, compass, camera, geolocation - all the standard use cases - but lacks the more esoteric stuff that you get with Android.
  • So let’s look at a few apps out there that are at the edges. I’m not suggesting these are the future for all applications, or will be huge commercial successes. We’ll kick off with an example that draws from biology; this is Sonar Ruler. It does exactly what its name suggests: emits clicks, bounces them off objects, and from the delay before the echo returns, determines how far away an object is. Think of it as a dolphin you can keep in your pocket. It does this by processing input from the microphone in real time, on the iPhone. Now, it only works in one direction, but it works - up to 60 feet, they say. I tested it myself to a few feet and was quite surprised by how good it was. If we can do real-time processing of audio to locate objects in real space, what else could we do with sensors that reach into the radio spectrum, detect vibration, or look at alterations in handset position? http://www.istockphoto.com/stock-photo-15260380-large-flying-fox.php?st=3a20f06
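Sonar Ruler's implementation isn't public, but the arithmetic behind any echo-ranging approach is simple: measure the delay between the emitted click and its echo in the recorded audio, then halve the round trip. A sketch, assuming a 44.1kHz sample rate:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20°C


def samples_to_delay(sample_offset, sample_rate=44100):
    # Delay between the emitted click and its echo, measured in audio samples.
    return sample_offset / sample_rate


def echo_delay_to_distance(delay_seconds):
    # Sound travels to the object and back, so halve the round trip.
    return SPEED_OF_SOUND * delay_seconds / 2.0
```

An echo arriving 4,410 samples after the click - a tenth of a second - puts the object about 17 metres away; the hard part, which the app does in real time, is picking the echo out of noisy microphone input.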
  • GymFu is a bit closer to home: a UK startup that’s been building exercise products for a few years now. Imagine Nike+ for press-ups - and squats, and pull-ups, and sit-ups. It’s a training tool encouraging you to do more and self-improve, with a competitive element: you can challenge other GymFu-ers. A bit more down-to-earth: it gathers accelerometer data from iPhones, analyses it on the device, works out what it means, and associates that with specific motions of exercise. It’s quite neat, telling me when I’m doing sit-ups too quickly, say.
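GymFu haven't published how their analysis works, but a toy version of rep-counting from accelerometer data can be built with a threshold-plus-hysteresis scheme over the acceleration magnitude. The thresholds below are hypothetical, not GymFu's:

```python
import math


def count_reps(samples, high=12.0, low=8.0):
    """Count exercise repetitions from accelerometer readings.

    samples: list of (x, y, z) acceleration tuples in m/s^2.
    A rep is counted each time overall magnitude rises above `high`
    after having dropped below `low`; the hysteresis gap stops sensor
    jitter around a single threshold being counted as extra reps.
    """
    reps = 0
    armed = True  # ready to count the next rep
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if armed and magnitude > high:
            reps += 1
            armed = False
        elif not armed and magnitude < low:
            armed = True
    return reps
```

At rest the magnitude sits near gravity (~9.8 m/s²), between the two thresholds, so nothing fires; a real product would also have to filter out walking, phone handling and the device-to-device variance discussed later.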
  • This one I love - it’s a project that came out of the Sony Computer Science Lab in Paris. This guy standing surreptitiously by the side of the road is a member of the public using his phone to record sound readings. The app on his phone takes a combination of noise levels from the microphone, and manual tags - so if he’s standing by something specifically noisy, like a bus, he can say so. What I like about this is that by aggregating data from many different people, over many different times, NoiseTube builds up a living view of the city. Noise pollution can vary massively throughout the day, and this is a way of representing that. And they have a nice API for submitting data, or pulling it down - so you can map or visualise it yourself. It’s just a research project, but not dissimilar to OpenSignalMaps (IMAGE?). NoiseTube is also unusual because they support so many different types of device - in an effort to get the public using it, they’re on iOS, Android, even old Java phones.
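One detail worth knowing if you aggregate readings the way NoiseTube does: decibels are logarithmic, so averaging them naively gives the wrong answer. A sketch of the energy-mean approach - this is the standard acoustics technique, not necessarily what NoiseTube itself implements:

```python
import math


def mean_noise_level(db_readings):
    """Average sound pressure levels expressed in dB.

    Readings are converted back to linear power, averaged, and converted
    to dB again; a plain arithmetic mean of the dB values would
    understate the contribution of loud moments.
    """
    if not db_readings:
        raise ValueError("need at least one reading")
    powers = [10 ** (db / 10.0) for db in db_readings]
    return 10.0 * math.log10(sum(powers) / len(powers))
```

For example, averaging a quiet 50dB stretch with a loud 70dB one gives about 67dB, not 60dB - one passing bus dominates an hour of quiet street.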
  • A genuinely empowering app: it maps aspects of the earth that don’t bother most of us, and exposes them in a completely unpatronising way to those that need them. http://www.hillsareevil.com http://www.watershed.co.uk/dshed/hills-are-evil
  • I spoke to the ladies and gentlemen behind a few of these projects, to find out a bit about what they learned doing this stuff in the real world. Much of what I asked about, and learned about, concerned the problems with doing sensor stuff, but something which half-surprised me was their attitude towards on-device processing of data. Maybe I’m a bit old, but I had imagined this stuff would be complicated or lend itself to some sort of server magic... one thing they all mentioned was that this wasn’t the case, and that doing things on device was the way to go. CPUs in modern mobiles pack enough of a punch to allow this sort of thing.
  • One problem both the apps and the OS face is that the real world isn’t a sterile laboratory. When you take a technique which works well in a controlled environment and bring it out into people's pockets, you face difficulties; if you look around for academic papers on, say, analysis of context from accelerometer data, you’ll see lots of folks who get reasonable accuracy in the lab, then report in the last paragraphs that things aren’t so good outside, actually, and tail off... This is because the world is full of noise; there’s lots going on; and if you were trying to get the most value from your sensors, you wouldn’t pack them together so closely inside the casing of a mobile phone. Sometimes they interfere with one another: GymFu found that the case design of the second iPod touch had speaker vibrations feeding into the accelerometer. Also remember that these are frequently not general-purpose components. The audio is optimised for speech; Ellie D’Hondt of NoiseTube noticed that “as smartphones become more specialised towards speech, they will in general be less adequate for noise monitoring”.
  • Similar devices using similar components can behave very differently. Here’s the proximity sensor in the iPhone 4 compared to that in the iPhone 3. Look at the clear differences in behaviour.
  • Apple actually had quite well-publicised problems with proximity sensors not working properly on some versions. Steve had to get up and apologise for them. Sad Steve. From the perspective of an application, it doesn’t matter whether this is software, or hardware, or a mix of the two - maybe in hardware but fixable in software. And hardware often varies between builds of the same device. I remember many years back a Nokia PM advised me to buy a new handset early - because when a device comes out it can be sold at a premium; over time it’s no longer the newest piece of kit on the shop shelf, there’s price pressure on the vendor, and so they start using cheaper components. I’m sure this happens elsewhere. But with devices being built by different manufacturers, from different components - like, say, Android phones - we can expect the problem to be worse. “The HTC Desire HD is not usable, as it cuts off at around 78dB” - Ellie D’Hondt, NoiseTube
  • And I don’t want to dwell on this one because you all know it - but you can’t get away from the fundamentals of physics; in the case of mobile, we know we’re working with limited battery life. If you’re using a sensor, you’re not just reading data - you’re most likely doing more than that. You’re analysing it, storing it, transmitting it. It all takes power.
  • I’d like to close by talking about a few places where I think we can usefully look for inspiration about all this stuff. The first is literature, and specifically the His Dark Materials trilogy by Philip Pullman, who invented the concept of daemons. For those of you who haven’t read the books, daemons are animal-like manifestations of souls; in some of the universes in his books, every human has one. They simultaneously represent their owner's personality, whilst being capable of acting on their behalf - and to be separated from one's daemon causes massive emotional and physical pain. Now if that’s not an analogy for the mobile phone, I don’t know what is. I’ve illustrated this slide with a picture of the stage adaptation of His Dark Materials, because I think that gives us a second analogy to work with: puppetry. There was a great book back in the 1990s called The Media Equation, by Nass and Reeves, which demonstrated that people react to software as though it were another human being: so, for instance, if it deals with you politely, you like it more. Nass and Reeves showed that everyone does this - even people who are deeply technical and understand “it’s just software” - like you lot. So here we are, building software that we’re trying to humanise and make friendlier, trying to make our machines more like us. Who’s been crossing this gap for hundreds of years, convincing us that machines are in fact human, inviting us to join in with a willing suspension of disbelief? Puppeteers, that’s who. I think that’s a talk for another day.
  • The second place is art, and specifically artists who play with the senses. I’m thinking particularly of projects like this one: Animal Superpowers, by Kenichi Okada and Chris Woebken, which gives children the senses of animals. On the left you can see Ant, a helmet which cuts off normal eyesight and replaces it with two cameras mounted in “feelers” in those giant red paws. The cameras are set to 50x magnification - the child gets to feel their way around the environment, looking at it close up. On the right you can see Giraffe - a periscope which effectively raises their eye-level a couple of feet and thereby gives them an adult's view of the world. The obvious analogy for this sort of thing is augmented reality - which I’m personally a bit sceptical of the value of - and I’d recommend you watch the excellent Kevin Slavin talk on this topic. I could imagine an app that visualises seismic vibrations on-screen quite usefully, say. Or one that aggregates noise volume over the course of your day, for health reasons.
  • And finally, you have the artists who are helping us visualise the world our normal senses can’t see: frequencies beyond visible light, stretching into the radio spectrum. This picture, which I’m sure you’ll have seen before, is part of a series by Timo Arnall, plotting wi-fi signal strength around Oslo using long-exposure photography. See how the signal peaks and troughs. Now imagine how useful it would be for your phone to tell you: “walk 10 feet over there to a patch of signal”, or “there’s an open hotspot just through the door on your right”. Or think of an app which records who you are near, day in, day out, and politely connects you to folks who share your daily routine.
  • Who owns your pulse and heart rate over the last 6 months? What rights do the police have to inspect the rhythmic motion of your trouser pockets? What about the information derived from this raw data, now or in the future - tell-tale vibrations that reveal you’re experiencing withdrawal symptoms from alcohol? Am I allowed to see the signals that spill out of your laptop into the public space around you? Don’t laugh - there is a discipline, telehaematology, which has folks diagnosing diseases from cameraphone photos of blood. We can derive all sorts of information from unusual sources, and our ability to do so grows over time. Look at how the US military has quite publicly been gleaning intelligence about al-Qaeda from mobile call patterns - the trails we leave today can be analysed with the technology of tomorrow. A lot of the work that’s been done here so far is voluntary - if you’re downloading NoiseTube then it’s clear that you’re happy to give up some location and audio data. None of the folks I spoke to behind these apps had heard any complaints from end-users about the rights to or storage of this data, but if Flurry Analytics were recording your physical movements along with keypresses - and I can imagine situations in which that would be useful - then perhaps the story would be different.
  • So: I wanted to give you a general introduction to sensors, and I’m done. To sum up: the mental models we have for computers don’t fit the devices we have today, which can reach much further out into the real world and do stuff - whether it be useful or frivolous. We need to think about our devices differently to really get all the possible applications, but a few people are starting to do this. Different platforms let you do this in different ways, and standardisation is rare - either in software or hardware. And there’s a pile of interesting practical and ethical problems just around the corner, waiting for us. Feedback please! Thank you.

Transcript

  • 1. Making sense of sensors Re-thinking personal computing @twhume
  • 2. Our mental models for ourselves are dated. Descartes and homunculi have left their mark
  • 3. The tools we have built to support our minds fit (or are fitted to) these models...
  • 4. ...and the blueprint for our tools hasn’t changed (much) Arithmetic & logic unit Control unit Memory (instructions & data) Results of operations Instructions & data I/O
  • 5. But this blueprint, and these models, are increasingly irrelevant. Modern computing devices are more about I/O than processing $32 Sensors/touchscreen/GPS $27 16GB Flash memory $25 RF components $14 4GB DRAM memory $11 A4 processor $10 display
  • 6. There’s no shortage of sensors. microphone touch screen GPS physical keys Bluetooth wifi 3G camera(s) accelerometer magnetometer compass light proximity gyroscope gravity sensor linear acceleration rotation vector NFC
  • 7. Their mainstream uses tend to be subtle. Perhaps because we resent the digital intruding upon the physical?
  • 8. We need them: finger-to-screen, we’re out of bandwidth. So we must look for other ways to communicate with our devices Flickr quinn.anya
  • 9. Different platforms expose sensors differently. You can see assumptions and philosophies play out Flickr sycamoremoonstudios
  • 10. iOS is like Disneyland... Carefully curated and packaged use cases Find better image - scary Mickey? Jack & Jill Magazine
  • 11. ...a beautiful but gated kingdom.
  • 12. Android is the chocolate factory... Warner
  • 13. Android is the chocolate factory...
  • 14. ... and the web is working on it. PhoneGap is where the action is today, in practice Sony Pictures
  • 15. Sensors can help apps mimic nature... Sonar Ruler, working echolocation for the iPhone
  • 16. ...derive meaning from movement... GymFu, a personal training app for iPhone
  • 17. ...observe our environment en masse... NoiseTube: a collaborative mapping project from Sony Paris
  • 18. ...or map it in specialist, meaningful ways. Hills Are Evil: maps that matter for people with restricted mobility
  • 19. On-device processing is very feasible... Your mileage may vary, terms and conditions apply
  • 20. ...but the real world isn’t a sterile lab... It’s full of noise, and packing sensors together doesn’t help Portal 2 Valve
  • 21. ...components can vary in performance...
  • 22. ...components can vary in performance...
  • 23. ...and you can’t change physics. The ever-present consideration of power consumption
  • 24. We can look for inspiration from literature... Daemons in the Dark Materials trilogy, envisaged on-stage as puppets
  • 25. ...to artists who give us new ways to look at a familiar world... Animal Superpowers by Kenichi Okada & Chris Woebken
  • 26. ...or to those showing us the unseen all around us. Immaterials: light painting WiFi film by Timo Arnall
  • 27. Think of image. What symbolises paranoia, the state - Brazil? Things to keep us awake at night. We’re leaving trails for future technology to pick up
  • 28. Thank you for not heckling (unless you did) ...and thank you to Trevor May, Dan Williams, Timo Arnall, Jof Arnold, Ellie D’Hondt, Usman Haque, Gabor Paller, Sterling Udell, Martyn Davies, Daniele Pietrobelli, Andy Piper and Jakub Czaplicki for all their help with this presentation. I have been @twhume Please tell me what you thought of this, in the bar or at http://bit.ly/twh_fom futureplatforms.com