This document discusses mobile phone sensing applications and their key challenges. It focuses on how the sensing capabilities of phones enable inference about users' activities, social contexts, and conversations. It also covers the architectural design of mobile sensing systems, including splitting classification between phones and servers, duty-cycling to save energy, and protecting privacy through local processing and user control over data sharing. Maintaining user privacy remains an important open challenge, especially since sensors can capture information about nearby third parties.
In this project, we describe a unique architecture for indoor navigation that integrates behavior recognition, multisensory indoor localization, and path planning in order to proactively provide directions without direct input from users. To our knowledge, this is the first architecture that attempts to integrate the core navigation components of path planning and localization with intent prediction toward a more refined navigation solution. The system comprises three core components: augmented reality, map representation and route planning, and plan recognition.
To achieve effective localization, we provide pre-built maps referenced by QR codes distributed at various places in the indoor space. We use Augmented Reality to build an intuitive, user-friendly interface: scanning a QR code identifies which of the pre-loaded maps applies, for the user's convenience.
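The QR-to-map step described above can be sketched as a simple lookup: the scanned payload names a map identifier, which resolves to a pre-built map. This is a minimal illustration; the payload format (`map:<id>`), the registry contents, and the function names are assumptions, not the project's actual design.

```python
# Hypothetical sketch: resolving a scanned QR payload to a pre-built indoor map.
# MAP_REGISTRY, the "map:<id>" payload format, and resolve_map are illustrative.

MAP_REGISTRY = {
    "floor-1": {"name": "Ground Floor", "nodes": ["entrance", "lobby", "cafe"]},
    "floor-2": {"name": "First Floor", "nodes": ["lab-201", "lab-202", "stairs"]},
}

def resolve_map(qr_payload: str):
    """Decode a QR payload of the form 'map:<id>' and return the stored map."""
    prefix, _, map_id = qr_payload.partition(":")
    if prefix != "map":
        raise ValueError("not a map QR code")
    return MAP_REGISTRY.get(map_id)  # None if the map is unknown

print(resolve_map("map:floor-1")["name"])  # prints "Ground Floor"
```

Keeping the maps pre-loaded on the device (or encoded in the QR payload itself) is what lets localization work without network connectivity.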
Speech is the most important mode of communication for people, and speech interfaces have become increasingly important with advances in artificial intelligence. In this project, a wheelchair is controlled by speech commands. Commands are captured by a microphone, features are extracted with the Mel-Frequency Cepstral Coefficients (MFCC) algorithm, and the commands are recognized with Artificial Neural Networks. Finally, the recognized commands are converted into a form the wheelchair can act on. The proposed system is a robotic vehicle operated by human speech commands. An Android device transmits voice commands over Bluetooth to a Raspberry Pi: the Android phone's Bluetooth radio acts as the transmitter, and a Bluetooth receiver mounted on the Raspberry Pi decodes the transmitted commands. The wheelchair interprets these commands to move left, right, backward, or forward; the controller then drives the vehicle's motors accordingly, using a driver IC to control motor movement. The Bluetooth link allows the system to be operated remotely within a good range. In summary, the voice-operated robot moves as commanded by the voice recognition module: the robot matches each received command against its stored program and executes it over the wireless link.
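The final stage of the pipeline above, matching a recognized command against the stored program, amounts to a small dispatch table. The command vocabulary and the (left wheel, right wheel) encoding below are assumptions for illustration, not the project's actual motor API.

```python
# Illustrative sketch: map a recognized speech command to a motor action.
# The command names and the wheel-direction encoding are assumed, not taken
# from the project; a real system would drive GPIO pins via a driver IC.

COMMANDS = {
    "forward":  ( 1,  1),   # (left wheel, right wheel): 1 = forward
    "backward": (-1, -1),
    "left":     (-1,  1),   # spin left: left wheel back, right wheel forward
    "right":    ( 1, -1),
    "stop":     ( 0,  0),
}

def dispatch(command: str):
    """Return the wheel-direction pair for a recognized command.

    Unknown or garbled commands fall back to "stop" for safety.
    """
    return COMMANDS.get(command.strip().lower(), COMMANDS["stop"])

print(dispatch("Forward"))  # prints (1, 1)
```

Falling back to "stop" on unrecognized input is a deliberate safety choice: a misheard command should halt the chair rather than move it.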
Blue Eyes Technology gives perceptual abilities to a computer using Bluetooth, an eye-gaze tracker, and an emotion-sensing mouse, thereby enabling it to interact with human beings.
Smartphone and tablet apps for people with disabilities (jemsshep07)
This presentation explains assistive technology, accessibility and universal design with regard to smartphones and tablets. It also presents a list of recommended apps for case managers and people with disabilities.
Enhancing Academic Event Participation with Context-aware and Social Recommen... (Dejan Kovachev)
The plethora of talks and presentations taking place at academic conferences makes it difficult, especially for young researchers, to attend the right talks or to discuss with participants and potential collaborators with similar interests. Participants may not have the a priori knowledge that would allow them to select the right talks or to have informal interactions with other participants. In this paper we present context-aware mobile recommendation services (CAMRS) based on the current context (whereabouts at the venue, popularity and activities of talks and presentations) sensed at the conference venue. Additionally, we augment the current context with the academic-community context of conference participants, which is inferred using social network analysis and link prediction on large-scale co-authorship and citation networks of participants. By combining the dynamic and social context of participants, we are able to recommend talks and people that may be interesting to a particular participant. We evaluated CAMRS using data from two large digital libraries (DBLP and CiteSeerX) and participants from two conferences (ICWL 2010 and EC-TEL 2011). The results show that the new approach can recommend novel talks and helps participants establish new connections at the conference venue.
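Link prediction on a co-authorship network, as used in the abstract above, can be illustrated with the common-neighbors score, one of the simplest link predictors: two researchers who share many collaborators are likely to collaborate in the future. The toy graph below is invented for illustration; CAMRS itself works on large-scale DBLP/CiteSeerX data.

```python
# Minimal sketch of link prediction via the common-neighbors score.
# The co-authorship graph below is a made-up toy example.

coauthors = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}

def common_neighbors(g, u, v):
    """Score a candidate edge (u, v) by the number of shared neighbors."""
    return len(g[u] & g[v])

# A and D share collaborator B, so (A, D) is a plausible future collaboration.
print(common_neighbors(coauthors, "A", "D"))  # prints 1
```

Real systems typically use richer scores (Adamic-Adar, Katz) over the same idea, but the principle of ranking candidate edges by neighborhood overlap is the same.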
Mainstream mobile devices are being loaded with sensors. These devices can be used to create experiences that are tailored, adaptive and responsive to the way people live and work. Location-awareness allows devices to respond to place, networked address books enable socially rich communication experiences, and motion and gestural sensors empower designers to respond to context of use. All these elements are creating a 'sensitive ecosystem': mobile devices that adapt gracefully to context and use.
This presentation will explore some of the design and technology trends that are shaping design for mobile devices, show examples of devices and services that are starting to take advantage of these trends, then explain how designers need to rethink design problems to take advantage of this technological ground-shift.
Presented at Web Direction South '08.
Extend Material Design with mobile sensors (SnapbackLabs)
Android 5.0 Lollipop arrives with the new Material Design, which defines a new language for designing interfaces, giving thickness to UI elements as if they were concrete blocks that could be manipulated. But this is not the only new feature of 5.0: sensor management has also had a deep review that gives developers more control and opens the way to new paradigms of human-machine interface. Material Design and mobile sensors can be combined to provide an innovative and better user experience.
The talk will present Android sensor types and management, both for app developers and AOSP developers, along with a brief overview of Material Design. It will then discuss how to extend the possibilities of Material Design using mobile sensors, with an eye on battery consumption.
These are the slides we use to demonstrate the trend toward contextual computing and the future of mobile services and its effect on the world.
What is this? It is what emerges when sensors, wearable computers, big data, social network data, and location data are fused together to make a new kind of computing possible.
Sherlock: Monitoring sensor broadcasted data to optimize mobile environment (ijsrd.com)
Rich-sensor smartphones have made possible the recent birth of mobile sensing as a research area within ubiquitous sensing, which integrates other areas such as wireless sensor networks and web sensing. The object of sensing can be people-centered or environment-centered, and the sensing domain can be the home, the city, vehicles, etc. There are, however, barriers that limit the social acceptance of mobile sensing systems; technical barriers include phone energy consumption and the variety of sensors and the software needed to manage them. In this article, we design and implement the Sherlock technology, which captures a phone's micro-environment through its sensors, automatically records sensor hints, and optimizes that micro-environment. We refer to the immediate surroundings, usually several to a dozen centimeters around a phone, as the micro-environment. The platform runs as a daemon process on a smartphone and provides finer-grained environment information to upper-layer applications via programming interfaces. Sherlock is a unified framework covering the major cases of phone usage, placement, attitude, and interaction in practical use with complicated user habits. The main objectives are to save battery in mobile sensing systems and to provide security.
The proposed System for Indoor Location Tracking (Editor IJCATR)
Indoor location tracking systems are used to locate people or objects in buildings and other closed areas. For example, finding co-workers in a large office building, locating customers within a shopping mall, and locating patients in a hospital are a few applications of indoor location tracking. Indoor tracking capability opens up multiple possibilities. To address this need, this paper describes the implementation of a Bluetooth-based indoor location tracking system that utilizes the Bluetooth modules integrated into today's mobile phones to determine and display the location of individuals in a building. The proposed system targets location tracking/monitoring and marketing applications for those who want to locate individuals carrying mobile phones and advertise products and services to them.
Introduction to Mobile Computing: concept of mobile communication; different generations of wireless technology; basics of the cell, cluster, and frequency-reuse concepts; noise and its effects on mobile; understanding GSM and CDMA; basics of GSM architecture and services such as voice calls, SMS, MMS, LBS, and VAS; different modes used for mobile communication; three-tier architecture of mobile computing.
OFFLINE CONTEXT AWARE COMPUTING FOR PROVIDING USER SPECIFIC RESULTS (Journal For Research)
In today's developing world, various technologies are emerging that aim to reduce the user's work by providing specific results, made possible by keeping track of the various activities the user performs over time. Context-aware computing is one of the emerging fields that helps provide accurate and specific results. It refers to a general class of mobile systems that can sense the physical environment the user is currently in and adapt their behavior accordingly. Context awareness is a property of mobile devices defined as a complement to location awareness. Systems that use context-aware computing provide the user's preferences based on environmental factors such as location, time, and other conditions. Researchers have worked in this domain for about a decade, and various applications were developed to demonstrate the capability of context-aware systems; recently the technology has come into heavy use, since the applications companies build for end users are expected to provide more accurate and specific results. Context-aware systems combined with pervasive computing, meaning always connected and available, are a growing trend on devices. Depending on the internet for results introduces complexities, because in some places internet access is limited. The context-aware technology should therefore be made to work offline, using the power of smart devices such as smartphones, which the majority of the world owns. In this way the technology can be provided on various smart devices, resulting in a much richer user experience.
162. What components form the fundamental structure of mobile computing - Quo... (Mr.Service Academy)
Mobile computing, a dynamic and developing area, requires a complex interplay of several components to deliver the seamless experience we have come to expect from our smartphones and other portable gadgets. Hence Mr. Service Academy provides smartphone repairing courses in Chennai.
IoT and cloud based computational framework, evolutionary approach in health... (owatheowais)
The new Internet of Things paradigm allows small devices with sensing, processing, and communication capabilities to be designed, enabling the development of sensors, embedded devices, and other 'things' ready to understand the environment. In this paper, a distributed framework based on the Internet of Things paradigm is proposed for monitoring human biomedical signals during activities involving physical exertion. The main advantages and novelties of the proposed system are its flexibility in running the health application using resources from available devices inside the user's body area network. The proposed framework can be applied to other mobile environments, especially those with intensive data acquisition and high processing needs. Finally, we present a case study to validate our proposal: monitoring footballers' heart rates during a football match. The real-time data acquired by these devices serves a clear social objective: predicting not only situations of sudden death but also possible injuries.
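The heart-rate monitoring idea in the abstract above can be sketched as a simple out-of-band alert over a stream of samples. The safe band, the sample values, and the function name are assumptions for illustration only; the paper's actual framework distributes this computation across body-area-network devices.

```python
# Hedged sketch: flag heart-rate samples outside a safe band during exertion.
# SAFE_LOW/SAFE_HIGH and the sample values are illustrative assumptions,
# not clinically validated thresholds.

SAFE_LOW, SAFE_HIGH = 50, 190  # beats per minute (assumed band)

def alerts(samples):
    """Return the indices of samples that fall outside the safe band."""
    return [i for i, bpm in enumerate(samples) if not SAFE_LOW <= bpm <= SAFE_HIGH]

match_minute = [72, 110, 205, 140, 48]
print(alerts(match_minute))  # prints [2, 4]
```

In a deployed system the check would run continuously on whichever in-network device has spare capacity, with alerts pushed to medical staff in real time.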
26. What goes on the phone? The app runs on mobile phones and integrates with backend infrastructure hosted on server machines. The software on the phone senses, classifies the raw sensed data, produces primitives, presents people's presence on the phone, and uploads the primitives to the backend servers.
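The phone-side flow above (sense, classify into a primitive, queue for upload) can be sketched minimally as follows. The activity labels, thresholds, and function names are assumptions for illustration, not CenceMe's actual classifier.

```python
# Minimal sketch of the phone-side pipeline: raw reading -> primitive -> upload
# queue. Labels and thresholds are illustrative assumptions; the real on-phone
# classifier is trained, not hand-tuned.

from collections import deque

upload_queue = deque()  # primitives awaiting upload to the backend servers

def classify(accel_magnitude: float) -> str:
    """Toy on-phone classifier: reduce a raw accelerometer reading (in g)
    to an activity primitive."""
    if accel_magnitude < 1.1:
        return "stationary"
    if accel_magnitude < 2.0:
        return "walking"
    return "running"

def sense_and_upload(raw_reading: float) -> str:
    primitive = classify(raw_reading)
    upload_queue.append(primitive)  # flushed to the backend in batches
    return primitive
```

Classifying on the phone and uploading compact primitives, rather than raw sensor streams, is what keeps the bandwidth and energy cost of continuous sensing manageable.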
35. Backend Classifiers (http://www.youtube.com/watch?v=aAKplAaPAHE&NR=1) Conversation Classifier: a binary classifier that determines whether a person is in a conversation or not, taking the audio primitives from the phone as input. Social Context: the output of this classifier is the social-context fact, derived from multiple primitives and facts provided by the phone and by other backend classifiers. This can include neighborhood conditions, i.e., whether there are any buddies in a person's surrounding area. Social Status: builds on the output of the conversation and activity classifiers and on detected neighboring buddies to determine whether a person is gathered with buddies, talking (for example at a meeting or a restaurant), alone, or at a party; social status also includes the classification of partying and dancing.
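The conversation classifier above can be illustrated as a binary decision over audio primitives. The features (fraction of voiced frames, mean energy) and the thresholds are assumptions for the sketch; the actual backend classifier is trained from labeled data rather than hand-set.

```python
# Hedged sketch of a binary conversation classifier over audio primitives.
# The features and thresholds are illustrative assumptions, not the trained
# model used by the backend.

def in_conversation(voiced_fraction: float, mean_energy: float) -> bool:
    """Return True if the audio primitives suggest an ongoing conversation.

    voiced_fraction: share of recent audio frames classified as speech (0-1).
    mean_energy: normalized average audio energy over the same window (0-1).
    """
    return voiced_fraction > 0.4 and mean_energy > 0.1

print(in_conversation(0.6, 0.3))   # lively talking -> True
print(in_conversation(0.1, 0.05))  # quiet room -> False
```

The higher-level Social Status fact would then combine this boolean with the activity classifier's output and the list of nearby buddies.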
Touch-screen technology is a great first step toward easier human-to-phone interaction, but the phone's sensors make it possible to drive the phone in even more intuitive ways. Using a combination of the onboard accelerometer, light sensor, compass, and gyroscope, simple gestures could be used to activate or deactivate phone features. The phone's front camera could, for example, detect eye movement to select applications, e.g., wink to activate the application the user focuses on (see the EyePhone, NeuroPhone, and Darwin Phones projects).
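A simple sensor gesture of the kind described above is shake detection from the accelerometer: the gesture fires when the acceleration magnitude exceeds a threshold in enough recent samples. The threshold, window size, and hit count below are illustrative assumptions.

```python
# Illustrative sketch of a "shake" gesture from raw accelerometer samples.
# SHAKE_G, WINDOW, and MIN_HITS are assumed tuning values, not taken from
# any of the projects mentioned above.

import math

SHAKE_G = 2.5   # magnitude threshold, in units of g
WINDOW = 5      # number of most recent samples to inspect
MIN_HITS = 3    # samples that must exceed the threshold to count as a shake

def magnitude(x: float, y: float, z: float) -> float:
    return math.sqrt(x * x + y * y + z * z)

def is_shake(samples) -> bool:
    """samples: list of (x, y, z) readings in g; True if it looks like a shake."""
    recent = samples[-WINDOW:]
    hits = sum(1 for s in recent if magnitude(*s) > SHAKE_G)
    return hits >= MIN_HITS

still = [(0.0, 0.0, 1.0)] * 5                       # phone resting: ~1 g
shaken = [(3.0, 0, 0), (0, 3.0, 0), (3.0, 0, 0), (0, 0, 1.0), (0, 3.0, 0)]
print(is_shake(still), is_shake(shaken))  # prints False True
```

Requiring several over-threshold samples, rather than one, filters out single bumps and keeps the false-positive rate tolerable without running a heavier classifier.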
Mobile phone sensing is effective across multiple scales: a single individual (e.g., UbiFit Garden), groups such as social networks or special-interest groups (e.g., Garbage Watch [23]), and entire communities or the population of a city (e.g., Participatory Urbanism [20]). UbiFit Garden explores how on-body sensing and personal displays can encourage people to incorporate physical activity into everyday life. Physical activities are inferred from on-body sensors with the Intel Mobile Sensing Platform (MSP), which infers walking, running, cycling, elliptical trainer, and stair machine; a mobile journal allows the individual to add activities the MSP does not infer, such as yoga, weight training, and swimming, and to edit and annotate inferred activities. As the individual performs activities, a garden blooms on his or her mobile phone, providing key information at a glance, such as whether the person is having an active or inactive week, has incorporated variety into the routine, has met the weekly goal, and has met a goal recently. Millions of people participate regularly in online social networks. The Dartmouth CenceMe project is investigating the use of the sensors in the phone to automatically classify events in people's lives, called sensing presence, and to selectively share this presence through online social networks such as Twitter, Facebook, and MySpace, replacing manual actions people now perform daily. Garbage Watch investigates what gets thrown away in garbage bins on campus: users submit photos of what is in trash bins, and analyzing this data helps facilities figure out where recycle bins could be added and how waste can be reduced by identifying its main sources.
Heat-map visualization of carbon monoxide readings across Accra, Ghana: colors represent individual taxicabs, and patch size indicates the intensity of carbon monoxide readings during a single day across the capital city. Note the variation across the city, within small neighborhoods, and on the approach to the international airport. What happens when individual mobile devices are augmented with novel sensing technologies for noise pollution, air quality, UV levels, water quality, and so on? We claim that it will shatter our understanding of these devices as simply communication tools (a.k.a. phones) and celebrate their new role as measurement instruments. We envision a wide range of novel physical sensors attached to mobile devices, empowering everyday non-experts with new "super-senses" and abilities. This radically alters the current model of civic government as sole data gatherer and decision maker by empowering everyday citizens to collectively participate in super-sampling their life, city, and environment. Integrating simple air-quality sensors into networked mobile phones enables everyday citizens to uncover, visualize, and collectively share real-time air-quality measurements drawn from their everyday urban lifestyles. This rich, people-driven sensor data can help address community power imbalances and can increase decision makers' understanding of a community's claims, thereby potentially increasing public trust. Such detailed local knowledge informs environmental health research and environmental policy making, persuading both individuals and civic government toward positive improvements in air quality and environmental change.
Conventional ways of measuring and reporting environmental pollution rely on aggregate statistics that apply to a community or an entire city. The University of California, Los Angeles (UCLA) PEIR project uses the sensors in phones to build a system that enables personalized environmental impact reports, which track how the actions of individuals affect both their exposure and their contribution to problems such as carbon emissions. Most examples of community sensing only become useful once a large number of people participate, for example tracking the spread of disease across a city, the migration patterns of birds, congestion patterns across city roads, or a noise map of a city. These applications represent large-scale data collection, analysis, and sharing for the good of the community. Achieving scale implicitly requires the cooperation of strangers who will not trust each other, which increases the need for community sensing systems with strong privacy protection and low commitment levels for users. The impact of scaling sensing applications from personal to population scale is unknown; many issues related to information sharing, privacy, data mining, and closing the loop by providing useful feedback to an individual, group, community, and population remain open, and today we have only limited experience in building scalable sensing systems. Envisioned scenario: Mary suffers extreme problems with her asthma when she is gardening in her yard, yet the air and pollen counts for her city are consistently reported as below normal. However, the particulate-matter sensor on Mary's mobile phone reports dangerous levels of exposure to wood-burning pollutants. Checking a selection of sensor data from people who also take measurements on her block, perhaps her neighbors, Mary is able to see that the problem is concentrated around a single cul-de-sac near her.
Mary soon notices that several homes in that area are continuously using their fireplaces and generating excessive airborne pollutants, which is forbidden by local ordinance. She forwards the measurement collection to her local air quality measurement district, where action is taken to enforce the clean-air policy. Smell This: a strong odor suddenly overwhelmed residents of New York City. Emergency crews were unable to pinpoint any gas leaks or other causes, and the uncertainty caused anxiety and fear in citizens even after the odor dissipated the next day. After searching 140 industrial facilities, officials declared that they were giving up hope of finding the source of the mysterious odor. (This is what really happened on 8 January 2007 in New York; what follows is where Participatory Urbanism makes a difference.) However, the odor left trace nitrogen dioxide measurements logged by millions of New Yorkers' mobile phones, and a simple mashup of the data set on Google Maps identifies the culprit: a dangerous incinerator that is not supposed to be in use. City officials shut down the plant immediately. CleanCook: Tyler lives in Lagos, Nigeria, where his family often cooks indoors using charcoal. Tyler's son suffers from respiratory problems nearly every two weeks, yet the government has just received another award for outstanding regional air quality. Tyler checks his mobile phone's sulfur dioxide sensor and realizes that several hazardous-level measurements were taken about two weeks ago. Comparing his measurements to others shared online, he realizes the problem occurs during indoor cooking with freshly cut wood. Tyler alerts others with similar measurements and successfully petitions the government to provide new, cleaner sources of cooking fuel based on his and others' reported measurements.
TEAM: Eric Paulos (Intel Research Berkeley), Ian Smith (Intel Research Seattle), RJ Honicky (UC Berkeley).

Related Work
A collection of several inspirational projects: Urban Sensing (CENS / UCLA), SensorPlanet (Nokia), AIR (Preemptive Media), SenseWeb (Microsoft), and The Urban Pollution Monitoring Project (Equator UK).
Wireless sensor networks use battery-operated computing and sensing devices. A network of these devices collaborates on a common application such as environmental monitoring. The expectation is that sensor networks will be deployed in an ad hoc fashion, with individual nodes remaining largely inactive for long periods of time but becoming suddenly active when something is detected. These characteristics of sensor networks and applications motivate a MAC that differs from traditional wireless MACs such as IEEE 802.11 in almost every way: energy conservation and self-configuration are primary goals, while per-node fairness and latency are less important. A sensor MAC such as S-MAC therefore uses several techniques to reduce energy consumption and support self-configuration. To reduce the energy consumed listening to an idle channel, nodes sleep periodically. Neighboring nodes can form virtual clusters to auto-synchronize their sleep schedules, and can also put the radio to sleep during transmissions by other nodes. Finally, S-MAC applies message passing to reduce contention latency for sensor-network applications that require store-and-forward processing as data move through the network. On a source node, an 802.11-like MAC typically consumes 2-6 times more energy than S-MAC for traffic loads with messages sent every 1-10 s.

Multihop Sensor Networks (MSN): It is widely accepted that wireless sensor networks are inherently multihop in nature, due to the limited transmission range of resource-constrained sensor nodes. Advances have been made in multibit distributed data aggregation schemes, which minimize in-network communication for event detection applications.

Multihop Cellular Networks (MCN): Compared to existing single-hop cellular networks, MCNs provide higher throughput and capacity at lower transmission power requirements through effective spectral reuse.
Many advances in route discovery and resilience protocols, as well as in probability-of-error-based link scheduling algorithms, have been made.

Multihop Cellular Sensor Networks (MCSN): Cell phones empowered with sensing capabilities have given rise to cellular sensor networks, which can profoundly impact urban sensing applications. Multihopping in cellular sensor networks has enormous utility in moving-event localization applications. Novel data aggregation and routing protocols take into consideration the underlying mobility model and time-varying connectivity in MCSNs.
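Returning to S-MAC's periodic sleeping: the idle-listening energy saving can be illustrated with a small back-of-the-envelope model. The power figures and the 10% duty cycle below are illustrative assumptions, not measurements from the S-MAC work:

```python
def avg_power(listen_mw, sleep_mw, duty_cycle):
    """Average radio power for a node that listens for a fraction
    `duty_cycle` of each period and sleeps for the rest."""
    return duty_cycle * listen_mw + (1.0 - duty_cycle) * sleep_mw

# Illustrative radio figures (assumed, not from a datasheet):
LISTEN_MW = 60.0   # idle listening, radio on
SLEEP_MW = 0.1     # radio off

always_on = avg_power(LISTEN_MW, SLEEP_MW, 1.0)    # 802.11-style idle listening
smac_10pct = avg_power(LISTEN_MW, SLEEP_MW, 0.10)  # S-MAC-style 10% duty cycle

print(f"always-on: {always_on:.1f} mW, 10% duty cycle: {smac_10pct:.2f} mW")
print(f"savings factor: {always_on / smac_10pct:.1f}x")
```

Even this crude model shows why idle listening dominates: at a 10% duty cycle, average draw drops by roughly an order of magnitude, consistent with the multiple-fold savings reported for light traffic loads.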
The CenceMe system infers “facts” of various types (e.g., activity, social setting), which collectively compose the sensing presence of a person, an enhancement over the conventional, largely textual forms of presence information often used in IM clients (e.g., “I am away”). CenceMe allows a user to: (i) automatically export enriched forms of presence information to members of her social network (e.g., publish status messages in Facebook), and (ii) perform historical analysis of her activity (e.g., how often did I go to the gym this week?). CenceMe users install a sensing daemon on their phone that is designed not to disturb the normal user experience. The daemon pipes data sampled from the available sensors on the phone through resource-aware classifiers to produce facts about the user. Facts are buffered locally on the phone and opportunistically transmitted (e.g., via GPRS or WiFi) to CenceMe backend servers. Backend classifiers draw cross-user inferences and inferences requiring more facts than can feasibly be stored on the phone. Ultimately, facts stored in the backend servers are made available (filtered for privacy) via a standard CenceMe API supporting synchronous and subscription retrieval to applications such as web portals (e.g., Facebook, the CenceMe portal) and VOIP clients like Skype.

IMPLEMENTATION: The system runs on any Symbian-based cell phone that includes JVM support (e.g., Nokia N95, N80). The software architecture of the sensing daemon is split into modules written in C++ and Java to maintain portability where possible while addressing limitations of the JVM system APIs. Fact bundles are pushed to the backend servers via XML-RPC/web-service calls over either WiFi or GPRS. A web-service-based API is offered from the backend servers to external systems. We have built: (i) a number of CenceMe widgets for Facebook (see Figure 1(a)), and (ii) a web portal that offers a broader and deeper user experience than the widgets alone can provide (see Figure 1(b)).
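The daemon's sample-classify-buffer-upload loop described above can be sketched as follows. This is a minimal sketch, not CenceMe's actual code (the real daemon is Symbian C++/Java); all class and method names here are hypothetical:

```python
from collections import deque

class SensingDaemon:
    """Sketch of a CenceMe-style client: classify sensor samples into
    'facts' locally, buffer them, and flush opportunistically."""

    def __init__(self, classifier, uplink):
        self.classifier = classifier   # resource-aware local classifier
        self.uplink = uplink           # object with has_connectivity() / send()
        self.buffer = deque()          # facts awaiting opportunistic upload

    def on_sample(self, sample):
        fact = self.classifier(sample)  # e.g., {"activity": "walking"}
        self.buffer.append(fact)
        self.try_flush()

    def try_flush(self):
        # Opportunistic transfer: push only when WiFi/GPRS is available.
        while self.buffer and self.uplink.has_connectivity():
            self.uplink.send(self.buffer.popleft())

class FakeUplink:
    """Stand-in for the GPRS/WiFi link, for illustration only."""
    def __init__(self):
        self.sent, self.online = [], False
    def has_connectivity(self):
        return self.online
    def send(self, fact):
        self.sent.append(fact)

uplink = FakeUplink()
d = SensingDaemon(lambda s: {"activity": "walking" if s > 1.0 else "sitting"},
                  uplink)
d.on_sample(0.2)        # offline: the fact is buffered on the phone
uplink.online = True
d.on_sample(1.5)        # online: both buffered facts are flushed
print(uplink.sent)
```

The key design point captured here is that classification happens before upload, so only compact labels (not raw sensor data) cross the radio, and nothing is lost while connectivity is unavailable.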
For cell phones without the suite of sensors found on high-end models (e.g., accelerometers), a CenceMe key ring attachment is available, which gives the CenceMe daemon on the phone Bluetooth access to GPS and a 3-axis accelerometer. The Bluetooth interface is also used to discover neighbors in order to infer the social setting of the user. Future work will expand the current focus on consumer-driven social networking and apply CenceMe technology to public health initiatives, domain-specific sensing (e.g., skiing), and supporting logistics and production-line efficiency in commercial settings. We plan to demo all the functionality of the CenceMe system, both on the mobile phone clients and on the web portal.

LESSONS FROM IPHONE CENCEME
CenceMe is a social sensing application that integrates with popular social networking platforms such as Facebook, MySpace, and Twitter to augment a person’s context status using the phone’s onboard sensors. By running machine learning algorithms on the mobile phone itself over the sensor data collected by the phone’s accelerometer, microphone, Bluetooth/WiFi/GPS, and camera, and fusion techniques on backend machines, CenceMe infers a person’s activity (e.g., sitting, walking, running) and social context (e.g., dancing, with friends, at a restaurant, listening to music) in an automated way. This information is shared within the person’s social circle automatically, and the user can customize privacy settings to regulate what to publish and where. In what follows, we describe the architecture of the iPhone CenceMe client (designed to meet usability goals, keep the classifiers resilient, and preserve the phone user experience, for example in terms of battery life) and backend (designed to be robust against failures, bursty user access, etc.).

Client: The iPhone CenceMe client is implemented using a combination of Objective-C and legacy ANSI C code.
Objective-C is mainly used for the user interface implementation, to access the low-level sensors and the internal SQLite database, and to respect the model-view-controller principle of iPhone OS. C is adopted to implement the activity recognition classifier (which relies on a decision tree algorithm) and the audio classifier (which determines the surrounding audio level, noise or quiet, or whether a person is in a conversation). The audio classifier is a support vector machine (SVM) built with the LIBSVM C library [3]. The client is responsible for: i) performing the person’s presence inference over the sensor data by locally running the inference algorithms; ii) communicating the inference labels to the backend; iii) displaying the user’s and their buddies’ sensing presence (activity, social context, location), the privacy configurations, and various other interface views that allow, for example, a user to post short messages to their preferred social network account. The classifiers are trained offline in a supervised manner, i.e., by taking large collections of labeled data for both the audio and accelerometer modalities and using that data to train the classification models that are later deployed on the phone. Although earlier versions of the iPhone OS did not support multitasking (i.e., the capability to run applications in the background), the CenceMe client is designed to properly duty-cycle the sensing, inference routines, and communication rendezvous with the backend to limit the battery drain of the phone when the application is active. Reducing the battery drain is critical to avoid rapid battery discharge when the application is used, a condition that would negatively impact the phone user experience.

Cloud: The iPhone CenceMe backend, which is implemented on the Amazon Web Services cloud infrastructure, is composed of a series of different virtual machine images.
Each machine is an Amazon EC2 virtual machine instance running Linux, which provides a series of PHP- and Python-based REST web services allowing multiple machines to be composed together. Each image performs a different role in the backend infrastructure and has been designed to be initialized and composed with others to offer different operating points of cost and performance. This allows us to temporarily initialize different numbers of machines of different types depending on the existing or expected user workload. It also allows us to manage the cost of running the CenceMe service so that we provision additional machines (which incur additional costs) only when user demand requires it (for example, when a new model of the Apple iPhone is released and many users temporarily try out our application, which causes us to reconfigure our system). The server-side system is responsible for: i) storing user sensing presence information and allowing other CenceMe clients restricted access to this data; ii) publishing this sensing presence information to third-party social networks such as Twitter and Facebook; iii) maintaining the CenceMe social network friend link structure; iv) performing routine user registration and account maintenance tasks; and v) collecting statistics about user behavior on both the client and backend sides of the system.

Figure 1. The “deploy-use-refine” model adopted in CenceMe.

LESSONS LEARNT
In this section, we discuss the experience we gained by deploying CenceMe on the App Store and having it used by thousands of users worldwide. Throughout the development and evolution stages of iPhone CenceMe we applied a “deploy-use-refine” model (see Figure 1). According to this strategy, after initial tests in the lab, the application is deployed on the App Store. Following this phase, users start downloading and using the application.
Their feedback and user experience over time, submitted to us via a dedicated customer support email channel or the CenceMe discussion board, trigger the application fixing and refinement process, in order to meet user satisfaction and improve the application's usability. In what follows, the lessons learnt from the iPhone CenceMe deployment are reported.

Information Disclosure. When leveraging an application distribution system such as the App Store to collect data to be used for research purposes, it is very important that the user downloading the application is fully informed about the nature of the application and the data being collected, as well as how the data is going to be handled. Full disclosure of such information is often required by university IRBs, and disclaimers should be made clear in the application's terms of service. Given the sensitive nature of the iPhone CenceMe data, i.e., inference labels derived from sensor data, our university IRB required us to add a separate consent form following the terms-of-service page which explicitly states the purpose of the application and describes the nature of the data collected along with the use we make of that data. According to the IRB, this extra step is needed because people often do not read terms-of-service notes carefully, so a second, dedicated disclosure form is required. Of course, by requiring a consent form through the involvement of the IRB, as is often needed when carrying out research involving human subjects, the cycle of an application deployment becomes much longer. The IRB might take months to approve a certain research project, and even then several iterative steps may be needed in order to meet the IRB requirements. This implies long cycles before an application can be released. This extra time should be taken into consideration by researchers who want to carry out large-scale research through application distribution systems.
The second implication of adding an explicit consent form to the application is that users might opt out of using the application (as we verified with some of the iPhone CenceMe users). This is because people are not yet used to downloading research applications from a commercial platform such as the App Store, and they often do not understand the purpose of the research. As a consequence, the pool of users participating in the research might grow slowly.

Monetary and Time Costs. Moving research outside the lab for large-scale deployments through the App Store also has monetary and time-related costs. Bursty incoming user data, along with the need for robust and reliable backend servers, most likely demands the support of cloud computing services [1]. In this way the researchers' maintenance burden is greatly alleviated, since existing cloud services guarantee reliability and the ability to rapidly scale to more resources if needed. The flip side is that researchers have to be ready to sustain the subscription cost. The time overhead needed to adapt the application to new phone OS releases (which often carry API changes) must also be taken into account, so that the application can transition seamlessly through different versions of the software. Without this support, users would not be guaranteed smooth usage of the application, which could then be abandoned, with severe impact on the research outcome. Users might also ask questions and need to be guided through the use of the application. Researchers need to be ready to devote some of their time to customer service support. A prompt response from an application developer conveys confidence in the solidity of the application and the people supporting it.

Software Robustness. Software robustness and clean user interface (UI) design may seem a foregone conclusion.
However, the effects of poor software design (which implies little robustness of the application) and poor UI layouts should not be underestimated. People downloading an application from any distribution system expect the software to be robust and simple to use, with an easy-to-navigate UI. If any of these requirements are not met, users might lose confidence in the application and stop using it. As researchers, we might not have the UI design skills often required to make an application attractive. It is then important to collect feedback from domain experts who can guide the researcher toward a proper UI design. We learnt about this issue after a couple of iterations of the iPhone CenceMe client. We modified the UI design and the different navigation views by taking into account users' feedback and our own experience in using the app. When dealing with software that needs to run on mobile phones, researchers have to pay great attention to the impact the application might have on the phone's performance itself. Phone manufacturers often guarantee that the user experience with the phone is not degraded when third-party apps are running. Hence, resources, namely RAM and CPU, are reclaimed when the phone OS assesses that there is a need for it. Researchers have to make sure that the application does not consume too many CPU cycles or occupy too much RAM, otherwise the application might be shut down unexpectedly. This is a particularly important aspect to consider for applications designed to run in the background. Researchers who want to deploy applications at large scale have to be ready to write code at near-production level, in order to maximize the application's usability and robustness. Although testing the application in the lab might not reveal all the possible glitches in the code, extensive testing phases are required before submitting an application to the distribution system.
This is important in order to minimize the likelihood that users will encounter problems with an application and to reduce the chances that an application is rejected during the screening process, as in the case of the Apple App Store. It should be noted that the Android Market does not screen applications, making it more attractive for some applications. One of the CenceMe releases did not pass the screening phase because a debugging flag was mistakenly left in the code, causing the application to crash. As a result of this silly mistake the application was pushed to the back of Apple's application screening queue, delaying the new release of CenceMe by several weeks.

Hardware Incompatibilities. New phone models, or the evolution of existing models, can present hardware differences that impact application performance. We experienced this issue during the iPhone 2G-to-3G transition, since the two models mount different microphones. We started noticing a performance drop of the audio classifier when the same application was running on a 3G phone. The drop was caused by the fact that the audio classifier for conversation detection was trained using audio samples mainly recorded with iPhone 2G devices. Since the frequency response of the iPhone 3G microphone differs from that of the 2G model, the classifier trained with 2G audio was not able to accurately classify 3G audio. For a large-scale application developer it is thus important to catch these differences in time to limit misbehavior when people replace their devices.

User Incentives. In order for users to use a research application, they have to have an incentive and enjoy it when the application is active on their phone. If there is little or no return for them, the application might be used rarely, with scarce benefit to the research outcome. In order to engage users, we include a feature in the iPhone CenceMe client named IconMe.
IconMe allows a user to select an icon that best represents their status, mood, and surroundings and to associate a 140-character message with it. Such a message is shared with the user's CenceMe friends and posted to their personal Facebook, MySpace, and Twitter accounts according to the user's preferences. We found this microblogging service an effective way to keep users engaged.
Raw audio data captured from mobile phones is transformed into features that allow learning algorithms to identify classes of behavior (e.g., driving, in conversation, making coffee) occurring in a stream of sensor data, as done, for example, by SoundSense.

ALGORITHMS
In what follows, we present the detailed design of the SoundSense algorithms implemented on the phone as part of the SoundSense architecture.

Preprocessing
The preprocessing component, as shown in Figure 3, is responsible for segmenting the incoming audio stream into frames and performing frame admission control by identifying when frames are likely to contain the start of an acoustic event (e.g., breaking glass, shouting) that warrants further processing.

Framing
Segmenting the audio stream into uniform frames is common practice for feature extraction and classification. The frame width (i.e., duration) needs to be short enough that the audio content is stable, yet long enough to capture the characteristic signature of the sound. Existing work uses frames that overlap each other, since overlapping frames are able to capture subtle changes in the sound more precisely. However, this causes overlapping pieces of audio data to be processed multiple times. Given the resource constraints of the phone, we use independent non-overlapping frames of 64 ms. This frame width is slightly larger than what is typically used in other forms of audio processing (e.g., speech recognition), where the width typically ranges between 25-46 ms. In addition to enabling a lower duty cycle on the phone, a frame width of 64 ms is sufficient for capturing the acoustic characteristics of environmental sounds.

Frame Admission Control
Frame admission control is required since frames may contain audio content that is not interesting (e.g., white noise) or cannot be classified (e.g., silence, or an insufficient amount of the signal is captured).
These frames can occur at any time due to phone context; for example, the phone may be at a location that is virtually silent (e.g., a library, or a home at night), or the sounds being sampled may simply be too far away from the phone to be sufficiently captured.
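The two preprocessing steps above (64 ms non-overlapping framing, then energy-based frame admission) can be sketched as follows. The 8 kHz sample rate and the RMS silence threshold are illustrative assumptions, not SoundSense's actual parameters:

```python
import math

SAMPLE_RATE = 8000                       # Hz (assumed sample rate)
FRAME_LEN = SAMPLE_RATE * 64 // 1000     # 64 ms -> 512 samples
SILENCE_RMS = 0.01                       # assumed threshold for samples in [-1, 1]

def frames(pcm):
    """Split a PCM sample list into independent, non-overlapping 64 ms
    frames, dropping any trailing partial frame."""
    return [pcm[i:i + FRAME_LEN]
            for i in range(0, len(pcm) - FRAME_LEN + 1, FRAME_LEN)]

def admit(frame):
    """Frame admission control: pass a frame to feature extraction only
    if it carries enough energy to be classifiable (i.e., not silence
    or a sound too distant to be sufficiently captured)."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return rms >= SILENCE_RMS

stream = [0.0] * 512 + [0.1, -0.1] * 256   # one silent + one audible frame
admitted = [f for f in frames(stream) if admit(f)]
print(f"{len(frames(stream))} frames, {len(admitted)} admitted")
```

Because the frames do not overlap, each sample is touched exactly once, which is the duty-cycle saving the text describes; the admission check then discards frames that would waste classifier cycles.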
SENSE
Individual mobile phones collect raw sensor data from sensors embedded in the phone.

LEARN
Information is extracted from the sensor data by applying machine learning and data mining techniques. These operations occur either directly on the phone, in the mobile cloud, or with some partitioning between the phone and cloud. Where these components run could be governed by various architectural considerations, such as privacy, providing the user real-time feedback, reducing communication cost between the phone and cloud, available computing resources, and sensor fusion requirements. We therefore consider where these components run to be an open issue that requires research.

INFORM, SHARE, AND PERSUASION
We bundle a number of important architectural components together because of their commonality or coupling. For example, a personal sensing application will only inform the user, whereas a group or community sensing application may share an aggregate version of the information with the broader population and obfuscate the identity of the users. Other considerations include how to best visualize sensor data for consumption by individuals, groups, and communities. Privacy is a very important consideration as well. While phones will naturally leverage the distributed resources of the mobile cloud (e.g., computation and services offered in the cloud), the computing, communication, and sensing resources on the phones are ever increasing. We believe that as the resources of the phone rapidly expand, one of the main benefits of using the mobile computing cloud will be the ability to compute and mine big data from very large numbers of users. The availability of large-scale data benefits mobile phone sensing in a variety of ways; for example, more accurate interpretation algorithms can be updated based on sensor data sourced from an entire user community.
This data enables personalizing sensing systems based on the behavior of both the individual user and cliques of people with similar behavior (see http://www.cs.colorado.edu/department/publications/reports/docs/CU-CS-1059-09.pdf).

INFORM, SHARE, AND PERSUASION: CLOSING THE SENSING LOOP
How inferred sensor data is used to inform the user is application-specific. But a natural question is: once you infer a class or collect a set of large-scale inferences, how do you close the loop with people and provide useful information back to users? Clearly, personal sensing applications would just inform the individual, while social networking sensing applications may share activities or inferences with friends. We discuss these forms of interaction with users as well as the important area of privacy. Another topic we touch on is using large-scale sensor data as a persuasive technology, in essence using big data to help users attain goals through targeted feedback.

SHARING
Harnessing the potential of mobile phone sensing requires effective methods for allowing people to connect with and benefit from the data. The standard approach to sharing is visualization using a web portal where sensor data and inferences are easily displayed. This offers a familiar and intuitive interface. For the same reasons, a number of phone sensing systems connect with existing web applications to either enrich those applications or make the data more widely accessible [12, 23]. Researchers recognize the strength of leveraging social media outlets such as Facebook, Twitter, and Flickr as ways not only to disseminate information but to build community awareness (e.g., citizen science [20]). A popular application domain is fitness, such as Nike+. Such systems combine individual statistics and visualizations of sensed data and promote competition between users. The result is the formation of communities around a sensing application.
Even though, as in the case of Nike+, the sensor information is rather simple (i.e., just the time and distance of a run), people still become very engaged. Other applications have emerged that are considerably more sophisticated in the type of inference made, but have had limited uptake. It is still too early to predict which sensing applications will become the most compelling for user communities, but social networking provides many attractive ways to share information.

PERSONALIZED SENSING
Mobile phones are not limited to simply collecting sensor data. For example, both the Google and Microsoft search clients that run on the iPhone allow users to search using voice recognition. Eye tracking and gesture recognition are also emerging as natural interfaces to the phone. Sensors are used to monitor the daily activities of a person and profile their preferences and behavior, making personalized recommendations for services, products, or points of interest possible [32]. The behavior of an individual, along with an understanding of how behavior and preferences relate to other segments of the population with similar behavioral profiles, can radically change not only online experiences but real-world ones too. Imagine walking into a pharmacy and your phone suggesting vitamins and supplements with the effectiveness of a doctor. At a clothing store your phone could identify which items are manufactured without sweatshop labor. The behavior of the person, as captured by sensors embedded in their phone, becomes an interface that can be fed to many services (e.g., targeted advertising). Sensor technology personalized to a user's profile empowers her to make more informed decisions across a spectrum of services.

PERSUASION
Sensor data gathered from communities (e.g., fitness, healthcare) can be used not only to inform users but to persuade them to make positive behavioral changes (e.g., nudge users to exercise more or smoke less).
Systems that provide tailored feedback with the goal of changing users’ behavior are referred to as persuasive technology [33]. Mobile sensing applications open the door to building novel persuasive systems that are still largely unexplored. For many application domains, such as healthcare or environmental awareness, users commonly have desired objectives (e.g., to lose weight or lower carbon emissions). Simply providing a user with her own information is often not enough to motivate a change of behavior or habit. Mobile phones are an ideal platform capable of using low-level individual-scale sensor data and aggregated community-scale information to drive long-term change (e.g., contrasting the carbon footprint of a user with her friends can persuade the user to reduce her own footprint). The UbiFit Garden [1] project is an early example of integrating persuasion and sensing on the phone. UbiFit uses an ambient background display on the phone to offer the user continuous updates on her behavior in response to desired goals. The display uses the metaphor of a garden with different flowers blooming in response to physical exercise of the user during the day. It does not use comparison data but simply targets the individual user. A natural extension of UbiFit is to present community data. Ongoing research is exploring methods of identifying and using people in a community of users as influencers for different individuals in the user population. A variety of techniques are used in existing persuasive system research, such as the use of games, competitions among groups of people, sharing information within a social network, or goal setting accompanied by feedback. Understanding which types of metaphors and feedback are most effective for various persuasion goals is still an open research problem. 
Building mobile phone sensing systems that integrate persuasion requires interdisciplinary research that combines behavioral and social psychology theories with computer science. The use of large volumes of sensor data provided by mobile phones presents an exciting opportunity and is likely to enable new applications that have promise in enacting positive social changes in health and the environment over the next several years. The combination of large-scale sensor data combined with accurate models of persuasion could revolutionize how we deal with persistent problems in our lives such as chronic disease management, depression, obesity, or even voter participation.
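As a toy illustration of the community-comparison feedback discussed above (e.g., contrasting a user's carbon footprint with her friends'), the sketch below generates a targeted nudge from aggregated data. The function name, metric, and message wording are all hypothetical:

```python
def nudge(user_value, friend_values, metric="weekly CO2 (kg)"):
    """Comparison-based persuasive feedback: contrast the user's reading
    with the average across her social circle and phrase it as a nudge."""
    avg = sum(friend_values) / len(friend_values)
    if user_value <= avg:
        return (f"Nice work: your {metric} of {user_value:.0f} beats "
                f"your friends' average of {avg:.0f}.")
    pct = 100.0 * (user_value - avg) / avg
    return (f"Your {metric} of {user_value:.0f} is {pct:.0f}% above "
            f"your friends' average of {avg:.0f}.")

print(nudge(120, [80, 90, 100]))   # above average: gets a corrective nudge
print(nudge(80, [80, 90, 100]))    # at or below average: gets reinforcement
```

Real persuasive systems would of course choose metaphors and framing far more carefully (UbiFit's garden display being one example); the point here is only the mechanism of feeding community-scale aggregates back to the individual.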
PRIVACY
Respecting the privacy of the user is perhaps the most fundamental responsibility of a phone sensing system. People are understandably sensitive about how sensor data is captured and used, especially if the data reveals a user’s location, speech, or potentially sensitive images. Although there are existing approaches that can help with these problems (e.g., cryptography, privacy-preserving data mining), they are often insufficient [34]. For instance, how can the user temporarily pause the collection of sensor data without causing a suspicious gap in the data stream that would be noticeable to anyone (e.g., family or friends) with whom they regularly share data? In personal sensing applications, processing data locally may provide privacy advantages compared to using remote, more powerful servers. SoundSense adopts this strategy: all the audio data is processed on the phone, and raw audio is never stored. Similarly, the UbiFit Garden [1] application processes all data locally on the device. Privacy for group sensing applications is based on user group membership. For instance, although social networking applications like Loopt and CenceMe share sensitive information (e.g., location and activity), they do so within groups in which users have an existing trust relationship based on friendship or a shared common interest such as reducing their carbon footprint. Community sensing applications that can collect and combine data from millions of people run the risk of unintended leakage of personal information. The risks from location-based attacks are fairly well understood given years of previous research. However, our understanding of the dangers of other modalities (e.g., activity inferences, social network data) is less developed. There are growing examples of reconstruction-type attacks where data that may look safe and innocuous to an individual user may allow invasive information to be reverse-engineered.
For example, the UIUC PoolView project shows that even careful sharing of personal weight data within a community can expose information on whether a user’s weight is trending upward or downward. The PEIR project evaluates different countermeasures to this type of scenario, such as adding noise to the data or replacing chunks of the data with synthetic but realistic samples that have limited impact on the quality of the aggregate analysis. Privacy and anonymity will remain a significant problem in mobile-phone-based sensing for the foreseeable future. In particular, the "secondhand smoke" problem of mobile sensing creates new privacy challenges, such as:
• How can the privacy of third parties be effectively protected when other people wearing sensors are nearby?
• How can mismatched privacy policies be managed when two different people are close enough to each other for their sensors to collect information from the other party?
Furthermore, this type of sensing presents even larger societal questions, such as who is responsible when sensor data collected from these mobile devices causes financial harm? Stronger techniques for protecting the rights of people as sensing becomes more commonplace will be necessary.
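The noise-addition countermeasure evaluated by PEIR-style systems can be illustrated as follows: each user perturbs shared values with zero-mean noise large enough to mask individual readings, while the community-scale average remains usable. The data, noise scale, and seed below are synthetic assumptions for illustration only:

```python
import random
import statistics

def perturb(values, scale, seed=42):
    """Add zero-mean Gaussian noise to each shared value. Individual
    readings are masked, but the mean over many users is roughly preserved
    because the noise averages out."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, scale) for v in values]

# Synthetic community weight data (2000 users, mean exactly 169.5).
true_weights = [150 + (i % 40) for i in range(2000)]
noisy = perturb(true_weights, scale=20.0)

true_mean = statistics.mean(true_weights)
noisy_mean = statistics.mean(noisy)
print(f"true mean {true_mean:.1f}, noisy mean {noisy_mean:.1f}")
```

Note that simple independent noise does not defeat the trend-reconstruction attacks mentioned above (repeated readings from the same user still average out over time), which is why PEIR also considers replacing chunks of data with synthetic but realistic samples.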