Participatory urbanism

  • Not business intelligence (BI), business activity monitoring (BAM), etc.
  • Touch-screen technology is a great first step towards enabling ease of human-to-phone interaction. For example, by using the phone's sensors it is possible to drive the phone in more intuitive ways. Using a combination of the onboard accelerometer, light sensor, compass, and gyroscope, simple gestures could be used to activate or deactivate the phone's features. The phone's front camera could, for example, detect eye movement to select applications – e.g., wink to activate the application the user focuses on (cf. the EyePhone, NeuroPhone, and Darwin Phones projects).
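As a toy illustration of the gesture idea above, here is a hedged sketch (not from any of the cited projects) of detecting a deliberate shake from raw accelerometer samples by thresholding the variance of the acceleration magnitude:

```python
import math

def detect_shake(samples, threshold=0.5):
    """Return True if a window of (x, y, z) accelerometer samples
    looks like a deliberate shake gesture.

    Heuristic sketch: compute the magnitude of each sample, then
    check whether its variance exceeds a tuned threshold. The
    threshold is an illustrative placeholder; a real system would
    fuse this with the gyroscope and compass.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var > threshold

# A still phone reads roughly 1 g on one axis; a shaken phone varies widely.
still = [(0.0, 0.0, 1.0)] * 20
shaken = [(0.0, 0.0, 1.0), (2.0, 1.5, -1.0)] * 10
```

A gesture recognizer like this could gate feature activation so the screen and camera stay off until a deliberate motion is sensed.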
  • Mobile phone sensing is effective across multiple scales: a single individual (e.g., UbiFit Garden), groups such as social networks or special-interest groups (e.g., Garbage Watch [23]), and entire communities or city populations (e.g., Participatory Urbanism [20]).
UbiFit Garden explores how on-body sensing and personal displays can encourage people to incorporate physical activity into everyday life. Physical activities are inferred from on-body sensors with the Intel Mobile Sensing Platform (MSP), which recognizes walking, running, cycling, the elliptical trainer, and the stair machine. A mobile journal allows the individual to add activities that the MSP does not infer (such as yoga, weight training, and swimming), as well as edit and annotate inferred activities. As the individual performs activities, a garden blooms on his or her mobile phone, providing key information at a glance: whether the week has been active or inactive, whether the routine has variety, whether the weekly goal has been met, and whether it has been met recently.
Millions of people participate regularly in online social networks. The Dartmouth CenceMe project investigates using the phone's sensors to automatically classify events in people's lives, called sensing presence, and to selectively share this presence via online social networks such as Twitter, Facebook, and MySpace, replacing manual actions people now perform daily.
Garbage Watch investigates what gets thrown away in campus garbage bins: users submit photos of what is in trash bins across campuses. Analyzing this data helps facilities staff figure out where recycling bins could be added and how waste can be reduced by identifying its main sources.
  • Heat-map visualization of carbon monoxide readings across Accra, Ghana. Colors represent individual taxicabs; patch size indicates the intensity of the carbon monoxide reading during a single day across the capital. Note the variation across the city, within small neighborhoods, and on the approach to the international airport.
What happens when individual mobile devices are augmented with novel sensing technologies for noise pollution, air quality, UV levels, water quality, etc.? We claim that this will shatter our understanding of these devices as simply communication tools (a.k.a. phones) and celebrate them in their new role as measurement instruments. We envision a wide range of novel physical sensors attached to mobile devices, empowering everyday non-experts with new "super-senses" and abilities. It radically alters the current model of civic government as sole data gatherer and decision maker by empowering everyday citizens to collectively participate in super-sampling their lives, city, and environment. Integrating simple air quality sensors into networked mobile phones enables everyday citizens to uncover, visualize, and collectively share real-time air quality measurements drawn from their own everyday urban lifestyles. This rich people-driven sensor data helps redress community power imbalances and can increase decision makers' understanding of a community's claims, thereby potentially increasing public trust. Such detailed local knowledge informs environmental health research and environmental policy making – persuading both individuals and civic government towards positive improvements in air quality and environmental change.
Conventional ways of measuring and reporting environmental pollution rely on aggregate statistics that apply to a community or an entire city. The University of California at Los Angeles (UCLA) PEIR project uses sensors in phones to build a system that enables personalized environmental impact reports, which track how the actions of individuals affect both their exposure and their contribution to problems such as carbon emissions. Most examples of community sensing only become useful once a large number of people participate; for example, tracking the spread of disease across a city, the migration patterns of birds, congestion patterns across city roads, or a noise map of a city. These applications represent large-scale data collection, analysis, and sharing for the good of the community. Achieving scale implicitly requires the cooperation of strangers who will not trust each other, which increases the need for community sensing systems with strong privacy protection and low commitment levels from users. The impact of scaling sensing applications from personal to population scale is unknown. Many issues related to information sharing, privacy, data mining, and closing the loop by providing useful feedback to an individual, group, community, and population remain open. Today, we have only limited experience in building scalable sensing systems.

Envisioned Scenarios
Mary suffers extreme problems with her asthma when she is gardening in her yard, yet the air and pollen counts for her city are consistently reported as below normal. However, the particulate matter sensor on Mary's mobile phone reports dangerous levels of exposure to wood-burning pollutants. Checking a selection of sensor data from people who also take measurements on her block, perhaps her neighbors, Mary is able to see that the problem is concentrated around a single cul-de-sac near her.
Mary soon notices that several homes in that area are continuously using their fireplaces and generating excessive airborne pollutants, which is forbidden by local ordinance. Mary forwards the measurement collection to her local air quality management district, where action is taken to enforce the clean-air policy.

Smell This
A strong odor suddenly overwhelmed residents of New York City. Emergency crews were unable to pinpoint any gas leaks or other causes. The uncertainty caused anxiety and fear in citizens even after the odor dissipated the next day. After searching 140 industrial facilities, officials declared that they were giving up hope of finding the source of the mysterious odor. (This is what really happened on 8 January 2007 in New York. What follows is where Participatory Urbanism makes a difference.) However, the odor left trace nitrogen dioxide measurements logged by millions of New Yorkers' mobile phones, and a simple mashup of the data set on Google Maps identifies the culprit – a dangerous incinerator that is not supposed to be in use. City officials shut down the plant immediately.

CleanCook
Tyler lives in Lagos, Nigeria, where his family often cooks indoors using charcoal. Tyler's son suffers from respiratory problems nearly every two weeks. However, the government just received another award for outstanding regional air quality. Tyler checks his mobile phone's sulfur dioxide sensor and realizes that several hazardous-level measurements were taken about two weeks ago. Tyler compares his measurements to others shared online and realizes the problem occurs during indoor cooking with freshly cut wood. Tyler alerts others with similar measurements to the problem and, based on his and others' reported measurements, successfully petitions the government to provide new, cleaner sources of cooking fuel.
TEAM: Eric Paulos (Intel Research Berkeley), Ian Smith (Intel Research Seattle), RJ Honicky (UC Berkeley)

Related Work – a collection of several inspirational projects: Urban Sensing (CENS / UCLA), SensorPlanet (Nokia), AIR (Preemptive Media), SenseWeb (Microsoft), The Urban Pollution Monitoring Project (Equator UK)
  • Wireless sensor networks use battery-operated computing and sensing devices. A network of these devices collaborates on a common application such as environmental monitoring. The expectation is that sensor networks will be deployed in an ad hoc fashion, with individual nodes remaining largely inactive for long periods of time but becoming suddenly active when something is detected. These characteristics of sensor networks and their applications motivate a MAC that differs from traditional wireless MACs such as IEEE 802.11 in almost every way: energy conservation and self-configuration are primary goals, while per-node fairness and latency are less important. A sensor MAC typically uses techniques to reduce energy consumption and support self-configuration. To avoid wasting energy listening to an idle channel, nodes periodically sleep. Neighboring nodes can form virtual clusters to auto-synchronize their sleep schedules, and can also put the radio to sleep during transmissions by other nodes. Finally, S-MAC applies message passing to reduce contention latency for sensor-network applications that require store-and-forward processing as data move through the network. On a source node, an 802.11-like MAC typically consumes 2-6 times more energy than a sensor MAC for traffic loads with messages sent every 1-10 s.
Multihop Sensor Networks (MSN): It is widely accepted that wireless sensor networks are inherently multihop in nature, due to the limited transmission range of resource-constrained sensor nodes. Advances have been made in multibit distributed data aggregation schemes, which minimize in-network communication, for event detection applications.
Multihop Cellular Networks (MCN): Compared to existing single-hop cellular networks, MCNs provide higher throughput and capacity at lower transmission power requirements through effective spectral re-use.
Many advances have been made in route discovery and resilience protocols, as well as in probability-of-error-based link scheduling algorithms.
Multihop Cellular Sensor Networks (MCSN): Cell phones empowered with sensing capabilities have led to the emergence of cellular sensor networks, which can profoundly impact urban sensing applications. Multihopping in cellular sensor networks has enormous utility in moving-event localization applications. Novel data aggregation and routing protocols take into consideration the underlying mobility model and the time-varying connectivity in MCSN.
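The energy argument behind S-MAC-style periodic sleep can be made concrete with a back-of-the-envelope calculation; the power figures below are illustrative placeholders, not taken from any radio datasheet:

```python
def radio_energy(duration_s, listen_frac, p_listen_mw=60.0, p_sleep_mw=0.03):
    """Energy in mJ for a radio that listens for listen_frac of each
    duty cycle and sleeps for the rest. Idle listening costs nearly as
    much as receiving, which is why periodic sleep pays off so well.
    """
    return duration_s * (listen_frac * p_listen_mw + (1 - listen_frac) * p_sleep_mw)

always_on = radio_energy(60, 1.0)  # 802.11-style continuous idle listening
duty_10 = radio_energy(60, 0.1)    # S-MAC-style 10% duty cycle
```

With these placeholder numbers, a 10% duty cycle spends roughly a tenth of the energy of continuous listening over the same minute, matching the qualitative claim above.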
  • The CenceMe system infers "facts" of various types (e.g., activity, social setting), which collectively compose the sensing presence of a person – an enhancement over the conventional, largely textual forms of presence information often used in IM clients (e.g., "I am away"). CenceMe allows a user to: (i) automatically export enriched forms of presence information to members of their social network (e.g., publish status messages on Facebook), and (ii) perform historical analysis of their activity (e.g., how often did I go to the gym this week?). CenceMe users install a sensing daemon on their phone that is designed not to disturb the normal user experience. The daemon pipes data sampled from the available sensors on the phone through resource-aware classifiers to produce facts about the user. Facts are buffered locally on the phone and opportunistically transmitted (e.g., via GPRS or WiFi) to the CenceMe backend servers. Backend classifiers draw cross-user inferences and inferences requiring more facts than can feasibly be stored on the phone. Ultimately, facts stored in the backend servers are made available (filtered for privacy) via a standard CenceMe API supporting synchronous and subscription-based retrieval by applications such as web portals (e.g., Facebook, the CenceMe portal) and VOIP clients like Skype.
IMPLEMENTATION: The system runs on any Symbian-based cell phone that includes JVM support (e.g., Nokia N95, N80). The software architecture of the sensing daemon is split into modules written in C++ and Java to maintain portability where possible while addressing limitations of the JVM system APIs. Fact bundles are pushed to the backend servers via XML-RPC/web-service calls over either WiFi or GPRS. A web-service-based API is offered from the backend servers to external systems. We have built: (i) a number of CenceMe widgets for Facebook (see Figure 1(a)), and (ii) a web portal that offers a broader and deeper user experience than the widgets alone can provide (see Figure 1(b)).
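A minimal sketch of what such an on-phone, resource-aware classifier might look like, turning a window of accelerometer samples into an activity "fact"; the features and thresholds are illustrative assumptions, not CenceMe's trained models:

```python
import math

def features(window):
    """Mean and standard deviation of accelerometer magnitudes
    over a window of (x, y, z) samples."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in window]
    mean = sum(mags) / len(mags)
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    return mean, std

def classify(window):
    """Hand-written stand-in for a trained decision tree: more
    motion variance implies more vigorous activity. The thresholds
    here are placeholders, not learned parameters."""
    _mean, std = features(window)
    if std < 0.1:
        return "sitting"
    if std < 1.0:
        return "walking"
    return "running"
```

In a real daemon, these labels would be buffered and opportunistically shipped to the backend as fact bundles.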
For cell phones without the suite of sensors found on high-end models (e.g., accelerometers), a CenceMe key-ring attachment is available that gives the CenceMe daemon on the phone Bluetooth access to a GPS and a 3-axis accelerometer. The Bluetooth interface is also used to discover neighbors in order to infer the social setting of the user. Future work: expand the current focus on consumer-driven social networking, and apply CenceMe technology to public health initiatives, domain-specific sensing (e.g., skiing), and supporting logistics and production-line efficiency in commercial settings. We plan to demo all the functionality of the CenceMe system, both on the mobile phone clients and on the web portal.
LESSONS FROM IPHONE CENCEME
CenceMe is a social sensing application that integrates with popular social networking platforms such as Facebook, MySpace, and Twitter to augment a person's context status using the phone's onboard sensors. By running machine learning algorithms on the mobile phone itself over the sensor data collected by the phone's accelerometer, microphone, Bluetooth/WiFi/GPS, and camera, with fusion techniques on backend machines, CenceMe automatically infers a person's activity (e.g., sitting, walking, running) and social context (e.g., dancing, with friends, at a restaurant, listening to music). This information is shared within the person's social circle automatically, and users can customize their privacy settings to regulate what sensing-based presence is published, and where. In what follows, we describe the architecture of the iPhone CenceMe client (designed to meet usability and classifier-resiliency goals while preserving the phone user experience, e.g., battery life) and backend (designed to be robust against failures, bursty user access, etc.). Client: The iPhone CenceMe client is implemented using a combination of Objective-C and legacy ANSI C code.
Objective-C is mainly used for the user interface implementation, for access to the low-level sensors and the internal SQLite database, and to respect the model-view-controller principle of iPhone OS. C is used to implement the activity recognition classifier (which relies on a decision tree algorithm) and the audio classifier (which determines the surrounding audio level – noise or quiet – or whether a person is in a conversation). The audio classifier is a support vector machine (SVM) built with the LIBSVM C library [3]. The client is responsible for: (i) performing the person's presence inference over the sensor data by running the inference algorithms locally; (ii) communicating the inference labels to the backend; and (iii) displaying the user's and their buddies' sensing presence (activity, social context, location), the privacy configuration, and various other interface views that allow, for example, a user to post short messages to their preferred social network account. The classifiers are trained offline in a supervised manner, i.e., by taking large collections of labeled data for both the audio and accelerometer modalities and using that data to train the classification models that are later deployed on the phone. Although earlier versions of iPhone OS did not support multitasking (i.e., the ability to run applications in the background), the CenceMe client is designed to properly duty-cycle the sensing, inference routines, and communication rendezvous with the backend so as to limit battery drain while the application is active. Reducing battery drain is critical to avoiding rapid discharges when the application is used, a condition that would negatively impact the phone user experience. Cloud: The iPhone CenceMe backend, implemented on the Amazon Web Services cloud infrastructure, comprises a series of different virtual machine images.
Each machine is an Amazon EC2 virtual machine instance running Linux that provides a set of PHP- and Python-based REST web services, allowing multiple machines to be composed together. Each image performs a different role in the backend infrastructure and has been designed to be initialized and composed with others to offer different operating points of cost and performance. (Figure 1 shows the "deploy-use-refine" model adopted in CenceMe.) This allows us to temporarily initialize different numbers of machines of different types depending on the existing or expected user workload. It also allows us to manage the cost of running the CenceMe service, so that we provision additional machines (which incur additional costs) only when user demand requires it – for example, when a new model of the Apple iPhone is released and many users temporarily try out our application, causing us to reconfigure the system. The server-side system is responsible for: (i) storing user sensing presence information and allowing other CenceMe clients restricted access to this data; (ii) publishing this presence information to third-party social networks such as Twitter and Facebook; (iii) maintaining the CenceMe social network friend-link structure; (iv) performing routine user registration and account maintenance tasks; and (v) collecting statistics about user behavior on both the client and backend sides of the system.
LESSONS LEARNT
In this section, we discuss the experience we gained by deploying CenceMe on the App Store and having it used by thousands of users worldwide. Throughout the development and evolution of iPhone CenceMe we applied a "deploy-use-refine" model (see Figure 1). Under this strategy, after initial tests in the lab, the application is deployed on the App Store. Following this phase, users start downloading and using the application.
Their feedback and experience over time, submitted to us via a dedicated customer-support email channel or the CenceMe discussion board, trigger the application's fixing and refinement process, in order to improve user satisfaction and application usability. In what follows, we report the lessons learnt from the iPhone CenceMe deployment.
Information Disclosure. When leveraging an application distribution system such as the App Store to collect data for research purposes, it is very important that the user downloading the application is fully informed about the nature of the application and the data being collected, as well as how the data will be handled. Full disclosure of such information is often required by university IRBs, and disclaimers should be made clear in the application's terms of service. Given the sensitive nature of the iPhone CenceMe data, i.e., inference labels derived from sensor data, our university's IRB required us to add a separate consent form, following the terms-of-service page, which explicitly states the purpose of the application and describes the nature of the data collected and the use we make of it. According to the IRB, this extra step is needed because people often do not read terms-of-service notes carefully, so a second, dedicated disclosure form is required. Of course, by requiring a consent form through the involvement of an IRB, as is often needed when carrying out research involving human subjects, the application deployment cycle becomes much longer. The IRB might take months to approve a given research project, and even then several iterative steps may be needed to meet the IRB's requirements. This implies long cycles before an application can be released. Researchers who want to carry out large-scale research through application distribution systems should take this extra time into consideration.
The second implication of adding an explicit consent form to the application is that users might opt out of using the application (as we verified with some of the iPhone CenceMe users). This is because people are not yet used to downloading research applications from a commercial platform such as the App Store, and they often do not understand the purpose of the research. As a consequence, the pool of users participating in the research might grow slowly.
Monetary and Time Costs. Moving research out of the lab for large-scale deployment through the App Store also has monetary and time-related costs. Bursty incoming user data, along with the need for robust and reliable backend servers, most likely demands the support of cloud computing services [1]. This greatly alleviates the researchers' maintenance burden, since existing cloud services guarantee reliability and the ability to rapidly scale up resources if needed. The flip side is that researchers have to be ready to sustain the subscription cost. The time overhead of adapting the application to new phone OS releases (which often carry API changes) must also be taken into account, so that the application can transition seamlessly across software versions. Without this support, users would not be guaranteed smooth use of the application, which could then be abandoned, with severe impact on the research outcome. Users might also ask questions and need to be guided through the use of the application; researchers need to be ready to devote some of their time to customer support. A prompt response from an application developer instills confidence in the solidity of the application and the people supporting it.
Software Robustness. Software robustness and clean user interface (UI) design may seem a foregone conclusion.
However, the effects of poor software design (which implies a fragile application) and poor UI layout should not be underestimated. People downloading an application from any distribution system expect the software to be robust and simple to use, with an easy-to-navigate UI. If any of these requirements is not met, users might lose confidence in the application and stop using it. As researchers, we might not have the UI design skills often required to make an application attractive. It is therefore important to collect feedback from domain experts who can guide the researcher towards a proper UI design. We learnt about this issue after a couple of iterations of the iPhone CenceMe client: we modified the UI design and the various navigation views by taking into account user feedback and our own experience using the app. When dealing with software that needs to run on mobile phones, researchers have to pay great attention to the impact the application might have on the phone's performance. Phone manufacturers often guarantee that the user experience with the phone is not degraded when third-party apps are running; hence resources, namely RAM and CPU, are reclaimed when the phone OS decides there is a need for them. Researchers have to make sure that the application does not take too many CPU cycles or occupy too much RAM, otherwise the application might be shut down unexpectedly. This is a particularly important consideration for applications designed to run in the background. Researchers who want to deploy applications at large scale have to be ready to write code at near-production level, in order to maximize the application's usability and robustness. Although testing the application in the lab might not reveal all possible glitches in the code, extensive testing phases are required before submitting an application to the distribution system.
This is important in order to minimize the likelihood that users will encounter problems with an application and to reduce the chances that an application is rejected during the screening process, as in the case of the Apple App Store. It should be noted that the Android Market does not screen applications, making it more attractive for some applications. One of the CenceMe releases did not pass the screening phase because a debugging flag was mistakenly left in the code, causing the application to crash. As a result of this simple mistake, the application was pushed to the back of Apple's screening queue, delaying the new release of CenceMe by several weeks.
Hardware Incompatibilities. New phone models, or the evolution of existing models, can present hardware differences that impact application performance. We experienced this during the iPhone 2G-to-3G transition, where the two models mount different microphones. We noticed a performance drop in the audio classifier when the same application ran on a 3G phone. The drop was caused by the fact that the audio classifier for conversation detection was trained using audio samples mainly recorded with iPhone 2Gs. Since the frequency response of the iPhone 3G microphone differs from the 2G model's, the classifier trained on 2G audio could not accurately classify 3G audio. For a large-scale application developer it is therefore important to catch such differences in time, to limit misbehavior when people replace their devices.
User Incentives. For users to adopt a research application, they must have an incentive and enjoy having the application active on their phone. If there is little or no return for them, the application might be used rarely, with scarce benefit to the research outcome. To engage users, we include a feature in the iPhone CenceMe client named IconMe.
IconMe allows a user to select an icon that best represents their status, mood, and surroundings, and to associate a 140-character message with it. The message is shared with the user's CenceMe friends and posted to their personal Facebook, MySpace, and Twitter accounts according to their preferences. We found this microblogging service an
  • Raw audio data captured from mobile phones is transformed into features that allow learning algorithms to identify classes of behavior (e.g., driving, in conversation, making coffee) occurring in a stream of sensor data, for example by SoundSense.
ALGORITHMS
In what follows, we present the detailed design of the SoundSense algorithms implemented on the phone as part of the SoundSense architecture.
4.1 Preprocessing
The preprocessing component, as shown in Figure 3, is responsible for segmenting the incoming audio stream into frames and performing frame admission control by identifying when frames are likely to contain the start of an acoustic event (e.g., breaking glass, shouting) that warrants further processing.
1 Framing
Segmenting the audio stream into uniform frames is common practice for feature extraction and classification. The frame width (i.e., duration) needs to be short enough that the audio content is stable, yet long enough to capture the characteristic signature of the sound. Existing work uses overlapping frames, since overlap captures subtle changes in the sound more precisely; however, it also causes the same audio data to be processed multiple times. Given the resource constraints of the phone, we use independent, non-overlapping frames of 64 ms. This frame width is slightly larger than what is typically used in other forms of audio processing (e.g., speech recognition), where it ranges between 25 and 46 ms. In addition to enabling a lower duty cycle on the phone, a frame width of 64 ms is sufficient for capturing the acoustic characteristics of environmental sounds.
2 Frame Admission Control
Frame admission control is required because frames may contain audio content that is not interesting (e.g., white noise) or cannot be classified (e.g., silence, or an insufficient amount of the signal is captured). These frames can occur at any time due to phone context; for example, the phone may be at a location that is virtually silent (e.g., a library, or home at night), or the sounds sampled may simply be too far away from the phone to be sufficiently captured.
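The two preprocessing steps above can be sketched as follows; the 8 kHz sample rate and the RMS silence floor are illustrative assumptions, not SoundSense's actual parameters:

```python
def frames(pcm, rate=8000, width_ms=64):
    """Split a list of PCM samples into non-overlapping 64 ms
    frames, dropping any trailing partial frame. At 8 kHz each
    frame is 512 samples."""
    n = rate * width_ms // 1000
    return [pcm[i:i + n] for i in range(0, len(pcm) - n + 1, n)]

def admit(frame, rms_floor=0.01):
    """Frame admission control sketch: admit a frame only if its
    RMS energy rises above a silence floor. The floor value is an
    illustrative placeholder."""
    rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
    return rms > rms_floor

silence = [0.0] * 512          # rejected: nothing to classify
tone = [0.2, -0.2] * 256       # admitted: clearly above the floor
```

Only admitted frames would continue to feature extraction, keeping the phone's duty cycle low in quiet environments.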
  • SENSE: Individual mobile phones collect raw sensor data from sensors embedded in the phone.
LEARN: Information is extracted from the sensor data by applying machine learning and data mining techniques. These operations occur either directly on the phone, in the mobile cloud, or with some partitioning between the phone and the cloud. Where these components run could be governed by various architectural considerations, such as privacy, real-time user feedback, communication cost between the phone and the cloud, available computing resources, and sensor fusion requirements. We therefore consider the placement of these components an open issue that requires research.
INFORM, SHARE, AND PERSUADE: We bundle a number of important architectural components together because of their commonality or coupling. For example, a personal sensing application will only inform the user, whereas a group or community sensing application may share an aggregate version of the information with the broader population and obfuscate the identities of the users. Other considerations include how best to visualize sensor data for consumption by individuals, groups, and communities. Privacy is a very important consideration as well. While phones will naturally leverage the distributed resources of the mobile cloud (e.g., computation and services offered in the cloud), the computing, communication, and sensing resources on the phones themselves are ever increasing. We believe that as phone resources rapidly expand, one of the main benefits of the mobile computing cloud will be the ability to compute over and mine big data from very large numbers of users. The availability of large-scale data benefits mobile phone sensing in a variety of ways; for example, interpretation algorithms can be made more accurate by updating them with sensor data sourced from an entire user community.
This data enables personalizing sensing systems based on the behavior of both the individual user and cliques of people with similar behavior. http://www.cs.colorado.edu/department/publications/reports/docs/CU-CS-1059-09.pdf

INFORM, SHARE, AND PERSUASION: CLOSING THE SENSING LOOP. How inferred sensor data is used to inform the user is application-specific. A natural question, though, is: once a class is inferred, or a set of large-scale inferences is collected, how do you close the loop with people and provide useful information back to users? Clearly, personal sensing applications would just inform the individual, while social networking sensing applications may share activities or inferences with friends. We discuss these forms of interaction with users as well as the important area of privacy. Another topic we touch on is using large-scale sensor data as a persuasive technology: in essence, using big data to help users attain goals through targeted feedback.

SHARING: Harnessing the potential of mobile phone sensing requires effective methods of allowing people to connect with and benefit from the data. The standard approach to sharing is visualization using a web portal where sensor data and inferences are easily displayed; this offers a familiar and intuitive interface. For the same reasons, a number of phone sensing systems connect with existing web applications to either enrich those applications or make the data more widely accessible [12, 23]. Researchers recognize the strength of leveraging social media outlets such as Facebook, Twitter, and Flickr not only to disseminate information but to build community awareness (e.g., citizen science [20]). A popular application domain is fitness, such as Nike+. Such systems combine individual statistics with visualizations of sensed data and promote competition between users. The result is the formation of communities around a sensing application.
Even though, as in the case of Nike+, the sensor information is rather simple (i.e., just the time and distance of a run), people still become very engaged. Other applications have emerged that are considerably more sophisticated in the type of inference made, but have had limited uptake. It is still too early to predict which sensing applications will become the most compelling for user communities, but social networking provides many attractive ways to share information.

PERSONALIZED SENSING: Mobile phones are not limited to simply collecting sensor data. For example, both the Google and Microsoft search clients that run on the iPhone allow users to search using voice recognition. Eye tracking and gesture recognition are also emerging as natural interfaces to the phone. Sensors are used to monitor the daily activities of a person and profile their preferences and behavior, making personalized recommendations for services, products, or points of interest possible [32]. An individual's behavior, together with an understanding of how behavior and preferences relate to other segments of the population with similar behavioral profiles, can radically change not only online experiences but real-world ones too. Imagine walking into a pharmacy and your phone suggesting vitamins and supplements with the effectiveness of a doctor. At a clothing store, your phone could identify which items are manufactured without sweatshop labor. The behavior of the person, as captured by sensors embedded in their phone, becomes an interface that can be fed to many services (e.g., targeted advertising). Sensor technology personalized to a user's profile empowers her to make more informed decisions across a spectrum of services.

PERSUASION: Sensor data gathered from communities (e.g., fitness, healthcare) can be used not only to inform users but to persuade them to make positive behavioral changes (e.g., nudge users to exercise more or smoke less).
Systems that provide tailored feedback with the goal of changing users' behavior are referred to as persuasive technology [33]. Mobile sensing applications open the door to building novel persuasive systems that are still largely unexplored. For many application domains, such as healthcare or environmental awareness, users commonly have desired objectives (e.g., to lose weight or lower carbon emissions). Simply providing a user with her own information is often not enough to motivate a change of behavior or habit. Mobile phones are an ideal platform capable of using low-level individual-scale sensor data and aggregated community-scale information to drive long-term change (e.g., contrasting the carbon footprint of a user with her friends' can persuade the user to reduce her own footprint).

The UbiFit Garden [1] project is an early example of integrating persuasion and sensing on the phone. UbiFit uses an ambient background display on the phone to offer the user continuous updates on her behavior in response to desired goals. The display uses the metaphor of a garden with different flowers blooming in response to the physical exercise of the user during the day. It does not use comparison data but simply targets the individual user. A natural extension of UbiFit is to present community data. Ongoing research is exploring methods of identifying people in a community of users who can act as influencers for different individuals in the user population.

A variety of techniques are used in existing persuasive-system research, such as games, competitions among groups of people, sharing information within a social network, or goal setting accompanied by feedback. Understanding which types of metaphors and feedback are most effective for various persuasion goals is still an open research problem.
Building mobile phone sensing systems that integrate persuasion requires interdisciplinary research that combines behavioral and social psychology theories with computer science. The large volumes of sensor data provided by mobile phones present an exciting opportunity and are likely to enable new applications with promise for enacting positive social change in health and the environment over the next several years. Combining large-scale sensor data with accurate models of persuasion could revolutionize how we deal with persistent problems in our lives such as chronic disease management, depression, obesity, or even voter participation.
  •   PRIVACY: Respecting the privacy of the user is perhaps the most fundamental responsibility of a phone sensing system. People are understandably sensitive about how sensor data is captured and used, especially if the data reveals a user's location, speech, or potentially sensitive images. Although there are existing approaches that can help with these problems (e.g., cryptography, privacy-preserving data mining), they are often insufficient [34]. For instance, how can the user temporarily pause the collection of sensor data without causing a suspicious gap in the data stream that would be noticeable to anyone (e.g., family or friends) with whom they regularly share data?

In personal sensing applications, processing data locally may provide privacy advantages compared to using remote, more powerful servers. SoundSense adopts this strategy: all the audio data is processed on the phone, and raw audio is never stored. Similarly, the UbiFit Garden [1] application processes all data locally on the device. Privacy for group sensing applications is based on user group membership. For instance, although social networking applications like Loopt and CenceMe share sensitive information (e.g., location and activity), they do so within groups in which users have an existing trust relationship based on friendship or a shared common interest such as reducing their carbon footprint.

Community sensing applications that can collect and combine data from millions of people run the risk of unintended leakage of personal information. The risks from location-based attacks are fairly well understood given years of previous research. However, our understanding of the dangers of other modalities (e.g., activity inferences, social network data) is less developed. There are growing examples of reconstruction-type attacks in which data that may look safe and innocuous to an individual user allows invasive information to be reverse-engineered.
For example, the UIUC PoolView project shows that even careful sharing of personal weight data within a community can expose whether a user's weight is trending upward or downward. The PEIR project evaluates countermeasures to this type of scenario, such as adding noise to the data or replacing chunks of the data with synthetic but realistic samples that have limited impact on the quality of the aggregate analysis.

Privacy and anonymity will remain a significant problem in mobile-phone-based sensing for the foreseeable future. In particular, the secondhand smoke problem of mobile sensing creates new privacy challenges, such as: • How can the privacy of third parties be effectively protected when other people wearing sensors are nearby? • How can mismatched privacy policies be managed when two different people are close enough to each other for their sensors to collect information from the other party? Furthermore, this type of sensing presents even larger societal questions, such as who is responsible when sensor data collected from these mobile devices causes financial harm? Stronger techniques for protecting the rights of people will be necessary as sensing becomes more commonplace.
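The noise-addition countermeasure described above can be sketched in a few lines. This is a minimal illustration with synthetic weight data and an arbitrary noise scale; it is not the actual PoolView or PEIR mechanism.

```python
import random
import statistics

def perturb(values, noise_scale=5.0, seed=42):
    """Share each reading only after adding zero-mean Gaussian noise.

    The noise scale here is an illustrative assumption: large enough to
    hide an individual reading, small enough to preserve the aggregate.
    """
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, noise_scale) for v in values]

# Hypothetical weight readings (kg) from a community of users.
true_weights = [60.0 + (i % 40) for i in range(1000)]
shared = perturb(true_weights)

# Individual perturbed values no longer reveal the true reading, but
# because the noise has zero mean, the community average stays close
# to the true average as the number of users grows.
assert abs(statistics.mean(true_weights) - statistics.mean(shared)) < 1.0
```

The design point is the one the PEIR discussion makes: the perturbation degrades per-user data (hiding trends like weight gain) while the aggregate analysis remains usable.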

Transcript

  • 1.  
  • 2.
    • People-centric mobile sensing applications enabling
    • Personal insight
    • Participation
    • Passive altruism
    What is the coming wave?
  • 3.
    • people-centric mobile sensing applications enabling
    • Personal insight
    • Participation
    • Passive altruism
    What is the coming wave?
  • 4.
    • Communication modes have changed
    What is the interaction shift?
  • 5. What is the scale of the shift? ~585 million Facebook users in 2010 ~200+ million users in 2010 ~85 million tweets/day
  • 6.
    • Presence and status sharing is essentially an early form of people-centric sensing
    Where does sensing fit? ~585 million Facebook users in 2010 ~200+ million users in 2010 ~85 million tweets/day
  • 7.
    • desktop interaction:
    • predominantly manual, textual
    What is the interaction shift?
  • 8.
    • mobile interaction:
    • manual, multi-modal
    The mobile sensor impact?
  • 9.
    • mobile sensor interaction:
    • automated, inferred, multi-modal (i/o)
    The mobile sensor impact?
  • 10.
    • willing to share sensitive presence information to trusted friends as long as “sensor off” button is clearly available
    • value access to presence information from the phone
    • value inferences obtained from sensing for personal insight
    • enjoy comparing with friends and population
    Are users ready?
  • 11. What are the mobile sensing scales?
  • 12. Participatory Urbanism? Passive Altruism? Carbon monoxide readings across Accra, Ghana, collected with mobile sensors attached to taxi cabs; carbon monoxide heat-map visualization across Accra, Ghana
  • 13.
    • Within the problem domain of people-centric mobile sensing applications, focus has shifted from
    • MAC
    • Multi-Hop
    • to
    What has changed in focus?
  • 14.
    • While important, within the problem domain of people-centric mobile sensing applications, focus has currently shifted from
    • MAC
    • Multi-Hop
    • to
    • Sensor data inferencing
    • Privacy with verifiability
    What has changed?
  • 15.
    • Some areas in the problem domain have remained
    • Energy Consumption
    • Computation
    • Memory
    • Sensor Data Calibration
    What has remained?
  • 16.
    • Different sensor & classifier units have different energy costs, e.g., {gps, location} is roughly 10x the cost of {accel, activity}
    • Duty cycle of {bt,gps} is roughly 10x that of {accel,activity}
    • Classifier tradeoffs: fidelity, responsiveness, energy
    What has remained?
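A back-of-envelope calculation shows why these relative costs matter. The current draws and battery capacity below are illustrative assumptions (not measurements), with GPS drawn at roughly 10x the accelerometer as quoted on the slide.

```python
def battery_hours(capacity_mah, base_ma, sensors):
    """Rough battery-lifetime estimate under a sensing duty cycle.

    sensors: list of (active_current_ma, duty_cycle_fraction) pairs.
    All figures are illustrative assumptions, not device measurements.
    """
    draw_ma = base_ma + sum(ma * duty for ma, duty in sensors)
    return capacity_mah / draw_ma

accel = (5.0, 1.0)        # sampled continuously
gps_always = (50.0, 1.0)  # GPS left on (~10x the accelerometer cost)
gps_cycled = (50.0, 0.1)  # GPS duty-cycled to 10%

always_on = battery_hours(1500, 20.0, [accel, gps_always])
duty_cycled = battery_hours(1500, 20.0, [accel, gps_cycled])
assert duty_cycled > always_on  # duty-cycling the costly sensor extends lifetime
```

Even this crude model exposes the fidelity/responsiveness/energy tradeoff: the duty-cycled configuration lasts far longer but sees GPS fixes only a tenth of the time.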
  • 17. Sensing with smart phones: GPS, camera, accelerometer/3-axis gyro, microphone, Bluetooth radio (proximity), ambient light
  • 18. Sensing with smart phones & mobile sensing applications
  • 19. Sensing with smart phones & mobile sensing applications
  • 20. Sensing with smart phones & mobile sensing applications
    • User centered social app
    • Critical questions
      • Where are you?
      • What are you doing?
    • Use existing hardware (iPhone, Android, Nokia etc.)
    • social networking integration
  • 21. Sensor driven inferencing
  • 22.
      • Mobile OS Limitations
    • limited programmability and resource usage control offered to the developer
    • applications running on the phone may be denied resource requests
    • must be designed to allow interruption at any time so as not to disrupt regular operation of the phone.
      • API and Operational Limitations
    • JME implements a reduced set of the Java Standard Edition APIs for use on mobile phones.
    • Portions of the app must be ported to each phone and fixed for missing or malfunctioning APIs (accelerometer/GPS APIs, memory leaks)
      • Security Limitations
    • phone manufacturers and cellular network operators typically control access to critical components, including the APIs for access to the file system, multimedia features, Bluetooth, GPS, and communications via GPRS or WiFi, through a rights management system
    • Properly signed keys from a Certificate Authority
    Design Constraints?
  • 23.
      • Energy Management Limitations
    • Power conservation
    • Bluetooth, GPS, and GPRS radios are responsible for draining most of the battery power when sensor apps run
    • JME does not provide APIs to power cycle (i.e., toggle on and off) the Bluetooth and GPS radios to implement an efficient radio duty-cycle strategy
    • Sensing and classification algorithms that run on the phone can also consume a considerable amount of energy if left unchecked
    • sampling the phone’s microphone and running a discrete Fourier transform on the sound sample uses more power than sampling the accelerometer and classifying the accelerometer data
    • To reduce energy at the application layer, a sensing duty cycle is needed that samples sensors less frequently and avoids use of the radios for communications or the acquisition of satellite signals for location coordinates.
    Design Constraints?
  • 24.
    • Architectural Design Issues
    • Split-Level Classification
    • Classifying streams of sensor data from a large number of mobile phones is computationally intensive
    • Split some classification to the phone and some to the backend servers
    • The output of the classification process on the phone is primitives
    • The classification operation on the backend returns facts
      • Supports customized tags
      • Provides resiliency to cellular/WiFi radio dropouts
      • Minimizes the sensor data the phone sends to the backend servers, improving system efficiency by uploading only classification-derived primitives rather than higher-bandwidth raw sensed data
      • Reduces the energy consumed by the phone and therefore the monetary cost
    Architectural Considerations?
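The split-level idea (phone emits compact primitives, backend turns them into facts) can be sketched roughly as follows. The labels, threshold, and window data are invented for illustration; they are not the system's actual classifiers.

```python
def phone_classify(accel_window):
    """Phone side: reduce a raw accelerometer window to a primitive.

    The 1.0 energy threshold is an illustrative assumption.
    """
    energy = sum(abs(x) for x in accel_window) / len(accel_window)
    return "moving" if energy > 1.0 else "still"

def backend_classify(primitives):
    """Backend side: combine uploaded primitives into a higher-level fact."""
    moving = primitives.count("moving")
    return "active_period" if moving > len(primitives) / 2 else "idle_period"

# Only small primitive strings cross the radio, not raw samples,
# which is what saves bandwidth and energy on the phone.
windows = [[0.1, 0.2, 0.1], [2.5, 3.1, 2.8], [2.9, 3.0, 2.7]]
prims = [phone_classify(w) for w in windows]
assert backend_classify(prims) == "active_period"
```

The phone-side output is deliberately tiny (a label per window), so a radio dropout loses little and the upload cost is a fraction of shipping raw sensor streams.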
  • 25.
    • Power Aware Duty-Cycle
      • To extend the battery lifetime of the phone when running the application, scheduled sleep techniques are applied to both the data upload and the sensing components.
      • The larger the sleep interval, the lower the classification responsiveness of the system.
    • Software Portability
      • push as much as possible to java
      • a number of components need to be implemented directly using native APIs to support the necessary features
    Architectural Considerations?
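The scheduled-sleep approach can be sketched as a simple loop; the period, batch size, and round counts below are illustrative assumptions.

```python
import time

def duty_cycle(sample_fn, upload_fn, sense_period_s, upload_every_n, rounds):
    """Scheduled-sleep loop: sample at a fixed period, batch uploads.

    A longer sense_period_s saves energy but lowers classification
    responsiveness, which is exactly the tradeoff noted above.
    """
    buffer = []
    for i in range(rounds):
        buffer.append(sample_fn())
        if (i + 1) % upload_every_n == 0:
            upload_fn(buffer)       # radio wakes only for batched uploads
            buffer = []
        time.sleep(sense_period_s)  # phone sleeps between samples
    return buffer                   # any samples still awaiting upload

uploads = []
leftover = duty_cycle(lambda: 1, uploads.append, 0.0, 3, 7)
assert len(uploads) == 2 and leftover == [1]
```

Batching uploads matters because radio wake-ups dominate energy cost; the sketch wakes the "radio" once per three samples rather than per sample.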
  • 26. What goes on the phone? The app runs on mobile phones and integrates with backend infrastructure hosted on server machines. The software on the phone senses, classifies raw sensed data to produce primitives, presents people's presence on the phone, and uploads the primitives to the backend servers.
  • 27. Primitives and Local Storage?
    • Primitives are the result of
    • classification of sound samples from the phone's microphone using a discrete Fourier transform (DFT) technique and a machine learning algorithm to classify the nature of the sound
    • the classification of on board accelerometer data to determine the activity, (e.g., sitting, standing, walking, running)
    • scanned Bluetooth MAC addresses in the phone's vicinity, GPS readings, and random photos, where a picture is taken randomly when a phone keypad key is pressed or a call is received.
    • Local Storage stores the raw sensed data records to be processed by the phone classifiers. As the classification of raw data records is performed, the records are discarded. This is particularly important for the integrity of the data and the privacy of the person. Primitives, GPS coordinates, and scanned Bluetooth MAC addresses are stored in local storage as well, waiting for an upload session to start.
  • 28. Upload and Privacy?
    • The Upload Manager establishes connections to the backend servers opportunistically, depending on radio link availability (either cellular or WiFi). It also uploads the primitives from local storage and tears down the connection after the data is transferred.
    • The Privacy Settings UI allows the user to enable and disable the five sensing modalities supported on the phone (viz. audio, accelerometer, Bluetooth, random photo, and GPS).
    • Users can control the privacy policy settings from the phone and from a portal, determining which parts of their presence to share and with whom they are willing to share sensing presence.
    • A visualization client runs on the mobile phone; the sensing presence is rendered as both icons and text in the phone GUI.
  • 29. Watchtasks
    • restart any process that fails and perform other functions such as: launching the application when the phone is turned on;
    • starting the application components in the correct order;
    • restarting the application;
    • restarting all support daemons when the application fails;
    • restarting all the software components at a preset interval to clear any malfunctioning threads.
    • The architecture is threaded; each component is designed to be a single thread.
    • This ensures that one component failure does not compromise or block other components.
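A watchtask of this kind can be sketched as a monitoring loop over worker threads. The worker, check count, and interval here are illustrative assumptions, not the system's actual daemon.

```python
import threading
import time

def watchdog(make_worker, checks=5, interval_s=0.01):
    """Restart a worker thread whenever it dies, as the watchtask does.

    make_worker returns a fresh Thread. Each component is a single
    thread, so one component's failure never blocks the others.
    """
    restarts = 0
    worker = make_worker()
    worker.start()
    for _ in range(checks):
        time.sleep(interval_s)
        if not worker.is_alive():
            worker = make_worker()  # restart the failed component
            worker.start()
            restarts += 1
    return restarts

# A worker that exits immediately, to exercise the restart path.
flaky = lambda: threading.Thread(target=lambda: None)
assert watchdog(flaky, checks=3) >= 1
```

A production watchtask would also supervise process start order and do the periodic full restart the slide mentions; this sketch shows only the core liveness check.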
  • 30. What goes on the Server?
  • 31.
    • Audio classifier
    • retrieves the PCM sound byte stream from the phone’s local storage and outputs the audio primitive resulting from the classification.
    • This audio primitive indicates whether the audio sample represents human voice and is used by backend classifiers such as the conversation classifier
    • The audio classification on the phone involves two steps:
      • feature extraction from the audio sample and
      • classification.
    PHONE CLASSIFIERS
  • 32. PHONE CLASSIFIERS
    • Classification follows feature extraction and is based on a machine learning algorithm using the supervised learning technique of discriminant analysis.
    • Training set: a large set of human voice samples and a set of audio samples for various environmental conditions, including quiet and noisy settings
    • The classifier’s feature vector is composed of
      • the mean and
      • standard deviation of the DFT power.
    • The mean is used because the absence of talking shifts the mean lower.
    • The standard deviation is used because the variation of power in the spectrum under analysis is larger when talking is present.
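The two-element feature vector above can be sketched with a naive DFT. The signals are synthetic, the O(n^2) transform is for illustration only (a real implementation would use an FFT), and no trained discriminant threshold is included.

```python
import cmath
import math
import statistics

def dft_power(samples):
    """Naive O(n^2) DFT; returns the power spectrum (first n//2 bins)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2)]

def audio_features(samples):
    """The slide's feature vector: mean and std deviation of DFT power."""
    power = dft_power(samples)
    return statistics.mean(power), statistics.stdev(power)

# Synthetic signals: near-silence vs a two-tone "voice-like" mixture.
quiet = [0.01 * math.sin(2 * math.pi * t / 64) for t in range(64)]
voiced = [math.sin(2 * math.pi * 5 * t / 64)
          + 0.5 * math.sin(2 * math.pi * 9 * t / 64) for t in range(64)]

qm, qs = audio_features(quiet)
vm, vs = audio_features(voiced)
assert vm > qm and vs > qs  # talking raises both the mean and the spread
```

The assertion mirrors the slide's rationale: absence of talking shifts the mean lower, and the power spread (standard deviation) grows when voice-like energy is present.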
  • 33. Activity Classification (inferencing)
    • fetches the raw accelerometer data from the phone's local storage and classifies it to return the current activity, namely sitting, standing, walking, or running.
    • The activity classifier consists of two components:
      • the preprocessor and
      • the classifier itself.
    • The preprocessor fetches the raw data from the local storage component and extracts features (i.e., attributes).
    • Given the computational and memory constraints of mobile phones, we use a simple feature extraction technique based on the mean, standard deviation, and number of peaks of the accelerometer readings along the three axes of the accelerometer.
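The feature extraction step can be sketched as follows, using synthetic traces; the axis data and the flat-versus-oscillating contrast are illustrative assumptions standing in for real accelerometer windows.

```python
import statistics

def count_peaks(axis):
    """Count local maxima: a proxy for footstep frequency."""
    return sum(1 for i in range(1, len(axis) - 1)
               if axis[i - 1] < axis[i] > axis[i + 1])

def accel_features(x, y, z):
    """The slide's feature set: mean, std deviation, and peak count
    along each of the three accelerometer axes."""
    feats = []
    for axis in (x, y, z):
        feats += [statistics.mean(axis), statistics.stdev(axis),
                  count_peaks(axis)]
    return feats

# Synthetic traces: a flat 'sitting' axis vs an oscillating 'running' axis.
sitting = [1.0, 1.01, 0.99, 1.0, 1.01, 0.99, 1.0, 1.0]
running = [0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 2.0]
f_sit = accel_features(sitting, sitting, sitting)
f_run = accel_features(running, running, running)
assert f_run[1] > f_sit[1] and f_run[2] > f_sit[2]  # larger std, more peaks
```

This matches the observations on the next slide: flatter traces for sitting/standing, and more peaks per window plus a larger standard deviation for running.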
  • 34. Activity Classification (inferencing)
    • The sitting and standing traces are flatter than the walking and running traces.
    • When standing, the deviation from the mean is slightly larger because typically people tend to rock a bit while standing.
    • The peaks in the walking and running traces are a good indicator of footstep frequency.
    • When a person runs, a larger number of peaks per second is registered than when walking.
    • The standard deviation is larger for running than for walking.
  • 35. Backend Classifiers http://www.youtube.com/watch?v=aAKplAaPAHE&NR=1 Conversation Classifier: a binary classifier that determines whether a person is in a conversation, taking as input the audio primitives from the phone. Social Context: the output of this classifier is the social-context fact, derived from multiple primitives and facts provided by the phone and other backend classifiers. This could be neighborhood conditions, or whether there are any buddies in a person's surrounding area. Social Status: builds on the output of the conversation and activity classifiers, plus detected neighboring buddies, to determine whether a person is gathered with buddies and talking (for example at a meeting or restaurant), alone, or at a party. Social status also includes the classification of partying and dancing.
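A social-status classifier of this flavor can be sketched as rule-based fusion of the lower-level outputs; the specific rules and labels below are invented for illustration and are not the system's actual logic.

```python
def social_status(in_conversation, activity, buddies_nearby):
    """Fuse conversation, activity, and buddy-proximity facts into a
    social-status fact. Rules and labels are illustrative assumptions.
    """
    if buddies_nearby and in_conversation:
        return "gathered_talking"   # e.g., at a meeting or restaurant
    if buddies_nearby and activity in ("walking", "running"):
        return "out_with_buddies"
    if in_conversation:
        return "talking"
    return "alone"

assert social_status(True, "sitting", True) == "gathered_talking"
assert social_status(False, "sitting", False) == "alone"
```

The point is structural: the backend fact is computed from already-inferred inputs (conversation, activity, proximity), never from raw sensor data.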
  • 36. Conversation inferencing http://www.youtube.com/watch?v=aAKplAaPAHE&NR=1
  • 37. Conversation inferencing
  • 38. Conversation inferencing
    • Clustering that results from the discriminant analysis algorithm
    • The equation of the dashed line in the figure is used by the audio classifier running on the phone to discern whether a sound sample comes from human voice or a noisy/quiet environment, with a 22% misclassification rate.
    • Audio samples misclassified as voice are filtered out by a rolling window technique used by the conversation classifier that runs on the backend
    • This boosts the performance fidelity of the system for conversation recognition.
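The rolling-window filtering can be sketched as a majority vote over recent per-sample voice decisions; the window size is an illustrative assumption, not the system's tuned value.

```python
def rolling_majority(labels, window=5):
    """Smooth noisy per-sample voice decisions with a rolling majority
    vote, so isolated misclassified samples do not register as
    conversation. Window size is an illustrative assumption.
    """
    out = []
    for i in range(len(labels)):
        chunk = labels[max(0, i - window + 1):i + 1]
        out.append(chunk.count(True) > len(chunk) / 2)
    return out

# One spurious 'voice' sample in silence is filtered out.
noisy = [False, False, True, False, False, False]
assert rolling_majority(noisy) == [False] * 6
```

This is how a backend filter can absorb the phone classifier's 22% per-sample error rate: a single misclassified sample never outvotes its neighbors.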
  • 39. Mobile phone sensing architecture
  • 40. Mobile phone sensing and Privacy
    • Respecting the privacy of the user is perhaps the most fundamental responsibility of a phone sensing system
    • Although there are existing approaches that can help with these problems (e.g., cryptography, privacy-preserving data mining), they are often insufficient
    • Processing data locally may provide privacy advantages compared to using remote more powerful servers but..
  • 41. Mobile phone sensing and Privacy
  • 42. Mobile phone sensing and Privacy
    • Sharing is within groups in which users have an existing trust relationship based on friendship or a shared common interest
    • our understanding of the dangers of other modalities (e.g., activity inferences overlaid with social network data) is less developed
    • Privacy and anonymity will remain a significant problem in mobile-phone-based sensing for the foreseeable future.
  • 43. Mobile phone sensing and Privacy
    • New privacy challenges:
    • How can the privacy of third parties be effectively protected when other people wearing sensors are nearby?
    • How can mismatched privacy policies be managed when two different people are close enough to each other for their sensors to collect information from the other party?
    • Larger societal questions remain, such as who is responsible when sensor data collected from these mobile devices causes financial harm?
    • Stronger techniques for protecting the rights of people will be necessary as sensing becomes more commonplace