Ubicomp 2007: Intel / MIT

A Long-Term Evaluation of Sensing Modalities for Activity Recognition

1. A Long-Term Evaluation of Sensing Modalities for Activity Recognition
  September 19, 2007
  Beth Logan (1), Jennifer Healey (1), Matthai Philipose (2), Emanuel Munguia Tapia (3) and Stephen Intille (3)
  (1) Intel Digital Health, (2) Intel Research Seattle, (3) MIT House_n
2. Activity Recognition
  • Activity recognition enables novel applications
    • E.g. aging in place
  • But much prior work uses artificial data, e.g.
    • Created by researchers or affiliates
    • Collected in short sessions
    • Collected in lab settings
    • Collected in an experiment biased towards a new sensor or algorithm
    • Labeled by researchers or affiliates
  • We address these shortcomings
3. Our Approach
  • Study activity recognition in an instrumented home
    • Data from ~900 sensors over a "long", 10-week period
    • Annotate and study a 104-hour subset
    • Subjects and labeler unaffiliated with the authors
  • Report on
    • Activity recognition for several types of sensors
    • Difficult cases which could impact the design of future systems
    • Challenges encountered with such a long study
  • Dataset available for all to use
    • URL at end of talk
4. MIT House_n PlaceLab
  • Condo near MIT
  • Hundreds of sensors
  • Video for annotation
  • Subjects live undisturbed for weeks or months
    • Data uploaded weekly
    • Residents have 'veto rights'
5. Built-In Sensors
  • 206 inputs total
    • 101 wired reed switches
    • 14 water flow
    • 37 current
    • 36 temperature
    • 1 pressure
    • 10 humidity
    • 1 gas
    • 6 light
  [Photo: sensor placements, labeled: barometric pressure, humidity, temperature, IR video camera, color video camera, top-down camera, light sensor, microphone, open/close switches, water flow, current]
6. RFID Tags and Bracelet
  • 435 tags installed throughout the condo
  [Photos: RFID bracelet and RFID tags]
7. Motion-Based Sensors
  • 281 inputs total
    • 265 on-object accelerometers
    • 2 x 3-axis on-body accelerometers
    • 10 infra-red motion detectors
8. Study Detail
  • Studied a young married couple living for 10 weeks in the PlaceLab
  • 104 hours of data annotated
    • Covers 15 days from the middle of the experiment
    • Chose periods when the male subject was home
  • Independent annotator
    • Annotated the male's activities from the video
    • Used a 98-activity ontology
    • Limited/no video in bathrooms and bedroom; still had audio
9. Characteristics of Data Collected
  • Most and least observed activities in the 104 h annotated set, male subject only:

  Activity                                | Total Observed Time
  ----------------------------------------|--------------------
  Drying dishes                           | 6 s
  Washing hands                           | 24 s
  Making the bed                          | 34 s
  Leaving the home                        | 42 s
  Preparing a snack                       | 44 s
  Reading book/paper/magazine             | 359 min
  Sleeping deeply                         | 728 min
  Actively watching tv or movies          | 732 min
  Background listening to music or radio  | 813 min
  Using a computer                        | 1866 min
10. Activities Studied
  • Studied activities which had at least 10 minutes of data
  • Many health-related
  • Some trivial to recognize with alternate sensors

  Activity                        | Total Time | Mean Time | Std Dev Time
  --------------------------------|------------|-----------|-------------
  Using a phone                   | 204 min    | 2.0 min   | 4.8 min
  Using a computer                | 1866 min   | 19.2 min  | 32.7 min
  Reading book/paper/magazine     | 359 min    | 14.4 min  | 21.0 min
  Meal preparation                | 59 min     | 27 s      | 54 s
  Hygiene                         | 116 min    | 3.1 min   | 6.0 min
  Grooming                        | 50 min     | 1.1 min   | 2.1 min
  Eating                          | 311 min    | 1.6 min   | 5.0 min
  Dishwashing                     | 11 min     | 23 s      | 29 s
  Actively watching tv or movies  | 732 min    | 33.3 min  | 51.1 min
11. Activity Recognition Experiments
  • "Activity" vs. "the rest of the world" classification
    • Data converted to features covering 30 s windows
    • Studied static classifiers
  • Experiment methodology
    • 'Leave one day out' cross-validation
    • Accounts for daily variations in activities
  • Figure of merit: ROC area
    • Combines sensitivity and specificity into a single number
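The evaluation protocol on this slide can be sketched in a few lines. This is a minimal numpy-only illustration under stated assumptions, not the authors' actual code: the toy data, the feature scoring rule, and the fold-skipping guard are all hypothetical, and the rank-sum ROC area shown does not average tied scores.

```python
import numpy as np

def roc_area(scores, labels):
    """ROC area via the rank-sum (Mann-Whitney) identity.

    Sketch only: ties in `scores` are not rank-averaged here.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def leave_one_day_out(day_of_window):
    """Yield (train_idx, test_idx) folds, holding out one day at a time."""
    days = np.asarray(day_of_window)
    for d in np.unique(days):
        test = days == d
        yield np.flatnonzero(~test), np.flatnonzero(test)

# Hypothetical toy data: one feature vector per 30 s window, labeled
# "activity" (1) vs. "the rest of the world" (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))        # 120 windows x 5 features
y = (X[:, 0] > 0.5).astype(int)      # synthetic binary labels
day = np.repeat(np.arange(4), 30)    # 4 "days" of 30 windows each

aucs = []
for train, test in leave_one_day_out(day):
    # Stand-in "classifier": score each test window by its first feature,
    # centered on the training mean (any static classifier would do here).
    scores = X[test, 0] - X[train, 0].mean()
    if y[test].min() != y[test].max():   # fold must contain both classes
        aucs.append(roc_area(scores, y[test]))

print(f"mean ROC area over folds: {np.mean(aucs):.2f}")
```

Holding out whole days, rather than random windows, keeps windows from the same day (and hence the same run of an activity) out of both train and test sets, which is what makes the cross-validation account for day-to-day variation.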
12. Results: All Sensors
  [Chart: ROC area per activity using all sensors, ranging from excellent to poor]
13. Results: Motion-Based Sensors
  [Chart: ROC area per activity using motion-based sensors, ranging from excellent to poor]
14. Why Is Infra-red So Good?
  • Infra-red performs well because
    • Most activities studied were performed in consistent, non-overlapping locations
    • E.g. dishwashing, grooming, hygiene, computer use
  • Similar to prior results (e.g. in an office context, Wren05)
  • Would likely not see similar performance on finer-grained activities
    • E.g. shaving vs. showering
15. Why Does RFID Perform So Poorly?
  • RFID performed poorly on most activities because
    • Few tag firings for most activities
    • Bracelet range sometimes insufficient
    • "Wrong" hand sometimes used to manipulate objects
    • Many kitchen and bathroom objects not tagged because they were too small, were metallic, or might go in the microwave
    • Bracelet sometimes removed, e.g. for hygiene activities
  • Future deployments should address tag placement issues
16. Eating/Reading/Phone Use
  • Eating, reading and phone use were poorly recognized because they
    • Don't happen in consistent locations in a real-world house
    • May happen in front of the TV or computer, so are easily confused with these activities
  • Alternate sensors/algorithms needed
17. Discovered in Hindsight…
  • Lack of data
    • 104 hours of home life still gave insufficient examples of many activities
  • Multiple subjects
    • Female subject's activities likely blur our results
  • Incomplete annotations
    • Limited detail on bedroom and bathroom activities
  • Insufficient sensor density
    • No good sensors for some activities, e.g. eating, reading, dressing and undressing
18. Behavioral Factors
  • Observing subjects for weeks in a home setting revealed how people really behave
    • Interrupted activities, multi-tasking, activities in multiple and unexpected locations
  • Further study of the video might identify "optimal" locations for key sensors
  • Elders may behave differently from our subjects
19. Conclusions
  • Location is "good enough" for many simple activities
    • Likely not good enough for fine-grained activities
  • People behave unpredictably "in the wild"
    • Revealed by our long-term study with unaffiliated subjects and annotator
  • Many activities are short and infrequent
    • Collecting sufficient "real life" data is very challenging
  • Even with 100s of sensors, our sensor density was sometimes insufficient
    • Opportunity for new sensors
20. Future Directions
  • Address the lack of data by studying ways users can give feedback to an adaptive system
  • Shift focus to older subjects
  • Develop a 'portable' PlaceLab so subjects can live in their own homes
  • Dataset available from MIT
    • Sensor data, activity ontology, annotations
    • http://architecture.mit.edu/house_n/data/PlaceLab/PLCouple1.htm
21. Backup
22. Katz Index of Independence in Activities of Daily Living
  • BATHING: Bathes self completely or needs help bathing only a single part of the body
  • DRESSING: Gets clothes from closets and drawers and puts on clothes and outer garments complete with fasteners
  • TOILETING: Goes to the toilet, gets on and off, arranges clothes, cleans
  • TRANSFERRING: Moves in and out of bed or chair unassisted
  • CONTINENCE: Complete self-control over urination and defecation
  • FEEDING: Gets food from plate into mouth without help; preparation of food may be done by another person
23. Characteristics of Activities Studied
  • Chose 9 activities to study for which we had at least 10 minutes of data; many but not all health-related.
  • Using a phone: 204 min total, 2.0 min mean, 4.8 min std dev
  • Using a computer: 1866 min total, 19.2 min mean, 32.7 min std dev
  • Reading book/paper/magazine: 359 min total, 14.4 min mean, 21.0 min std dev
  • Meal preparation (cutting, slicing, warming food, retrieving ingredients/cookware, mixing, combining, washing ingredients, preparing a drink, etc.): 59 min total, 27 s mean, 54 s std dev
  • Hygiene (showering, brushing teeth, toileting, etc.; many 'hygiene misc' activities because of the lack of a camera in the bathroom): 116 min total, 3.1 min mean, 6.0 min std dev
  • Grooming (getting dressed/undressed, mostly taking on and off coats and wearable sensors in the hallway due to the lack of a camera in the bedroom): 50 min total, 1.1 min mean, 2.1 min std dev
  • Eating (eating meals, eating snacks and drinking): 311 min total, 1.6 min mean, 5.0 min std dev
  • Dishwashing (rinsing dishes, drying dishes, etc.): 11 min total, 23 s mean, 29 s std dev
  • Actively watching tv or movies: 732 min total, 33.3 min mean, 51.1 min std dev
