Common Sense Based Joint Training of Human Activity Recognizers

  1. <ul><li>Introduction</li><li>Sensors</li><li>Techniques</li><li>Evaluation Methodology</li><li>Results</li><li>Conclusions</li></ul>
  2. <ul><li>Object use is a promising ingredient for general, low-overhead indoor activity recognition</li><li>Object-use data alone cannot distinguish physical actions on an object (e.g., whether a door is being opened or closed), object categories (e.g., microwaveable objects), or activities that involve the same objects (e.g., clearing a table vs. eating dinner)</li><li>We present a joint probabilistic model of object use, physical actions, and activities. It improves activity detection relative to models that reason separately about these quantities, and it learns action models with much less labeling overhead than conventional approaches</li></ul>
  3. <ul><li>Data set: two people performing 12 activities involving 10 actions on 30 objects</li><li>We defined simple commonsense models for the activities using an average of 4 English words per activity</li><li>Combining action data with object use in this manner increases the precision/recall of activity recognition from 76/85% to 81/90%; the automated action-learning techniques yielded 40/85% precision/recall over 10 possible object/action classes</li></ul>
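The precision/recall figures above are per-class scores over time windows. A minimal sketch of the metric itself, with invented predictions and ground-truth labels (not the study's data):

```python
def precision_recall(predicted, actual, label):
    """Per-class precision and recall over aligned label sequences."""
    tp = sum(p == label and a == label for p, a in zip(predicted, actual))
    fp = sum(p == label and a != label for p, a in zip(predicted, actual))
    fn = sum(p != label and a == label for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Invented per-window activity labels, purely for illustration
pred = ["eat", "eat", "cook", "eat", "cook"]
true = ["eat", "cook", "cook", "eat", "eat"]
print(precision_recall(pred, true, "eat"))  # (2/3, 2/3)
```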
  4. <ul><li>Users wear two bracelets on the dominant wrist</li><li>One is a Radio Frequency Identification (RFID) reader, called the iBracelet, for detecting object use</li><li>The other is a personal sensing reader, called the Mobile Sensing Platform (MSP), for detecting arm movement and ambient conditions</li><li>The MSP carries eight sensors: a six-degree-of-freedom accelerometer, microphones sampling 8-bit audio at 16 kHz, IR/visible light, high-frequency light, barometric pressure, humidity, temperature, and a compass</li></ul>
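A common first step with such a sensor stream is to summarize fixed-size windows of raw samples into statistical features before classification. A minimal sketch for one accelerometer axis; the window size and feature set here are illustrative assumptions, not the MSP's actual pipeline:

```python
import statistics

def window_features(samples, window=16):
    """Summarize non-overlapping windows of one sensor axis with
    simple statistics (mean, standard deviation, peak-to-peak)."""
    feats = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        feats.append({
            "mean": statistics.fmean(w),
            "std": statistics.pstdev(w),
            "range": max(w) - min(w),
        })
    return feats

# 32 fake samples -> two feature windows
print(window_features(list(range(32)), window=16))
```

Each window's feature dictionary would then be one input vector to the action classifier.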
  5. <ul><li>Problem Statement<ul><li>Infer the current action being performed and the object on which it is being performed</li><li>Combine with object-use data O (when available) to produce better estimates of the current activity A</li><li>For efficiency, use F′ ≪ F features of the MSP data M</li></ul></li><li>A Joint Model<ul><li>The Dynamic Bayesian Network (DBN)</li></ul></li><li>A Layered Model</li></ul>
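The paper's DBN jointly models objects, actions, and activities; as a much-reduced sketch of the filtering idea behind it, here is forward inference over a single hidden activity chain with object-use observations. The activity names and all transition/emission probabilities below are invented for illustration:

```python
activities = ["make_tea", "brush_teeth"]
# Invented activity-transition and object-use emission probabilities
trans = {"make_tea": {"make_tea": 0.9, "brush_teeth": 0.1},
         "brush_teeth": {"make_tea": 0.1, "brush_teeth": 0.9}}
emit = {"make_tea": {"kettle": 0.6, "cup": 0.3, "toothbrush": 0.1},
        "brush_teeth": {"kettle": 0.05, "cup": 0.15, "toothbrush": 0.8}}

def forward(observations):
    """Forward filtering: P(activity_t | objects seen up to time t)."""
    belief = {a: 1.0 / len(activities) for a in activities}
    for obs in observations:
        # Predict step: push the belief through the transition model
        pred = {a: sum(belief[b] * trans[b][a] for b in activities)
                for a in activities}
        # Update step: reweight by how likely each activity is to
        # involve the observed object, then renormalize
        post = {a: pred[a] * emit[a][obs] for a in activities}
        z = sum(post.values())
        belief = {a: post[a] / z for a in activities}
    return belief

belief = forward(["kettle", "cup"])
# "make_tea" dominates once a kettle and a cup have been observed
```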
  6. <ul><li>1. How good are the learned models?<ul><li>How much better are the models of activity when action information is combined?</li><li>How good are the action models themselves? Why?</li></ul></li><li>2. How does classifier performance vary with our design choices?<ul><li>With hyperparameters?</li></ul></li><li>3. How necessary were the more sophisticated aspects of our design?<ul><li>Do simpler techniques do as well?</li></ul></li></ul>
  7. <ul><li>Feature selection and labeling are known bottlenecks in learning sensor-based models of human activity</li><li>It is possible to use data from dense object-use sensors, together with very simple commonsense models of object use, actions, and activities, to automatically interpret and learn models for other sensors, a technique we call common sense based joint training</li></ul>
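The joint-training idea can be caricatured in a few lines: windows where the RFID trace makes the action obvious become automatic labels for the motion classifier, so no hand labeling of actions is needed. All names, data, and the commonsense lookup table below are invented for illustration, not the paper's models:

```python
# Hypothetical object -> likely-action lookup (the "commonsense model")
commonsense = {"kettle": "pour", "toothbrush": "scrub"}

def auto_label(windows):
    """windows: (motion_feature, object_seen) pairs.
    Keep only windows whose object implies an unambiguous action."""
    return [(f, commonsense[obj]) for f, obj in windows if obj in commonsense]

def train_centroids(labeled):
    """Nearest-centroid action model over a 1-D motion feature."""
    sums, counts = {}, {}
    for feat, action in labeled:
        sums[action] = sums.get(action, 0.0) + feat
        counts[action] = counts.get(action, 0) + 1
    return {a: sums[a] / counts[a] for a in sums}

def classify(model, feat):
    return min(model, key=lambda a: abs(model[a] - feat))

# Fake sensor windows: (motion feature, object seen on the RFID reader)
windows = [(0.9, "kettle"), (0.8, "kettle"),
           (0.1, "toothbrush"), (0.2, "toothbrush")]
model = train_centroids(auto_label(windows))
print(classify(model, 0.85))  # pour
```

The action model is then usable even when no object is in RFID range, which is the payoff of training it from object-use data.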
