Common Sense Based Joint Training of Human Activity Recognizers
  1. Common Sense Based Joint Training of Human Activity Recognizers
  2. Outline <ul><li>Introduction </li></ul><ul><li>Sensors </li></ul><ul><li>Techniques </li></ul><ul><li>Evaluation Methodology </li></ul><ul><li>Results </li></ul><ul><li>Conclusions </li></ul>
  3. Introduction <ul><li>The paper presents a joint probabilistic model of object use, physical actions, and activities. Relative to models that reason separately about these quantities, the joint model improves activity detection and learns action models with far less labeling overhead than conventional approaches. </li></ul>
  4. Sensors <ul><li>Two bracelets are worn on the dominant wrist </li></ul><ul><li>One is a Radio Frequency Identification (RFID) reader, called the iBracelet, for detecting object use </li></ul><ul><li>The other is a personal sensing reader, called the Mobile Sensing Platform (MSP), for detecting arm movement and ambient conditions </li></ul><ul><li>The MSP carries eight sensors: a six-degree-of-freedom accelerometer, microphones sampling 8-bit audio at 16 kHz, IR/visible light, high-frequency light, barometric pressure, humidity, temperature, and compass </li></ul>
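The MSP's raw streams are typically reduced to per-frame features before classification. A minimal sketch of that first step, assuming a sliding window with mean/variance features over one accelerometer axis; the window length, step, and feature choice here are illustrative assumptions, not values from the paper:

```python
import math

def window_features(samples, win=8, step=4):
    """Slide a fixed-length window over a 1-D sample stream and
    return a (mean, variance) feature pair per window."""
    feats = []
    for start in range(0, len(samples) - win + 1, step):
        w = samples[start:start + win]
        mean = sum(w) / win
        var = sum((x - mean) ** 2 for x in w) / win
        feats.append((mean, var))
    return feats

# Synthetic stand-in for one accelerometer axis.
stream = [math.sin(0.3 * t) for t in range(32)]
print(len(window_features(stream)))  # 7 windows of length 8, stride 4
```

Each window's feature vector, possibly concatenated across the eight sensors, is what the classifiers in the following slides consume.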
  6. Techniques <ul><li>Problem Statement </li></ul><ul><li>A Joint Model </li></ul><ul><ul><li>The Dynamic Bayesian Network (DBN) </li></ul></ul><ul><li>A Layered Model </li></ul>
  7. Problem Statement <ul><li>Given the above domain information and observed data, we wish to build a classifier over MSP data that: </li></ul><ul><ul><li>Infers the current action being performed and the object on which it is being performed. </li></ul></ul><ul><ul><li>Combines with object-use data O (when available) to produce better estimates of the current activity A. </li></ul></ul><ul><ul><li>For efficiency reasons, uses F’ ≪ F features of M. </li></ul></ul>
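The classifier contract above can be sketched as a small probabilistic combination: MSP features score actions, action scores induce an activity estimate, and object-use observations O, when available, multiply in as extra evidence. The bucket names and probability tables below are invented for illustration; they are not the paper's model or numbers.

```python
P_ACTION_GIVEN_MSP = {          # p(action | coarse MSP feature bucket)
    "high_motion": {"scrub": 0.7, "stir": 0.3},
    "low_motion":  {"scrub": 0.2, "stir": 0.8},
}
P_ACTIVITY_GIVEN_ACTION = {     # p(activity | action)
    "scrub": {"cleaning": 0.9, "cooking": 0.1},
    "stir":  {"cleaning": 0.1, "cooking": 0.9},
}
P_ACTIVITY_GIVEN_OBJECT = {     # p(activity | object in use), from RFID
    "sponge": {"cleaning": 0.95, "cooking": 0.05},
    "spoon":  {"cleaning": 0.10, "cooking": 0.90},
}

def activity_posterior(msp_bucket, obj=None):
    """Score activities from MSP action evidence; fold in
    object-use evidence when an RFID observation is present."""
    scores = {"cleaning": 0.0, "cooking": 0.0}
    for action, p_a in P_ACTION_GIVEN_MSP[msp_bucket].items():
        for act, p_act in P_ACTIVITY_GIVEN_ACTION[action].items():
            scores[act] += p_a * p_act
    if obj is not None:                  # object-use data O available
        for act in scores:
            scores[act] *= P_ACTIVITY_GIVEN_OBJECT[obj][act]
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

post = activity_posterior("high_motion", "sponge")
print(max(post, key=post.get))  # combined evidence favors "cleaning"
```

Dropping the `obj` argument reproduces the MSP-only estimate, which is the degraded mode the problem statement anticipates when the iBracelet is absent.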
  8. Joint models
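As a rough illustration of what the temporal side of a joint DBN buys, here is a minimal HMM-style forward filter that tracks the activity across frames while fusing per-frame observation likelihoods from both sensors. This is a generic DBN-flavored sketch, not the paper's network; the states, transition matrix, and likelihoods are all made up.

```python
STATES = ["cleaning", "cooking"]
TRANS = {"cleaning": {"cleaning": 0.9, "cooking": 0.1},
         "cooking":  {"cleaning": 0.1, "cooking": 0.9}}

def forward_filter(obs_likelihoods):
    """obs_likelihoods: one dict per frame, state -> p(obs | state),
    where obs combines object-use and MSP evidence for that frame."""
    belief = {s: 1.0 / len(STATES) for s in STATES}
    for like in obs_likelihoods:
        # Predict: push belief through the activity transition model.
        pred = {s: sum(belief[p] * TRANS[p][s] for p in STATES)
                for s in STATES}
        # Update: weight by the frame's observation likelihood.
        belief = {s: pred[s] * like[s] for s in STATES}
        z = sum(belief.values())
        belief = {s: v / z for s, v in belief.items()}
    return belief

frames = [{"cleaning": 0.8, "cooking": 0.2},   # sponge seen, arm scrubbing
          {"cleaning": 0.7, "cooking": 0.3},
          {"cleaning": 0.1, "cooking": 0.9}]   # spoon seen, stirring
b = forward_filter(frames)
print(max(b, key=b.get))
```

The transition model smooths over noisy single frames, which is the main advantage a dynamic model has over classifying each frame independently.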
  9. Action recognition
  10. The VirtualBoost Algorithm
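The transcript preserves only this slide's title, so as a loose sketch of the idea we take VirtualBoost to build on — supervising a boosted classifier over MSP features with "virtual" labels inferred from object use rather than hand labels — here is a plain AdaBoost loop with threshold stumps. Everything below (the data, the stump learner, the label source) is an illustrative assumption, not the paper's algorithm.

```python
import math

def train_adaboost(X, y, rounds=5):
    """AdaBoost with threshold stumps on 1-D features; y in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Exhaustively pick the stump (threshold, sign) with least
        # weighted error on the current example weights.
        best = None
        for thr in sorted(set(X)):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if sign * (1 if xi >= thr else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-10)                    # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # Reweight: boost the examples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * sign * (1 if xi >= thr else -1))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    s = sum(a * sg * (1 if x >= t else -1) for a, t, sg in ensemble)
    return 1 if s >= 0 else -1

# "Virtual" labels: +1 where object-use inference suggested scrubbing
# during the frame, so no human labeled the MSP data directly.
X = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]   # made-up MSP motion-energy feature
y = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(X, y)
print([predict(model, x) for x in X])
```

The point of the virtual-label framing is that the labels come for free from the RFID channel, so the boosting loop itself is standard while the labeling overhead disappears.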
  11. Evaluation Methodology (cont.) <ul><li>Collected iBracelet and MSP data from two researchers performing 12 activities containing 10 distinct actions of interest using 30 objects in an instrumented apartment. </li></ul><ul><li>Four to ten executions of each activity were recorded over a period of two weeks. </li></ul><ul><li>Ground truth was recorded on video, and each sensor data frame was annotated in a post-pass with the actual activity and action (if any) during that frame, or designated &quot;other&quot; if none applied. </li></ul>
  12. Evaluation Methodology (cont.)
  13. Results (cont.)
  14. Results (cont.)
  15. Conclusions <ul><li>This paper has demonstrated that it is possible to use data from dense object-use sensors and very simple commonsense models of object use, actions, and activities to automatically interpret and learn models for other sensors, a technique we call common sense based joint training. </li></ul>
