Common Sense Based Joint Training of Human Activity Recognizers
Transcript

  • 1.  
  • 2.
    • Introduction
    • Sensors
    • Techniques
    • Evaluation Methodology
    • Results
    • Conclusions
  • 3.
    • Object use is a promising ingredient for general, low-overhead indoor activity recognition
    • Object-use data alone, however, cannot capture how an object is manipulated (e.g., whether a door is being opened or closed),
    • which kind of object is involved (e.g., microwaveable objects),
    • or distinguish activities that involve the same objects (e.g., clearing a table and eating dinner)
    • We present a joint probabilistic model of object use, physical actions and activities that improves activity detection relative to models that reason separately about these quantities, and learns action models with far less labeling overhead than conventional approaches
  • 4.
    • Two people performed 12 activities involving 10 actions on 30 objects
    • We defined simple commonsense models for the activities using an average of 4 English words per activity
    • Combining action data with object use in this manner raises activity-recognition precision/recall from 76/85% to 81/90%; the automated action-learning techniques yielded 40/85% precision/recall over 10 possible object/action classes
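The precision/recall figures quoted above can be computed per class in the usual way; a minimal sketch, where the label sequences and activity names are made up for illustration:

```python
def precision_recall(pred, gold, cls):
    """Per-class precision and recall from parallel label sequences."""
    tp = sum(p == cls and g == cls for p, g in zip(pred, gold))
    fp = sum(p == cls and g != cls for p, g in zip(pred, gold))
    fn = sum(p != cls and g == cls for p, g in zip(pred, gold))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

# Made-up per-window activity labels, not the paper's data
gold = ["tea", "tea", "clean", "tea", "clean"]
pred = ["tea", "clean", "clean", "tea", "tea"]
p, r = precision_recall(pred, gold, "tea")
```

Averaging these per-class numbers over the 12 activities gives the aggregate figures reported above.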
  • 5.
    • Subjects wear two bracelets on the dominant wrist
    • One is a Radio Frequency Identification (RFID) reader, called the iBracelet, for detecting object use
    • The other is a personal sensing device, called the Mobile Sensing Platform (MSP), for detecting arm movement and ambient conditions
    • The MSP carries eight sensors: a six-degree-of-freedom accelerometer, microphones sampling 8-bit audio at 16 kHz, IR/visible light, high-frequency light, barometric pressure, humidity, temperature and compass
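Raw MSP traces are typically reduced to windowed statistics before classification; a minimal sketch of that step, where the window size and feature set are illustrative choices rather than the paper's actual pipeline:

```python
import numpy as np

def window_features(accel, win=128):
    """Cut a (T, 3) accelerometer trace into fixed-size windows and
    compute simple per-axis statistics: mean, variance, mean energy.
    Returns an (n_windows, 9) feature matrix."""
    n = len(accel) // win
    feats = []
    for i in range(n):
        w = accel[i * win:(i + 1) * win]
        feats.append(np.concatenate([w.mean(0), w.var(0), (w ** 2).mean(0)]))
    return np.array(feats)

rng = np.random.default_rng(0)
F = window_features(rng.standard_normal((512, 3)))  # 4 windows of 128 samples
```

Feature vectors like these are what the model below consumes, and pruning them down to a small subset is the feature-selection problem the conclusions return to.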
  • 6.  
  • 7.
    • Problem Statement
      • From the MSP data M, infer the current action being performed and the object on which it is being performed.
      • Combine this with object-use data O (when available) to produce better estimates of the current activity A.
      • For efficiency, use only F′ ≪ F features of M.
    • A Joint Model
      • The Dynamic Bayesian Network (DBN)
    • A Layered Model
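The DBN fuses the object-use and action observations over time into a belief about the current activity. A toy forward-filtering step over two hypothetical activities; every distribution here is made up for illustration, not a learned parameter from the paper:

```python
import numpy as np

# Toy two-activity model ("make tea" vs. "wipe counter")
T = np.array([[0.9, 0.1],      # P(A_t | A_{t-1}); rows index A_{t-1}
              [0.1, 0.9]])
P_obj = np.array([[0.8, 0.2],  # P(object obs | A); cols: kettle, sponge
                  [0.3, 0.7]])
P_act = np.array([[0.7, 0.3],  # P(action obs | A); cols: pour, wipe
                  [0.2, 0.8]])

def filter_step(belief, obj, act):
    """One DBN filtering step: push the belief through the transition
    model, reweight by both observation likelihoods, renormalize."""
    pred = T.T @ belief
    post = pred * P_obj[:, obj] * P_act[:, act]
    return post / post.sum()

belief = np.array([0.5, 0.5])
for obj, act in [(0, 0), (0, 0)]:  # repeated kettle/pour evidence
    belief = filter_step(belief, obj, act)
```

Because both observation streams enter the same posterior, consistent evidence (kettle use plus a pouring motion) sharpens the activity estimate faster than either stream alone, which is the point of reasoning jointly rather than separately.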
  • 8.  
  • 9.  
  • 10.  
  • 11.  
  • 12.
    • 1. How good are the learned models?
      • How much better are the activity models when action information is combined?
      • How good are the action models themselves?
      • Why do they perform as they do?
    • 2. How does classifier performance vary with our design choices?
      • With hyperparameters?
    • 3. How necessary were the more sophisticated aspects of our design?
      • Do simple techniques do as well?
  • 13.  
  • 14.  
  • 15.  
  • 16.  
  • 17.  
  • 18.  
  • 19.
    • Feature selection and labeling are known bottlenecks in learning sensor-based models of human activity
    • It is possible to use data from dense object-use sensors and very simple commonsense models of object use, actions and activities to automatically interpret and learn models for other sensors, a technique we call common sense based joint training
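The joint-training idea above can be sketched as pseudo-labeling: object-use readings plus a few-word commonsense map yield noisy activity labels, which then supervise a model over the other sensors without hand annotation. A minimal illustration; the objects, activities, and mapping are all hypothetical:

```python
from collections import Counter

# Hypothetical commonsense map: a few English words per activity
# naming the objects it typically involves.
COMMONSENSE = {"kettle": "make tea", "mug": "make tea",
               "sponge": "wipe counter"}

def pseudo_label(object_stream):
    """Turn an RFID object-use stream into noisy activity labels;
    objects with no commonsense entry get no label (None)."""
    return [COMMONSENSE.get(obj) for obj in object_stream]

stream = ["kettle", "mug", "sponge", "mug", "book"]
labels = pseudo_label(stream)
usable = Counter(l for l in labels if l is not None)
# The labeled windows can now train a classifier over MSP motion
# features, replacing hand labeling of the motion data.
```

In the paper's setup the commonsense models average only 4 English words per activity, which is what keeps the labeling overhead low.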