We use a bag-of-words (BoW) model to encode our DenseTrack interest points. Using k-means
over all five actions, we create a codebook of size 2000. We then classify the encoded samples
with an SVM using a Radial Basis Function (RBF) kernel. The SVM finds the optimal hyperplane
by maximizing the margin to the support vectors of each action class. Using k-fold cross
validation to split the data into training and test sets, we achieve an average F1-score of
67%.
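As a rough illustration of this pipeline (a sketch with assumed parameter values and scikit-learn components, not the code used in our experiments), the following builds the 2000-word codebook with k-means, encodes each clip as a normalized BoW histogram, and evaluates an RBF-kernel SVM with k-fold cross validation:

```python
# Sketch of the BoW + RBF-SVM pipeline (assumed names and parameters, not the original code).
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def encode_bow(clip_descriptors, kmeans):
    """Quantize one clip's DenseTrack descriptors into a normalized BoW histogram."""
    words = kmeans.predict(clip_descriptors)                       # assign each descriptor to a visual word
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / max(hist.sum(), 1)                               # L1-normalize the histogram

# descriptors_per_clip: list of (n_i, d) arrays, one per video clip; labels: action id per clip
def train_and_eval(descriptors_per_clip, labels, codebook_size=2000, folds=5):
    all_desc = np.vstack(descriptors_per_clip)
    kmeans = MiniBatchKMeans(n_clusters=codebook_size, random_state=0).fit(all_desc)
    X = np.array([encode_bow(d, kmeans) for d in descriptors_per_clip])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")                 # RBF-kernel SVM classifier
    return cross_val_score(clf, X, labels, cv=folds, scoring="f1_macro")
```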
To track a person continuously, we run
Viola-Jones face detection once every 20 frames
and use the CamShift method on the remaining
frames for efficiency. CamShift repeatedly
applies the Mean-Shift algorithm, building a
confidence map for the new frame from the
color histogram of the object in the previous
frame and adapting the search window accordingly.
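A minimal sketch of the CamShift stage is shown below. It assumes the initial face box comes from the Viola-Jones detector described under Face Detection, and the histogram channel, bin count, and termination criteria are illustrative choices rather than the exact values used in our system.

```python
# Sketch of CamShift tracking between Viola-Jones detections (assumed parameters).
import cv2

def track_with_camshift(cap, frame, init_box, n_frames=19):
    """Track a face for the frames between detections, starting from the detection frame and box."""
    x, y, w, h = init_box
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [180], [0, 180])  # hue histogram of the face
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = init_box
    for _ in range(n_frames):                                      # CamShift on frames between detections
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)  # confidence map from the histogram
        _, window = cv2.CamShift(back_proj, window, term)
    return window
```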
We present a systematic approach to automatically detect and classify a limited number
of human actions in an office environment. Our setup includes two Pan-Tilt-Zoom
(PTZ) network cameras that track faces and recognize people using linear discriminant
analysis (LDA). The subjects are tracked in the image plane, and the PTZ camera
parameters are updated in real time to keep the person at the center of the image. Our
office dataset includes 863 samples covering five actions (interaction between two or
more people, walking, sitting, writing on a whiteboard, and getting coffee) from two
different viewpoints. A set of spatio-temporal visual features is computed to represent
these actions. DenseTrack features include Histogram of Oriented Gradients (HOG),
Histogram of Optical Flow (HOF), Motion Boundary Histogram (MBH), and trajectory
data. Using a support vector machine (SVM), we train and test DenseTrack features with
k-fold cross validation, achieving an average accuracy of 67%.
The purpose of face recognition is to tag actions with the corresponding people. Three
different algorithms (Eigenface, Fisherface, and Local Binary Pattern Histograms) are
implemented; Fisherface proves to be the most reliable in our setup because it uses LDA
to discriminate between classes of features.
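A minimal Fisherface sketch using OpenCV's contrib face module is given below. The data layout (equal-sized grayscale face crops with integer person labels) and the usage pattern are assumptions for illustration, not our exact training code.

```python
# Sketch of the LDA-based Fisherface recognizer (requires opencv-contrib-python).
import cv2
import numpy as np

def train_fisherface(faces, person_ids):
    """faces: list of equal-sized grayscale face crops; person_ids: one integer label per crop."""
    model = cv2.face.FisherFaceRecognizer_create()   # Fisherface = PCA followed by LDA
    model.train(faces, np.array(person_ids))
    return model

def tag_action(model, face_crop):
    label, confidence = model.predict(face_crop)     # lower confidence value means a closer match
    return label, confidence
```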
Both PTZ-213 cameras used in our experiment are installed in the back corner of the
office ceiling. The cameras are connected to a secured network and are controlled over
TCP/IP. Each camera is initialized at a different predefined location (entrance, near the
coffee machine). Once a person is detected, the tracker computes the pan, tilt, and zoom
needed to keep the person in the center of the frame in real time, until there is no motion
for a predefined duration (10 s).
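The centering logic can be sketched as mapping the face-center offset to relative pan/tilt commands (zoom can be updated analogously from the face height). The gain, sign conventions, and the endpoint in send_ptz below are hypothetical, since the cameras' actual TCP/IP control protocol is not detailed on this poster.

```python
# Sketch of mapping a face box to relative pan/tilt steps for a PTZ camera.
import requests

def pan_tilt_offsets(face_box, frame_size, gain=0.05):
    """Return relative pan/tilt steps that move the detected face toward the image center."""
    x, y, w, h = face_box
    fw, fh = frame_size
    dx = (x + w / 2.0) - fw / 2.0      # horizontal pixel offset from image center
    dy = (y + h / 2.0) - fh / 2.0      # vertical pixel offset from image center
    return gain * dx, -gain * dy       # assumed sign convention: positive pan is right, positive tilt is up

def send_ptz(host, rpan, rtilt):
    # Hypothetical endpoint and parameter names, for illustration only.
    requests.get(f"http://{host}/ptz", params={"rpan": rpan, "rtilt": rtilt}, timeout=1.0)
```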
Unconstrained Activity Recognition in an Office Environment
Christopher Ray Ramirez, Parker James Paul Sankey, California State University San Bernardino
Amir M. Rahimi, B.S. Manjunath, Department of Electrical and Computer Engineering
Abstract
Face Detection
Acknowledgements
For face detection we use the Viola-Jones
algorithm with a trained Haar cascade. The
algorithm evaluates a pool of rectangular
Haar-like features that compare darker and
lighter regions in the frame, and the detected
region is used to initialize face tracking.
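A minimal sketch of this step with OpenCV's pre-trained frontal-face Haar cascade is shown below; the cascade file and detection parameters are assumptions rather than the exact configuration we used.

```python
# Sketch of Viola-Jones face detection with a pre-trained Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the first detected face box (x, y, w, h), or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
    return tuple(faces[0]) if len(faces) else None
```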
Conclusion
Camera Setup
Tracking
This work was supported in part by the National Science Foundation
through grant number IIS-0808772 and by the U.S. Office of Naval
Research N00014-12-1-0503.
We implemented a systematic framework
that automatically detects, recognizes, tracks,
and classifies human actions in an
unconstrained office environment. Each
module (detection, recognition,
classification) is optimized separately for
the most reliable performance. In future
work, higher-order interactions among
actions can be modeled to predict long
sequences of actions (activities).
Facial Recognition
Action Dataset
DenseTrack samples interest points densely and tracks them using displacements from a dense
optical flow field. By combining HOG, HOF, and MBH descriptors, DenseTrack is robust to fast,
irregular movements and provides an efficient way to suppress camera motion.
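For reference, a small sketch of loading the DenseTrack text output into separate descriptor blocks is given below. The block sizes assume the default settings of the released dense trajectories code (Wang et al., trajectory length 15) and should be checked against the version actually used.

```python
# Sketch of parsing DenseTrack output into per-descriptor blocks (assumed default layout).
import numpy as np

INFO, TRAJ, HOG, HOF, MBH = 10, 30, 96, 108, 192   # assumed sizes: trajectory info, Trajectory, HOG, HOF, MBHx+MBHy

def load_densetrack(path):
    feats = np.loadtxt(path)                        # one row per tracked trajectory
    blocks, i = {}, INFO
    for name, size in [("traj", TRAJ), ("hog", HOG), ("hof", HOF), ("mbh", MBH)]:
        blocks[name] = feats[:, i:i + size]
        i += size
    return blocks                                   # e.g. np.hstack(blocks.values()) before BoW encoding
```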
Collecting/Annotating Dataset
Wang, Heng, Alexander Kläser, Cordelia Schmid, and Cheng-Lin Liu. "Dense Trajectories and Motion Boundary Descriptors for Action Recognition."
International Journal of Computer Vision 103.1 (2013): 60-79. Web. 11 Aug. 2014.
Performance (Face Detection/Recognition)
Feature Encoding and Classification
