Introduction

This presentation is about hand gesture recognition using a MEMS accelerometer.



  1. MICRO ELECTRO-MECHANICAL SYSTEM ACCELEROMETER BASED NONSPECIFIC-USER HAND GESTURE RECOGNITION PRESENTED BY APARNA C BHADRAN S7 CSE B.Tech
  2. INDEX 1. INTRODUCTION 2. GESTURE MOTION ANALYSIS 3. SENSING SYSTEM OVERVIEW 4. GESTURE SEGMENTATION 5. GESTURE RECOGNITION BASED ON SIGN SEQUENCE AND HOPFIELD NETWORK 6. GESTURE RECOGNITION BASED ON VELOCITY INCREMENT
  3. 7. GESTURE RECOGNITION BASED ON SIGN SEQUENCE AND TEMPLATE MATCHING 8. EXPERIMENTAL RESULTS 9. CONCLUSION 10. REFERENCES
  4. INTRODUCTION • The increase in human-machine interactions has made user interface technology more important. • MEMS: Micro Electro-Mechanical System. • Physical gestures will 1. greatly ease the interaction process, and 2. enable humans to more naturally command computers or machines.
  5. • Examples: • telerobotics • character recognition • controlling a television set remotely • enabling a hand as a 3-D mouse. • Many existing devices can capture gestures, such as – joystick – trackball – touch tablet.
  6. • The technology employed for capturing gestures can be relatively expensive, so a Micro Inertial Measurement Unit is used. • There are two types of gesture recognition methods: vision-based and accelerometer-based. • Due to the limitations of vision-based methods, such as unexpected optical noise and slower dynamic response,
  7. • the recognition system is implemented based on MEMS acceleration sensors. • The acceleration patterns are not mapped into velocity or displacement, and are not transformed into the frequency domain; instead they are recognized in the time domain.
  8. • There are three different gesture recognition models: 1) the sign sequence and Hopfield based gesture recognition model, 2) the velocity increment based gesture recognition model, and 3) the sign sequence and template matching based gesture recognition model.
  9. GESTURE MOTION ANALYSIS • Gesture motions are in the vertical plane, i.e., the x-z plane. • The alternating sign changes of acceleration on the two axes are used to differentiate the seven gestures: up, down, left, right, tick, circle, and cross.
  10. • The gesture "up" has • acceleration on the z-axis in the order negative, positive, negative • no acceleration on the x-axis.
  11. • 1→3: acceleration on the z-axis is negative • velocity changes from zero to its maximum magnitude at point 3 • acceleration at point 3 is zero.
  12. • 3→4: acceleration on the z-axis is positive; velocity changes from negative to positive and is maximum at point 4, where acceleration becomes zero. • 4→1: acceleration on the z-axis is negative; velocity changes from positive to zero. • Acceleration and velocity become zero at point 1.
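The phase analysis above boils down to reading off the order of acceleration signs on each axis. Below is a minimal sketch of that idea in Python; the dead-band threshold and the toy z-axis trace are assumptions for illustration, not values from the slides.

```python
# Convert a 1-D acceleration trace into its sequence of sign phases.
# The dead-band threshold is an assumed value used to ignore sensor noise.

def sign_sequence(accel, threshold=0.1):
    """Return the ordered list of non-zero sign phases in an acceleration trace."""
    signs = []
    for a in accel:
        if a > threshold:
            s = 1
        elif a < -threshold:
            s = -1
        else:
            continue  # inside the dead band, treat as zero
        if not signs or signs[-1] != s:  # record only changes of sign
            signs.append(s)
    return signs

# Toy z-axis trace for an "up" gesture: negative, then positive, then negative.
z_accel = [0.0, -0.5, -0.8, -0.3, 0.4, 0.9, 0.5, -0.4, -0.7, -0.2, 0.0]
print(sign_sequence(z_accel))  # -> [-1, 1, -1]
```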
  13. (figure slide, no text)
  14. SENSING SYSTEM OVERVIEW 1. Sensor Description • The sensing system utilized for hand motion data collection: (figure)
  15. 2. System Work Flow 1. The system is switched on. 2. The accelerations in three perpendicular directions are detected by the MEMS sensors. 3. The data are transmitted to a PC via the Bluetooth protocol. 4. The data are passed through a segmentation program. 5. The processed data are recognized by a comparison program to determine the presented gestures.
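The work flow can be pictured as a small PC-side pipeline. The sketch below is only a structural skeleton with stubbed functions; none of the names correspond to an actual API from the system.

```python
# Structural skeleton of the PC-side flow: read samples arriving over
# Bluetooth, segment the stream into gestures, classify each segment.
# All three functions are stubs standing in for the real components.

def read_samples():
    """Yield (ax, ay, az) tuples received from the MEMS sensor (stub)."""
    yield from [(0.0, 0.0, -0.6), (0.0, 0.0, 0.8), (0.0, 0.0, -0.5)]

def segment(samples):
    """Split the incoming stream into per-gesture segments (stub)."""
    return [list(samples)]

def recognize(gesture_segment):
    """Compare a segment against the stored gesture models (stub)."""
    return "up"

for seg in segment(read_samples()):
    print(recognize(seg))
```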
  16. GESTURE SEGMENTATION A. Data Acquisition • The sensing devices should be held horizontally during the whole data collection process. • The time interval between two gestures should be no less than 0.2 seconds.
  17. • The gestures should be performed as shown in the figure.
  18. B. Gesture Segmentation 1. Data Preprocessing: • Raw data received from the sensors are preprocessed by two processes: a) vertical axis offsets are removed in the time-sequenced data, and b) a filter is applied to the data sets to eliminate high-frequency noise.
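A minimal sketch of the two preprocessing steps, assuming the offset removal is a per-axis mean subtraction and the noise filter is a simple moving average; the slides do not specify the filter type or window length.

```python
# Preprocessing sketch: remove the axis offset (per-axis mean) and smooth
# high-frequency noise with a moving-average filter. The window length is
# an assumed value.

def remove_offset(samples):
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def moving_average(samples, window=5):
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

raw_z = [0.2, 0.1, -0.6, -0.9, -0.4, 0.5, 1.0, 0.6, -0.3, -0.8, -0.2, 0.1]
clean_z = moving_average(remove_offset(raw_z))
print(clean_z)
```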
  19. 2. Segmentation: • The purpose is to find the terminal points of each gesture in a data set. • The conditions for determining the gesture terminal points are a) amplitude of the points, b) point separation, c) mean value, d) distance from the nearest intersection, and e) sign variation between two successive points.
  20. • All five conditions are checked separately on the x- and z-axis acceleration data. • Two matrices are generated for each gesture sequence data set.
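As a rough illustration of terminal-point detection, the sketch below checks only a simplified amplitude condition over a quiet window; the threshold and window length are assumptions, and the remaining conditions (point separation, mean value, intersection distance, sign variation) are omitted.

```python
# Simplified terminal-point search: a candidate terminal point must sit at
# the start of a quiet, near-zero stretch of acceleration. This sketches
# only one of the five conditions listed on the slide.

def terminal_points(accel, amp_thresh=0.15, quiet_len=5):
    ends = []
    for i in range(len(accel) - quiet_len):
        window = accel[i:i + quiet_len]
        near_zero = all(abs(a) < amp_thresh for a in window)
        if near_zero and (not ends or i - ends[-1] > quiet_len):
            ends.append(i)
    return ends

trace = [0.0, 0.05, -0.6, -0.9, 0.7, 0.8, -0.5, 0.05, 0.0, 0.02, 0.01, 0.0]
print(terminal_points(trace))  # indices of candidate gesture boundaries
```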
  21. GESTURE RECOGNITION BASED ON SIGN SEQUENCE AND HOPFIELD NETWORK 1) Feature Extraction: • The gestures whose motions lie on one axis are separated from those which involve 2-D motions.
  22. (figure slide, no text)
  23. • The gesture code is 1, -1, 1, -1. 2) Gesture Encoding: • Before recognition, the obtained gesture code should be encoded so that it can be restored later by the Hopfield network. • The maximum number of signs for one gesture on one axis is four, so if the x- and z-axis sign sequences are combined, there will be a total of eight numbers in one gesture code. • The input for the Hopfield network can only be "1" or "-1".
  24. • We encoded the positive sign, negative sign, and zero using the following rules: • "1 1" represents a positive sign; • "-1 -1" represents a negative sign; • "1 -1" represents zero.
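A small sketch of the encoding step under these rules: up to four signs per axis are combined into eight numbers, and each number is expanded into one of the pairs above, giving a 16-element bipolar vector. Padding short sequences with zeros is an assumption about how the fixed length is reached.

```python
# Encode a combined x/z sign sequence (up to 4 signs per axis, 8 numbers
# total) into a bipolar vector for the Hopfield network, using the pairing
# rules on the slide. Zero-padding of short sequences is an assumption.

PAIR = {1: (1, 1), -1: (-1, -1), 0: (1, -1)}

def encode_gesture(x_signs, z_signs):
    padded = (list(x_signs) + [0] * 4)[:4] + (list(z_signs) + [0] * 4)[:4]
    code = []
    for s in padded:
        code.extend(PAIR[s])
    return code  # 16 values, each either 1 or -1

# "Up" gesture: no motion on x, negative-positive-negative on z.
print(encode_gesture([], [-1, 1, -1]))
```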
  25. (figure slide, no text)
  26. 3) Hopfield Network as Associative Memory: • The involvement of the Hopfield network makes the recognition more fault tolerant. • The network can retrieve the patterns that have been stored previously.
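For reference, a compact sketch of a Hopfield network used as an associative memory, with standard Hebbian storage and synchronous recall; the slides do not give the actual network size or training details, so the patterns below are illustrative.

```python
# Hopfield associative memory sketch: store bipolar patterns with a Hebbian
# rule, then recall the nearest stored pattern from a corrupted input.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=10):
    state = list(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1
                 for i in range(len(state))]
    return state

stored = [[1, 1, -1, -1, 1, -1, 1, -1],
          [1, -1, 1, -1, 1, -1, -1, 1]]
weights = train(stored)
noisy = [1, 1, -1, -1, 1, -1, 1, 1]    # last element flipped
print(recall(weights, noisy))           # -> recovers the first stored pattern
```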
  27. 4) Gesture Comparison: • After gesture code restoration, each gesture code is compared with the standard gesture codes. • The comparison is made by calculating the difference between the two codes. • The smallest difference indicates the most likely gesture.
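The comparison step can be read as a nearest-template search: count mismatched elements against each standard code and report the gesture with the smallest count. The standard codes below are illustrative placeholders, not the codes actually stored by the system.

```python
# Nearest-template comparison: the gesture whose standard code differs from
# the restored code in the fewest positions is reported.

def difference(code_a, code_b):
    return sum(1 for a, b in zip(code_a, code_b) if a != b)

def classify(restored, standard_codes):
    return min(standard_codes,
               key=lambda name: difference(restored, standard_codes[name]))

standard_codes = {  # illustrative placeholder codes
    "up":   [1, -1, 1, -1, 1, -1, 1, -1, -1, -1, 1, 1, -1, -1, 1, -1],
    "left": [-1, -1, 1, 1, -1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
}
print(classify(standard_codes["up"], standard_codes))  # -> "up"
```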
  28. GESTURE RECOGNITION BASED ON VELOCITY INCREMENT • The acceleration of a gesture on one axis is first partitioned according to its signs.
  29. • Due to the intensity variance of each gesture, the area sequence should be normalized.
  30. • After normalization, the area sequences are not compared immediately. • They are processed using an algorithm analogous to "center of mass". • The final step is to compare the velocity increment sequences. • The gesture which yields the minimum value is recognized.
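A minimal sketch of the velocity-increment idea: split the acceleration trace at sign changes, integrate each segment to get an area (velocity increment), normalize the areas, and score a candidate against a stored sequence. The sampling interval and the normalization choice are assumptions, and the center-of-mass processing mentioned above is omitted for brevity.

```python
# Velocity-increment sketch: each same-sign segment of the acceleration
# trace is integrated into an area, the areas are normalized, and a
# candidate is scored against a template by summed absolute difference.

def velocity_increments(accel, dt=0.01):
    areas, current, prev_sign = [], 0.0, 0
    for a in accel:
        sign = (a > 0) - (a < 0)
        if sign != 0 and prev_sign != 0 and sign != prev_sign:
            areas.append(current)   # close the segment at a sign change
            current = 0.0
        current += a * dt
        if sign != 0:
            prev_sign = sign
    if current:
        areas.append(current)
    total = sum(abs(x) for x in areas) or 1.0
    return [x / total for x in areas]  # normalized velocity increments

def score(candidate, template):
    return sum(abs(c - t) for c, t in zip(candidate, template))

trace = [-0.5, -0.8, -0.3, 0.4, 0.9, 0.5, -0.4, -0.7, -0.2]
print(velocity_increments(trace))
```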
  31. GESTURE RECOGNITION BASED ON SIGN SEQUENCE AND TEMPLATE MATCHING • The recognition algorithm of this model is very similar to that of model one, except that no Hopfield network is used. • All the sign sequences are represented by -1, 1, and 0.
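A sketch of model three: the raw sign sequences, coded with -1, 0, and 1, are matched directly against stored templates without any Hopfield restoration. The templates shown are illustrative placeholders rather than the actual slide values.

```python
# Model-three sketch: sign sequences coded with -1, 0 and 1 are matched
# directly against stored templates; the template with the fewest
# mismatches wins. Templates here are illustrative placeholders.

TEMPLATES = {
    "up":    {"x": [0],        "z": [-1, 1, -1]},
    "right": {"x": [1, -1, 1], "z": [0]},
}

def match(x_signs, z_signs):
    def diff(a, b):
        length = max(len(a), len(b))
        a = a + [0] * (length - len(a))   # pad shorter sequence with zeros
        b = b + [0] * (length - len(b))
        return sum(1 for p, q in zip(a, b) if p != q)

    return min(TEMPLATES,
               key=lambda g: diff(x_signs, TEMPLATES[g]["x"]) +
                             diff(z_signs, TEMPLATES[g]["z"]))

print(match([0], [-1, 1, -1]))  # -> "up"
```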
  32. (figure slide, no text)
  33. EXPERIMENTAL RESULTS
  34. • Model III has the highest accuracy among the three models, while the performance of Model II is the worst of the three. • The test results shown in Table III are based on 72 test samples.
  35. CONCLUSION • To enhance the performance, the segmentation algorithm can be improved. • Moreover, other features of the motion data may be utilized for pattern classification in future work.
  36. REFERENCES [1] T. H. Speeter, "Transforming human hand motion for telemanipulation," Presence, vol. 1, no. 1, pp. 63–79, 1992. [2] S. Zhou, Z. Dong, W. J. Li, and C. P. Kwong, "Hand-written character recognition using MEMS motion sensing technology," in Proc. IEEE/ASME Int. Conf. Advanced Intelligent Mechatronics, 2008, pp. 1418–1423. [3] J. K. Oh, S. J. Cho, W. C. Bang et al., "Inertial sensor based recognition of 3-D character gestures with an ensemble of classifiers," presented at the 9th Int. Workshop on Frontiers in Handwriting Recognition, 2004. [4] W. T. Freeman and C. D. Weissman, "TV control by hand gestures," presented at the IEEE Int. Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, 1995.
  37. THANK YOU
