Interactive Wall (Multi Touch Interactive Surface)
A graduation project at the Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt.

The Interactive Wall allows users to interact with the computer using hand gestures. The application uses an optical camera to detect and track the hands with image processing techniques. The desktop is projected onto a wall using a projector, which gives the user the experience of interacting with the computer freely.


  • Our goal is the development of a more natural interface: a camera-projector system using hand gesture analysis. The result is a development framework that gives any application using it the ability to let the user interact through hand gestures. A stream of video is captured with a low-cost webcam and processed in the framework to extract the hand position and recognize the gesture; finally, the application handles the fired events.
  • 1. Initialization (k = 0): the object is searched for in the whole image, since its position is not known beforehand; the initial state estimate is obtained this way, and a large error tolerance can initially be assumed. 2. Prediction (k > 0): the Kalman filter predicts the relative position of the object, and that prediction is used as the search center for finding the object. 3. Correction (k > 0): the object is located in the neighborhood of the predicted point, and its real position (the measurement) is used to correct the state with the Kalman filter. Steps 2 and 3 repeat for as long as the object tracking runs. The Kalman filter equations fall into two groups: time update equations and measurement update equations. The time update equations project the current state and error covariance estimates forward in time to obtain the a priori estimates for the next time step. The measurement update equations are responsible for the feedback, i.e. for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate.
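For reference, these two groups are the standard discrete Kalman filter equations, in the notation of the Welch and Bishop introduction cited in the references:

$$
\begin{aligned}
\text{Time update (predict):}\quad & \hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}, & P_k^- &= A P_{k-1} A^T + Q,\\
\text{Measurement update (correct):}\quad & K_k = P_k^- H^T \left(H P_k^- H^T + R\right)^{-1}, & \hat{x}_k &= \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right), \quad P_k = (I - K_k H)\,P_k^-.
\end{aligned}
$$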

Presentation Transcript

  • Multi-touch Interactive Surface
    Supervisors:
    Professor Dr. Mohammed Roushdy
    Dr. Haythem El-Messiry
    T.A. Ahmad Salah
    1
  • Team Members
    2
  • Agenda
    Introduction
    Physical Environment and Framework
    Project Modules and Applications
    Challenges
    Conclusion and Future work
    Tools and References
    3
  • Motivation
    • A more natural and direct way of Human Computer Interaction (HCI).
    • Current Multi-touch devices are:
    Expensive
    Heavy
    Fragile
    Consume space
    4
  • Problem Definition
    • It would be more comfortable, effective and user-friendly if the user could interact directly with the display device without any hardware equipment, using only hand gestures.
    • Our goal is to deliver an interactive surface characterized by low cost, efficiency and ease of use in real-life applications.
    5
  • Overview
    • Optical camera-projector system
    • Generic Framework for Human Computer Interaction (HCI) using hand gestures.
    6
  • Physical Environment
    Physical environment consists of:
    A projector.
    A webcam placed over the projector’s lens capturing the projected surface.
    7
  • [Diagram: physical setup dimensions, 1.25 m and 0.8 m]
    8
  • Physical Environment
    [Diagram: Camera and Projector mounted 2.15 m from the Surface]
    9
  • Framework
    10
    Controller
    Configuration Module
    Input Module
    Hand Tracking
    Hand Segmentation
    Hand Gesture Recognition
    Interface
  • Controller Module
    11
  • Controller Module
    Detect Corners → Color Mapping → Search for hand at entry point → Segmentation → Construct the search window → Track the hand → Gesture Recognition → Fire Event
    12
  • Configuration Module
    13
  • Colors Mapping
    • Maps the colors between the desktop and the captured image.
    • A set of colors is projected and captured for the color calibration process.
    [Figures: Desktop Colors, Projected Colors]
    14
  • Corner Detection
    • The four corners of the projected image are automatically detected using the FAST corner detection algorithm.
    15
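A minimal sketch of this step with OpenCV's implementation of the Rosten–Drummond FAST detector; the threshold and the nearest-to-image-corner heuristic are illustrative assumptions, not necessarily the project's exact procedure.

```python
import cv2
import numpy as np

frame = cv2.imread("captured.png", cv2.IMREAD_GRAYSCALE)

# FAST corner detection (Rosten & Drummond); the threshold is illustrative.
fast = cv2.FastFeatureDetector_create(threshold=40)
keypoints = fast.detect(frame, None)
pts = np.float32([kp.pt for kp in keypoints])

# One simple heuristic for picking the four screen corners: keep the detected
# point closest to each corner of the camera image (an assumption here).
h, w = frame.shape
image_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
screen_corners = [pts[np.argmin(((pts - c) ** 2).sum(axis=1))]
                  for c in image_corners]
```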
  • Input Module
    16
  • Geometric Calibration
    • Calibrate the captured image according to the four calibration points.
    [Figures: Captured Image, Calibrated Image]
    17
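A minimal sketch of this calibration with OpenCV, assuming the four corner positions come from the corner-detection step; the pixel coordinates and the 640×480 target resolution below are made up.

```python
import cv2
import numpy as np

# Hypothetical corners of the projected screen in the camera image,
# in top-left, top-right, bottom-right, bottom-left order.
src = np.float32([[102, 87], [548, 95], [561, 412], [96, 405]])

# Target rectangle: the desktop resolution the frame is mapped onto.
dst = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])

H = cv2.getPerspectiveTransform(src, dst)
captured = cv2.imread("captured.png")
calibrated = cv2.warpPerspective(captured, H, (640, 480))
```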
  • Hand Tracking Module
    18
  • Kalman Filter
    • The Kalman filter algorithm is essentially a set of recursive equations that implement a predictor-corrector estimator.
    • Steps:
    Initialization
    Prediction
    Correction
    19
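A minimal sketch of such a predictor-corrector tracker using OpenCV's KalmanFilter with an assumed constant-velocity state model; the noise covariances are illustrative, not the project's values.

```python
import cv2
import numpy as np

# Constant-velocity model: state = (x, y, vx, vy), measurement = (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.float32([[1, 0, 1, 0],
                                  [0, 1, 0, 1],
                                  [0, 0, 1, 0],
                                  [0, 0, 0, 1]])
kf.measurementMatrix = np.float32([[1, 0, 0, 0],
                                   [0, 1, 0, 0]])
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # illustrative
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # illustrative

def step(measured_x, measured_y):
    prediction = kf.predict()  # predicted position: search center for the hand
    kf.correct(np.float32([[measured_x], [measured_y]]))  # real position found
    return float(prediction[0, 0]), float(prediction[1, 0])
```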
  • Hand Segmentation Module
    20
  • Skin Color Detection
    • The hand is lit by the projector, which generates varying texture patterns on the hand's surface; this rules out any skin-detection algorithm.
    21
    [Figures: Captured image, Skin detection applied]
  • Subtraction using Color Calibration
    • Subtract the captured image from the desktop image.
    • Convert the colors of the desktop image to those of the captured image.
    22
  • Color Calibration
    • Get a colors training set: each desktop color and its corresponding projected color.
    • Divide each pair of images into regions (3×3).
    • Calculate the transformation matrix A for each region.
    • b = A · x, where b is the calibrated color, A is the transformation matrix, and x is the desktop color.
    23
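A minimal sketch of fitting A for one region by least squares over the training pairs; the function names and the least-squares choice are assumptions, since the slides do not say how A is computed.

```python
import numpy as np

def fit_color_transform(desktop_colors, projected_colors):
    """desktop_colors, projected_colors: (N, 3) arrays of corresponding
    RGB samples collected for one of the 3x3 regions."""
    # Solve projected ≈ desktop @ A.T for the 3x3 matrix A by least squares.
    A_T, *_ = np.linalg.lstsq(desktop_colors, projected_colors, rcond=None)
    return A_T.T

def calibrate_color(A, x):
    # b = A · x: map a desktop color x to its expected projected color b.
    return A @ x
```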
  • Segmentation Results
    [Pipeline: Desktop and Captured images → subtraction → Global thresholding → Largest Blob Extraction → Segmented hand]
    24
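A minimal sketch of the thresholding and largest-blob steps with OpenCV; the file names and the threshold value of 40 are placeholders.

```python
import cv2
import numpy as np

captured = cv2.imread("calibrated_frame.png")  # geometrically calibrated frame
desktop = cv2.imread("desktop_mapped.png")     # desktop after color calibration

# Subtract the two images and apply a global threshold (value is illustrative).
diff = cv2.cvtColor(cv2.absdiff(captured, desktop), cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Keep only the largest blob, assumed to contain the hand and arm.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
blob = np.zeros_like(mask)
cv2.drawContours(blob, [largest], -1, 255, thickness=cv2.FILLED)
```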
  • Blob Analysis
    • A heuristic method to extract the hand from the arm is applied using morphological operations.
    • A bounding box (60 × 60) is constructed around the largest blob.
    [Figures: Original, Closed, Original − Closed]
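The slides only name the intermediate images ("Original", "Closed", "Original − Closed"), so the following continues from the `blob` mask above and sketches a comparable heuristic that strips the thin arm with a morphological opening; the choice of operation and the kernel size are assumptions, not the project's exact parameters.

```python
import cv2

# Opening with a kernel wider than the arm erases the arm but keeps the palm.
# The (25, 25) kernel size is a guess, not the project's parameter.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
palm = cv2.morphologyEx(blob, cv2.MORPH_OPEN, kernel)

# 60 x 60 bounding box around the remaining blob (as in the slides).
x, y, _, _ = cv2.boundingRect(palm)
hand_roi = blob[y:y + 60, x:x + 60]
```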
  • Hand Gesture Recognition Module
    26
  • Hand Gesture Recognition
    Contour Tracing → Contour Re-sampling → EFD → match against EFD DB → Gesture Type
    27
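A minimal sketch of the re-sampling step: the traced contour is re-sampled to a fixed number of points, uniformly spaced by arc length, before the EFDs are computed. The point count of 128 is an assumption.

```python
import numpy as np

def resample_contour(points, n=128):
    """points: (N, 2) array from contour tracing; returns n points spaced
    uniformly by arc length along the closed contour."""
    closed = np.vstack([points, points[:1]])              # close the contour
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1) # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    t = np.linspace(0.0, s[-1], n, endpoint=False)        # uniform samples
    x = np.interp(t, s, closed[:, 0])
    y = np.interp(t, s, closed[:, 1])
    return np.stack([x, y], axis=1)
```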
  • Elliptical Fourier Descriptors
    • Elliptical Fourier descriptors are a parametric representation of closed contours based on harmonically related ellipses.
    • Any closed contour can be constructed from an infinite set of Elliptical Fourier descriptors.
    28
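For reference, in the standard Kuhl–Giardina formulation an EFD truncated at N harmonics approximates a closed contour $(x(t), y(t))$ of perimeter $T$ as

$$
x(t) = a_0 + \sum_{n=1}^{N}\Big(a_n\cos\tfrac{2\pi n t}{T} + b_n\sin\tfrac{2\pi n t}{T}\Big),\qquad
y(t) = c_0 + \sum_{n=1}^{N}\Big(c_n\cos\tfrac{2\pi n t}{T} + d_n\sin\tfrac{2\pi n t}{T}\Big),
$$

where each harmonic's coefficients $(a_n, b_n, c_n, d_n)$ describe one ellipse; the coefficient vectors serve as the descriptors matched against the database.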
  • Training
    Training set: 60 training images for each gesture
    29
  • Testing Results
    30
  • Interface Module
    31
  • Interface Module
    • A gesture event is fired whenever the framework recognizes a gesture; the event contains the position and the gesture type.
    • The interface handles the raised events.
    • The user can map the gestures to events.
    32
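The framework's actual API is not shown in the slides, so this is only a hypothetical sketch of the gesture-to-event mapping the slide describes; every name in it (GestureEvent, GestureInterface, map_gesture, fire) is invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class GestureEvent:
    gesture: str  # recognized gesture type, e.g. "zoom" (hypothetical name)
    x: int        # hand position in desktop coordinates
    y: int

class GestureInterface:
    """Maps recognized gestures to application callbacks."""
    def __init__(self):
        self.handlers: Dict[str, Callable[[GestureEvent], None]] = {}

    def map_gesture(self, gesture: str, handler: Callable[[GestureEvent], None]):
        self.handlers[gesture] = handler

    def fire(self, event: GestureEvent):
        # Called by the framework whenever a gesture is recognized.
        if event.gesture in self.handlers:
            self.handlers[event.gesture](event)

# Usage: an image viewer maps a "zoom" gesture to its zoom routine.
ui = GestureInterface()
ui.map_gesture("zoom", lambda e: print(f"zoom at ({e.x}, {e.y})"))
ui.fire(GestureEvent("zoom", 320, 240))
```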
  • Main Gestures
    33
  • Applications
    Puzzle game
    34
    Painter
    Image Viewer
  • Demo
    35
  • Challenges
    • Controller Module:
    Multi-hand tracking
    • Gesture Recognition Module:
    Similar gestures
    • Segmentation Module:
    Dark and complex backgrounds
    Arm extraction
    36
  • Conclusion
    • Human Computer Interaction is still an open field.
    • Image processing can be very powerful if used in the appropriate environment.
    37
  • Future Work
    • Using the depth (Z) axis besides the X and Y axes for determining the hand position.
    • Multi-hand and multi-user interaction.
    • The Interactive Wall can be used with a surface other than the projection, for example a large screen.
    • Recognize dynamic gestures.
    38
  • Tools
    Software
    • Microsoft Visual Studio 2008
    • MATLAB
    • OpenCV
    Hardware
    • Optical Camera
    • Projector
    39
  • References
    • Edward Rosten, Reid Porter, and Tom Drummond, Faster and better: a machine learning approach to corner detection, Los Alamos National Laboratory, Los Alamos, New Mexico, USA, 87544, Cambridge University, Cambridge University Engineering Department, Trumpington Street, Cambridge, UK, CB2 1PZ, October 14, 2008.
     
    • Yongwon Jeong and Richard J. Radke, Reslicing axially-sampled 3D shapes using elliptic Fourier descriptors, Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, USA, 2007.
     
    • Louis Patrick Nicoli, Automatic Target Recognition of Synthetic Aperture Radar Images using Elliptical Fourier Descriptors, Florida Institute of Technology, Melbourne, Florida, August 2007.
     
    • G. Amayeh, G. Bebis, A. Erol, and M. Nicolescu, A New Approach to Hand-Based Authentication, Computer Vision Laboratory, University of Nevada, Reno, 2007.
    • Asanterabi Malima, Erol Özgür, and Müjdat Çetin, A fast algorithm for vision-based hand gesture recognition for robot control, Faculty of Engineering and Natural Science, Sabancı University, Tuzla, İstanbul, Turkey, 2006.
    40
  • References
    • Greg Welch and Gary Bishop, An Introduction to the Kalman Filter, Department of Computer Science University of North Carolina at Chapel Hill, NC 27599-3175, Updated: Monday July 24, 2006.
    • E. Rosten and T. Drummond, Machine learning for high-speed corner detection, European Conference on Computer Vision, May 2006.
     
    • Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Second Edition, 2006.
    • Erik Cuevas, Daniel Zaldivar and Raul Rojas, Kalman filter for vision tracking, Freie Universität Berlin, Institut für Informatik, Takustr. 9, D-14195 Berlin, Germany; Universidad de Guadalajara, Av. Revolucion No. 1500, C.P. 44430, Guadalajara, Jal., Mexico, August 10, 2005.
     
    • Jason J. Corso, Techniques for Vision-Based Human-Computer Interaction, A dissertation submitted to The Johns Hopkins University in conformity with the requirements for the degree of Doctor of Philosophy, Baltimore, Maryland, August 2005.
    41
  • References
    • Marcelo Bernardes Vieira, Luiz Velho, Asla Sá, Paulo Cezar Carvalho, A Camera Projector System for Real-Time 3D Video, Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Instituto de Matemática Pura e Aplicada, Est. Dona Castorina 110, Rio de Janeiro, Brazil, 2005.
    • Ngon T. Truong, Jae-Gyun Gwag, Yong-Jin Park, and Suk-Ha Lee, Genetic Diversity of Soybean Pod Shape Based on Elliptical Fourier Descriptors, Dep. of Plant Science, Seoul National University, Seoul 151-742, Korea; Dep. of Crop Sciences, Can Tho University, Can Tho, Viet Nam; Genetic Resources Div., National Institute of Agricultural Biotechnology, Suwon 441-707, Korea, 2005.
    • E. Rosten and T. Drummond, Fusing Points and Lines for High Performance Tracking, ICCV, 2005.
     
    • Attila Licsár, Tamás Szirányi, Dynamic Training of Hand Gesture Recognition System, Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), University of Veszprém, Department of Image Processing and Neurocomputing, H-8200 Veszprém, Egyetem u. 10, Hungary; Analogical & Neural Computing Laboratory, Computer & Automation Research Institute, Hungarian Academy of Sciences, H-1111 Budapest, Kende u. 13-17, Hungary, 2004.
     
    42
  • References
    • Attila Licsár, Tamás Szirányi, Lecture Notes in Computer Science, University of Veszprém, Department of Image Processing and Neurocomputing, H-8200 Veszprém, Egyetem u. 10, Hungary; Analogical & Neural Computing Laboratory, Computer & Automation Research Institute, Hungarian Academy of Sciences, H-1111 Budapest, Kende u. 13-17, Hungary, 2004.
    • Stephen Wolf, Color Correction Matrix for Digital Still and Video Imaging Systems, U.S. Department of Commerce, December 2003.
    • Qing Chen, Evaluation of OCR Algorithms for Images with Different Spatial Resolutions and Noises, School of Information Technology and Engineering, Faculty of Engineering, University of Ottawa, Ottawa, Canada, 2003.
    • Vladimir Vezhnevets, Vassili Sazonov, Alla Andreeva, A Survey on Pixel-Based Skin Color Detection Techniques, Graphics and Media Laboratory, Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow, Russia, 2003.
     
    • Yasushi Hamada, Nobutaka Shimada, Yoshiaki Shirai, Hand Shape Estimation Using Sequence of Multi-Ocular Images Based on Transition Network, Department of Computer-Controlled Mechanical System, Osaka University, Japan, 2002.
    • Dengsheng Zhang and Guojun Lu, A Comparative Study on Shape Retrieval Using Fourier Descriptors with Different Shape Signatures, Gippsland School of Computing and Information Technology, Monash University, Churchill, Victoria 3842, Australia, 2001.
    43
  • References
    • Douglas Chai and Abdesselam Bouzerdoum, A Bayesian approach to skin color classification in YCbCr color space, School of Engineering and Mathematics, Edith Cowan University, Australia, 2000.
    • Kenny Teng, Jeremy Ng, Shirlene Lim, Computer Vision Based Sign Language Recognition for Numbers.
    • Nguyen Dang Binh, Enokida Shuichi, Toshiaki Ejima, Real-Time Hand Tracking and Gesture Recognition System, GVIP 05 Conference, 19-21 December 2005, CICC, Cairo, Egypt; Intelligence Media Laboratory, Kyushu Institute of Technology, 680-4 Kawazu, Iizuka, Fukuoka 820, Japan.
     
    • G. Amayeh, G. Bebis, A. Erol, and M. Nicolescu, A New Approach to Hand-Based Authentication, Computer Vision Laboratory, University of Nevada, Reno.
    • A. M. Hamad, Fawzia Shaaban, Mona Gabr, Noha Sayed, Rabab Hussien, Robot Vision, Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt, 2008.
    44
  • Thank You !
    45