This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system. The aim of this paper is to develop a robust and efficient hand segmentation algorithm; three algorithms for hand segmentation using different color spaces, with the required morphological processing, were evaluated. The hand tracking and segmentation (HTS) algorithm is found to be the most efficient at handling the challenges of a vision-based system, such as skin color detection, complex background removal, and variable lighting conditions. Noise may sometimes remain in the segmented image due to a dynamic background, so an edge traversal algorithm was developed and applied to the segmented hand contour to remove unwanted background noise.
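The morphological processing mentioned above can be sketched as follows. This is an illustrative pure-NumPy implementation of binary erosion, dilation, and an opening-then-closing pass, not the authors' HTS code; the 3x3 square structuring element is an assumption:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element:
    a pixel survives only if all its k x k neighbours are set."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation, the morphological dual of erosion:
    a pixel is set if any of its k x k neighbours is set."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def open_close(mask):
    """Opening (erode then dilate) removes speckle noise;
    closing (dilate then erode) fills small holes."""
    opened = dilate(erode(mask))
    return erode(dilate(opened))
```

Opening removes isolated noise pixels left by imperfect skin-color thresholding, while closing fills small holes inside the segmented hand region.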
Natural Hand Gestures Recognition System for Intelligent HCI: A Survey (Editor IJCATR)
Gesture recognition is the recognition of meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. Hand gestures are of particular importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. This paper provides a survey of recent gesture recognition approaches, with particular emphasis on hand gestures. A review of static hand posture methods is given, along with the tools and algorithms applied in gesture recognition systems, including connectionist models, hidden Markov models, and fuzzy clustering. Challenges and future research directions are also highlighted.
Considerable effort is being devoted to developing an intelligent and natural interface between computer systems and users, and current technologies make this possible through a variety of media such as visualization, audio, and drawing. Gesture has become an important part of human communication for conveying information. In this paper we propose a method for hand gesture recognition that includes hand segmentation, hand tracking, and an edge traversal algorithm. The designed system requires only simple hardware: a computer and a webcam. The system consists of four modules: hand tracking and segmentation, feature extraction, neural training, and testing. The objective of this system is to explore the utility of a neural-network-based approach to hand gesture recognition and to create a system that can easily identify gestures and use them for device control and conveying information in place of conventional input devices such as the mouse and keyboard.
Hand Gesture Based Interactive Photo Slider (Sampada Muley)
Information conveyed in seminars, project presentations, or even in classrooms can be more effective when a slideshow is used. There are various means to control slides, all of which require devices such as a mouse, keyboard, or laser pointer. The disadvantage is that one must have prior knowledge of these devices in order to operate them. This paper proposes two methods to control slides during a presentation using bare hands and compares their efficiencies. The proposed methods employ hand gestures given by the user as input. The gestures are identified by counting the number of active fingers, and then the slides are controlled. Unlike conventional methods for hand gesture recognition, which make use of gloves, markers, or other devices, this method requires no additional devices and keeps the user comfortable. The proposed method for gesture recognition does not require any database to identify a particular gesture.
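In its simplest form, counting active fingers, as the two proposed methods do, can be reduced to counting contiguous runs of foreground pixels along a scanline taken across the upper part of the hand mask. The sketch below is an illustrative simplification, not the paper's algorithm:

```python
def count_fingers(row):
    """Count contiguous runs of foreground pixels in a scanline
    across the upper part of a binary hand mask; each run is
    treated as one raised finger (illustrative simplification)."""
    runs, inside = 0, False
    for px in row:
        if px and not inside:
            runs += 1
            inside = True
        elif not px:
            inside = False
    return runs
```

The finger count can then be mapped to a slide command, e.g. one finger for "next slide" and two for "previous slide".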
Gesture recognition is the process of understanding and interpreting meaningful movements of the hands, arms, face, or sometimes the head. It is essential to designing an efficient human-computer interface. The technology has been studied intensively in recent years because of its potential for application in user interfaces.
Gesture recognition is a rapidly growing technology. This PPT describes how gesture recognition technology works, its subfields, its applications, and the challenges it faces. As a touchless technology, it is expected to gain traction in the coming years.
Hand gesture recognition system for human computer interaction using contour ... (eSAT Journals)
Abstract
Hand gesture recognition is a very challenging topic for real-life applications because of its requirements on robustness, accuracy, and efficiency. This paper describes a system that enables a user to perform computer operations using hand gestures with a simple web camera as the input device. The system involves four phases, namely image acquisition, image pre-processing, feature extraction, and gesture recognition. In the first phase, the input image is acquired with a camera. In the second phase, the skin color of the hand region is detected using the HSV color space, and morphological operations such as erosion and dilation are performed to remove noise, followed by smoothing and thresholding of the hand image. In the feature extraction phase, contours of the hand image are detected. Finally, the gesture recognition phase recognizes hand gestures using contour analysis: the Auto-Correlation Function (ACF) of the contours is compared, and if they are close, the Inter-Correlation Function (ICF) is calculated to truly determine similarity. Each recognized gesture is assigned a corresponding action.
Keywords: Hand gesture recognition, human computer interaction, contour analysis, HSV color space.
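The correlation-based contour comparison can be sketched with a centroid-distance contour signature and circular cross-correlation. The functions below are an assumed, simplified stand-in for the paper's ACF/ICF pipeline, not its actual implementation:

```python
import numpy as np

def contour_signature(points):
    """Centroid-distance signature of a closed contour,
    zero-mean and unit-energy so comparison is scale-free."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    d -= d.mean()
    return d / (np.linalg.norm(d) + 1e-12)

def max_correlation(sig_a, sig_b):
    """Peak circular cross-correlation over all starting points,
    giving a starting-point-invariant similarity in [-1, 1]."""
    spec = np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))
    return float(np.fft.ifft(spec).real.max())
```

Two contours of the same shape score close to 1 regardless of where the contour tracing started, which is the property a correlation-based matcher relies on.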
Gestures are expressive, meaningful body motions, i.e., physical movements of the fingers, hands, arms, head, face, or body with the intent to convey information or interact with the environment.
Finger tracking can be more interesting when it is implemented in reality. This PowerPoint describes what finger tracking is, its types, and algorithms for finger tracking without an interface.
Hand Gesture Recognition Based on Shape Parameters (Nithinkumar P)
I am sharing a link to the code and project report; I hope it helps you in your academic work. Contact me if you need any help.
https://drive.google.com/drive/folders/1H0p852jfoyQuFig_IoMyVVK-U5o18Mxh?usp=sharing
A real-time system for hand gesture recognition based on the detection of meaningful shape-based features such as orientation, center of mass (centroid), the status of the fingers and thumb (raised or folded), and their respective locations in the image.
The algorithm is implemented in MATLAB v7.10.
These hand gestures are used for:
1. Sign Language Recognition
2. Human Machine Interaction.
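Two of the shape parameters named above, the centroid and the orientation, can be computed from image moments. This is a generic NumPy sketch, not the MATLAB implementation used in the work:

```python
import numpy as np

def shape_parameters(mask):
    """Centroid and orientation of a binary hand mask via image
    moments: orientation is the angle of the principal axis,
    theta = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

For a horizontal bar of pixels the orientation comes out as zero, and for a vertical bar as pi/2, matching the principal-axis interpretation.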
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam (ijsrd.com)
This paper presents a comparative study of existing hand gesture recognition systems and gives a new approach to gesture recognition that is an easy, cheaper alternative to input devices such as the mouse, using static and dynamic hand gestures for interactive computer applications. Despite the increasing attention given to such systems, there are still certain limitations in the literature. Most applications impose constraints such as specific lighting conditions, use of a particular camera, making the user wear a multi-colored glove, or requiring lots of training data. The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). This interface is simple enough to run using an ordinary webcam and requires little training.
Human machine interaction using Hand gesture recognition (Manoj Harsule)
In its first stage, the recognition stage, the designed system captures the image. It then processes the image and compares it with the database images. Each database image is mapped to a command in the interface. If a particular image is identified, a command is sent to the microcontroller. In the second stage, the microcontroller identifies the command and sends a signal to the reference port for operation.
In human-computer interaction, gesture provides a way for computers to understand human body language. Gesture recognition deals with interpreting hand gestures via mathematical algorithms, and enables humans to interface with the machine (HMI) and interact naturally without any mechanical devices.
This is a presentation I did in spring 2009 for my Psycholinguistics class addressing the topic of the universality of gestures. I presented an experiment that attempted to discover whether or not gestures can be used to identify the speaker's native language.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board (Waqas Tariq)
The main aim of this paper is to build a system capable of detecting and recognizing the hand gesture in an image captured by a camera. The system is built on Altera’s FPGA DE2 board, which contains a Nios II soft-core processor. Image processing techniques and a simple but effective algorithm are implemented for this purpose: the image processing techniques smooth the image to ease the subsequent steps in translating the hand sign signal, the algorithm translates numerical hand sign signals, and the result is displayed on the seven-segment display. Altera’s Quartus II, SOPC Builder, and Nios II EDS software are used to construct the system. With SOPC Builder, the relevant components on the DE2 board can be interconnected easily and in an orderly fashion, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, the C programming language is used to code the hand sign translation algorithm. Being able to recognize hand sign signals from images can help humans control a robot and other applications that require only a simple set of instructions, provided a CMOS sensor is included in the system.
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti... (CSCJournals)
Sign language is the fundamental communication method among people who suffer from speech and hearing defects, yet the rest of the world does not have a clear understanding of it. The “Sign Language Communicator” (SLC) is designed to overcome the language barrier between sign language users and everyone else. The main objective of this research is to provide a low-cost, affordable method of sign language interpretation. The system will also be very useful to sign language learners, who can use it for practice. During the research, available human-computer interaction techniques for posture recognition were tested and evaluated, and a series of image processing techniques with Hu-moment classification was identified as the best approach. To improve the accuracy of the system, a new approach, height-to-width-ratio filtration, was implemented along with Hu moments. The system is able to recognize selected sign language signs with an accuracy of 84% without a controlled background, with small light adjustments.
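The height-to-width-ratio filtration added on top of the Hu-moment match can be sketched as follows; the masks, target ratio, and tolerance are illustrative assumptions, not values from the paper:

```python
import numpy as np

def aspect_ratio(mask):
    """Height-to-width ratio of the bounding box of the detected
    posture; a cheap filter applied before the Hu-moment match."""
    ys, xs = np.nonzero(mask)
    return (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)

def ratio_filter(candidates, target, tol=0.2):
    """Keep only candidate masks whose aspect ratio lies within
    tol of the target posture's ratio."""
    return [m for m in candidates if abs(aspect_ratio(m) - target) <= tol]
```

Filtering out candidates whose proportions are clearly wrong reduces the number of expensive moment comparisons and cuts false matches between postures with similar moments but different silhouettes.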
At the present time, hand gesture recognition could serve as a natural and usable approach to human-computer interaction. An automatic hand gesture recognition system provides a new way of interacting with a virtual environment. In this paper, a face and hand gesture recognition system able to control a computer media player is presented. The hand gesture and the human face are the key elements for interacting with the smart system. We use face recognition for viewer verification and hand gesture recognition for controlling the media player, for instance volume up/down and next track. In the proposed technique, the hand gesture and face locations are first extracted from the input image by a combination of a skin detector and a cascade detector, then sent to the recognition stage. In the recognition stage, a threshold condition is first inspected, after which the extracted face and gesture are recognized. In the results stage, the proposed technique is applied to a video dataset and achieves a high precision ratio. Additionally, the recommended hand gesture recognition method is applied to a static American Sign Language (ASL) database and achieves an accuracy of nearly 99.40%. The proposed method could also be used in gesture-based computer games and virtual reality.
Hand Gesture Recognition using OpenCV and Python (ijtsrd)
Hand gesture recognition systems have developed rapidly in recent years, the reason being their ability to communicate with machines effectively. Gestures are considered the most natural way of communication between humans and PCs in a virtual framework; we often use hand gestures to convey meaning, as a form of non-verbal communication. In our system, we use background subtraction to extract the hand region. In this application, the PC's camera records a live video, from which a frame is taken with the assistance of its functionalities. Surya Narayan Sharma | Dr. A Rengarajan, "Hand Gesture Recognition using OpenCV and Python", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-2, February 2021. URL: https://www.ijtsrd.com/papers/ijtsrd38413.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/38413/hand-gesture-recognition-using-opencv-and-python/surya-narayan-sharma
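Background subtraction of the kind used here to extract the hand region can be sketched in a few lines; the fixed difference threshold below is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def hand_region(frame, background, threshold=25):
    """Absolute-difference background subtraction: pixels that
    differ from the static background by more than the threshold
    are treated as the moving hand (foreground)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > threshold).astype(np.uint8)
```

In practice the background frame is captured once while the scene is empty, and the resulting mask is usually cleaned with morphological operations before contour extraction.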
Hand Gesture Recognition Using Statistical and Artificial Geometric Methods: ... (caijjournal)
Gesture recognition enables the silent language that we can use with robots, and that they can use with us; this universal language ensures that everyone can understand the meaning of a gesture and can reply and interact accordingly. For this reason, this silent language has been adopted by deaf people, for whom it makes communication easier both among themselves and with other people.
In this paper we present two different gesture recognition systems. Both techniques achieve a high recognition percentage and are invariant to perturbations, especially rotation, which otherwise hinders high recognition rates. The first method recognizes hand gestures with the help of a dynamic circle template; the second uses a variable-length-chromosome genetic algorithm. The two methods were applied to different people, and the main objective was to reduce the database size used for training.
Vision Based Approach to Sign Language Recognition (IJAAS Team)
We propose an algorithm for automatically recognizing a certain set of gestures from hand movements to help deaf, mute, and hard-of-hearing people. Hand gesture recognition is quite a challenging problem in its own right. We consider a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detecting, tracking, and recognizing. The algorithm is invariant to rotations, translations, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
Hand gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. In this work, hand segmentation using color models is introduced for obtaining hand gestures, detecting the user's hand by a color segmentation technique for faster, more robust, accurate, real-time applications. Many such color models are available for human hand and skin detection, each with relative advantages and disadvantages, in the field of image processing. For the purpose of hand segmentation and detection of the hand in an image, a mixed-model approach has been adopted for best results. The proposed approach is found to be accurate and effective under multiple conditions.
Hand gesture classification is popularly used in
wide applications like Human-Machine Interface, Virtual
Reality, Sign Language Recognition, Animations etc. The
classification accuracy of static gestures depends on the
technique used to extract the features as well as the classifier
used in the system. To achieve the invariance to illumination
against complex background, experimentation has been
carried out to generate a feature vector based on skin color
detection by fusing the Fourier descriptors of the image with
its geometrical features. Such feature vectors are then used in
Neural Network environment implementing Back
Propagation algorithm to classify the hand gestures. The set
of images for the hand gestures used in the proposed research
work are collected from the standard databases viz.
Sebastien Marcel Database, Cambridge Hand Gesture Data
set and NUS Hand Posture dataset. An average classification
accuracy of 95.25% has been observed which is on par with
that reported in the literature by the earlier researchers.
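A feature vector fusing Fourier descriptors with geometric features, as described above, might be assembled like this; the descriptor count and the normalization by the first harmonic are assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """Magnitudes of the first n Fourier coefficients of a contour
    (complex form), normalised by the first harmonic so the
    descriptors are scale-invariant."""
    z = np.asarray([complex(x, y) for x, y in contour])
    coeffs = np.fft.fft(z - z.mean())   # mean removal gives translation invariance
    mags = np.abs(coeffs[1:n + 1])
    return mags / (mags[0] + 1e-12)

def feature_vector(contour, geometric):
    """Fuse Fourier descriptors with geometric features into the
    single vector fed to the back-propagation classifier."""
    return np.concatenate([fourier_descriptors(contour), np.asarray(geometric)])
```

Because the magnitudes ignore phase and are normalized by the first harmonic, two contours differing only in position or scale produce the same descriptors, which is the invariance the classifier relies on.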
A Framework For Dynamic Hand Gesture Recognition Using Key Frames Extraction (Neeraj Baghel)
Abstract—Hand Gesture Recognition is one of the natural
ways of human computer interaction (HCI) which has wide
range of technological as well as social applications. A dynamic
hand gesture can be characterized by its shape, position and
movement. This paper presents a user independent framework
for dynamic hand gesture recognition in which a novel algorithm
for extraction of key frames is proposed. This algorithm is based
on the change in hand shape and position, to find out the most
important and distinguishing frames from the video of the hand
gesture, using certain parameters and dynamic threshold. For
classification, Multiclass Support Vector Machine (MSVM) is
used. Experiments using the videos of hand gestures of Indian
Sign Language show the effectiveness of the proposed system for
various dynamic hand gestures. The use of key frame extraction
algorithm speeds up the system by selecting essential frames and
therefore eliminating extra computation on redundant frames.
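The key-frame selection based on inter-frame change against a dynamic threshold could be sketched as follows; the mean-plus-k-sigma threshold is an assumed form, not necessarily the paper's exact parameters:

```python
import numpy as np

def key_frames(features, k=1.0):
    """Select key frames where the inter-frame feature change
    (e.g. hand shape and position descriptors) exceeds a dynamic
    threshold of mean + k * std over all observed changes."""
    f = np.asarray(features, dtype=float)
    d = np.linalg.norm(np.diff(f, axis=0), axis=1)
    thresh = d.mean() + k * d.std()
    # frame 0 is always kept as the starting key frame
    return [0] + [i + 1 for i, v in enumerate(d) if v > thresh]
```

Redundant frames, whose features barely change, fall below the threshold and are skipped, which is where the speed-up over classifying every frame comes from.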
Hand gesture recognition has received great consideration in the last few years because of its manifold applications and its facility for interacting with machines efficiently in human-computer interaction. This paper presents a survey on hand gesture recognition. Hand gestures give a separate, complementary modality to speech for expressing one's ideas; the hand is the body part humans favor for non-verbal communication because of its freer expression. Hand gesture detection is of great significance in designing a competent human-computer interaction method. This paper focuses on different hand gesture approaches, technologies, and applications.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Using Watershed Transform for Vision-based Two-Hand Occlusion in an Interacti... (ITIIIndustries)
To achieve natural interaction in an augmented reality environment, we have suggested using markerless vision-based two-handed gestures, with an outstretched hand and a pointing hand used as the virtual object registration plane and the pointing device, respectively. However, two-handed interaction always causes mutual occlusion, which jeopardizes hand gesture recognition. In this paper, we present a solution for two-hand occlusion using the watershed transform. The main idea is to start from a two-hand occlusion image in binary format, then form a grey-scale image based on the distance of each non-object pixel to the nearest object pixel. The watershed algorithm is applied to the negation of the grey-scale image to form watershed lines that separate the two hands. Fingertips are then identified, and each hand is recognized based on the number of fingertips on it: the outstretched hand is assumed to contain 5 fingertips, and the pointing hand fewer than 5. An example of applying our result to hand and virtual object interaction is shown at the end of the paper.
HUMAN COMPUTER INTERACTION ALGORITHM BASED ON SCENE SITUATION AWARENESS (csandit)
Implicit interaction based on context information is widely used and studied in virtual scenes. In context-based human-computer interaction, the meaning of an action A is well defined: for instance, a right wave means turning a page or PPT slide in context B and volume up in context C. However, for selecting an object in a virtual scene with multiple objects, context information alone is not sufficient. In view of this situation, this paper proposes using least-squares curve fitting to predict the user's trajectory and thus determine which object the user wants to operate on. The starting position of the fitted straight line is determined from the change in the discrete table, and the bounding-box size is used to control the Z variable so the hand moves to an appropriate location. Experimental results show that the proposed bounding-box-based control of the Z variable works well; by fitting the trajectory of the human hand, the object the subject wants to operate on can be predicted with a correct rate of 88.6%.
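The least-squares trajectory fitting used to predict the target object can be sketched with NumPy's polyfit; the linear motion model and the extrapolation horizon below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def predict_target(track, objects, horizon=5.0):
    """Fit a least-squares line to the observed hand track and
    pick the object closest to the extrapolated end point."""
    t = np.arange(len(track), dtype=float)
    xs = np.asarray([p[0] for p in track], dtype=float)
    ys = np.asarray([p[1] for p in track], dtype=float)
    fx = np.polyfit(t, xs, 1)  # x(t) ~ a*t + b, least-squares fit
    fy = np.polyfit(t, ys, 1)
    t_end = len(track) - 1 + horizon
    end = np.array([np.polyval(fx, t_end), np.polyval(fy, t_end)])
    dists = [np.linalg.norm(end - np.asarray(o)) for o in objects]
    return int(np.argmin(dists))
```

Extrapolating the fitted line a few frames ahead lets the system commit to a target object before the hand actually reaches it.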
Gesture recognition technology has evolved greatly over the years. The past has seen contemporary human-computer interface techniques and their drawbacks, which limit the speed and naturalness with which the human brain and body can interact. As a result, gesture recognition technology has developed.
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
The Use of Java Swing’s Components to Develop a Widget (Waqas Tariq)
A widget is a kind of application that provides a single service, such as a map, news feed, simple clock, or battery-life indicator. This kind of interactive software object has been developed to facilitate user interface (UI) design; a UI function may be implemented using different widgets with the same function. In this article, we present the widget as a platform that is generally used in various applications, such as the desktop, the web browser, and the mobile phone. We also describe a visual menu of Java Swing’s components that will be used to build a widget. We assume that the reader has successfully compiled and run a program that uses Swing components.
3D Human Hand Posture Reconstruction Using a Single 2D ImageWaqas Tariq
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade. However, these research efforts have been hampered by the computational complexity caused by inverse kinematics and 3D reconstruction. In this paper, our objective focuses on 3D hand posture estimation based on a single 2D image with aim of robotic applications. We introduce the human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without any significant degradation of performance. A novel algorithm to estimate the 3D hand posture from eight 2D projected feature points is proposed. Experimental results using real images confirm that our algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; Model-based approach; Gesture recognition; human- computer interface; machine vision.
Camera as Mouse and Keyboard for Handicap Person with Troubleshooting Ability...Waqas Tariq
Camera mouse has been widely used for handicap person to interact with computer. The utmost important of the use of camera mouse is must be able to replace all roles of typical mouse and keyboard. It must be able to provide all mouse click events and keyboard functions (include all shortcut keys) when it is used by handicap person. Also, the use of camera mouse must allow users troubleshooting by themselves. Moreover, it must be able to eliminate neck fatigue effect when it is used during long period. In this paper, we propose camera mouse system with timer as left click event and blinking as right click event. Also, we modify original screen keyboard layout by add two additional buttons (button “drag/ drop” is used to do drag and drop of mouse events and another button is used to call task manager (for troubleshooting)) and change behavior of CTRL, ALT, SHIFT, and CAPS LOCK keys in order to provide shortcut keys of keyboard. Also, we develop recovery method which allows users go from camera and then come back again in order to eliminate neck fatigue effect. The experiments which involve several users have been done in our laboratory. The results show that the use of our camera mouse able to allow users do typing, left and right click events, drag and drop events, and troubleshooting without hand. By implement this system, handicap person can use computer more comfortable and reduce the dryness of eyes.
A Proposed Web Accessibility Framework for the Arab DisabledWaqas Tariq
The Web is providing unprecedented access to information and interaction for people with disabilities. This paper presents a Web accessibility framework which offers the ease of the Web accessing for the disabled Arab users and facilitates their lifelong learning as well. The proposed framework system provides the disabled Arab user with an easy means of access using their mother language so they don’t have to overcome the barrier of learning the target-spoken language. This framework is based on analyzing the web page meta-language, extracting its content and reformulating it in a suitable format for the disabled users. The basic objective of this framework is supporting the equal rights of the Arab disabled people for their access to the education and training with non disabled people. Key Words : Arabic Moon code, Arabic Sign Language, Deaf, Deaf-blind, E-learning Interactivity, Moon code, Web accessibility , Web framework , Web System, WWW.
Real Time Blinking Detection Based on Gabor FilterWaqas Tariq
New method of blinking detection is proposed. The utmost important of blinking detections method is robust against different users, noise, and also change of eye shape. In this paper, we propose blinking detections method by measuring of distance between two arcs of eye (upper part and lower part). We detect eye arcs by apply Gabor filter onto eye image. As we know that Gabor filter has advantage on image processing application since it able to extract spatial localized spectral features, such line, arch, and other shape are more easily detected. After two of eye arcs are detected, we measure the distance between both by using connected labeling method. The open eye is marked by the distance between two arcs is more than threshold and otherwise, the closed eye is marked by the distance less than threshold. The experiment result shows that our proposed method robust enough against different users, noise, and eye shape changes with perfectly accuracy.
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ...Waqas Tariq
A method for computer input with human eyes-only using two Purkinje images which works in a real time basis without calibration is proposed. Experimental results shows that cornea curvature can be estimated by using two light sources derived Purkinje images so that no calibration for reducing person-to-person difference of cornea curvature. It is found that the proposed system allows usersf movements of 30 degrees in roll direction and 15 degrees in pitch direction utilizing detected face attitude which is derived from the face plane consisting three feature points on the face, two eyes and nose or mouth. Also it is found that the proposed system does work in a real time basis.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex...Waqas Tariq
Mobile multimedia service is relatively new but has quickly dominated people¡¯s lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of study is to examine the role of Perceived Enjoyment (PE) and what determinants can contribute to PE in the context of using mobile multimedia service. The result indicates that PE is influencing on Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) and directly Behavior Intention (BI). Aesthetics and flow are key determinants to explain Perceived Enjoyment (PE) in mobile multimedia usage.
Collaborative Learning of Organisational KnolwedgeWaqas Tariq
This paper presents recent research into methods used in Australian Indigenous Knowledge sharing and looks at how these can support the creation of suitable collaborative envi- ronments for timely organisational learning. The protocols and practices as used today and in the past by Indigenous communities are presented and discussed in relation to their relevance to a personalised system of knowledge sharing in modern organisational cultures. This research focuses on user models, knowledge acquisition and integration of data for constructivist learning in a networked repository of or- ganisational knowledge. The data collected in the repository is searched to provide collections of up-to-date and relevant material for training in a work environment. The aim is to improve knowledge collection and sharing in a team envi- ronment. This knowledge can then be collated into a story or workflow that represents the present knowledge in the organisation.
Our research aims to propose a global approach for specification, design and verification of context awareness Human Computer Interface (HCI). This is a Model Based Design approach (MBD). This methodology describes the ubiquitous environment by ontologies. OWL is the standard used for this purpose. The specification and modeling of Human-Computer Interaction are based on Petri nets (PN). This raises the question of representation of Petri nets with XML. We use for this purpose, the standard of modeling PNML. In this paper, we propose an extension of this standard for specification, generation and verification of HCI. This extension is a methodological approach for the construction of PNML with Petri nets. The design principle uses the concept of composition of elementary structures of Petri nets as PNML Modular. The objective is to obtain a valid interface through verification of properties of elementary Petri nets represented with PNML.
An overview on Advanced Research Works on Brain-Computer InterfaceWaqas Tariq
A brain–computer interface (BCI) is a proficient result in the research field of human- computer synergy, where direct articulation between brain and an external device occurs resulting in augmenting, assisting and repairing human cognitive. Advanced works like generating brain-computer interface switch technologies for intermittent (or asynchronous) control in natural environments or developing brain-computer interface by Fuzzy logic Systems or by implementing wavelet theory to drive its efficacies are still going on and some useful results has also been found out. The requirements to develop this brain machine interface is also growing day by day i.e. like neuropsychological rehabilitation, emotion control, etc. An overview on the control theory and some advanced works on the field of brain machine interface are shown in this paper.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays...Waqas Tariq
There is growing ageing phenomena with the rise of ageing population throughout the world. According to the World Health Organization (2002), the growing ageing population indicates 694 million, or 223% is expected for people aged 60 and over, since 1970 and 2025.The growth is especially significant in some advanced countries such as North America, Japan, Italy, Germany, United Kingdom and so forth. This growing older adult population has significantly impact the social-culture, lifestyle, healthcare system, economy, infrastructure and government policy of a nation. However, there are limited research studies on the perception and usage of a mobile phone and its service for senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with 5 senior citizens of how they perceive their mobile phones. This paper reveals 4 interesting themes from this preliminary study, in addition to the findings of the desirable mobile requirements for local senior citizens with respect of health, safety and communication purposes. The findings of this study bring interesting insight to local telecommunication industries as a whole, and will also serve as groundwork for more in-depth study in the future.
Principles of Good Screen Design in WebsitesWaqas Tariq
Visual techniques for proper arrangement of the elements on the user screen have helped the designers to make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of the screen elements based on particular criteria for best appearance of the screen. This paper investigates few significant visual techniques in various web user interfaces and showcases the results for better understanding and their presence.
Virtual teams are used more and more by companies and other organizations to receive benefits. They are a great way to enable teamwork in situations where people are not sitting in the same physical place at the same time. As companies seek to increase the use of virtual teams, a need exists to explore the context of these teams, the virtuality of a team and software that may help these teams working virtualy. Virtual teams have the same basic principles as traditional teams, but there is one big difference. This difference is the way the team members communicate. Instead of using the dynamics of in-office face-to-face exchange, they now rely on special communication channels enabled by modern technologies, such as e-mails, faxes, phone calls and teleconferences, virtual meetings etc. This is why this paper is focused on the issues regarding virtual teams, and how these teams are created and progressing in Albania.
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu...Waqas Tariq
It is a well established fact that the Web-Applications require frequent maintenance because of cutting– edge business competitions. The authors have worked on quality evaluation of web-site of Indian ecommerce domain. As a result of that work they have made a quality-wise ranking of these sites. According to their work and also the survey done by various other groups Futurebazaar web-site is considered to be one of the best Indian e-shopping sites. In this research paper the authors are assessing the maintenance of the same site by incorporating the problems incurred during this evaluation. This exercise gives a real world maintainability problem of web-sites. This work will give a clear picture of all the quality metrics which are directly or indirectly related with the maintainability of the web-site.
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...Waqas Tariq
A paradox has been observed whereby web site usability is proven to be an essential element in a web site, yet at the same time there exist an abundance of web pages with poor usability. This discrepancy is the result of limitations that are currently preventing web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, therefore enabling a non-expert in the field of usability to conduct the evaluation. This results in reducing the costs associated with such evaluation. Additionally, the framework allows the flexibility of adding, modifying or deleting guidelines without altering the code that references them since the guidelines and the code are two separate components. A comparison of the evaluation results carried out using the framework against published evaluations of web sites carried out by web site usability professionals reveals that the framework is able to automatically identify the majority of usability violations. Due to the consistency with which it evaluates, it identified additional guideline-related violations that were not identified by the human evaluators.
Robot Arm Utilized Having Meal Support System Based on Computer Input by Huma...Waqas Tariq
A robot arm utilized having meal support system based on computer input by human eyes only is proposed. The proposed system is developed for handicap/disabled persons as well as elderly persons and tested with able persons with several shapes and size of eyes under a variety of illumination conditions. The test results with normal persons show the proposed system does work well for selection of the desired foods and for retrieve the foods as appropriate as usersf requirements. It is found that the proposed system is 21% much faster than the manually controlled robotics.
Dynamic Construction of Telugu Speech Corpus for Voice Enabled Text EditorWaqas Tariq
In recent decades speech interactive systems have gained increasing importance. Performance of an ASR system mainly depends on the availability of large corpus of speech. The conventional method of building a large vocabulary speech recognizer for any language uses a top-down approach to speech. This approach requires large speech corpus with sentence or phoneme level transcription of the speech utterances. The transcriptions must also include different speech order so that the recognizer can build models for all the sounds present. But, for Telugu language, because of its complex nature, a very large, well annotated speech database is very difficult to build. It is very difficult, if not impossible, to cover all the words of any Indian language, where each word may have thousands and millions of word forms. A significant part of grammar that is handled by syntax in English (and other similar languages) is handled within morphology in Telugu. Phrases including several words (that is, tokens) in English would be mapped on to a single word in Telugu.Telugu language is phonetic in nature in addition to rich in morphology. That is why the speech technology developed for English cannot be applied to Telugu language. This paper highlights the work carried out in an attempt to build a voice enabled text editor with capability of automatic term suggestion. Main claim of the paper is the recognition enhancement process developed by us for suitability of highly inflecting, rich morphological languages. This method results in increased speech recognition accuracy with very much reduction in corpus size. It also adapts Telugu words to the database dynamically, resulting in growth of the corpus.
An Improved Approach for Word Ambiguity RemovalWaqas Tariq
Word ambiguity removal is a task of removing ambiguity from a word, i.e. correct sense of word is identified from ambiguous sentences. This paper describes a model that uses Part of Speech tagger and three categories for word sense disambiguation (WSD). Human Computer Interaction is very needful to improve interactions between users and computers. For this, the Supervised and Unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding best suitable domain of word. Keywords: Human Computer Interaction, Supervised Training, Unsupervised Learning, Word Ambiguity, Word sense disambiguation
Parameters Optimization for Improving ASR Performance in Adverse Real World N...Waqas Tariq
From the existing research it has been observed that many techniques and methodologies are available for performing every step of Automatic Speech Recognition (ASR) system, but the performance (Minimization of Word Error Recognition-WER and Maximization of Word Accuracy Rate- WAR) of the methodology is not dependent on the only technique applied in that method. The research work indicates that, performance mainly depends on the category of the noise, the level of the noise and the variable size of the window, frame, frame overlap etc is considered in the existing methods. The main aim of the work presented in this paper is to use variable size of parameters like window size, frame size and frame overlap percentage to observe the performance of algorithms for various categories of noise with different levels and also train the system for all size of parameters and category of real world noisy environment to improve the performance of the speech recognition system. This paper presents the results of Signal-to-Noise Ratio (SNR) and Accuracy test by applying variable size of parameters. It is observed that, it is really very hard to evaluate test results and decide parameter size for ASR performance improvement for its resultant optimization. Hence, this study further suggests the feasible and optimum parameter size using Fuzzy Inference System (FIS) for enhancing resultant accuracy in adverse real world noisy environmental conditions. This work will be helpful to give discriminative training of ubiquitous ASR system for better Human Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
Interface on Usability Testing Indonesia Official Tourism WebsiteWaqas Tariq
Ministry of Tourism and Creative Economy of The Republic of Indonesia must meet the wide audience various needs and should reach people from all levels of society around the world to provide Indonesia tourism and travel information. This article will gives the details in the evolution of one important component of Indonesia Official Tourism Website as it has grown in functionality and usefulness over several years of use by a live, unrestricted community. We chose this website to see the website interface design and usability and to popularize Indonesia tourism and travel highlights. The analysis done by looking at the criteria specified for usability testing. Usability testing measures are the ease of use (effectiveness, efficiency, consistency and interface design), easy to learn, errors and syntax which is related to the human computer interaction. The purpose of this article is to test the usability level of the website, analyze the website interface design, and provide suggestions for improvements in Indonesia Official Tourism Website of analysis we have done before.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
We all have good and bad thoughts from time to time and situation to situation. We are bombarded daily with spiraling thoughts(both negative and positive) creating all-consuming feel , making us difficult to manage with associated suffering. Good thoughts are like our Mob Signal (Positive thought) amidst noise(negative thought) in the atmosphere. Negative thoughts like noise outweigh positive thoughts. These thoughts often create unwanted confusion, trouble, stress and frustration in our mind as well as chaos in our physical world. Negative thoughts are also known as “distorted thinking”.
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Instructions for Submissions thorugh G- Classroom.pptx
Archana S. Ghotkar & Gajanan K. Kharate
International Journal of Human Computer Interaction (IJHCI), Volume (3) : Issue (1) : 2012 15
Hand Segmentation Techniques to Hand Gesture Recognition for
Natural Human Computer Interaction
Archana S. Ghotkar archana.ghotkar@gmail.com
Department of Computer Engineering
Pune Institute of Computer Technology
University of Pune
Pune 411 043, India
Gajanan K. Kharate gkkharate@yahoo.co.in
Department of Electronics and Telecommunication Engineering
Matoshri College of Engineering and Research Centre
University of Pune
Nasik 422 105, India
Abstract
This work is part of a vision based hand gesture recognition system for a Natural Human
Computer Interface. Hand tracking and segmentation are the primary steps for any hand gesture
recognition system. The aim of this paper is to develop a robust and efficient hand segmentation
algorithm; three algorithms for hand segmentation using different color spaces, with the
required morphological processing, were utilized. The hand tracking and segmentation (HTS)
algorithm is found to be the most efficient at handling the challenges of a vision based system,
such as skin color detection, complex background removal and variable lighting conditions. The
segmented image may sometimes contain noise due to a dynamic background. An edge traversal
algorithm was therefore developed and applied to the segmented hand contour to remove
unwanted background noise.
Keywords: Human Computer Interface, Hand tracking and Segmentation, Hand Gesture Recognition,
Edge Traversal Algorithm.
1. INTRODUCTION
Natural Human Computer Interaction (HCI) is a demand of today's world. Surveys and sign
language studies show that, among the various gesture communication modalities, hand gestures
are the easiest and most natural way of communicating. Real-time vision-based hand gesture
recognition is considered more and more feasible for Human-Computer Interaction with the
help of the latest advances in the fields of computer vision and pattern recognition [1].
Hand gesture recognition has various applications. Zhao et al. [2] described a virtual
reality system based on hand gestures. A Gaussian distribution was used to build the complexion
model, the YCbCr color space was used for segmentation, and Fourier descriptors served as the
feature vector. A BP neural network was utilized for recognition, and together these yielded an
improved recognition rate; segmentation results under highlights and shadows, however, were
found to be imperfect. Guan and Zheng [3] introduced a novel approach to pointing gesture
recognition based on binocular stereo vision, in which the user needs to wear special clothes or
markers; it was found to be suitable for both left- and right-handed users. Freeman and
Weissman [4] described a television control application based on hand gestures, in which the
user uses only one gesture, the open hand facing the camera, to control the television. Sepehri
et al. [5] proposed algorithms and applications for using the hand as an interface device in
virtual and physical spaces.
In a real-time vision-based hand gesture recognition system, hand tracking and segmentation are
the most important and challenging steps toward gesture recognition. Uncontrolled environments,
lighting conditions, skin color detection, rapid hand motion and self-occlusion are challenges
that need to be considered while capturing and tracking hand gestures [6]. Various researchers
are working on hand tracking and segmentation to make them robust enough to achieve a natural
interface with the machine. Bao et al. [7] introduced a new robust algorithm, called the Tower
method, for the hand tracking module, in which skin color was considered for hand gesture
tracking and recognition.
Alon et al. [8] introduced a unified framework for gesture recognition and spatiotemporal
gesture segmentation, applied to American Sign Language (ASL). To recognize manual gestures in
video, both spatial and temporal gesture segmentation are required. Spatial gesture
segmentation is the problem of determining where the gesturing hand is located in each video
frame. Temporal gesture segmentation is the problem of determining when a gesture starts and
ends.
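The idea of temporal segmentation can be illustrated with a toy sketch (this is not Alon et al.'s actual method): given a per-frame motion score, a gesture is taken to span every run of frames whose score stays above a threshold. The scores and the threshold value below are hypothetical.

```python
def temporal_segments(motion, threshold):
    """Return (start, end) frame-index pairs where the per-frame motion
    score stays at or above the threshold -- a toy stand-in for
    temporal gesture segmentation."""
    segments = []
    start = None
    for i, m in enumerate(motion):
        if m >= threshold and start is None:
            start = i                       # gesture begins
        elif m < threshold and start is not None:
            segments.append((start, i - 1)) # gesture ended on previous frame
            start = None
    if start is not None:                   # gesture still active at video end
        segments.append((start, len(motion) - 1))
    return segments

# Hypothetical motion-energy scores for a 7-frame clip.
print(temporal_segments([0.1, 0.8, 0.9, 0.2, 0.7, 0.7, 0.1], 0.5))
# -> [(1, 2), (4, 5)]
```

A real system would compute the motion score from frame differencing or optical flow and smooth it before thresholding.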
Various gray-level segmentation techniques, such as the use of a single threshold value,
adaptive thresholding, the P-tile method, the edge pixel method, the iterative method and the
use of fuzzy sets, are available for object segmentation. Stergiopoulou and Papamarkos [9]
proposed hand segmentation by color segmentation using a YCbCr color map. Approximation of the
hand morphology was accomplished by an SGONG (self-growing and self-organized neural gas)
network. The algorithm was able to identify the fingers, palm center, hand center and hand
slope, as well as finger features, for recognition purposes. A probability-based classification
method was used for gesture classification. Skin color detection and complex backgrounds are
major challenges in hand gesture recognition. Binary linked objects are groups of pixels that
share the same label due to their connectivity in a binary image. Burande et al. [10]
implemented a blob analysis technique for skin color detection under a complex background;
Kalman filtering, HMMs and a graph matching algorithm were used for gesture recognition. Howe
et al. [11] introduced a fusion of skin color and motion segmentation. The face and hands of
the signer were successfully detected using skin color segmentation. False detection of skin
regions also occurs in an uncontrolled background due to light variation, so motion
segmentation was used to find the difference between the moving foreground object and the
stationary background.
Hand gestures are a spontaneous and powerful communication modality for HCI. Various
traditional input devices are available for interaction with the computer, such as the keyboard,
mouse, joystick, and touch screen; yet these are not considered a natural interface. The proposed
system provides a desktop/laptop interface operated by hand gestures, in which users need not
wear any data glove; a web camera captures the hand image. The primary step toward any hand
gesture recognition (HGR) system is hand tracking and segmentation. In the present work, three
techniques for hand segmentation were explored. The objective of this work is to overcome the
challenges of vision-based systems, such as dynamic background removal, skin color detection,
and variable lighting conditions, for a natural human computer interface.
The organization of the paper is as follows. In Section 2, the system model of hand gesture
recognition for desktop/laptop handling operations is described. The color models used in the
different algorithms are discussed in Section 3. In Section 4, the techniques of hand detection and
segmentation with the anticipated dataset are introduced. In Section 5, experimental results are
presented. The results of the experimented techniques are compared and discussed in Section 6.
Conclusions and future work are discussed in Section 7.
2. HAND GESTURE RECOGNITION SYSTEM MODEL
Let S be a hand gesture recognition system that recognizes hand gestures.
S = {I, G, M, F, O}
where I is a set of input hand gestures;
G represents a set of single-handed anticipated static gestures;
M represents mouse operations such as left click and right click;
3. Archana S. Ghotkar & Gajanan K. Kharate
International Journal of Human Computer Interaction (IJHCI), Volume (3) : Issue (1) : 2012 17
F represents the feature vector set for G;
O represents the output with application interface (A);
G = {G1, G2, …, G7}
I = {I1, I2, …, I5}
G → I
M = {center position (x, y), movement of (x, y) position, left click, right click}
G → M
F = {f1, f2, …, f5}
O = {A1, A2, …, A7}
The system succeeds when
(i) Ii = Fj, where Ii ∈ I and Fj ∈ F, 1 ≤ j ≤ 5.
The system fails when
(ii) for a gesture Ii no feature vector is found:
Ii ≠ Fj for all Fj ∈ F, where Ii ∈ I and 1 ≤ j ≤ 5; or
(iii) for two different gestures the same feature vector is found:
Ii = Fj and Ik = Fj, where Ii, Ik ∈ I, Fj ∈ F, 1 ≤ j ≤ 5, and i ≠ k.
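As an illustration (not the authors' implementation), the success and failure conditions of the model can be checked mechanically. The gesture names and the equality-based matching predicate below are hypothetical stand-ins for the real feature comparison:

```python
# Illustrative sketch of the model S = {I, G, M, F, O}: map each input
# gesture to a feature vector, raising on the two failure conditions.

def classify(inputs, features):
    """Map each input gesture to the index of its matching feature vector.

    Raises ValueError on condition (ii) (no feature found) or
    condition (iii) (two gestures share the same feature vector).
    """
    assignment = {}
    for i, gesture in enumerate(inputs):
        matches = [j for j, f in enumerate(features) if f == gesture]
        if not matches:                      # condition (ii): no match
            raise ValueError(f"no feature vector for input {i}")
        assignment[i] = matches[0]
    # condition (iii): two different gestures mapped to one feature
    if len(set(assignment.values())) < len(assignment):
        raise ValueError("two gestures share one feature vector")
    return assignment

inputs = ["open_palm", "fist", "two_fingers"]
features = ["fist", "two_fingers", "open_palm", "thumb_up", "point"]
print(classify(inputs, features))   # each input finds a unique feature
```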
3. COLOR MODELS
The aim of the proposed project is to overcome the challenge of skin color detection for a natural
interface between user and machine. To detect skin color under a dynamic background, various
color models for pixel-based skin detection were studied [12][13][14]. In this paper, three color
spaces have been chosen that are commonly used in computer vision applications.
• RGB: The three primary colors red (R), green (G), and blue (B) are used. The main
advantage of this color space is its simplicity. However, it is not perceptually uniform; it
does not separate luminance from chrominance, and the R, G, and B components are
highly correlated.
• HSV (Hue, Saturation, Value): Hue expresses the dominant color (such as red, green,
purple, or yellow) of an area. Saturation measures the colorfulness of an area in
proportion to its brightness. The “intensity”, “lightness”, or “value” is related to the color
luminance. This model separates luminance from chrominance and is a more intuitive
way of describing colors; because the intensity is independent of the color information,
it is a very useful model for computer vision. It gives poor results where the brightness
is very low. Other similar color spaces are HSI and HSL (HLS).
• CIE Lab: Defined by the International Commission on Illumination (CIE), it separates a
luminance variable L from two perceptually uniform chromaticity variables (a, b).
4. HAND SEGMENTATION
Efficient hand tracking and segmentation are the keys to success for any gesture recognition
system. The challenges of vision-based methods, such as varying lighting conditions, complex
backgrounds, skin color detection, and variation in human skin complexion, require a robust
algorithm for a natural interface. Color is a very powerful descriptor for
object detection, so color information, which is invariant to rotation and geometric variation of the
hand, was used for segmentation [18]. Humans perceive color through characteristics such as
brightness, saturation, and hue rather than through percentages of the primary colors red, green,
and blue. Color models are useful for specifying a particular color in a standard way: a color
model is a coordinate system within which each specified color is represented by a single point.
Here, three techniques using different color spaces were introduced for robust hand detection
and segmentation. The hand tracking and segmentation (HTS) technique using the HSV color
space was identified for the preprocessing stage of the HGR system.
4.1 Anticipated Static Gesture Set
A static gesture is a specific posture assigned a meaning. The static gesture set specified for the
proposed system, with its specific meanings, is given below. The application interface is invoked
after recognition of the specified posture. Simplicity and user friendliness were taken into
consideration in the design of the anticipated posture set. For mouse cursor movement, the
center of the hand gesture window is passed as the mouse cursor position. Figure 1 shows the
anticipated static gesture set with the defined tasks.
FIGURE 1: (a) Open notepad (b) Open paint (c) Log off (d) Open media player (e) Picture
browsing (f) Mouse left click (g) Mouse right click
4.2 Hand Segmentation Using HSV Color Space and Sampled Storage Approach [16]
A novel image segmentation algorithm was developed and tested on a green color glove. In this
approach, color-based segmentation was attempted using the HSV color space. The H, S, and V
separation was done using the following equations.
V = max{R, G, B}
δ = V − min{R, G, B}
S = δ/V
To obtain the value of hue, the following cases apply:
(i) if R = V then H = (1/6)·((G − B)/δ)
(ii) if G = V then H = (1/6)·(2 + (B − R)/δ)
(iii) if B = V then H = (1/6)·(4 + (R − G)/δ)
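The conversion above can be transcribed directly into code. This is an illustrative sketch (assuming R, G, B normalized to [0, 1]), not the authors' implementation; the achromatic case δ = 0, which the equations leave undefined, is guarded explicitly:

```python
# Direct transcription of the RGB-to-HSV equations above.
# Inputs r, g, b are assumed normalized to [0, 1].

def rgb_to_hsv(r, g, b):
    v = max(r, g, b)                 # V = max{R, G, B}
    delta = v - min(r, g, b)         # delta = V - min{R, G, B}
    if v == 0 or delta == 0:         # black or gray: hue/saturation undefined
        return 0.0, 0.0, v
    s = delta / v                    # S = delta / V
    if r == v:                       # case (i)
        h = (1.0 / 6.0) * ((g - b) / delta)
    elif g == v:                     # case (ii)
        h = (1.0 / 6.0) * (2.0 + (b - r) / delta)
    else:                            # case (iii): b == v
        h = (1.0 / 6.0) * (4.0 + (r - g) / delta)
    return h % 1.0, s, v             # wrap hue into [0, 1)

print(rgb_to_hsv(0.0, 1.0, 0.0))    # pure green: hue 1/3
```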
The input image of green color samples was passed to the algorithm, and from the H-S histogram
the ranges H_range = [0.4 0.55 0.6 0.6] and S_range = [0.2 1.0] were determined experimentally
for segmentation. The algorithm was able to subtract the dynamic background. Skin color samples
needed to be passed to the algorithm for skin color detection. The drawback of this algorithm was
that training samples of the color had to be stored, and it was sensitive to small variations in color
brightness.
4.3 Hand Segmentation Using Lab Color Space (HSL)
The captured RGB input image was converted to the Lab color space. In CIE L*a*b* coordinates,
L* defines lightness, a* represents the red/green value, and b* denotes the blue/yellow value:
along the a* axis, the +a direction shifts toward red, while along the b* axis, +b movement shifts
toward yellow. Once the image was converted into the a* and b* planes, thresholding was done.
A convolution operation was applied on the binary images for segmentation, and morphological
processing was done to obtain a superior hand shape. This algorithm was found to work for skin
color detection, but it was sensitive to complex backgrounds. Figure 2 shows the output with
intermediate steps.
5. Archana S. Ghotkar & Gajanan K. Kharate
International Journal of Human Computer Interaction (IJHCI), Volume (3) : Issue (1) : 2012 19
4.3.1 HSL Algorithm
i) Capture the image.
ii) Read the input image.
iii) Convert the RGB image into the Lab color space.
iv) Convert the color values in I into the color structure specified in cform.
v) Compute the threshold value.
vi) Convert the intensity image into a binary image.
vii) Perform morphological operations such as erosion.
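The core of steps (iii)–(vi) can be sketched in pure Python. This is an illustrative sketch, not the paper's MATLAB code: it converts an sRGB pixel to CIE Lab (D65 white point) and thresholds the a* channel, which separates red from green and therefore responds strongly to reddish skin tones. The threshold a* > 8 is an assumed illustrative value, not the authors' tuned one:

```python
# Sketch of Lab-based skin thresholding: sRGB -> XYZ -> CIE L*a*b* (D65),
# then binarize on the a* (red/green) channel.

def srgb_to_lab(r, g, b):
    """r, g, b in [0, 1]; returns (L*, a*, b*)."""
    def lin(c):                                   # undo sRGB gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ under the D65 illuminant
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    def f(t):                                     # CIE Lab companding
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def skin_mask(pixels, a_thresh=8.0):
    """Binarize a list of RGB pixels: True where a* exceeds the threshold."""
    return [srgb_to_lab(*p)[1] > a_thresh for p in pixels]

# A reddish "skin-like" pixel passes; a green background pixel does not.
print(skin_mask([(0.85, 0.6, 0.5), (0.2, 0.7, 0.3)]))
```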
FIGURE 2: (a) Input image (b) RGB to Lab (c) Grayscale (d) Black and white (e) Image after erosion
4.4 Hand Tracking and Segmentation (HTS) Algorithm
The prime objective of this algorithm was robust skin color detection and removal of a complex
background; the limitations of the previous two techniques are overcome by the hand tracking
algorithm. In this approach, hand detection and segmentation were attempted. Hand tracking was
done using the mean shift algorithm [17], and only odd frames were considered for fast
processing. For robust performance, a sample of the user's skin color was passed in and an HSV
histogram was created. An experimentally determined threshold value was used for the
segmentation. The algorithm was found to work for dynamic color tracking under a complex
background and was able to segment the required hand image, on which morphological
operations were applied to obtain the contour. The CamShift function (a variation of the mean
shift algorithm) within the OpenCV library was used for tracking and detection. An edge traversal
algorithm was applied to obtain a fine contour of the hand shape: since a dynamic background
was considered while capturing the user's gesture, there was a possibility of detecting unwanted
edges from the background after edge detection, so the edge traversal algorithm was devised to
identify only the boundary of the user's hand. Figure 3 shows the flow chart of the HTS algorithm.
Figure 4(a) shows the skin color sample being passed to the algorithm at run time, Figure 4(b)
shows the hand color being tracked with a circle drawn for extraction, and Figure 4(c) shows the
segmented result after applying the edge traversal algorithm. The algorithm was found to work for
skin color tracking as well as for tracking a hand in any color glove.
4.4.1 HTS Algorithm
i) Capture the image frames from the camera.
ii) Process odd frames; track the hand using the CamShift function by providing skin color
samples at run time.
iii) Create the HSV histogram and pass the experimentally determined threshold value to
the CamShift function for tracking the required hand portion.
iv) Segment the required hand portion from the image.
v) Find the edges using Canny edge detection.
vi) Dilate the image.
vii) Erode the image.
viii) Apply the edge traversal algorithm to get the final contour.
ix) Stop
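The tracking step (ii) can be illustrated without OpenCV. The paper uses the library's CamShift function; the sketch below shows only the underlying mean shift idea under simplifying assumptions: it iterates the centroid of skin-probability mass inside a fixed search window until the window stops moving. The toy binary mask stands in for the back-projection of the learned HSV histogram onto a frame:

```python
# Library-free mean shift sketch: shift a square window over a 2-D 0/1
# skin mask toward its local center of mass until convergence.

def mean_shift(mask, cx, cy, half=3, iters=10):
    """Return the converged window center (cx, cy) over a (2*half+1)^2
    window on a 2-D 0/1 mask (list of lists, mask[y][x])."""
    h, w = len(mask), len(mask[0])
    for _ in range(iters):
        xs = ys = n = 0
        for y in range(max(0, cy - half), min(h, cy + half + 1)):
            for x in range(max(0, cx - half), min(w, cx + half + 1)):
                if mask[y][x]:
                    xs += x; ys += y; n += 1
        if n == 0:
            break                          # lost the target: keep last window
        nx, ny = round(xs / n), round(ys / n)
        if (nx, ny) == (cx, cy):
            break                          # converged
        cx, cy = nx, ny
    return cx, cy

# Toy frame: a 5x5 "hand" blob centered at (10, 8) in a 20x16 mask.
mask = [[1 if 8 <= x <= 12 and 6 <= y <= 10 else 0 for x in range(20)]
        for y in range(16)]
print(mean_shift(mask, cx=5, cy=4))        # window drifts onto the blob
```

CamShift extends this by also adapting the window size and orientation to the tracked region on each frame.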
FIGURE 3: Hand tracking and segmentation (HTS)
4.4.2 Edge Traversal Algorithm
i) Take the center of the window as the reference.
ii) Traverse to the left of the image until the first white pixel is found, and record this pixel.
iii) Consider the 3×3 matrix with this white pixel at the center.
iv) Find the previous black pixel, i.e., the one visited just before locating the white pixel.
v) Starting from the previous black pixel, traverse the matrix in the anticlockwise direction
until the next white pixel is obtained.
vi) Compare this white pixel with the first white pixel recorded.
vii) If the new white pixel is the same as the first white pixel, go to step (ix).
viii) Otherwise, record this white pixel and go to step (iii).
ix) Stop.
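The steps above amount to Moore-neighbour boundary tracing. The following is an illustrative sketch, not the paper's implementation; it assumes the input is a binary edge map (so the window centre lies on background), scans left from the centre to the first white pixel, then walks the 3×3 neighbourhood anticlockwise from the last black pixel seen, collecting boundary pixels until the walk returns to the start:

```python
# Edge traversal sketch: Moore-neighbour tracing on a 2-D 0/1 edge map.

# 8-neighbours of a pixel in anticlockwise order, starting from the right.
NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, -1),
              (-1, 0), (-1, 1), (0, 1), (1, 1)]

def trace_contour(img):
    """img: 2-D list of 0/1 (img[y][x]). Returns the boundary pixels of the
    contour hit by a leftward scan from the image centre ([] if none)."""
    h, w = len(img), len(img[0])
    cy, cx = h // 2, w // 2
    start = backtrack = None
    for x in range(cx, -1, -1):            # step (ii): scan left from centre
        if img[cy][x]:
            start, backtrack = (x, cy), (x + 1, cy)
            break
    if start is None:
        return []
    contour, cur = [start], start
    for _ in range(8 * h * w):             # safety bound on the walk
        bx, by = backtrack[0] - cur[0], backtrack[1] - cur[1]
        k = NEIGHBOURS.index((bx, by))
        for step in range(1, 9):           # steps (iii)-(v): anticlockwise
            dx, dy = NEIGHBOURS[(k + step) % 8]
            nx, ny = cur[0] + dx, cur[1] + dy
            if 0 <= nx < w and 0 <= ny < h and img[ny][nx]:
                pdx, pdy = NEIGHBOURS[(k + step - 1) % 8]
                backtrack = (cur[0] + pdx, cur[1] + pdy)
                cur = (nx, ny)
                break
        else:
            break                          # isolated pixel: no white neighbour
        if cur == start:                   # steps (vi)-(vii): loop closed
            break
        contour.append(cur)                # step (viii)
    return contour
```

Pixels that do not belong to the traced boundary (e.g. background edge noise) are never visited, which is exactly how the traversal removes unwanted edges.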
FIGURE 4: (a) Learning color histogram (b) Hand tracking (c) Hand tracking and final contour
4.5 Mouse Cursor Movement
The mouse cursor is moved by calculating the center point of the segmented region and tracking
its (x, y) position.
i) Capture frames from the webcam.
ii) Perform hand segmentation (using the HTS algorithm).
iii) Detect and track the center point.
iv) Pass the center point coordinates to the SetCursorPos() function.
v) Move the cursor position according to the center point.
vi) To perform mouse operations, give the appropriate gestures for right and left click.
vii) Stop.
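The SetCursorPos() call in step (iv) is the Win32 API and is Windows-specific, so the sketch below shows only the portable part of the step: mapping the segmented hand's centre point from camera-frame coordinates to screen coordinates. The frame and screen sizes are illustrative values, not the paper's configuration:

```python
# Map the hand centre (cx, cy) from camera-frame pixels to screen pixels.

def frame_to_screen(cx, cy, frame_w, frame_h, screen_w, screen_h,
                    mirror=True):
    """Scale a centre point (cx, cy) in the camera frame to screen pixels.
    mirror=True flips x so the cursor follows the hand like a mirror."""
    if mirror:
        cx = frame_w - 1 - cx
    sx = int(cx * screen_w / frame_w)
    sy = int(cy * screen_h / frame_h)
    # clamp so a centre near the frame border cannot leave the screen
    return min(max(sx, 0), screen_w - 1), min(max(sy, 0), screen_h - 1)

# Centre of a 640x480 frame lands at the centre of a 1920x1080 screen.
print(frame_to_screen(320, 240, 640, 480, 1920, 1080, mirror=False))
```

On Windows, the returned pair would then be passed to `ctypes.windll.user32.SetCursorPos(sx, sy)` to move the cursor.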
5. RESULTS
The HTS algorithm was tested on four users performing five gestures, with three to five instances
of each gesture per user. The algorithm was tested on 70 samples with a green color glove and
with skin color. It was observed that segmentation using the green color glove is faster than using
skin color, due to the varying lighting conditions. Figure 5 shows segmented results using the
green color glove and Figure 6 shows segmented results on skin color.
FIGURE 5: Segmentation results on green color glove
FIGURE 6: Segmentation results on skin color
5.1 Software/Hardware Used
The sample storage (using HSV) and HSL (using Lab) algorithms were implemented in MATLAB
R2010a, and the HTS algorithm was implemented using OpenCV. It was found that for real-time
interfacing and object tracking, OpenCV was a good choice. All algorithms were found to work
with a low-resolution web camera (2.0 megapixels and above). For a constant lighting source, an
8-megapixel night-vision camera was used to obtain the desired results.
6. DISCUSSION
Various approaches to hand tracking and segmentation are found in the survey. Different
researchers use different color spaces, such as YCbCr [9], HSI [10], and YIQ [15], and various
techniques, such as gray-level segmentation, to obtain the hand contour [18]. Basically, there are
two methods for skin detection: i) pixel-based segmentation and ii) region-based segmentation.
Pixel-based segmentation uses color-based methods; the choice of color model should maximize
the separability between skin and non-skin colors and minimize the separability among skin
tones. The color distribution can be modeled with non-parametric or parametric models:
non-parametric models use histograms and a Bayes classifier for the skin color distribution,
whereas parametric models derive a Gaussian model from a training set. In the present study,
pixel-based non-parametric methods were used. Three techniques were implemented and
compared. The HTS algorithm gave the best results for skin color detection under a complex
background and was more robust than the first two techniques. In the first technique, using the
HSV color space, color samples needed to be stored and a threshold value was then calculated
for segmentation; its limitation was that with a slight variation of the hand color, the desired
segmentation result could not be achieved. With the Lab approach we could segment hands of
various skin tones, but the results sometimes varied under a complex background. Table 1
compares the discussed techniques with respect to the challenging factors.
From the comparison of the three techniques it was observed that hand tracking with a learned
histogram gives better results than the other two for skin color detection under a complex
background. The algorithms were tested for skin as well as non-skin color (with a color glove).
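The pixel-based non-parametric model described above (histograms plus a Bayes classifier) can be sketched as follows. This is an illustrative sketch rather than the study's implementation: the tiny training lists, the 8-bin quantization, and the uniform prior are assumed values:

```python
# Non-parametric skin model: skin and non-skin color histograms feed
# Bayes' rule to give P(skin | color) for a query pixel.

def build_hist(pixels, bins=8):
    """Quantize (r, g, b) in [0, 255] into bins^3 cells; return the
    normalized counts together with the bin count used."""
    hist = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    total = len(pixels)
    return {k: v / total for k, v in hist.items()}, bins

def p_skin(pixel, skin_hist, nonskin_hist, prior_skin=0.5):
    """Bayes' rule with the two histograms as class likelihoods."""
    (h_s, bins), (h_n, _) = skin_hist, nonskin_hist
    r, g, b = pixel
    key = (r * bins // 256, g * bins // 256, b * bins // 256)
    ls = h_s.get(key, 0.0) * prior_skin
    ln = h_n.get(key, 0.0) * (1.0 - prior_skin)
    return ls / (ls + ln) if ls + ln > 0 else 0.0

skin = [(224, 172, 150), (210, 160, 140), (230, 180, 160)]
nonskin = [(20, 200, 40), (30, 180, 60), (250, 250, 250)]
sh, nh = build_hist(skin), build_hist(nonskin)
print(p_skin((220, 170, 150), sh, nh))   # skin-like pixel: probability 1.0
```

A parametric alternative would replace the histograms with Gaussian models fitted to the same training sets.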
Technique | Skin color detection | Dynamic background subtraction | Different lighting conditions
Storage-sample-based technique using HSV | Working | Working, but fails if the background has the same hand/glove color | Sensitive
Lab-based technique | Working | Sensitive | Sensitive
Object tracking with HSV (HTS) | Working | Working | More robust than the previous two

TABLE 1: Comparison of the three techniques.
Various methods for skin color detection have been presented by researchers, such as Bayesian
classification with histogram techniques in various color spaces [12][13][19], the Bayesian
theorem with kernel density estimation [20], parametric color modeling using a Gaussian mixture
model [21], and unsupervised learning such as K-means clustering [22]. For all of these
classification methods, the correct choice of color space is very important. Any real-time HCI
application requires robustness, accuracy, and fast processing for skin color detection; as color
varies with illumination, the choice of color space and classification method is a real challenge.
7. CONCLUSION AND FUTURE WORK
This is the first phase of an ongoing project, in which we implemented and tested various
techniques for efficient and robust segmentation. Better results for skin color detection were
achieved using the HTS algorithm, where odd frames were processed for fast hand tracking. The
next phase of the project is the creation of multiple feature vectors for the desired classification
and recognition accuracy. Geometric features, such as lines identified using the Hough transform,
Fourier descriptors, and Hu image moments, will be used for their rotation, scale, and translation
invariance. A genetic algorithm will be used for gesture recognition, and Windows APIs
(Application Program Interfaces) will be used for the specified actions after gesture recognition.
Further, the system will be extended to manual-alphabet Indian Sign Language interpretation.
ACKNOWLEDGEMENT
The authors are thankful to the Department of Science and Technology, Ministry of Science and
Technology, New Delhi for supporting this non-commercial research under the Fast Track
Scheme for Young Scientists (SR/FTP/ETA-18/08).
REFERENCES
[1] V. Pavlovic, R. Sharma and T. Huang, “Visual Interpretation of Hand Gestures for Human-
Computer Interaction: A Review”, IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 19, No. 7, pp. 677-695, Jul. 1997.

[2] S. Zhao, W. Tan, C. Wu, L. Wen, “A Novel Interactive Method of Virtual Reality System
Based on Hand Gesture Recognition”, IEEE 978-1-4244-2723-9/09, pp. 5879-5882, 2009.

[3] Y. Guan, M. Zheng, “Real-time 3D Pointing Gesture Recognition for Natural HCI”,
Proceedings of the World Congress on Intelligent Control and Automation, China,
pp. 2433-2436, 2008.
[4] W. Freeman, C. Weissman, “Television Control by Hand Gestures”, IEEE International
Workshop on Automatic Face and Gesture Recognition, Zurich, 1995.

[5] A. Sepehri, Y. Yacoob, L. Davis, “Employing the Hand as an Interface Device”, Journal of
Multimedia, Vol. 1, No. 7, pp. 18-2, 2006.

[6] A. Erol, G. Bebis, M. Nicolescu, R. Boyle and X. Twombly, “Vision-based Hand Pose
Estimation: A Review”, Computer Vision and Image Understanding, Vol. 108, pp. 52-73,
2007.

[7] P. Bao, N. Binh, T. Khoa, “A New Approach to Hand Tracking and Gesture Recognition
by a New Feature Type and HMM”, International Conference on Fuzzy Systems and
Knowledge Discovery, IEEE Computer Society, 2009.

[8] J. Alon, V. Athitsos, Q. Yuan, S. Sclaroff, “A Unified Framework for Gesture Recognition
and Spatiotemporal Gesture Segmentation”, IEEE Transactions on Pattern Analysis and
Machine Intelligence, 2008.

[9] E. Stergiopoulou, N. Papamarkos, “A New Technique for Hand Gesture Recognition”,
IEEE ICIP, pp. 2657-2660, 2006.

[10] C. Burande, R. Tugnayat, N. Choudhary, “Advanced Recognition Techniques for Human
Computer Interaction”, IEEE, Vol. 2, pp. 480-483, 2010.

[11] L. Howe, F. Wong, A. Chekima, “Comparison of Hand Segmentation Methodologies for
Hand Gesture Recognition”, IEEE 978-1-4244-2328-6, 2008.

[12] V. Vezhnevets, V. Sazonov and A. Andreeva, “A Survey on Pixel-Based Skin Color
Detection Techniques”.

[13] S. Phung, A. Bouzerdoum and D. Chai, “Skin Segmentation Using Color Pixel
Classification: Analysis and Comparison”, IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 27, No. 1, pp. 148-154, 2005.

[14] A. Elgammal, C. Muang and D. Hu, “Skin Detection - a Short Tutorial”, Encyclopedia of
Biometrics, Springer-Verlag Berlin Heidelberg, 2009.

[15] R. Rokade, D. Kokare, “Gesture Recognition by Thinning Method”, International
Conference on Digital Image Processing, IEEE Computer Society, pp. 284-287, 2009.

[16] A. Ghotkar, “A Novel Approach for Image Segmentation in Real Time Hand Gesture
Recognition for HCI”, International Conference (ICSCI-2011), Pentagram Research,
Hyderabad, Jan. 2011.

[17] D. Comaniciu, V. Ramesh and P. Meer, “Real-time Tracking of Non-rigid Objects Using
Mean Shift”, Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, Vol. 2, pp. 142-149, 2000.

[18] A. Martin and S. Tosunoglu, “Image Processing Techniques for Machine Vision”.

[19] C. Jung, C. Kim, S. Chae and S. Oh, “Unsupervised Segmentation of Overlapped Nuclei
Using Bayesian Classification”, IEEE Transactions on Biomedical Engineering, Vol. 57,
No. 12, 2010.
[20] H. Rahimizadeh, M. Marhaban, R. Kamil and N. Ismail, “Color Image Segmentation
Based on Bayesian Theorem and Kernel Density Estimation”, European Journal of
Scientific Research, ISSN 1450-216, Vol. 26, No. 3, pp. 430-436, 2009.

[21] R. Hassanpour, A. Shahbahrami and S. Wong, “Adaptive Gaussian Mixture Model for
Skin Color Segmentation”, World Academy of Science, Engineering and Technology,
Vol. 41, 2008.

[22] A. Chitade, S. Katiyar, “Color Based Image Segmentation Using K-means Clustering”,
International Journal of Engineering Science and Technology, Vol. 2(10), pp. 5319-5325,
2010.