Real time hand gesture recognition system for dynamic applications (ijujournal)

Virtual environments have long been considered a means of more intuitive and efficient human-computer interaction for a diverse range of applications, including the analysis of complex scientific data, medical training, military simulation, phobia therapy, and virtual prototyping. With the evolution of ubiquitous computing, current interaction approaches based on the keyboard, mouse, and pen are no longer sufficient for the still-widening spectrum of human-computer interaction. Gloves and sensor-based trackers are unwieldy, constraining, and uncomfortable to use, and the limitations of these devices also restrict the usable command set. Direct use of the hands as an input device is an innovative method of providing natural human-computer interaction, whose heritage runs from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to full-fledged multi-participant virtual environment (VE) systems. One can conceive a future era of human-computer interaction built on 3D applications in which the user moves and rotates objects simply by moving and rotating a hand, all without the help of any input device. This research effort centres on implementing an application that employs computer vision algorithms and gesture recognition techniques, resulting in a low-cost interface device for interacting with objects in a virtual environment using hand gestures. The prototype architecture comprises a central computational module that applies the CamShift technique to track the hand and its gestures. A Haar-like feature classifier is responsible for locating the hand position and classifying the gesture. Gestures are recognized by mapping the number of convexity defects formed by the hand to the assigned gestures. The virtual objects are rendered using the OpenGL library.
This hand gesture recognition technique aims to replace the mouse for interaction with virtual objects, which will be useful for controlling applications such as virtual games and image browsing in a virtual environment using hand gestures.
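The mapping from convexity-defect counts to gestures can be sketched as a simple lookup. This is an illustrative sketch, not the paper's code: the gesture labels and the assumption that the defect count is already available from the tracked contour are mine.

```python
# Illustrative sketch (assumed labels, not the paper's exact mapping):
# one convexity defect typically appears between each pair of extended
# fingers, so an open hand with five fingers shows about four defects.
DEFECT_TO_GESTURE = {
    0: "fist_or_point",   # no gap between fingers
    1: "two_fingers",
    2: "three_fingers",
    3: "four_fingers",
    4: "open_palm",       # four gaps between five fingers
}

def classify_by_defects(defect_count: int) -> str:
    """Map a convexity-defect count to an assigned gesture label."""
    return DEFECT_TO_GESTURE.get(defect_count, "unknown")

print(classify_by_defects(4))  # → open_palm
```

In a full pipeline the defect count would come from contour analysis of the segmented hand (e.g. OpenCV's convexity-defect computation); here it is simply a function argument.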
Human Computer Interaction Based HEMD Using Hand Gesture (IJAEMSJORNAL)

Hand-gesture-based human-computer interaction (HCI) is one of the most natural and spontaneous ways for people to communicate with machines. This paper presents a hand gesture recognition system that works with a webcam, operates robustly in unconstrained environments, and is insensitive to hand variations and distortions. The system consists of two major modules: hand detection and gesture recognition. Unlike conventional vision-based hand gesture recognition methods that use color markers for hand detection, this system uses both depth and color information from the webcam to detect the hand shape, which ensures robustness in cluttered environments. To guarantee robustness against input variations and the distortions caused by the webcam's low resolution, a novel shape distance metric called the Handle Earth Mover's Distance (HEMD) is applied for hand gesture recognition; consequently, the proposed method operates accurately and efficiently. The aim of this paper is to develop a robust and efficient hand segmentation algorithm: three segmentation algorithms using different color spaces with suitable thresholds were utilized. The hand tracking and segmentation algorithm is found to be the most effective way to handle challenges of vision-based systems such as skin color detection. Noise may briefly remain in the segmented image due to a dynamic background, so a tracking algorithm was developed and applied to the segmented hand contour to eliminate unnecessary background noise.
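The building block behind EMD-style shape metrics such as the HEMD above is the Earth Mover's Distance itself. As a hedged sketch (the full HEMD formulation, which also involves matching contour segments, is not reproduced here), the one-dimensional case reduces to comparing cumulative histograms:

```python
def emd_1d(p, q):
    """EMD between two normalized 1-D histograms of equal length.

    For 1-D distributions the optimal transport cost reduces to the
    sum of absolute differences of the cumulative distributions.
    """
    assert len(p) == len(q)
    cost, cp, cq = 0.0, 0.0, 0.0
    for a, b in zip(p, q):
        cp += a        # running mass of p
        cq += b        # running mass of q
        cost += abs(cp - cq)
    return cost

# Moving all mass across two bins costs 2 units of work:
print(emd_1d([1, 0, 0], [0, 0, 1]))  # → 2.0
```

A smaller distance means the two shape histograms are more alike, which is why such a metric tolerates the local distortions a low-resolution webcam introduces better than bin-by-bin comparison.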
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam (ijsrd.com)

This paper presents a comparative study of existing hand gesture recognition systems and gives a new approach to gesture recognition that is an easy, cheaper alternative to input devices such as the mouse, using static and dynamic hand gestures for interactive computer applications. Despite the increasing attention given to such systems, certain limitations remain in the literature: most applications impose constraints such as specific lighting conditions, the use of a particular camera, making the user wear a multi-coloured glove, or the need for large amounts of training data. The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). This interface is simple enough to run using an ordinary webcam and requires little training.
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel... (IJMER)

International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Sign language is one of the most common forms of nonverbal communication. It is most commonly used by deaf and mute people, who have difficulty hearing or speaking, to communicate among themselves or with ordinary people. Various sign-language programs have been developed by manufacturers around the world, but few are flexible and affordable for end users. Therefore, this paper presents software introducing a system that can automatically detect sign language, to help deaf and mute people communicate better with other people. Pattern recognition and hand recognition are developing fields of research; as an integral part of nonverbal communication, hand gestures play a major role in our daily lives. A hand gesture system gives us a new, natural, easy-to-use way of communicating with a computer that is familiar to humans. Considering the common structure of the human hand, with four fingers and one thumb, the software aims to introduce a real-time hand recognition system based on structural features such as hand position, mass centroid, finger position, and whether each finger or the thumb is raised or folded.
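One of the structural features mentioned above, the mass centroid, is straightforward to compute from a binary hand mask. A minimal sketch (the mask below is a made-up example, not data from the paper):

```python
def mass_centroid(mask):
    """Return the (row, col) centroid of the nonzero pixels of a
    binary mask given as a list of rows of 0/1 values."""
    total = sx = sy = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                total += 1
                sy += y
                sx += x
    if total == 0:
        raise ValueError("empty mask")
    return sy / total, sx / total

# A tiny 2x4 mask with a 2x2 "hand" blob in the middle:
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
print(mass_centroid(mask))  # → (0.5, 1.5)
```

In a real system the centroid anchors the hand's position frame to frame, and finger positions are then measured relative to it.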
An HCI Principles based Framework to Support Deaf Community (IJEACS)

Sign language is the communication language preferred and used by deaf people to converse with others in the community. Even with sign language, a communication gap exists between hearing and deaf people. Some solutions, such as sensor gloves, are already in place to address this problem, but they are limited and do not cover all parts of the language that a deaf person needs for an ordinary person to understand what is said and wanted. Due to the shortcomings of the existing solutions for sign language translation, we have proposed a system that aims to assist deaf people in communicating with the common people of society and, in turn, to help them understand hearing people easily. Knowing the needs of the users helps us focus the human-computer interaction technologies for deaf people, making the system more user-friendly and a better alternative to the existing technologies. The HCI principles of usability, empirical measurement, and simplicity are the key considerations in the development of our system. The proposed Kinect system removes the need for physical contact by using the Microsoft Kinect for Windows SDK beta. The results show that it has a strong, positive emotional impact on persons with physical disabilities and their families and friends, giving them the ability to communicate easily with non-repetitive gestures.
Hand gesture recognition has received great attention in recent years because of its manifold applications and its ability to support efficient human-computer interaction. In this work, hand segmentation using color models is introduced for obtaining hand gestures, i.e., detecting the user's hand by color segmentation, for faster, more robust, accurate, real-time applications. Many color models are available for human hand and skin detection, with relative advantages and disadvantages in the field of image processing. For hand segmentation, a mixed-model approach has been adopted to obtain the best results when detecting the hand in an image. The proposed approach is found to be accurate and effective under multiple conditions.
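A color-model segmentation rule of the kind described above can be sketched per pixel. This is a hedged illustration: the YCbCr thresholds below (77 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 173) are a commonly cited skin-color rule, not necessarily the thresholds the authors used.

```python
def is_skin(r, g, b):
    """Classify one RGB pixel as skin using fixed YCbCr thresholds.

    Cb/Cr are computed with the standard full-range RGB->YCbCr
    conversion; the threshold box is a widely used skin-color rule.
    """
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(200, 150, 120))  # → True  (a typical skin tone)
print(is_skin(0, 255, 0))      # → False (pure green)
```

Applying this test to every pixel yields a binary hand mask; a mixed-model approach would combine such rules from several color spaces before deciding.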
Haptic Technology - Interaction with Virtuality (vivatechijri)

Our daily routine activities revolve around our sense of "touch" or "feel", and a technology that helps us experience this sense through a device is called "haptic technology". Haptics tries to recreate the sensation of touch through various mechanisms, and haptic devices make virtual objects seem real when they are touched. Haptic technology applies forces, motions, and vibrations to the user to create interaction between the virtual environment and the user. With this technology we can now touch objects that exist only in the mind of a computer system and not in reality. Users can receive feedback in the form of felt sensations in the hand or other parts of the body from computer applications using various input and output devices. The idea of haptic technology evolved from virtual reality, and it is evolving day by day: as the demand for virtual reality intensifies, haptic technology will intensify with it. In this paper we study the idea and evolution of haptic technology, haptic devices, and their applications, and conclude with the future of haptics in our daily life.
Vertical Fragmentation of Location Information to Enable Location Privacy in ... (ijasa)

Pervasive computing was developed to simplify our lives by integrating communication technologies into real life. Location-aware computing, which evolved from pervasive computing, performs services that depend on the location of the user or their communication device. Advances in this area have led to major revolutions in various application areas, especially mass advertisements. It has long been evident that the privacy of personal information, in this case the user's location, is a touchy subject for most people. This paper explores the location privacy issue in location-aware computing; vertical fragmentation of the stored location information of users is proposed as an effective solution.
Hand Gesture Recognition using OpenCV and Python (ijtsrd)

Hand gesture recognition systems have developed rapidly in recent years because of their ability to interact with machines successfully. Gestures are considered the most natural way of communication between humans and PCs in a virtual framework; we often use hand gestures to convey something, since they are a form of nonverbal communication independent of speech. In our system, we use background subtraction to extract the hand region: the PC's camera records a live video, from which frames are captured and processed. Surya Narayan Sharma | Dr. A Rengarajan, "Hand Gesture Recognition using OpenCV and Python", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-2, February 2021. URL: https://www.ijtsrd.com/papers/ijtsrd38413.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/38413/hand-gesture-recognition-using-opencv-and-python/surya-narayan-sharma
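The background-subtraction step can be sketched as follows. This is a simplified fixed-background version for illustration; real pipelines (for example OpenCV's adaptive MOG2 subtractor) continuously update the background model, which this sketch does not do.

```python
import numpy as np

def extract_foreground(frame, background, thresh=25):
    """Return a boolean mask of pixels that differ from the background
    by more than `thresh` grey levels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

background = np.zeros((4, 4), dtype=np.uint8)   # empty scene
frame = background.copy()
frame[1:3, 1:3] = 200                           # a "hand" enters the scene

mask = extract_foreground(frame, background)
print(int(mask.sum()))  # → 4 foreground pixels
```

The resulting mask is then cleaned up (morphology, largest connected component) before the hand contour is extracted.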
Day by day, considerable effort is being devoted to developing an intelligent and natural interface between computer systems and users, and with today's technologies this has become possible through a variety of media such as visualization, audio, and paint. Gesture has become an important part of human communication for conveying information. In this paper we propose a method for hand gesture recognition that includes hand segmentation, hand tracking, and an edge traversal algorithm. We have designed a system whose hardware requirements are limited to a computer and a webcam. The system consists of four modules: hand tracking and segmentation, feature extraction, neural training, and testing. The objective of this system is to explore the utility of a neural-network-based approach to hand gesture recognition, creating a system that easily identifies gestures and uses them for device control and conveying information in place of conventional input devices such as the mouse and keyboard.
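The neural modules above can be sketched as a one-hidden-layer network mapping a feature vector (e.g. extracted contour features) to gesture-class probabilities. The weights here are random placeholders standing in for the "neural training" module; the layer sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_classes = 8, 16, 4   # assumed sizes

# Placeholder weights; in the described system these come from
# training on labeled gesture examples.
W1 = rng.normal(size=(n_features, n_hidden))
W2 = rng.normal(size=(n_hidden, n_classes))

def predict(features):
    """Forward pass: feature vector -> softmax over gesture classes."""
    h = np.tanh(features @ W1)               # hidden layer
    logits = h @ W2
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

probs = predict(rng.normal(size=n_features))
print(probs.shape)  # → (4,), probabilities summing to 1
```

The "testing" module would simply take `argmax(probs)` as the recognized gesture and route it to the device-control action.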
At present, hand gesture recognition offers a natural and usable approach to human-computer interaction, and automatic hand gesture recognition provides a new way of interacting with the virtual environment. In this paper, a face and hand gesture recognition system able to control a computer media player is presented. The hand gesture and the human face are the key elements for interacting with the smart system: face recognition is used for viewer verification, and hand gesture recognition controls the media player, for instance volume up/down and next track. In the proposed technique, the hand gesture and face locations are first extracted from the input image by a combination of a skin detector and a cascade detector and then sent to the recognition stage. In the recognition stage, a threshold condition is first inspected, and then the extracted face and gesture are recognized. In the evaluation, the proposed technique is applied to a video dataset and a high precision ratio is achieved. Additionally, the proposed hand gesture recognition method is applied to a static American Sign Language (ASL) database and achieves an accuracy of approximately 99.40%. The method could also be used in gesture-based computer games and virtual reality.
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand.
Natural Hand Gestures Recognition System for Intelligent HCI: A Survey (Editor IJCATR)

Gesture recognition is the recognition of meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. Hand gestures are of particular importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper a survey of recent gesture recognition approaches is provided, with particular emphasis on hand gestures. Static hand posture methods are reviewed, along with the tools and algorithms applied in gesture recognition systems, including connectionist models, hidden Markov models, and fuzzy clustering. Challenges and future research directions are also highlighted.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Essay On Recognition
As science and technology have improved on a large scale, we expect our computer systems to interact with us more naturally and easily. Previously, a command-line interface was used to interact with the computer [1], but for ease of use the graphical user interface was introduced. The most common input devices, such as the mouse, keyboard, joystick, and touch screen, are traditional devices for interaction, but they do not offer a natural interface; hand gesture recognition provides a striking and more natural alternative. Gesture is not a new thing to us, as we use hand gestures, for example, when waving goodbye [2]. With the passage of time and the exploration of new human needs, its utility has been amplified. Template matching is done to check whether the stored data matches the given data: data is collected to create a template for each hand posture, and the template most closely matching the current data identifies the posture.
The advantages are that it is the simplest approach, accurate for small posture sets, and requires only a small amount of calibration.
The problem with template matching is that it does not always work for hand gestures, and it does not scale to large posture sets.
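The template matching step above can be sketched as a sliding-window search minimizing the sum of squared differences (SSD). This is a toy single-channel example for illustration, not the essay's implementation:

```python
def match_template(image, template):
    """Return the (row, col) offset where the template best matches the
    image, i.e. where the sum of squared differences (SSD) is lowest."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum(
                (image[y + j][x + i] - template[j][i]) ** 2
                for j in range(th) for i in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
]
template = [[9, 8], [7, 9]]
print(match_template(image, template))  # → (1, 1)
```

The scaling problem described above is visible in the code: one full image scan is needed per stored posture template, so the cost grows linearly with the posture set, and any rotation or scaling of the hand defeats the pixelwise comparison.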
III. Feature Extraction and Analysis:
Feature Extraction and analysis is the method of raw data analysis. Using the information of raw data the high level semantic information can be produced which can be used for identifying postures and gesture. Feature extraction can recognize posture and posture with 97% accuracy [7]. The technique is very robust [2], that it can recognize complex and simple hand gesture.
Advantage is it can recognize both gesture and posture bu
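The template-matching approach described in the excerpt above can be sketched roughly as follows. The posture names and feature vectors here are purely illustrative assumptions (the excerpt does not specify them): each stored posture is reduced to a small feature vector, and the template with the smallest sum-of-squared-differences to the observed data wins.

```java
import java.util.Map;
import java.util.LinkedHashMap;

public class TemplateMatcher {
    // Sum of squared differences between a candidate and a stored template.
    static double ssd(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Return the name of the template closest to the observed feature vector.
    static String classify(Map<String, double[]> templates, double[] observed) {
        String best = null;
        double bestScore = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : templates.entrySet()) {
            double score = ssd(e.getValue(), observed);
            if (score < bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Hypothetical templates: normalized finger-extension features.
        Map<String, double[]> templates = new LinkedHashMap<>();
        templates.put("open_palm", new double[]{1.0, 1.0, 1.0, 1.0, 1.0});
        templates.put("fist",      new double[]{0.1, 0.1, 0.1, 0.1, 0.1});
        // An observed posture close to "fist":
        System.out.println(classify(templates, new double[]{0.2, 0.1, 0.15, 0.1, 0.1}));
    }
}
```

This also illustrates why the approach struggles with large posture sets: every template must be compared against every observation, and nearby templates become hard to separate.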
Mems Sensor Based Approach for Gesture Recognition to Control Media in Computer — IJARIIT
Gesture recognition is the method of identifying and understanding meaningful movements of the arms, hands, face, or sometimes the head. It is one of the most important aspects of the human-computer interface, and there has been continuous research in this field because of its applicability to user interfaces. Gesture recognition is an important area of research for engineers and scientists, and industry is working on implementations of trouble-free, natural products that are easy to handle. This paper proposes a method that works with motion sensors and interprets the motion of the hand into various applications in a virtual interface. Micro-Electro-Mechanical Systems (MEMS) accelerometers are used to capture dynamic hand gestures. The sensor readings are transferred to a microcontroller, from which the data are transmitted wirelessly to the computer system for processing with various algorithms.
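The abstract above does not give the recognition algorithm itself, but a minimal sketch of how raw accelerometer axes might be mapped to gesture commands on the receiving computer could look like this. The axis convention, the 0.5 g dead zone, and the command names are all assumptions for illustration:

```java
public class TiltGesture {
    // Classify a static tilt from raw x/y acceleration (in g).
    // When the hand is held still, gravity dominates the reading, so the
    // sign of the larger axis component indicates the tilt direction.
    static String classify(double ax, double ay) {
        double threshold = 0.5; // assumed dead zone to ignore small jitter
        if (Math.abs(ax) < threshold && Math.abs(ay) < threshold) return "NEUTRAL";
        if (Math.abs(ax) >= Math.abs(ay)) return ax > 0 ? "RIGHT" : "LEFT";
        return ay > 0 ? "FORWARD" : "BACKWARD";
    }

    public static void main(String[] args) {
        System.out.println(classify(0.9, 0.1));   // strong tilt along +x
        System.out.println(classify(-0.1, -0.8)); // strong tilt along -y
        System.out.println(classify(0.1, 0.0));   // inside the dead zone
    }
}
```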
International Journal of Research in Advent Technology, Vol. 2, No. 5, May 2014
E-ISSN: 2321-9637
Gesture Based Computing as an Alternative to Mouse
by Calibrating Principal Contour Process Actions
Chinnu Thomas1, D.Lakshmi2
PG Student of CSE1, Associate Professor of CSE2
Adithya Institute of Technology1, 2
Chinnuthomas13@gmail.com1, lakshmi.lifefordivine@gmail.com2
Abstract- Computing is no longer a discrete activity bound to a desktop; network computing and mobile computing, like the Internet, are fast becoming part of everyday life. In the present-day framework of interactive and intelligent computing, efficient human-computer interaction is assuming utmost importance. The topic of user experience and interaction has become increasingly popular and widespread. Unlike decades ago, when people placed most of their attention on the quality, functionality, or brand of a product, today the user interaction experience and usability seem to be the vital elements when people consider and select a product. Gesture-based computing enables humans to interface with the machine (HMI) and interact naturally without any dedicated devices, building a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces (GUIs), which still limit the majority of input to the keyboard and mouse. We bridge this gap by bringing intangible, digital information out into the tangible world and allowing ourselves to interact with it via natural hand gestures. Gesture-based computing thus provides an attractive alternative for human-computer interaction (HCI). A novel approach is proposed for implementing a real-time human interaction system capable of understanding commands, with emphasis on hand gestures, by analyzing the principal contour and fingertips.
Index Terms- Human Machine Interface; Human Computer interaction; Hand Gesture; Gesture based
Computing.
1. INTRODUCTION
The present framework of technology is seeking an interface for Human-Computer Interaction (HCI) that can provide an environment where a user interacts with a computer, or roughly speaking a machine, in a more user-friendly way. We bridge the gap of interaction by bringing immaterial, digital information out into the material world and allowing ourselves to interact with this information via natural hand gestures. Gestures are considered a natural way of communication among humans, especially for the hearing-impaired, since a gesture is a physical movement of the hands, arms, or body that conveys meaningful information by delivering an expressive message. Gesture recognition, then, is the interpretation of that movement as a semantically meaningful command.

The vision-based approach to hand gesture recognition has received much attention from academia and industry in recent years, largely due to the development of human-computer interaction (HCI) technologies and the growing popularity of smart devices such as smartphones. The essential intention of building a hand gesture recognition system is to create a natural interaction between human and computer in which the recognized gestures can be used for controlling a robot [15] or conveying meaningful information [1]. How the resulting hand gestures are to be understood and well interpreted by the computer is the central problem of gesture interaction.
Hand gesture recognition (HGR) is one of the major innovative areas of research for engineers, scientists, and bioinformaticians. Today many researchers in academia and industry are working on applications that require no extra or dedicated wearable device, to make interaction easier, more natural, and more convenient. However, human-robot interaction using hand gestures poses a remarkable challenge. On the vision side, complex and cluttered backgrounds, dynamic lighting conditions, the deformable shape of the human hand, and extraction of the wrong object can all cause a machine to misunderstand the gesture. If the machine is mobile, the hand gesture recognition system also needs to satisfy other constraints, such as the size of the gesture in the image and adaptation to motion. Moreover, to be natural, the machine must be person-independent and give feedback in real time.
The main focus of this paper is to present a hand gesture recognition system emphasizing detection of the motion of the hand in a real-time scenario: tracking it, identifying features such as fingertips, and using those features to drive a simple application — a problem considered both challenging and promising in the human-computer interaction context. Although recent reviews [1,6-8,15,19,20] in computer vision have explained the importance of gesture recognition systems for human-computer interaction (HCI), this work concentrates on up-to-date vision-based techniques, aiming to point out various research developments and to offer a good starting point for those interested in the area of hand gesture recognition. Finally, we discuss the current challenges and open questions in this area and point out a list of possible directions for future work.
The rest of the paper is organized as follows. Section 2 explains the basic concepts and approaches of vision-based hand gesture recognition. Section 3 covers several related works. The proposed work is detailed in Section 4. Finally, the work is concluded in Section 5.
2. HAND GESTURE TECHNOLOGY
To enable hand gesture recognition, numerous approaches have been proposed, which can be classified into various categories. For any system, the first step is to collect the data necessary to accomplish a specific task, and different technologies are used to acquire input data for hand posture and gesture recognition systems. A common taxonomy is based on whether extra devices are required for raw data collection. On this basis, approaches are categorized into data-glove-based hand gesture recognition, vision-based hand gesture recognition [5], and color-glove-based hand gesture recognition [6]. Figure 1 gives an example of these technologies.
(a) Data-glove based. (b) Colored marker. (c) Vision based.
Fig 1: Examples of hand gesture recognition input technologies.
2.1 Data glove based approaches
Data-glove-based approaches require the user to wear a cumbersome glove-like device equipped with sensors that sense the movements of the hand(s) and fingers and pass the information to the computer; hence this is also referred to as the instrumented-glove approach. These approaches can easily provide the exact coordinates of palm and finger location and orientation, as well as hand configuration. Their advantages are high accuracy and fast reaction speed. However, they require the user to be physically connected to the computer, which hinders the ease of interaction between user and computer, and the devices are quite expensive.
2.2 Colored marker based approaches
Color-glove-based approaches represent a compromise between data-glove-based and vision-based approaches. Marked gloves, or colored markers, are gloves worn on the human hand [6] with colors that direct the process of tracking the hand and locating the palm and fingers [6], providing the ability to extract the geometric features necessary to form the hand shape [6]. Compared with the instrumented data glove, the amenities of this technology are its simplicity of use and its low cost. Intrinsically, these approaches are similar to vision-based ones, except that, with the help of colored gloves, the image preprocessing phase (e.g., segmentation, localization, and detection of hands) can be greatly simplified. The disadvantages are similar to those of data-glove-based approaches: they are unnatural and not suitable for applications with multiple users due to hygiene issues.
2.3 Vision based approaches
Vision-based approaches do not require the user to wear anything (naked hands). Instead, one or more video cameras capture images of the hands, which are then processed and analyzed using computer vision techniques. This type of hand gesture recognition is simple, natural, and convenient for users, and at present these are the most popular approaches to gesture recognition. Although these approaches are simple, they raise many challenges, such as complex and cluttered backgrounds, lighting variation, and other skin patches alongside the hand, besides system requirements such as robustness, velocity, recognition time, throughput, and computational efficiency [1][10].
3. RELATED WORK
William F. and Michal R. [3] applied orientation histograms in a method for recognizing gestures based on pattern recognition. Reported issues include that similar gestures might have different orientation histograms while different gestures could have similar ones; moreover, the method responded to any object that dominated the image, even if it was not the hand gesture. Xingyan L. [5] presented a fuzzy c-means clustering algorithm to recognize hand gestures for a mobile remote. There are some restrictions with that system: the recognition accuracy drops quickly when the distance between the user and the camera exceeds 1.5 meters or when the lighting is too strong, and the system cannot deal with an image that has two or more patches of skin of similar size.
Stergiopoulou E. [1] recognized static hand gestures using a Self-Growing and Self-Organized Neural Gas (SGONG) network. Three geometric features were extracted — two angles based on the hand slope, and the distance from the palm center — and these features were used to determine the number of raised fingers and the gesture class. The system recognized 31 predefined gestures with a recognition rate of 90.45% and a processing time of 1.5 seconds, but it is time-consuming, and as the number of training samples increases, the time needed for classification increases too. Hasan [4] applied a multivariate Gaussian distribution to recognize hand gestures using non-geometric features; the input hand image is segmented using two different methods, skin-color-based segmentation with the HSV color model and clustering-based thresholding techniques. Lamberti [6] presents a real-time hand gesture recognizer based on a color glove; color markers are employed to obtain as features five distances from the palm to the fingers and the four angles between those distances. Daeho Lee and SeungGwan Lee [11] present a novel vision-based method in which fingertips are detected by scale-invariant angle detection based on a variable k-cosine. Table 1 shows the comparison between different hand gesture recognition techniques.
Table 1: Comparison between various methods of hand gesture recognition systems.

Method: Finger-Earth Mover's Distance (FEMD), a novel distance metric; thresholding decomposition.
  Merits: Robust to hand articulations, distortions, and orientation or scale changes; can work in uncontrolled environments.
  Limitations: False and confusing gesture orientations.
  Applications: Rock-Paper-Scissors game, arithmetic computation.

Method: Learning Vector Quantization; HSI color space.
  Merits: Cluttered backgrounds can be managed.
  Limitations: Accuracy of capturing; color variations upon lighting changes.
  Applications: Multimedia presentation command.

Method: Fuzzy C-Means algorithm; thresholding.
  Merits: Enough speed and sufficient reliability; recognition rate: 85.83%.
  Limitations: Wrong object extraction; environment lighting changes; distance variations; cannot distinguish two or more patches of skin.
  Applications: Mobile robot.

Method: Self-Growing and Self-Organized Neural Gas (SGONG) network; YCbCr color space; Gaussian distribution.
  Merits: Reliable, since it is relatively immune to changing lighting conditions and provides good coverage of the human skin color; fast, since it does not require post-processing of the hand image; recognition rate: 90.45%.
  Limitations: Time-consuming; complex.
  Applications: 3D modeling, number recognition.

Method: Laplacian filter; Euclidean distance metric; HSV color model.
  Merits: Promising solution for illumination changes; recognition rate: 91%.
  Limitations: Different orientation histograms; background must be plain and uniform; computational complexity.
  Applications: Helicopter signaller for marshaling operations, television control.

Method: Orientation histogram; Euclidean distance metric.
  Merits: Simple and fast to compute; offers some robustness to scene illumination changes.
  Limitations: Similar gestures might have different orientation histograms while different gestures could have similar orientation histograms; wrong object detection.
  Applications: Pattern recognition, graphic editor control.
4. PROPOSED WORK
A gesture-based computing interface makes the user more interactive with the PC: the power of controlling the PC is placed in our hands. With a mere bare hand and its gestures we gain the power of a pointing device such as a mouse, and the interface becomes more advanced than such a device, putting full access to our personal data at our fingertips.

The proposed system, illustrated in Fig. 2, consists of three main components: image capturing, image processing, and the application handler. Image capturing initiates the webcam to capture the image. Processing of the captured image then follows: configuring the skin-color values and detecting the concerned skin-color bands, then finding the different contours in the image and storing them in an array. From the detected contours we identify the principal contour, which reduces wrong object detection. The principal contour helps us detect the fingertips, track the gesture actions, and find the current positions. The identified gesture actions are sent to the application handler to control different applications.
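The principal-contour selection step above can be sketched as choosing the largest detected contour. The paper does not state its exact selection rule, so the largest-enclosed-area criterion used here is an assumption:

```java
import java.util.List;
import java.util.Arrays;

public class PrincipalContour {
    // Polygon area via the shoelace formula; each contour is a list of x,y pairs.
    static double area(double[][] contour) {
        double sum = 0;
        int n = contour.length;
        for (int i = 0; i < n; i++) {
            double[] p = contour[i], q = contour[(i + 1) % n];
            sum += p[0] * q[1] - q[0] * p[1];
        }
        return Math.abs(sum) / 2.0;
    }

    // Assume the principal contour is the largest one: small skin-colored
    // patches (face edges, arms, noise) are discarded in favor of the hand.
    static double[][] principal(List<double[][]> contours) {
        double[][] best = null;
        double bestArea = -1;
        for (double[][] c : contours) {
            double a = area(c);
            if (a > bestArea) { bestArea = a; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] small = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};      // area 1
        double[][] large = {{0, 0}, {10, 0}, {10, 10}, {0, 10}};  // area 100
        System.out.println(area(principal(Arrays.asList(small, large))));
    }
}
```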
4.1 Module Description
4.1.1 Image Capture Module
We need to capture the gestures made by the hands. To do this, we use an image capturing device (webcam) to picture every movement of the hands, and this serves as the input to our program for controlling the system. The Java Media Framework (JMF) is a Java library that enables audio, video, and other time-based media to be added to Java applications and applets. This optional package, which can capture, play, stream, and transcode multiple media formats, extends the Java Platform, Standard Edition (Java SE) and allows the development of cross-platform multimedia applications. Using this key feature of JMF, the user's hand movements are detected and directed to the computing device.
4.1.2 Color Detection Module
The captured image should be processed in such a way that we identify the required colors by analyzing each pixel. To identify color, the pixels of the image are arranged in a one-dimensional array, which makes the color analysis much easier. The RGB components of each pixel are calculated; each component's value can range from 0 to 255. Depending on the values of red, green, and blue we can identify the pixel color; for example, to detect red, the R value should be greater than 150 and the others must be less than 40.

Fig 2: Block diagram of the proposed work.

This is the method we use for processing the image. In Java, the PixelGrabber class is used for this process. We compare the obtained RGB value with the predetermined color values; if an RGB value within the prescribed limits is obtained, the analysis terminates and the obtained color is taken as the input for the next process. Hence this module can be considered the learning phase, in which we detect the skin color and make it recognizable by the HMI.
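The per-pixel test described above can be sketched as follows. The packed-ARGB integer layout matches what java.awt.image.PixelGrabber delivers, and the red thresholds (R > 150, G and B < 40) come from the text; the scan-and-stop behavior mirrors the early termination described above:

```java
public class ColorDetect {
    // Unpack the RGB components of one ARGB pixel and apply the red test.
    static boolean isTargetRed(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return r > 150 && g < 40 && b < 40;
    }

    // Scan the one-dimensional pixel array and return the index of the first
    // matching pixel, or -1; a full system would track all matching pixels.
    static int findFirst(int[] pixels) {
        for (int i = 0; i < pixels.length; i++) {
            if (isTargetRed(pixels[i])) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        // 0xFFC81010 has r=200, g=16, b=16, so it passes the red test.
        int[] pixels = {0xFF202020, 0xFF0000FF, 0xFFC81010, 0xFFFFFFFF};
        System.out.println(findFirst(pixels));
    }
}
```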
4.1.3 Image Comparison Module
From the image, the skin color is detected and the coordinate value of that color's pixel is obtained; this is done for each frame. From the coordinate values thus obtained, the mouse position is calculated as the difference between the coordinate values of that color in consecutive frames. Depending on this value, the different mouse operations such as pointer movement, left click, right click, and double click are performed.
4.1.4 ADD-ONS
GBC has four applications:
(1) Mouse activity handler module.
(2) Image viewer module.
(3) File transfer module.
(4) Mail dispatcher.
(1) Mouse Activity Handler Module
A skin color and the concerned gestures are assigned to the basic mouse actions: Gesture1 for movement, Gesture232 for right click, and Gesture121 for left click. The coordinates of the mouse pointer change as the projected gesture moves over the screen. This is done by calculating the coordinates frame by frame. The captured position of the skin color is stored in variables (current x and current y). If the same color is detected at a different position in the next frame, the present values of current x and current y are moved to previous x and previous y respectively, and the new coordinate values are moved to current x and current y. The mouse pointer is moved according to the displacement along these two axes.
The remaining mouse actions are triggered by other assigned colors and are managed by the Robot class provided by Java. Using this class to generate input events differs from posting events to the AWT event queue or to AWT components in that the events are generated in the platform's native input queue. For example, Robot.mouseMove will actually move the mouse cursor instead of just generating mouse-move events.
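The current/previous coordinate bookkeeping described above can be sketched as below. java.awt.Robot is deliberately left out so the sketch runs without a display; a comment marks where Robot.mouseMove would be called in the real handler:

```java
public class PointerTracker {
    private int prevX, prevY;       // tracked color position in the previous frame
    private int pointerX, pointerY; // simulated cursor position
    private boolean first = true;

    PointerTracker(int startX, int startY) {
        pointerX = startX;
        pointerY = startY;
    }

    // Feed the detected skin-color position for the current frame.
    void onFrame(int x, int y) {
        if (!first) {
            // The displacement between consecutive frames drives the cursor.
            pointerX += x - prevX;
            pointerY += y - prevY;
            // A real handler would call: robot.mouseMove(pointerX, pointerY);
        }
        prevX = x;
        prevY = y;
        first = false;
    }

    public static void main(String[] args) {
        PointerTracker t = new PointerTracker(500, 400);
        t.onFrame(100, 120); // first detection: establishes the reference only
        t.onFrame(115, 110); // hand moved right 15 px and up 10 px
        System.out.println(t.pointerX + "," + t.pointerY);
    }
}
```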
(2) Image Viewer
An image viewer is an application that helps us view digital images stored on our computer. To change the magnification level and zoom in on the current picture, one normally clicks the magnifier and drags the slider, which gives a close-up view of the image; zooming out works the same way. To view the next or previous image in the same folder as the current picture, one clicks the next or previous button.

In GBC, instead of these buttons, we use Gesture2 for viewing previous and next images: if the skin color is moved from left to right, the previous image is shown; for the next image, the skin color is moved from right to left. For zooming in and out, Gesture5 is used: as the distance between the two colors increases the picture is enlarged, and as it decreases the picture shrinks. Using GBC thus lets us view images in a different way: instead of sitting close to the system and clicking buttons to change images or to zoom, we need only show the specified skin color and gesture (count of fingertips).
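The zoom gesture above can be sketched by tying the zoom factor to the distance between the two detected color positions. The reference-distance scaling rule is an assumption, since the text only says that a growing distance enlarges the picture:

```java
public class ZoomGesture {
    // Euclidean distance between the two detected color positions.
    static double distance(int x1, int y1, int x2, int y2) {
        double dx = x2 - x1, dy = y2 - y1;
        return Math.sqrt(dx * dx + dy * dy);
    }

    // Zoom factor relative to the distance measured when the gesture began:
    // > 1 enlarges the picture, < 1 shrinks it.
    static double zoomFactor(double referenceDist, double currentDist) {
        return currentDist / referenceDist;
    }

    public static void main(String[] args) {
        double ref = distance(100, 100, 200, 100); // 100 px apart at gesture start
        double now = distance(80, 100, 230, 100);  // hands moved apart: 150 px
        System.out.println(zoomFactor(ref, now));  // enlarge by 50%
    }
}
```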
(3) File Transfer
File transfer is the process of transferring a file from one system to another. Here we use gestures and a skin color band for this purpose, with a server to control and manage the activity; more than two clients can take part. Two Java classes, FileInputStream and FileOutputStream, are used. The participating systems must be connected locally via an Ethernet cable or a switch.

The server initiates the application. The client program runs on the computers where the actual file is present and on those to which the file is to be copied. The file to be transferred between computers is selected first. When the skin color band along with Gesture121 is shown to the system holding the selected file, the file is copied. We then show the skin color band and Gesture5 to the computer to which the file is to be transferred, and the file is pasted in the specified destination folder.
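The copy loop underlying the file-transfer module can be sketched with the two stream classes named above. For simplicity this sketch copies between two local temporary files rather than over a socket; in the real system the output stream would be backed by the network connection managed by the server:

```java
import java.io.*;
import java.nio.file.Files;

public class FileCopy {
    // Stream the source file into the destination in fixed-size chunks.
    static void copy(File src, File dst) throws IOException {
        try (FileInputStream in = new FileInputStream(src);
             FileOutputStream out = new FileOutputStream(dst)) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // in the real system this write would
            }                         // go to the receiving client's socket
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("gbc", ".txt");
        File dst = File.createTempFile("gbc", ".copy");
        Files.write(src.toPath(), "hello".getBytes());
        copy(src, dst);
        System.out.println(new String(Files.readAllBytes(dst.toPath())));
    }
}
```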
(4) Mail Dispatcher
Just like Gmail or Yahoo Mail, it provides complete mail access: we can compose, store, read, and send mail, and store email addresses and other confidential data. Since the data must not be read by others, we set a password for such documents. Usually passwords are sets of characters, numbers, or a combination of both; for a hacker these can be cracked easily and the data read, so the data are no longer confidential.

Nowadays, patterns are used instead of passwords. This is common on Android phones, where one sets a pattern by drawing it, and only that pattern unlocks the phone. Similarly, we use a pattern-based locking technique for logging in to the mail dispatcher. When the user first opens the mail dispatcher, he is asked to log in by drawing the pattern. The lock contains 9 rectangular boxes arranged as a 3*3 matrix. Using a red color glow the user moves from one block to another, and using a green glow he selects the boxes of his pattern. Only if the pattern is correct can he log in to his account; thus it ensures security.
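The pattern check above can be sketched by storing the pattern as a sequence of cell indices in the 3*3 grid and requiring an exact match at login. The row-by-row cell numbering is an assumption, as the paper does not specify one:

```java
import java.util.Arrays;

public class PatternLock {
    private final int[] stored;

    PatternLock(int[] pattern) {
        this.stored = pattern.clone();
    }

    // Cells are numbered row by row: 0 1 2 / 3 4 5 / 6 7 8.
    // Login succeeds only when the drawn sequence matches exactly.
    boolean verify(int[] attempt) {
        return Arrays.equals(stored, attempt);
    }

    public static void main(String[] args) {
        // Hypothetical stored pattern: a diagonal swipe ending on cell 5.
        PatternLock lock = new PatternLock(new int[]{0, 4, 8, 5});
        System.out.println(lock.verify(new int[]{0, 4, 8, 5}));
        System.out.println(lock.verify(new int[]{0, 1, 2}));
    }
}
```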
5. CONCLUSIONS
Hand gesture recognition is considered very challenging for real-life applications because of its requirements on robustness, accuracy, and efficiency. In this paper we described a real-time hand gesture recognition system based on principal contour action analysis. First, a comprehensive review of the tools and techniques of gesture recognition systems was provided, with emphasis on hand gesture expressions. The major tools surveyed — orientation histograms, fuzzy clustering, HMMs, ANNs, HSV, and geometrical methods — were reviewed and analyzed. Nearly all researchers use colored images to achieve better results. A comparison between diverse gesture recognition systems was presented, explaining the important parameters needed for any recognition system: the segmentation process, feature extraction, and the classification algorithm. Although recognition rates increase gradually, the algorithms concerned still face several issues; the challenges include wrong object extraction, complex and non-uniform backgrounds, partial occlusion, background disturbance, object reappearance, illumination change, etc. The choice of a specific recognition algorithm depends on the application at hand.

In this work, we proposed a cost-effective and user-friendly hand gesture recognition system based on principal contour action analysis, which finds the principal contour against the background and analyzes the position and contour of the detected fingertips; various interface actions such as clicking, moving, zooming, and pointing are recognized. A threshold value of time in capturing gestures is employed to differentiate the concerned gestures. Two-handed dynamic-gesture multimodal interaction is thus a promising area for future research, and customization of gestures can also be employed to make the system more user-friendly.
REFERENCES
[1] Stergiopoulou, E., & Papamarkos, N. (2009).
‘Hand gesture recognition using a neural network
shape fitting technique’. Elsevier Engineering
Applications of Artificial Intelligence 22, 1141-
1158.
http://dx.doi.org/10.1016/j.engappai.2009.03.008
[2] Malima, A., Özgür, E., & Çetin, M. (2006). ‘A
fast algorithm for vision-based hand gesture
recognition for robot control’. IEEE 14th
conference on Signal Processing and
Communications Applications, pp. 1- 4.
http://dx.doi.org/10.1109/SIU.2006.1659822
[3] W. T. Freeman and Michal R., (1995)
‘Orientation Histograms for Hand Gesture
Recognition’, IEEE International Workshop on
Automatic Face and Gesture Recognition.
[4] M. M. Hasan, P. K. Mishra, (2011). ‘HSV
Brightness Factor Matching for Gesture
Recognition System’, International Journal of
Image Processing (IJIP), Vol. 4(5).
[5] Xingyan Li. (2003). ‘Gesture Recognition Based
on Fuzzy C-Means Clustering Algorithm’,
Department of Computer Science. The University
of Tennessee Knoxville.
[6] Luigi Lamberti & Francesco Camastra, (2011).
‘Real-Time Hand Gesture Recognition Using a
Color Glove’, Springer 16th international
conference on Image analysis and processing:
Part I (ICIAP'11), pp. 365-373.
[7] C. Keskin, F. Krac, Y. Kara, and L. Akarun,
(2011). ‘Real time hand pose estimation using
depth sensors,’ in Proc. IEEE Int. Conf. Comput.
Vis. Workshops, Nov. 2011, pp. 1228–1234.
[8] Z. Pan, Y. Li, M. Zhang, C. Sun, K. Guo, X. Tang,
and S. Zhou, (2010) ‘A realtime multi-cue hand
tracking algorithm based on computer vision,’ in
IEEE Virt. Real. Conf., 2010, pp. 219–222.
[9] G. Hackenberg, R. McCall, and W. Broll, (2011).
‘Lightweight palm and finger tracking for real-time
3-D gesture control,’ in IEEE Virtual
Reality Conf., Mar. 2011, pp. 19–26.
[10]A. Chaudhary, J. Raheja, and S. Raheja, (2012).
‘A vision based geometrical method to find
fingers positions in real time hand gesture
recognition,’ J. Software, vol. 7, no. 4, pp. 861–
869.
[11]D. Lee and S. Lee, (2011). ‘Vision-based finger
action recognition by angle detection and contour
analysis,’ ETRI J., vol. 33, no. 3, pp. 415–422.
[12] J. Raheja, A. Chaudhary, and K. Singal, (2011).
‘Tracking of fingertips and centers of palm using
Kinect,’ in Proc. Int. Conf. Computat. Intell.,
Modell. Simulat., pp. 248–252.
[13]V. Frati and D. Prattichizzo, (2011). ‘Using
Kinect for hand tracking and rendering in
wearable haptics,’ in IEEE World Haptics Conf.
WHC, Jun. 2011, pp. 317–321.
[14]H. Liang, J. Yuan, and D. Thalmann,(2011). ‘3-D
fingertip and palm tracking in depth image
sequences,’ in Proc. 20th ACM Int. Conf.
Multimedia, pp. 785–788.
[15] Ghobadi, S. E., Loepprich, O. E., Ahmadov, F.,
Bernshausen, J., Hartmann, K., & Loffeld, O.
(2008). ‘Real time hand based robot control using
multimodal images’. International Journal of
Computer Science IJCS, Vol. 35(4). Retrieved from
www.iaeng.org/IJCS/issues_v35/issue_4/IJCS_35_4_08.pdf
[16]Garg, P., Aggarwal, N., & Sofat, S.
(2009).‘Vision based hand gesture recognition’.
World Academy of Science, Engineering and
Technology. Retrieved from
www.waset.org/journals/waset/v49/v49-173.pdf
[17]K. Grauman and T. Darrell (2004) . ‘Fast contour
matching using approximate earth mover's
distance’. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition,
Washington DC.
[18] Zhou Ren, Junsong Yuan, Wenyu Liu, (2013).
‘Minimum Near-Convex Shape Decomposition’,
IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 35, no. 10, October.
[19] Marco Maisto, Massimo Panella, Luca Liparulo,
and Andrea Proietti (2013). ‘An Accurate
Algorithm for the Identification of Fingertips
Using an RGB-D Camera’, IEEE Journal on
Emerging and Selected Topics in Circuits and
Systems, vol. 3, no. 2, June.
[20] Zhou Ren, Junsong Yuan, (2013). ‘Robust Part-Based
Hand Gesture Recognition Using Kinect
Sensor’, IEEE Transactions on Multimedia, vol.
15, no. 5, August.
[21] Chinnu Thomas, S. Pradeepa, (2014). ‘A
Comprehensive Review on Vision Based Hand
Gesture Technology’, International Journal of
Research in Advent Technology, Volume 2, Issue
1, January.