A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSISijcseit
This paper introduces a new concept for establishing a human-robot symbiotic relationship. The system is based on knowledge-based image processing methodologies for model-based vision and on intelligent task scheduling for an autonomous social robot. The paper aims at automatic translation of static alphabet and sign gestures of American Sign Language (ASL), using a neural network trained with the backpropagation algorithm. The system works with images of bare hands. For each individual sign, 10 sample images were considered, so 300 samples were processed in total. To compare the training set of signs with the test samples, both are converted into feature vectors. Experimental results reveal that the system can recognize the selected ASL signs with an accuracy of 92.00%. Finally, the system was implemented to issue ASL hand-gesture commands to a robot car named "Moto-robo".
HSV Brightness Factor Matching for Gesture Recognition SystemCSCJournals
The main goal of gesture recognition research is to build systems that can identify specific human gestures and use them to control machines. This paper introduces a gesture recognition method based on computing the local brightness of each block of the gesture image: the image is divided into a 25x25 grid of blocks, each 5x5 pixels in size, and the local brightness of each block is calculated, so each gesture yields 25x25 feature values. Our experiments show that more than 60% of these features are zero, which minimizes storage space. The brightness value is calculated from the HSV (Hue, Saturation, Value) color model, which is also used for the segmentation operation. The recognition rate achieved is 91% using 36 training gestures and 24 different testing gestures. This paper focuses on hand gestures rather than whole-body movement, since the hands are the most flexible part of the body and convey the most meaning. We build a gesture recognition system that communicates with the machine in a natural way, without mechanical devices and without the usual keyboard and mouse; mathematical equations act as the translator between the gestures and the telerobotic system.
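The block-brightness feature idea above can be sketched in a few lines of pure Python: split the image into fixed-size tiles and take each tile's mean HSV "Value" (brightness) as one feature. This is a scaled-down illustration, a 4x4 toy image with 2x2 blocks rather than the paper's 25x25 grid of 5x5 blocks, and the pixel values are invented:

```python
def rgb_to_value(pixel):
    """HSV 'Value' channel of an RGB pixel: max of the three channels, in [0, 1]."""
    return max(pixel) / 255.0

def block_brightness_features(image, block):
    """Mean brightness per block x block tile, flattened row-major."""
    rows, cols = len(image), len(image[0])
    feats = []
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            vals = [rgb_to_value(image[r][c])
                    for r in range(r0, r0 + block)
                    for c in range(c0, c0 + block)]
            feats.append(sum(vals) / len(vals))
    return feats

# 4x4 toy image: bright hand-like patch in the top-left, black elsewhere.
img = [[(255, 255, 255), (255, 255, 255), (0, 0, 0), (0, 0, 0)],
       [(255, 255, 255), (255, 255, 255), (0, 0, 0), (0, 0, 0)],
       [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)],
       [(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)]]
features = block_brightness_features(img, 2)
print(features)  # [1.0, 0.0, 0.0, 0.0]
```

As in the paper's observation, most blocks of a segmented gesture are background, so most features come out zero.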
Communication between a deaf person and a hearing person normally requires a mediator, and the mediator must know the sign language used by the deaf person; this is not always possible, since different languages have different sign languages. It is also difficult for a deaf person to understand what a hearing person says, so the deaf person must track the speaker's lip movements, but lip-reading is neither efficient nor accurate because facial expressions and speech may not match. To overcome these problems, we propose an Android application that recognizes sign language from hand gestures and lets users define and upload their own sign language into the system. The features of this system are real-time conversion of gestures to text and speech. For two-way communication between the deaf person and the other person, the other person's speech is converted into text. The processing steps include gesture extraction, gesture matching, and conversion between text and speech. The system is useful not only for the deaf community but also for people who migrate to different regions and do not know the local language.
VISION BASED HAND GESTURE RECOGNITION USING FOURIER DESCRIPTOR FOR INDIAN SIG...sipij
Indian Sign Language (ISL) interpretation is a major ongoing research effort to aid India's deaf community. Given the limitations of glove/sensor-based approaches, a vision-based approach was chosen for the ISL interpretation system. Among human modalities, the hand is the primary one used in any sign language interpretation system, so hand gestures were used to recognize manual alphabets and numbers. ISL consists of manual alphabets and numbers as well as a large vocabulary with grammar. This paper gives a methodology for recognizing static ISL manual alphabets, numbers and static symbols; the ISL alphabet contains both single-handed and two-handed signs. The Fourier descriptor was chosen as the feature extraction method because of its invariance to rotation, scale and translation. A true positive rate of 94.15% was achieved using a nearest-neighbour classifier with Euclidean distance, on sample data covering different illumination changes, different skin colors and varying distances from camera to signer.
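The invariance property that motivates the Fourier-descriptor choice can be shown directly: take the DFT of a complex-valued boundary, drop F0 (translation invariance), keep only magnitudes (rotation and starting-point invariance), and divide by |F1| (scale invariance). The 8-point boundary below is a toy shape, not the paper's hand contours, and the nearest-neighbour stage is omitted:

```python
import cmath

def fourier_descriptors(boundary, n_coeffs=6):
    """Translation/scale/rotation-invariant descriptors of complex boundary points."""
    n = len(boundary)
    coeffs = [sum(z * cmath.exp(-2j * cmath.pi * k * i / n)
                  for i, z in enumerate(boundary))
              for k in range(n)]
    mags = [abs(c) for c in coeffs]
    return [m / mags[1] for m in mags[1:1 + n_coeffs]]  # skip F0, normalise by |F1|

# Toy boundary: 8 points tracing a square outline.
square = [complex(x, y) for x, y in
          [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]]
# Same shape rotated by 0.7 rad, scaled by 2.5 and translated.
moved = [2.5 * cmath.exp(0.7j) * z + (3 + 4j) for z in square]

d1 = fourier_descriptors(square)
d2 = fourier_descriptors(moved)
print(max(abs(a - b) for a, b in zip(d1, d2)))  # ~0: descriptors match
```

Rotation multiplies every coefficient by the same unit complex number and translation changes only F0, so the normalised magnitudes are identical for the two boundaries up to floating-point error.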
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti...CSCJournals
Sign language is the fundamental communication method among people with speech and hearing impairments, yet the rest of the world has little understanding of it. "Sign Language Communicator" (SLC) is designed to bridge the language barrier between sign language users and everyone else. The main objective of this research is to provide a low-cost, affordable method of sign language interpretation; the system is also useful to learners, who can use it to practice. During the research, available human-computer interaction techniques for posture recognition were tested and evaluated, and a series of image processing techniques with Hu-moment classification was identified as the best approach. To improve accuracy, a new approach, height-to-width ratio filtration, was implemented alongside the Hu moments. The system recognizes selected sign language signs with 84% accuracy without a controlled background, given small lighting adjustments.
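The height-to-width ratio filter mentioned above is a simple geometric check that can be sketched as follows: measure the bounding box of the segmented hand in a binary mask and reject candidate signs whose aspect ratio falls outside an expected range. The mask and the accepted range here are made-up illustration values, not SLC's:

```python
def height_width_ratio(mask):
    """Bounding-box height/width of the nonzero pixels in a 2-D binary mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    return height / width

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
ratio = height_width_ratio(mask)
print(ratio)  # 1.5: the hand region is 3 rows tall and 2 columns wide
assert 1.0 <= ratio <= 2.0  # hypothetical accepted range for one sign
```

Because Hu moments are scale-invariant, they cannot distinguish two signs that differ mainly in elongation; a ratio filter like this restores that information cheaply.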
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...IJERA Editor
Gesture recognition systems have received great attention in recent years because of their manifold applications and their ability to support efficient human-computer interaction. Gesture is one of the body languages popularly used in daily life: a communication system of hand movements and facial expressions, conveyed through actions and sight. This research focuses on gesture extraction and finger segmentation within gesture recognition. We used image analysis techniques to create an application in MATLAB that segments and extracts the fingers from one specific gesture. The paper aims at gesture recognition under different natural conditions, such as dark and glare conditions, different distances, and similar-object conditions, and collects the results to calculate the successful extraction rate.
The increasing popularity of anime makes it vulnerable to unwanted usages like copyright violations and pornography, so we need a method to detect and recognize animation characters, and skin detection is one of the most important steps toward that goal. Although there are methods to detect human skin color, they do not work properly for anime characters: anime skin differs greatly from human skin in color, texture, tone and under different kinds of lighting, and also varies greatly between characters. Moreover, many non-skin items (for example leather, shirts or hair) can have colors similar to skin. In this paper, we propose three methods that identify an anime character's skin more successfully than the Kovac, Swift, Saleh and Osman methods, which were primarily designed for human skin detection. Our methods are based on RGB values and their comparative relations.
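For context, the Kovac rule that the paper compares against is an explicit RGB threshold for human skin under uniform daylight: R, G and B each above a floor, enough spread between channels, and R dominant. The sketch below is that baseline rule as commonly stated; the paper's anime-specific rules modify comparisons like these, and their exact thresholds are not reproduced here:

```python
def kovac_skin(r, g, b):
    """Kovac-style explicit RGB skin rule for uniform daylight illumination."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

print(kovac_skin(224, 172, 138))  # True  - a typical human skin tone
print(kovac_skin(60, 90, 200))    # False - a blue, clearly non-skin pixel
```

Rules of this kind are fast and need no training data, which is why they are a natural starting point before designing medium-specific variants.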
COMPARATIVE ANALYSIS OF SKIN COLOR BASED MODELS FOR FACE DETECTIONsipij
Human face detection plays an important role in many applications such as face recognition, human-computer interfaces, biometrics, energy conservation, video surveillance and face image database management. Selecting an accurate color model is the first requirement of face detection. This paper includes a study of the various color models for face detection, i.e. RGB, YCbCr, HSV and CIELAB, and compares them based on the detection rate of skin regions. The results show that YCbCr yields the best output compared to the other color models, even under varying lighting conditions.
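A minimal sketch of why YCbCr-based skin detection holds up under lighting changes: brightness is concentrated in the Y channel, so thresholding only the two chrominance channels (Cb, Cr) is largely illumination-independent. The BT.601-style conversion below is standard; the Cb/Cr ranges are commonly quoted values, not thresholds taken from this paper:

```python
def rgb_to_cbcr(r, g, b):
    """Chrominance channels of the BT.601 YCbCr transform (8-bit RGB input)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def is_skin(r, g, b):
    cb, cr = rgb_to_cbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173  # widely used skin ranges

print(is_skin(220, 170, 140))  # True  - typical skin tone
print(is_skin(0, 0, 255))      # False - pure blue
```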
Speech Recognition using HMM & GMM Models: A Review on Techniques and Approachesijsrd.com
Many modes of communication are used between human and computer, and gesture is considered one of the most natural in a virtual reality system. Gestures are a typical method of non-verbal communication for human beings, and we naturally use various gestures to express our intentions in everyday life. Gesture recognizers are supposed to capture and analyze the information transmitted by the hands of a person who communicates in sign language. This is a prerequisite for automatic sign-to-spoken-language translation, which has the potential to support the integration of deaf people into society. This paper presents part of a literature review of ongoing research and findings on different techniques and approaches to gesture recognition using Hidden Markov Models in a vision-based approach.
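In the HMM systems this review surveys, each gesture (or word) typically gets its own model, and the model assigning the highest likelihood to the observation sequence wins. The likelihood itself comes from the forward recursion, sketched here with illustrative transition and emission numbers that are not from any cited system:

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | HMM) via the forward algorithm; states and symbols are indices."""
    n_states = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n_states)) * emit[s][o]
                 for s in range(n_states)]
    return sum(alpha)

start = [0.6, 0.4]                       # initial state distribution
trans = [[0.7, 0.3], [0.4, 0.6]]         # state transition probabilities
emit = [[0.9, 0.1], [0.2, 0.8]]          # 2 states x 2 observation symbols
p = forward_likelihood([0, 1, 0], start, trans, emit)
print(round(p, 6))  # 0.10893
```

Recognition then amounts to computing this likelihood under each gesture's HMM and picking the argmax (real systems work in log space to avoid underflow on long sequences).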
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATIONijnlc
Sign language is a visual-gestural language used by deaf-mute people for communication. As hearing people are unfamiliar with sign language, the hearing-impaired find it difficult to communicate with them; this communication gap can be bridged by means of human-computer interaction. The objective of this paper is to convert Dravidian (Tamil) sign language into text. The proposed method recognizes 12 vowels, 18 consonants and a special character, "Aytham", of the Tamil language through a vision-based approach. In this work, static images of the hand signs are obtained by a web/digital camera. The hand region is segmented by a threshold applied to the hue channel of the input image; the region of interest (from wrist to fingers) is then segmented using the reversed horizontal projection profile, and a Discrete Cosine transformed signature is extracted from the boundary of the hand sign. These features are invariant to translation, scale and rotation. A sparse representation classifier is used to recognize the 31 hand signs. The proposed method attained a maximum recognition accuracy of 71% against a uniform background.
A Real-Time Letter Recognition Model for Arabic Sign Language Using Kinect an...INFOGAIN PUBLICATION
The objective of this research is to develop a supervised machine-learning hand-gesture model to recognize Arabic Sign Language (ArSL) using two sensors: Microsoft's Kinect and a Leap Motion Controller. The proposed model relies on supervised learning to predict a hand pose from two depth images, and defines a classifier algorithm that dynamically transforms gestural interactions, based on the 3D positions of hand-joint directions, into their corresponding letters, so that live gesturing can be compared and letters displayed in real time. This research is motivated by the need to help the Arabic hearing-impaired communicate with ease using ArSL, and is the first step towards a full communication system for the Arabic hearing-impaired that can improve the interpretation of detected letters using fewer calculations. To evaluate the model, participants were asked to gesture each of the 28 letters of the Arabic alphabet multiple times to create an ArSL letter data set from the depth images retrieved by these devices; participants were later asked to gesture letters to validate the classifier algorithm. The results indicated that using both devices was essential, with 22 of the 28 Arabic letters detected and recognized correctly 100% of the time.
Skin Detection Based on Color Model and Low Level Features Combined with Expl...IJERA Editor
Skin detection is an active research area in computer vision, applicable to face detection, eye detection and similar tasks, which in turn support applications such as driver-fatigue monitoring and surveillance systems. In computer vision applications, the color model and the representation of the human image within it form a major module for detecting skin pixels; the mainstream approach examines individual pixels and selects those belonging to skin across the whole image. In this thesis implementation, we present a novel skin color detection technique that combines an explicit region-based approach with a parametric one, giving better efficiency and performance for skin detection in human images. Color models and image quantization are used to extract regions of the image and represent it in a particular color model such as RGB or HSV; the parametric approach is then applied by selecting low-level skin features to separate skin from non-skin pixels. In the first step, our technique uses a state-of-the-art non-parametric approach, which we call the template-based or explicitly-defined-skin-regions technique. Then low-level features of human skin, such as edges and corners, are extracted, which is known as the parametric method. The experimental results show an improved skin-pixel detection rate from this novel approach, and we conclude by discussing the experimental results to demonstrate the algorithmic improvements.
Evaluation of Euclidean and Manhattan Metrics In Content Based Image Retriev...IJERA Editor
Content-based image retrieval is about generating signatures for the images in a database and comparing the signature of a query image with these stored signatures. A color histogram can serve as an image's signature and be used to compare two images under a given distance metric. The Manhattan distance (L1 norm) and Euclidean distance (L2 norm) are used to determine similarity between a pair of images. In this paper, the Corel database is used to evaluate the performance of the Manhattan and Euclidean distance metrics. The experimental results show that the Manhattan metric achieved a better precision rate than the Euclidean metric. The evaluation uses a content-based image retrieval application built on color moments of the image's Hue, Saturation and Value (HSV) channels, with Gabor descriptors adopted as texture features.
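The two metrics being compared are easy to state concretely. In the sketch below, toy normalised colour histograms stand in for the image signatures; the Corel evaluation and the HSV/Gabor feature extraction are not reproduced:

```python
import math

def manhattan(h1, h2):
    """L1 norm: sum of absolute bin differences."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def euclidean(h1, h2):
    """L2 norm: square root of the sum of squared bin differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

query = [0.50, 0.50, 0.00]       # hypothetical 3-bin histogram of the query image
candidate = [0.25, 0.50, 0.25]   # hypothetical histogram of a database image
print(manhattan(query, candidate))            # 0.5
print(round(euclidean(query, candidate), 4))  # 0.3536
```

L1 weights every bin difference equally, while L2 emphasises large differences in a single bin; that difference in weighting is what makes the two rankings diverge in retrieval experiments like this one.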
A New Skin Color Based Face Detection Algorithm by Combining Three Color Mode...iosrjce
DEVELOPMENT OF AN ALPHABETIC CHARACTER RECOGNITION SYSTEM USING MATLAB FOR BA...Mohammad Liton Hossain
Character recognition, which associates a symbolic identity with the image of a character, is an important area in pattern recognition and image processing. The principal idea is to convert raw images (scanned from documents, typed, photographed, etc.) into editable text such as html, doc, txt or other formats. Very few Bangla character recognition systems exist, and the available ones cannot recognize the whole alphabet set. Motivated by this, this paper demonstrates a MATLAB-based character recognition system for printed Bangla writing; it can also compare the characters of one image file with those of another. The processing steps involve binarization, noise removal, segmentation at various levels, feature extraction and recognition.
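The binarization step in this pipeline maps a grayscale scan to a black/white image with a threshold. The paper does not specify its thresholding method, so the sketch below uses the simplest illustrative choice, a global mean-intensity threshold, on an invented toy "page":

```python
def binarize(gray):
    """Global-threshold binarization: 1 = ink (darker than mean), 0 = paper."""
    flat = [p for row in gray for p in row]
    thresh = sum(flat) / len(flat)
    return [[1 if p < thresh else 0 for p in row] for row in gray]

page = [[250, 240,  30, 245],
        [245,  20,  25, 250],
        [240, 235, 240, 245]]
print(binarize(page))  # [[0, 0, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
```

Real OCR systems usually prefer adaptive thresholds (e.g. Otsu's method) so that uneven scan lighting does not merge or erase strokes.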
Abstract: Sign language is the main communication method used by deaf people, but contrary to common belief there is no universal sign language: every country, or even regional group, uses its own set of signs. Digital systems can enhance sign language communication in both directions: animated avatars can synthesize signs from voice or text recognition, and sign language can be translated into text or sound from image, video and sensor input. Sign language is not a simple spelling of spoken language, so recognizing individual signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and automatic interpretation. This paper proposes an algorithm and method for an application that recognizes various user-defined signs. The palm images of the right and left hand are loaded at runtime; these images are captured and stored in a directory. A technique called template matching is then used to find areas of an image that match (are similar to) a template image (patch), the goal being to detect the highest-matching area. Two primary components are needed: (a) the source image (I), in which we try to find a match, and (b) the template image (T), the patch that is compared against the source. In the proposed system, user-defined patterns achieve 60% accuracy while default patterns achieve 80% accuracy.
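The template-matching step described above can be sketched directly: slide the template over the source image and keep the offset with the best match score. OpenCV's matchTemplate does this (with several comparison methods) efficiently; this pure-Python version uses the sum of squared differences on an invented toy image just to show the idea:

```python
def best_match(source, template):
    """Return (row, col) of the offset with the smallest sum of squared differences."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(source) - th + 1):
        for c in range(len(source[0]) - tw + 1):
            ssd = sum((source[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

source = [[0, 0, 0, 0],
          [0, 9, 8, 0],
          [0, 7, 9, 0],
          [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(best_match(source, template))  # (1, 1): the patch sits at row 1, column 1
```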
Translation of sign language using generic fourier descriptor and nearest nei...ijcisjournal
Sign languages are used all over the world as a primary means of communication by deaf people. Sign language translation is a promising application for vision-based gesture recognition methods, so a tool that can translate sign language directly is needed. This paper aims to create a system that automatically translates static sign language into textual form based on computer vision. The method has three phases: segmentation, feature extraction and recognition. We used the Generic Fourier Descriptor (GFD) as the feature extraction method and K-Nearest Neighbour (KNN) classification to recognize the signs. The system was applied to 120 images stored in a database and to 120 images captured in real time by webcam; we also translated 5 words in video sequences. The experiments revealed that the system recognized the signs with about 86% accuracy for the stored database images and 69% for the test data captured in real time by webcam.
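The KNN recognition stage amounts to a majority vote among the k training vectors nearest to the query under Euclidean distance. In the sketch below, the short feature vectors are invented stand-ins for Generic Fourier Descriptors, and the labels are hypothetical sign names:

```python
import math
from collections import Counter

def knn_classify(query, training, k=3):
    """training: list of (feature_vector, label) pairs; returns the majority label."""
    nearest = sorted(training, key=lambda pair: math.dist(query, pair[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

training = [([0.10, 0.90], "A"), ([0.20, 0.80], "A"), ([0.15, 0.85], "A"),
            ([0.90, 0.10], "B"), ([0.80, 0.20], "B")]
print(knn_classify([0.12, 0.88], training))  # A
```

KNN needs no training phase beyond storing the feature vectors, which suits a system whose sign database is captured once and queried in real time.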
Real time Myanmar Sign Language Recognition System using PCA and SVMijtsrd
Communication is the process of exchanging information, views and expressions between two or more persons, in both verbal and non-verbal manner. Sign language is a visual language used by people with speech and hearing disabilities in their daily conversation. Myanmar Sign Language (MSL) is the language of choice for most deaf people in this country. In this research paper, a Real-time Myanmar Sign Language Recognition System (RMSLRS) is proposed, whose major objective is to translate 30 static sign gestures into Myanmar alphabets. The input video stream is captured by webcam and fed to the computer-vision pipeline. Incoming frames are converted into the YCbCr color space and skin-like regions are detected by a YCbCr threshold technique; the hand region is then segmented, converted to a grayscale image, and processed with morphological operations. To translate the MSL signs into their corresponding alphabets, PCA is used for feature extraction and an SVM for recognition. Experimental results show that the proposed system recognizes static MSL alphabet gestures with 89% accuracy. Myint Tun | Thida Lwin "Real-time Myanmar Sign Language Recognition System using PCA and SVM" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26797.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/26797/real-time-myanmar-sign-language-recognition-system-using-pca-and-svm/myint-tun
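The PCA feature-extraction stage reduces each hand image to its projections onto a few principal directions. The sketch below finds the leading principal component by power iteration on the covariance matrix, a pure-Python stand-in with invented 2-D data; the paper's SVM classification stage is omitted:

```python
import math

def first_component(samples, iters=100):
    """Leading eigenvector of the sample covariance matrix, via power iteration."""
    n, d = len(samples), len(samples[0])
    means = [sum(s[j] for s in samples) / n for j in range(d)]
    centred = [[s[j] - means[j] for j in range(d)] for s in samples]
    cov = [[sum(x[i] * x[j] for x in centred) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        v = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]
    return v

# Strongly correlated 2-D data: the first component lies along the diagonal.
data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9], [5.0, 5.1]]
v = first_component(data)
print([round(c, 2) for c in v])  # ~[0.71, 0.71]
```

Projecting each sample onto a handful of such components is what makes the downstream SVM fast enough for real-time recognition.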
The increasing popularity of animes makes them vulnerable to unwanted usages such as copyright
violations and pornography. We therefore need methods to detect and recognize animation characters,
and skin detection is one of the most important steps toward that goal. Although there are methods to
detect human skin color, they do not work properly for anime characters: anime skin varies greatly
from human skin in color, texture, tone, and lighting, and also varies greatly between characters.
Moreover, many non-skin regions (for example leather, shirts, and hair) can have colors similar to
skin. In this paper, we propose three methods that identify an anime character's skin more
successfully than the Kovac, Swift, Saleh and Osman methods, which were primarily designed for human
skin detection. Our methods are based on RGB values and their comparative relations.
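For reference, the Kovac baseline that the abstract compares against is itself an explicit rule over RGB values and their comparative relations. A minimal sketch of the Kovac uniform-daylight rule:

```python
def kovac_skin(r, g, b):
    """Kovac et al. explicit RGB rule for human skin under uniform daylight:
    bounded channel minima, sufficient spread, and red dominance."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)
```

Rules of this shape are cheap to evaluate per pixel, which is why the paper's anime-specific variants can stay in plain RGB without a color-space conversion.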
COMPARATIVE ANALYSIS OF SKIN COLOR BASED MODELS FOR FACE DETECTIONsipij
Human face detection plays an important role in many applications such as face recognition, human-
computer interfaces, biometrics, energy conservation, video surveillance, and face-image database
management. Selecting an accurate color model is the first requirement of face detection. This paper
studies various color models for face detection, i.e. RGB, YCbCr, HSV and CIELAB, and compares them
based on the detection rate of skin regions. The results show that YCbCr yields the best output
compared to the other color models, even under varying lighting conditions.
Speech Recognition using HMM & GMM Models: A Review on Techniques and Approachesijsrd.com
Many modes of communication are used between humans and computers, and gesture is considered one of the most natural in a virtual reality system. Gesture is a typical method of non-verbal communication for human beings, and we naturally use various gestures to express our intentions in everyday life. Gesture recognizers are supposed to capture and analyze the information transmitted by the hands of a person who communicates in sign language. This is a prerequisite for automatic sign-to-spoken-language translation, which has the potential to support the integration of deaf people into society. This paper presents part of a literature review of ongoing research and findings on different techniques and approaches to gesture recognition using Hidden Markov Models in a vision-based approach.
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATIONijnlc
Sign language is a visual-gestural language used by deaf-dumb people for communication. As normal people are unfamiliar with sign language, hearing-impaired people find it difficult to communicate with them. The communication gap between normal and deaf-dumb people can be bridged by means of Human-Computer Interaction. The objective of this paper is to convert the Dravidian (Tamil) sign language into text. The proposed method recognizes 12 vowels, 18 consonants and a special character, "Aytham", of the Tamil language by a vision-based approach. In this work, static images of the hand signs are obtained with a web/digital camera. The hand region is segmented by a threshold applied to the hue channel of the input image. Then the region of interest (i.e. from wrist to fingers) is segmented using the reversed horizontal projection profile, and the Discrete Cosine Transformed signature is extracted from the boundary of the hand sign. These features are invariant to translation, scale and rotation. A sparse representation classifier is incorporated to recognize the 31 hand signs. The proposed method has attained a maximum recognition accuracy of 71% against a uniform background.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Real-Time Letter Recognition Model for Arabic Sign Language Using Kinect an...INFOGAIN PUBLICATION
The objective of this research is to develop a supervised machine-learning hand-gesture model to recognize Arabic Sign Language (ArSL) using two sensors: Microsoft's Kinect and a Leap Motion Controller. The proposed model relies on supervised learning to predict a hand pose from two depth images, and defines a classifier algorithm that dynamically transforms gestural interactions, based on the 3D positions of hand-joint directions, into their corresponding letters, so that live gesturing can be compared and letters displayed in real time. This research is motivated by the need to increase the opportunity for the Arabic hearing-impaired to communicate with ease using ArSL, and is the first step towards building a full communication system for the Arabic hearing-impaired that can improve the interpretation of detected letters using fewer calculations. To evaluate the model, participants were asked to gesture the 28 letters of the Arabic alphabet multiple times each to create an ArSL letter data set built from the depth images retrieved by these devices. Participants were later asked to gesture letters to validate the classifier algorithm. The results indicated that using both devices was essential: 22 of the 28 Arabic alphabet letters were detected and recognized correctly 100% of the time.
Skin Detection Based on Color Model and Low Level Features Combined with Expl...IJERA Editor
Skin detection is an active research area in computer vision that can be applied to face detection,
eye detection, and similar tasks, which in turn support applications such as driver-fatigue
monitoring and surveillance systems. In computer vision applications, the color model and the
representation of the human image in that model are major elements in detecting skin pixels. The
mainstream approach classifies individual pixels to detect the skin parts of the whole image. In
this work, we present a novel technique for skin color detection that combines an explicit
region-based approach with a parametric approach, giving better efficiency and performance for skin
detection in human images. Color models and an image quantization technique are used to extract
regions of the image and to represent it in a particular color model such as RGB or HSV; the
parametric approach is then applied by selecting low-level skin features to separate the skin and
non-skin pixels. In the first step, our technique uses a state-of-the-art non-parametric approach,
which we call the template-based or explicitly-defined-skin-regions technique. Then low-level
features of human skin, such as edges and corners, are extracted; this is known as the parametric
method. The experimental results show the improvement in the skin-pixel detection rate achieved by
this novel approach, and we discuss the experimental results to demonstrate the algorithmic
improvements.
Evaluation of Euclidean and Manhattan Metrics In Content Based Image Retriev...IJERA Editor
Content-based image retrieval is all about generating signatures of images in a database and comparing the signature of the query image with these stored signatures. A color histogram can be used as the signature of an image and used to compare two images under a chosen distance metric. The Manhattan distance (L1 norm) and the Euclidean distance (L2 norm) are used to determine similarity between a pair of images. In this paper, the Corel database is used to evaluate the performance of the Manhattan and Euclidean distance metrics. The experimental results showed that the Manhattan metric gave a better precision rate than the Euclidean distance metric. The evaluation is made using a content-based image retrieval application built on color moments of the Hue, Saturation and Value (HSV) representation of the image, with Gabor descriptors adopted as texture features.
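The two metrics compared above can be sketched over normalized color-histogram signatures as follows; the bin count is illustrative, not the paper's configuration.

```python
import numpy as np

def color_histogram(channel, bins=8):
    """Normalized histogram of one color channel (values 0-255)."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    return hist / hist.sum()

def l1_distance(h1, h2):
    """Manhattan (L1) distance between two histogram signatures."""
    return float(np.sum(np.abs(h1 - h2)))

def l2_distance(h1, h2):
    """Euclidean (L2) distance between two histogram signatures."""
    return float(np.sqrt(np.sum((h1 - h2) ** 2)))
```

Retrieval then reduces to ranking all stored signatures by their distance to the query's signature, smallest first.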
A New Skin Color Based Face Detection Algorithm by Combining Three Color Mode...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
International Journal of Computational Engineering Research(IJCER) ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
DEVELOPMENT OF AN ALPHABETIC CHARACTER RECOGNITION SYSTEM USING MATLAB FOR BA...Mohammad Liton Hossain
Character recognition, which associates a symbolic identity with the image of a character, is an important area of pattern recognition and image processing. The principal idea is to convert raw images (scanned from documents, typed, photographed, etc.) into editable text in formats such as html, doc or txt. There are very few Bangla character recognition systems, and those available cannot recognize the whole alphabet set. Motivated by this, this paper demonstrates a MATLAB-based character recognition system for printed Bangla writing. It can also compare the characters of one image file with those of another. The processing steps involve binarization, noise removal, segmentation at various levels, feature extraction, and recognition.
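The binarization step can be sketched with Otsu's method, a standard global-thresholding choice; the abstract does not say which thresholding algorithm the system actually uses, so this is an illustrative stand-in.

```python
import numpy as np

def otsu_threshold(gray):
    """Global threshold by Otsu's method: pick the gray level that
    maximizes the between-class variance of foreground vs background."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    mean_all = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total                               # background weight
        mu0 = cum_mean / cum                           # background mean
        mu1 = (mean_all * total - cum_mean) / (total - cum)  # foreground mean
        var = w0 * (1 - w0) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels above the returned threshold are treated as ink (or background, depending on polarity), which is the binary image the later segmentation stages consume.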
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSISijcseit
This paper introduces a new concept for the establishment of a human-robot symbiotic relationship.
The system is based on knowledge-based image processing methodologies for model-based vision and
intelligent task scheduling for an autonomous social robot. The paper aims at automatic translation
of static gestures of alphabets and signs in American Sign Language (ASL) using a neural network
with the backpropagation algorithm. The system deals with images of bare hands to achieve the
recognition task. For each individual sign, 10 sample images have been considered, so in total 300
samples have been processed. For comparison against the training set of signs, the sample images
are converted into feature vectors. Experimental results reveal that the system can recognize
selected ASL signs with an accuracy of 92.00%. Finally, the system has been used to issue ASL
hand-gesture commands to a robot car named "Moto-robo".
Abstract: The main communication method used by deaf people is sign language but, contrary to common belief, there is no universal sign language: every country, or even regional group, uses its own set of signs. The use of sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs based on voice or text recognition, and sign language can be translated into text or sound based on image, video and sensor input. The ultimate goal of this research is automatic interpretation of sign language; since sign language is not a simple spelling of a spoken language, recognizing individual signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and automatic interpretation. This paper proposes an algorithm and method for an application that helps recognize various user-defined signs. The palm images of the right and left hand are loaded at runtime. First, these images are captured and stored in a directory. Then template matching is used to find areas of an image that match (are similar to) a template image (patch). Our goal is to detect the highest-matching area. We need two primary components: A) the source image (I), the image in which we try to find a match; and B) the template image (T), the patch that is compared against the source. In the proposed system, user-defined patterns achieve 60% accuracy while default patterns achieve 80% accuracy.
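Template matching as described, locating the best-matching area of a source image I for a template T, can be sketched with an exhaustive sum-of-squared-differences search. The matching score below is an assumption for illustration; the abstract does not specify which similarity measure the application uses.

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the (row, col) of the
    top-left corner of the best match under the SSD score (lower = better)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            ssd = np.sum((patch.astype(float) - template.astype(float)) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Production systems typically use normalized cross-correlation instead of raw SSD so the score is robust to brightness changes, but the sliding-window structure is the same.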
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe...IJERA Editor
Gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. Gesture is one of the human body languages popularly used in our daily life: a communication system consisting of hand movements and facial expressions, conveyed through actions and sight. This research mainly focuses on gesture extraction and finger segmentation in gesture recognition. In this paper, we have used image analysis technologies to create an application coded in MATLAB, which we use to segment and extract the fingers from one specific gesture. The paper aims at gesture recognition under different natural conditions, such as dark and glare conditions, different distances, and similar-object conditions, and collects the results to calculate the successful extraction rate.
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...Waqas Tariq
This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps of any hand gesture recognition system. The aim of this paper is to develop a robust and efficient hand segmentation algorithm; three algorithms for hand segmentation using different color spaces, with the required morphological processing, were utilized. The hand tracking and segmentation algorithm (HTS) is found to be the most efficient at handling the challenges of vision-based systems, such as skin color detection, complex background removal, and variable lighting conditions. Noise may sometimes remain in the segmented image due to a dynamic background, so an edge traversal algorithm was developed and applied to the segmented hand contour to remove unwanted background noise.
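The morphological processing mentioned above can be sketched with a plain binary erosion, which strips small noise specks from a segmented hand mask; the square structuring element and its size are illustrative choices, not the paper's.

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element.
    A pixel survives only if its whole k x k neighborhood is foreground,
    so isolated specks smaller than the element are removed."""
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dr in range(k):
        for dc in range(k):
            out &= padded[dr:dr + mask.shape[0], dc:dc + mask.shape[1]]
    return out
```

In practice an erosion is usually followed by a dilation (together, a morphological opening) so that the surviving hand region regains its original extent.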
This paper proposes a system for explicit-content image detection based on computer vision algorithms, pattern recognition, and the FTK software's Explicit Image Detection. In the first stage, the HSV color model is applied to the input images to discriminate elements that are not human skin. Then the image is filtered using skin detection, so the output image contains only the skin areas of which it is composed. The results show a comparison between the proposed system and Access Data's Forensic Toolkit 3.1 Explicit Image Detection.
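The first-stage HSV skin filter can be sketched per pixel as below. The hue cap and saturation band are illustrative values in the spirit of commonly used skin ranges, not thresholds taken from the paper.

```python
import colorsys

def is_skin_hsv(r, g, b, h_max=50 / 360, s_range=(0.23, 0.68)):
    """Rough HSV skin test on one RGB pixel (components 0-255).
    Low hue (reddish), mid saturation, and non-dark value."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h <= h_max and s_range[0] <= s <= s_range[1] and v >= 0.35
```

Applying this test to every pixel and keeping only the positives yields the skin-only image that the later pattern-recognition stage consumes.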
A CHINESE CHARACTER RECOGNITION METHOD BASED ON POPULATION MATRIX AND RELATIO...Teady Matius
A Chinese character has many different forms, so the information in its features has many variations, and a relational database is needed to store them. Storing the feature sets in a relational database enables the use of distance measurements on the feature sets of an input Chinese character image in order to recognize it. The feature used in this work is the pixel population matrix; the feature sets are stored and queried using the relational database. This paper discusses how to recognize Chinese character images and Chinese radical images using a relational database and the pixel population matrix.
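One plausible reading of the pixel population matrix, counting foreground pixels in each cell of a coarse grid over the binarized character, can be sketched as follows; the grid size is an illustrative assumption.

```python
import numpy as np

def population_matrix(binary, rows=4, cols=4):
    """Count foreground pixels in each cell of a rows x cols grid laid
    over a binary character image; the resulting small integer matrix
    serves as the character's feature set."""
    h, w = binary.shape
    out = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            cell = binary[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            out[i, j] = int(cell.sum())
    return out
```

Each cell count becomes one column value in a database row, so recognizing an input character reduces to a nearest-neighbor query over stored rows.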
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...MangaiK4
Abstract - Computer vision is a dynamic research field that involves analyzing, modifying, and achieving high-level understanding of images. Its goal is to determine what is happening in front of a camera and to use that understanding to control a computer or a robot, or to provide users with new images that are more informative or esthetically pleasing than the original camera images. It uses many advanced image-representation techniques to obtain computational efficiency. Sparse signal representation techniques have a significant impact in computer vision, where the goal is to obtain a compact, high-fidelity representation of the input signal and to extract meaningful information. Segmentation and optimal parallel processing algorithms are expected to further improve efficiency and processing speed.
A gesture recognition system for the Colombian sign language based on convolu...journalBEEI
Sign languages (or signed languages) are languages that use visual techniques, primarily with the hands, to transmit information and enable communication with deaf-mute people. Such a language is traditionally learned only by people with this limitation, which is why communication between deaf and non-deaf people is difficult. To address this problem, we propose an autonomous model based on convolutional networks to translate the Colombian Sign Language (CSL) into normal Spanish text. The scheme uses characteristic images of each static sign of the language from a base of 24000 images (1000 images per category, with 24 categories) to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover any viewing angle. The performance evaluation showed that the system is capable of recognizing all 24 signs used, with an 88% recognition rate.
EFFECTIVE SEARCH OF COLOR-SPATIAL IMAGE USING SEMANTIC INDEXINGIJCSEA Journal
Much of the data stored in digital libraries consists of pictures or video, which are difficult to search or browse. Automatic methods for searching picture collections have made heavy use of color histograms, because they are robust to wide changes in viewpoint and can be computed trivially. However, color histograms cannot represent spatial information and therefore tend to give poorer results. We have developed several methods that combine color information with spatial layout while retaining the advantages of histograms. One method computes the probability of observing a given color as a function of the distance between two pixels, which we call a color correlogram. We also propose a color-based image descriptor that can be used for image indexing based on high-level semantic concepts. The descriptor is based on Kobayashi's Color Image Scale, a system of 130 basic colors combined into 1180 three-color combinations. The words are represented in a two-dimensional semantic space, grouped by perceived similarity. The modified approach to statistical analysis of pictures involves transformations of ordinary RGB histograms, from which a semantic image descriptor is derived, containing semantic data about both color combinations and single colors in the image.
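A deliberately simplified version of the color autocorrelogram, restricted to one direction and a single pixel distance, can be sketched as follows; the full correlogram averages over all pixel pairs at each distance, so this is only a one-dimensional illustration of the idea.

```python
import numpy as np

def autocorrelogram(img, colors, d=1):
    """For each color c, estimate P(pixel at horizontal distance d also has
    color c | pixel has color c) over a quantized color-index image."""
    probs = {}
    left, right = img[:, :-d], img[:, d:]
    for c in colors:
        pairs = (left == c)
        probs[c] = float(((left == c) & (right == c)).sum()) / max(pairs.sum(), 1)
    return probs
```

Unlike a histogram, which only records how much of each color exists, these conditional probabilities record how spatially clustered each color is, which is exactly the layout information the abstract says histograms lose.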
Hand gesture classification is popularly used in a wide range of applications such as human-machine
interfaces, virtual reality, sign language recognition, and animation. The classification accuracy
for static gestures depends on the technique used to extract the features as well as on the
classifier used in the system. To achieve invariance to illumination against complex backgrounds,
experiments were carried out to generate a feature vector based on skin color detection by fusing
the Fourier descriptors of the image with its geometrical features. Such feature vectors are then
used in a neural network implementing the backpropagation algorithm to classify the hand gestures.
The hand gesture images used in the proposed research were collected from standard databases, viz.
the Sebastien Marcel Database, the Cambridge Hand Gesture Data set and the NUS Hand Posture
dataset. An average classification accuracy of 95.25% was observed, which is on par with results
reported in the literature by earlier researchers.
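The translation-, scale- and rotation-invariant Fourier descriptors used in the feature vector above can be sketched from a closed contour of complex boundary points; the normalization below is one standard recipe, not necessarily the exact variant used in the paper.

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """Invariant shape descriptors from a closed contour of complex
    points x + iy. Dropping the DC term removes translation, dividing
    by |Z1| removes scale, and keeping only magnitudes removes rotation
    and the choice of starting point."""
    z = np.fft.fft(contour)
    mag = np.abs(z)
    return mag[1:n + 1] / mag[1]
```

Because the descriptor vector is short and invariant, it can be fused directly with geometrical features (area, aspect ratio, and the like) into one feature vector for the neural-network classifier.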
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Generating a custom Ruby SDK for your web service or Rails API using Smithy
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 4, No. 4, August 2014
A SIGN LANGUAGE RECOGNITION APPROACH FOR
HUMAN-ROBOT SYMBIOSIS
Afzal Hossian1, Shahrin Chowdhury2, and Asma-ull-Hosna3
1IIT (Institute of Information Technology), University of Dhaka, Bangladesh
2Chalmers University of Technology, Gothenburg, Sweden
3Sookmyung Women’s University, Seoul, South-Korea
ABSTRACT
This paper introduces a new concept for the establishment of human-robot symbiotic relationship. The
system is based on the implementation of knowledge-based image processing methodologies for model
based vision and intelligent task scheduling for an autonomous social robot. This paper aims to develop an
automatic translation of static gestures of alphabets and signs in American Sign Language (ASL), using
neural networks with the backpropagation algorithm. The system deals with images of bare hands to achieve the
recognition task. For each individual sign, 10 sample images have been considered, which means in
total 300 samples have been processed. In order to compare the training set of signs with the
considered sample images, both are converted into feature vectors. Experimental results reveal that the system can
recognize the selected ASL signs with an accuracy of 92.00%. Finally, the system has been implemented by issuing hand
gesture commands in ASL to a robot car named “Moto-robo”.
KEYWORDS
American Sign Language, Histogram equalization, Human-robot symbiosis, Moto-Robo, Skin colour
segmentation.
1. INTRODUCTION
The American Heritage Dictionary defines the word “symbiosis” as follows: “A
close, prolonged association between two or more different organisms of different species that may,
but does not necessarily, benefit each member” [1]. In recent times this biological term has been
used to describe similar relations among a wider collection of entities. In this research, the main
purpose is to establish a symbiotic relationship between robots and human beings for their
coexistence and cooperative work, and to consolidate their relationship for the benefit of each other.
Image understanding concerns the issues of finding interpretations of images. These
interpretations would explain the meaning of the contents of the images. In order to establish a
human-robot symbiotic society, different kinds of objects are being interpreted using the visual,
geometrical and knowledge-based approaches. When the robots are working cooperatively with
human beings, it is necessary to share and exchange their ideas and thoughts. Human hand
gesture recognition is, therefore, attracting tremendous interest in the advancement of human-robot
interfaces, since it provides a natural and efficient way of expressing intent.
Sign language is the fundamental communication method between people who suffer from
hearing impairments. In order for an ordinary person to communicate with deaf people, a translator is
usually needed to translate sign language into natural language and vice versa [2]. As a primary
DOI : 10.5121/ijcseit.2014.4402
component of many sign languages, and in particular of American Sign Language (ASL), hand
gestures and finger-spelling play an important role in deaf education and
communication. Therefore, sign language can be considered a collection of gestures,
movements, postures, and facial expressions corresponding to letters and words in natural
languages.
American Sign Language (ASL) is considered to be a complete language which includes signs
made with the hands and other gestures supported by facial expressions and postures of the body [2]. ASL
follows a different grammar pattern compared to other natural languages. Nearly 6000
gestures of common words are represented using finger spelling in ASL. The 26 individual alphabets
are signified by 26 different gestures made with a single hand. These 26 alphabets of ASL are
presented in Fig. 1.
Charayaphan and Marble [3] investigated a way of using image processing to understand ASL; their
suggested system can correctly recognize 27 out of 31 ASL symbols. Fels and Hinton [4]
developed a system that used a VPL DataGlove Mark II along with a Polhemus tracker as
input devices and applied a neural network method to categorize hand gestures. Starner and
Pentland [5] applied a view-based approach in which two-dimensional features from a single
camera served as the input of HMMs. Using HMMs and a 262-sign
vocabulary, Grobel and Assan [6] achieved 91.3% accuracy for recognizing isolated signs;
they used colour gloves while collecting sample features from video recordings of users.
Bowden and Sarhadi [7] developed a non-linear model of shape and motion for tracking
finger-spelt American Sign Language. This approach is similar to an HMM in that ASL signs are
projected into shape space to estimate the models and also to track them.
Figure 1. Alphabets of American Sign Language (the 26 letters A–Z, of which J and Z are dynamic signs)
This system is capable of visually detecting all static signs of American Sign Language
(ASL): alphabets and numerical digits, and general words such as like, love and not agreed
can also be represented using ASL. The user can interact using his/her bare fingers only;
there is no need for gloves or any other devices. Still, variation in hand shapes and
operational habits leads to recognition difficulties. Therefore, we realized the necessity of
investigating signer-independent sign language recognition to improve the system's robustness
and practicability. Since the system is based on the Affine transformation, our method relies on
presenting the gesture as a feature vector that is translation, scale and rotation invariant.
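The paper does not spell out which invariant features it extracts. For illustration only, one common way to build such a feature vector is from normalized central moments, which are translation and scale invariant; combining them further into Hu moments would add rotation invariance (a minimal numpy sketch; the function name and the choice of moments are ours):

```python
import numpy as np

def central_moments_features(img):
    """Normalized central moments eta_pq of a binary image:
    translation invariant (centred on the centroid) and
    scale invariant (normalized by m00^(1 + (p+q)/2))."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                       # zeroth moment = object area
    cx, cy = xs.mean(), ys.mean()       # centroid
    feats = []
    for p, q in [(2, 0), (0, 2), (1, 1)]:
        mu = np.sum((xs - cx) ** p * (ys - cy) ** q)   # central moment mu_pq
        feats.append(mu / m00 ** (1 + (p + q) / 2))    # normalized eta_pq
    return np.array(feats)

# A 10x10 square and a 20x20 square yield (nearly) the same features.
square = np.zeros((20, 20), dtype=int)
square[5:15, 5:15] = 1
big = np.zeros((40, 40), dtype=int)
big[10:30, 10:30] = 1
f1 = central_moments_features(square)
f2 = central_moments_features(big)
```

The small residual difference between `f1` and `f2` comes from pixel discretization, not from the moments themselves.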
2. SYSTEM DESIGN
The ASL recognition system has two phases: the feature extraction phase and the classification
phase, as shown in Fig. 2.
The image samples are resized and then converted from the RGB to the YIQ colour model. Afterwards,
the images are segmented to detect and digitize the sign image.
In the classification stage, a 3-layer, feed-forward backpropagation neural network is constructed.
It consists of 40×30 neurons in the input layer, 768 neurons (70% of the input) in the hidden layer, and 30
neurons (the total number of ASL image classes for the classification network) in the output layer.

Figure 2. System overview (feature extraction stage: resize the image, convert RGB to YIQ, skin colour segmentation, connected component analysis; classification stage: neural network, gesture-or-not decision, gesture command generation, robot action)
2.1 Features to analyse images
Normalization of sample images, equalization of histogram, image filtering, and skin colour
segmentation are highlighted in this phase.
2.1.1 Normalization of sample images
A low pass filter is used in order to reduce aliasing the nearest neighbour interpolation method, to
find out the values of pixels in the output image where images are resized to 160 by 120.
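The nearest-neighbour resizing step can be sketched as follows (an illustrative numpy sketch; the function name is ours, the low-pass prefilter is omitted, and the paper's actual implementation was written in Visual C++):

```python
import numpy as np

def resize_nearest(img, out_h=120, out_w=160):
    """Resize an image to out_h x out_w using nearest-neighbour interpolation."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[rows][:, cols]

# A synthetic 320x240 greyscale image, resized to 160x120 as in the paper.
sample = (np.arange(240 * 320) % 256).astype(np.uint8).reshape(240, 320)
resized = resize_nearest(sample)
```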
2.1.2. Equalization of Histogram
Histogram equalization is used to improve the lighting conditions and the contrast of the image, since
the contrast of the hand images depends on the lighting conditions. Let the histogram

    h(ri) = pi / n

of a digital hand image consist of the colour bins in the range [0, C − 1], where ri is the i-th colour bin,
pi is the number of pixels in the image with that colour bin, and n is the total number of pixels in
the image.

A scaling constant is calculated using the cumulative sum of bins over the interval [0, 1] of r
[8]. Each pixel value r in the original image is mapped to a level s by the transformation s = T(r),
where 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
The histogram equalization process is illustrated in Fig. 3.
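The mapping s = T(r) via the normalized cumulative histogram can be sketched as follows (illustrative numpy code, not the paper's implementation):

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization: map each level r to s = T(r),
    the cumulative sum of h(r_i) = p_i / n, rescaled to [0, C-1]."""
    hist = np.bincount(img.ravel(), minlength=levels)    # p_i for each bin r_i
    cdf = np.cumsum(hist) / img.size                     # T(r) in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # s for each r
    return lut[img]

# A low-contrast image using only levels 0..63 gets stretched to the full range.
dark = np.tile(np.arange(64, dtype=np.uint8), (8, 1))
equalized = equalize_histogram(dark)
```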
2.1.3. Image Filtering
The Prewitt filter provides the advantage of suppressing noise collected from various
sources without erasing some of the image details, unlike a low-pass filter.
2.1.4. Skin colour segmentation
Skin colour segmentation is based on visual information about human skin colours from the
image sequences in YIQ colour space. The image samples are converted from the RGB to the YIQ
colour model. To identify the specific colour that dominates the image, the amount of skin colour
is checked by searching in YIQ space.
In the following matrix, the luminance channel is represented by Y and the two chrominance channels
by (I, Q); YIQ is produced by a linear transformation of RGB. The YIQ colour model describes the
three attributes luminance, hue and saturation [8]:

    | Y |   | 0.25    0.587   0.49  |   | R |
    | I | = | 0.45   -0.384  -0.320 | · | G |     (1)
    | Q |   | 0.212  -0.639   0.79  |   | B |

Here the red, green and blue component values are denoted by R, G and B, each in the range
[0, 255].
Since human skin colours are clustered in colour space but differ from person to person and
across races, the skin pixels are thresholded empirically in order to detect the hand parts in
an image [9], [10].
The threshold values are given by the following equation:

    (60 ≤ Y ≤ 200) and (20 ≤ I ≤ 50)     (2)
The detection of hand region boundaries by such a YIQ segmentation process is illustrated in Fig.
4.
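The conversion and thresholding steps can be sketched as follows. Because the printed transformation matrix is garbled in places, this sketch uses the standard NTSC RGB-to-YIQ coefficients, which may differ slightly from the paper's values:

```python
import numpy as np

def skin_mask(rgb):
    """Mark pixels satisfying eq. (2): 60 <= Y <= 200 and 20 <= I <= 50."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # Standard NTSC RGB -> YIQ transform (Y stays in [0, 255] for 8-bit input).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    return (y >= 60) & (y <= 200) & (i >= 20) & (i <= 50)

skin_like = np.full((2, 2, 3), (180, 120, 100), dtype=np.uint8)  # reddish tone
blueish = np.full((2, 2, 3), (40, 60, 200), dtype=np.uint8)
mask_skin = skin_mask(skin_like)   # all True for the reddish patch
mask_blue = skin_mask(blueish)     # all False: I is strongly negative
```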
The exact location of the hand is then determined from the largest connected region of
skin-coloured pixels in the image. To detect connected components in the unevenly segmented
image, a region-growing algorithm is applied.
In this experiment, 8-pixel neighbourhood connectivity is employed. In order to remove the false
regions from the isolated blocks, smaller connected regions are assigned the values of the
background pixels.
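A region-growing pass with 8-pixel connectivity that keeps only the largest skin-coloured component can be sketched as follows (a plain-Python illustration; the function name is ours):

```python
from collections import deque

def largest_component(mask):
    """Label 8-connected regions of True pixels by region growing (BFS)
    and keep only the largest one; smaller regions become background."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    best_size, best_label, next_label = 0, 0, 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and labels[sy][sx] == 0:
                next_label += 1
                labels[sy][sx] = next_label
                size, queue = 0, deque([(sy, sx)])
                while queue:                       # grow the region
                    y, x = queue.popleft()
                    size += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):      # 8-neighbourhood
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and labels[ny][nx] == 0):
                                labels[ny][nx] = next_label
                                queue.append((ny, nx))
                if size > best_size:
                    best_size, best_label = size, next_label
    return [[labels[y][x] == best_label for x in range(w)] for y in range(h)]

m = [[1, 1, 0, 0, 0],
     [1, 1, 0, 0, 1],   # a 2x2 block and an isolated false region
     [0, 0, 0, 0, 0]]
hand = largest_component([[bool(v) for v in row] for row in m])
```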
2.2 Classification phase
The classification phase includes neural network training for the recognition of the binary image
patterns of the hand. Neural network results are never perfect, and in practice experimentation
often provides the best solution. Design decisions in this field are difficult; we had to examine
different architectures and decide according to their results.
Figure 3. Histogram equalization
Figure 4. Skin colour segmentation
Therefore, after several experiments, it has been decided that the proposed system should be
based on supervised learning in which the learning rule is provided with the set of examples (the
training set). When the parameters, weights and biases of the network are initialized, the network
is ready for training. The multi-layer perceptron with the backpropagation algorithm, as shown in
Fig. 5, has been employed for this research.
Figure 5. Network architecture of the BPNN (inputs x1 … xn, hidden-layer weights wij, output-layer weights wjk, outputs y1 … yn)
The number of epochs was 10,000 and the goal was 0.0001. The back-propagation training
algorithm is given below.
Step 1: Initialization
The weights and threshold levels of the network are set to random numbers uniformly
distributed over the range

    ( −2.4 / Fi , +2.4 / Fi ),

where Fi is the total number of inputs of neuron i in the network.
Step 2: Activation
The back-propagation neural network is activated by applying the inputs x1(t), x2(t), ..., xn(t)
and the desired outputs yd,1(t), yd,2(t), ..., yd,n(t).

(a) The actual outputs of the neurons in the hidden layer are calculated as:

    yj(t) = sigmoid[ Σi=1..n xi(t) · wij(t) − θj ],     (3)

where n is the number of inputs of neuron j in the hidden layer, and sigmoid is the sigmoid
activation function.

(b) The actual outputs of the neurons in the output layer are calculated as:

    yk(t) = sigmoid[ Σj=1..m yj(t) · wjk(t) − θk ],     (4)

where m is the number of inputs of neuron k in the output layer.
Step 3: Weight training
The following equations are used to propagate the errors associated with the output neurons
backwards and to update the weights in the back-propagation network:

    wjk(p + 1) = wjk(p) + Δwjk(p),     (5)
    Δwjk(p) = α · yj(p) · δk(p),     (6)
    wij(p + 1) = wij(p) + Δwij(p),     (7)
    Δwij(p) = α · xi(p) · δj(p),     (8)

where the error is

    ek(p) = yd,k(p) − yk(p),     (9)

the error gradient for neuron k in the output layer is

    δk(p) = yk(p) [1 − yk(p)] ek(p),     (10)

and for neuron j in the hidden layer

    δj(p) = yj(p) [1 − yj(p)] Σk=1..l δk(p) wjk(p).     (11)
Step 4 Iteration
Increase iteration t by one, go back to Step 2 and repeat the process until the error value reduces
to the desired level. A complete flow chart of our proposed network is shown in Fig. 6.
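The four steps above can be sketched end-to-end in numpy. This is an illustrative miniature: the XOR data set, the 2-4-1 layer sizes and the learning rate α = 0.5 are our stand-ins for the paper's 1200-768-30 network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out, alpha = 2, 4, 1, 0.5

# Step 1: weights and thresholds uniform in (-2.4/Fi, +2.4/Fi).
w_ij = rng.uniform(-2.4 / n_in, 2.4 / n_in, (n_in, n_hid))
w_jk = rng.uniform(-2.4 / n_hid, 2.4 / n_hid, (n_hid, n_out))
th_j = rng.uniform(-2.4 / n_in, 2.4 / n_in, n_hid)
th_k = rng.uniform(-2.4 / n_hid, 2.4 / n_hid, n_out)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def forward(x):
    y_j = sigmoid(x @ w_ij - th_j)            # eq. (3)
    return y_j, sigmoid(y_j @ w_jk - th_k)    # eq. (4)

error_before = np.mean((Y - np.array([forward(x)[1] for x in X])) ** 2)

for epoch in range(5000):                     # Step 4: iterate
    for x, y_d in zip(X, Y):
        y_j, y_k = forward(x)                 # Step 2: activation
        e_k = y_d - y_k                       # eq. (9)
        d_k = y_k * (1 - y_k) * e_k           # eq. (10)
        d_j = y_j * (1 - y_j) * (w_jk @ d_k)  # eq. (11)
        w_jk += alpha * np.outer(y_j, d_k)    # eqs. (5)-(6), Step 3
        th_k -= alpha * d_k                   # threshold update (input fixed at -1)
        w_ij += alpha * np.outer(x, d_j)      # eqs. (7)-(8)
        th_j -= alpha * d_j

error_after = np.mean((Y - np.array([forward(x)[1] for x in X])) ** 2)
```

The thresholds θ are updated like weights attached to a constant −1 input, which is the usual convention for this formulation.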
3. EXPERIMENTAL RESULTS AND PERFORMANCE
The performance and effectiveness of the system have been evaluated using different hand
expressions and by issuing commands to a robot named “Moto-Robo”. The computer used
for this experiment was a Pentium IV 1.2 GHz PC with 512 MB RAM. Visual C++ was used
as the programming language to implement the algorithm.
3.1. Interfacing the robot
The communication link between the computer and the robot has been established by means of the
parallel communication port. The parallel port is a 25-pin D-shaped female (DB25) connector
on the back of the computer. The pin configuration of the DB25 connector is shown in
Fig. 7. The lines in the DB25 connector are divided into three groups: Status lines, Data lines and
Control lines. As the names suggest, data is transferred over the Data lines, the Control lines are used to
control the peripheral and, of course, the peripheral returns status signals back to the computer through
the Status lines.
Programmers access the parallel port by means of library functions.
Figure 6. Flow chart of the BPNN (initialization, activation, weight training and iteration, repeated until the termination condition is met)

Figure 7. Pin configuration of the DB25 port (Status, Data and Control registers)
Visual C++ provides two functions to access IO mapped peripherals, ‘inp’ for reading data from
the port and ‘outp’ for writing data into the port.
3.2 Analysis of Experiments
First, the classification ability of the hand gesture recognition system was tested on both the
training and testing data sets; the quantity of inputs influences the neural network. A few of the
signs resemble one another, which causes some problems in the performance.
In this experiment, binary images are used by the recognition system for both the training and
testing data sets. For each sign, 10 samples were taken from 10 different volunteers; 5 out of the
10 samples were used for training, while the remaining 5 were used for testing. Various
orientations and distances were considered while collecting the sample images with a
digital camera. This way, we were able to obtain a data set with cases that had different sizes and
orientations, so we could examine the capabilities of our feature extraction scheme.
Performance evaluation of the system depends on its capability of correctly categorizing samples
according to their classes. The ratio of correctly categorized samples to the total number of
samples is denoted as the recognition rate, i.e.
    Recognition rate = (Number of correctly categorized signs / Total number of signs) × 100%     (12)
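Eq. (12) in code, using the paper's training-set result as a worked example (the count of 276 correct out of 300 is implied by the reported 92.0% rate, not stated explicitly):

```python
def recognition_rate(correct, total):
    """Eq. (12): percentage of correctly categorized signs."""
    return correct * 100.0 / total

rate = recognition_rate(276, 300)  # 92.0, matching the reported training accuracy
```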
In the backpropagation learning algorithm, the modification of the weights over a number of
epochs results in a continuous decrease of the error. The training curve is shown in Fig. 8,
where the error falls from about 1.4 towards zero over roughly 2500 iterations.

Figure 8. Error versus iteration for training the BPNN

Figure 9. Program interface for robot control
Table 1 Command to control Moto-Robo
3.3 Implementation
A remote control car (Moto-Robo), connected to the PC through the parallel port, has been
controlled by means of commands directed by the hand gestures of the user. The car has several
movements, such as Forward, Backward, Turn Right, Turn Left, Turn Light On and Turn Light Off,
corresponding to the sign language letters F, B, R, L, W and E, respectively. Some of the ASL signs
employed for controlling the robot are listed in Table 1.
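The letter-to-command correspondence described above can be captured in a small lookup table (a hypothetical sketch; the action names are ours, not taken from Table 1):

```python
# Hypothetical mapping from recognized ASL letters to Moto-Robo actions,
# following the F/B/R/L/W/E correspondence stated in the text.
COMMANDS = {
    "F": "forward",
    "B": "backward",
    "R": "turn_right",
    "L": "turn_left",
    "W": "light_on",
    "E": "light_off",
}

def command_for(sign):
    """Return the robot action for a recognized sign, or None if unmapped."""
    return COMMANDS.get(sign)

action = command_for("F")  # "forward"
```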
The system was tested with 300 untrained images (ten images for each sign), previously
unseen, in the testing phase. In order to present the results in a suitable way, we created a GUI.
An example is shown in Fig. 9, where one of the actions of the robot resulting
from the hand gesture recognition process is shown.
4. CONCLUSION
This research presents the development of a system for the recognition of American Sign
Language. On top of the recognition of different hand gestures, a real-time robot interaction
system has been implemented. Each image pattern in the training set was converted into a set of
input data to accomplish the task. Images for different signs were captured by a digital camera,
without the need for any gloves. Deviations in position, direction, size and gesture proved to be
easily accommodated by the developed system. This is because the feature extraction method used
the Affine transformation to make the system translation, scale and rotation invariant. The
recognition rates for the training data and testing data are 92.0% and 80% respectively.

The work presented in this research deals with static signs of ASL only; adaptation to dynamic
signs would be an interesting direction for future work. The existing system has the limitation
that it only deals with images that have a non-skin-colour background; overcoming this limitation
would make the system more applicable in real life. Besides hand images, other types of input,
for example eye tracking, facial expressions and head gestures, could also be considered as sample
images for the network to analyse. The goal is to create a symbiotic environment that gives
robots the opportunity to exchange ideas with human beings, which will bring benefits
to both and have a positive impact on society.
REFERENCES
[1] M. A. Bhuiyan and H. Ueno, (2003) “Image Understanding for Human-robot Symbiosis”, 6th ICCIT,
Dhaka, Bangladesh, Vol. 2, pp. 782-787.
[2] International Bibliography of Sign Language, (2005), [online]. Available:
http://www.signlang.unihamburg.de/bibweb/FJournals.html
[3] C. Charayaphan and A. Marble, (1992) “Image processing system for interpreting motion in
American sign language”, Journal of Biomedical Engineering, Vol. 14, pp. 419-425.
[4] S. Fels and G. Hinton, (1993) “GloveTalk: a neural network interface between a DataGlove and a
speech synthesizer”, IEEE Transactions on Neural Networks, Vol. 4, pp. 2-8.
[5] T. Starner and A. Pentland, (1995) “Visual recognition of American sign language using hidden
Markov models”, International Workshop on Automatic Face and Gesture Recognition, Zurich,
Switzerland, pp. 189-194.
[6] K. Grobel and M. Assan, (1996) “Isolated sign language recognition using hidden Markov models”,
Proceedings of the International Conference on Systems, Man and Cybernetics, pp. 162-167.
[7] R. Bowden and M. Sarhadi, (2002) “A non-linear model of shape and motion for tracking finger spelt
American sign language”, Image and Vision Computing, Vol. 9-10, pp. 597-607.
[8] R. C. Gonzalez and R. E. Woods, (2003) “Digital Image Processing”, Pearson Education Inc., 2nd
Edition, Delhi.
[9] M. A. Bhuiyan, V. Ampornaramveth, S. Muto, and H. Ueno, (2003) “Face Detection and Facial
Feature Localization for Human-machine Interface”, NII Journal, Vol. 5, No. 1, pp. 26-39.
[10] M. A. Bhuiyan, V. Ampornaramveth, S. Muto, and H. Ueno, (2004) “On Tracking of Eye for Human-
Robot Interface”, International Journal of Robotics and Automation, Vol. 19, No. 1, pp. 42-54.