We propose an algorithm for automatically recognizing a fixed set of gestures from hand movements, to assist deaf and hard-of-hearing people. Hand gesture recognition is a challenging problem in its general form. We therefore consider a fixed set of manual commands in a specific environment and develop an effective procedure for gesture recognition. Our approach comprises steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms corresponds to detection, tracking and recognition. The algorithm is invariant to rotation, translation and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
VISION BASED HAND GESTURE RECOGNITION USING FOURIER DESCRIPTOR FOR INDIAN SIG... (sipij)
Indian Sign Language (ISL) interpretation is a major ongoing research effort to aid deaf people in India. Given the limitations of glove/sensor-based approaches, a vision-based approach was adopted for the ISL interpretation system. Among human modalities, the hand is the primary modality in any sign language interpretation system, so hand gestures were used for recognition of manual alphabets and numbers. ISL consists of manual alphabets and numbers as well as a large vocabulary with grammar. This paper presents a methodology for recognizing static ISL manual alphabets, numbers and static symbols. The ISL alphabet contains both single-handed and two-handed signs. The Fourier descriptor was chosen as the feature extraction method because of its invariance to rotation, scale and translation. A true positive rate of 94.15% was achieved using a nearest neighbour classifier with Euclidean distance, on sample data covering different illumination conditions, different skin colours and varying camera-to-signer distances.
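The invariance properties the abstract relies on can be made concrete in a short sketch. Treating the hand contour as a sequence of complex numbers, the Fourier coefficient magnitudes discard rotation and starting point, zeroing the DC term removes translation, and dividing by the first harmonic removes scale. The function names and the choice of eight coefficients below are illustrative, not taken from the paper.

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=8):
    """Rotation/scale/translation-invariant descriptor of a 2-D contour.

    contour: (N, 2) array of ordered boundary points. The number of
    coefficients kept (n_coeffs) is a hypothetical choice.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary points as x + iy
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                          # drop DC term: removes translation
    mags = np.abs(coeffs)                    # magnitudes: remove rotation and
                                             # starting-point dependence
    mags = mags / mags[1]                    # normalise by |c1|: removes scale
    return mags[1:1 + n_coeffs]

def nearest_neighbour(query, templates):
    """1-NN with Euclidean distance, matching the paper's classifier."""
    dists = {label: np.linalg.norm(query - f) for label, f in templates.items()}
    return min(dists, key=dists.get)
```

For example, an ellipse contour and a rotated, scaled, translated copy of it yield the same descriptor, so the 1-NN step can compare hands regardless of pose.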
Video Audio Interface for recognizing gestures of Indian sign Language (CSCJournals)
We propose a system to automatically recognize sign language gestures from a video stream of the signer. The developed system converts words and sentences of Indian Sign Language into English voice and text. We combine image processing and artificial intelligence techniques to achieve this objective: frame-differencing-based tracking, edge detection, wavelet transforms and image fusion are used to segment shapes in the videos. The system also uses elliptical Fourier descriptors for shape feature extraction and principal component analysis for feature set optimization and reduction. The database of extracted features is compared with the signer's input video using a trained fuzzy inference system. The proposed system converts gestures into text and voice messages with 91 percent accuracy. Training and testing used around 80 gestures from Indian Sign Language (INSL) performed by 10 different signers. The entire system was developed in a user-friendly environment through a graphical user interface in MATLAB; it is robust and can be trained for new gestures through the GUI.
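The feature-reduction step the abstract mentions, principal component analysis, can be sketched in a few lines via the SVD of the centred feature matrix. This is a generic PCA sketch, not the paper's implementation; the function name and the use of SVD rather than an eigendecomposition are my choices.

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors onto their top-k principal components.

    features: (n_samples, n_features) matrix, e.g. elliptical Fourier
    descriptors stacked row-wise. Returns an (n_samples, k) matrix.
    """
    centred = features - features.mean(axis=0)
    # SVD of the centred data: rows of vt are the principal directions,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T
```

When the data are nearly collinear, a single component retains almost all the variance, which is exactly why PCA is effective for shrinking the descriptor database before the fuzzy matching stage.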
Novel Approach to Use HU Moments with Image Processing Techniques for Real Ti... (CSCJournals)
Sign language is the fundamental communication method for people with speech and hearing impairments, yet most of the rest of the world cannot understand it. The "Sign Language Communicator" (SLC) is designed to bridge the language barrier between sign language users and everyone else. The main objective of this research is to provide a low-cost, affordable method of sign language interpretation; the system is also useful to learners, who can use it to practise sign language. During the research, available human-computer interaction techniques for posture recognition were tested and evaluated, and a series of image processing techniques with Hu-moment classification was identified as the best approach. To improve accuracy, a new height-to-width-ratio filter was implemented alongside the Hu moments. The system recognizes a selected set of sign language signs with 84% accuracy, without a controlled background and with only small lighting adjustments.
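The two ingredients above, Hu-style invariant moments and the height-to-width-ratio filter, can be sketched directly from their definitions. The sketch below computes only the first two Hu invariants from normalised central moments of a binary hand mask; the function names are mine, and a real system would typically use a library routine such as OpenCV's `cv2.HuMoments`.

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariant moments of a binary image (sketch only)."""
    ys, xs = np.nonzero(img)
    m00 = float(len(xs))                       # area of the foreground
    xbar, ybar = xs.mean(), ys.mean()
    def mu(p, q):                              # central moments: translation-invariant
        return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()
    def eta(p, q):                             # normalised: adds scale invariance
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

def aspect_ratio(img):
    """Height-to-width ratio of the bounding box, the paper's extra filter."""
    ys, xs = np.nonzero(img)
    return (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)
```

Because the moments are centred, translating the hand in the frame leaves `hu_moments` unchanged, while the aspect-ratio filter cheaply rejects candidate signs whose overall silhouette proportions do not match.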
Using The Hausdorff Algorithm to Enhance Kinect's Recognition of Arabic Sign ... (CSCJournals)
The objective of this research is to utilize mathematical algorithms to overcome the limitations of the depth sensor Kinect in detecting the movement and details of fingers and joints used for the Arabic alphabet sign language (ArSL). This research proposes a model to accurately recognize and interpret a specific ArSL alphabet using Microsoft's Kinect SDK Version 2 and a supervised machine learning classifier algorithm (Hausdorff distance) with the Candescent Library. The dataset of the model, considered prior knowledge for the algorithm, was collected by allowing volunteers to gesture certain letters of the Arabic alphabet. To accurately classify the gestured letters, the algorithm matches the letters by first filtering each sign based on the number of fingers and their visibility, then by calculating the Euclidean distance between contour points of a gestured sign and a stored sign, while comparing the results with an appropriate threshold.
To evaluate the classifier, participants gestured different letters, which were compared against the stored gestures by Euclidean distance. The class name closest to the gestured sign appeared directly in the display window, and the classifier's results were then analyzed mathematically.
When comparing unknown incoming gestures signed with the stored gestures from the dataset collected, the model matched the gesture with the correct letter with high accuracy.
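The matching step described above, comparing contour points of a gestured sign with a stored sign and rejecting matches beyond a threshold, can be sketched with a plain Hausdorff distance. The threshold value and function names here are illustrative; the paper's pipeline additionally pre-filters by finger count and visibility before this distance comparison.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two contour point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def classify(sign, templates, max_dist=5.0):
    """Match a gestured sign against stored templates, rejecting poor matches.

    templates: {letter: (N, 2) contour array}. Returns the best letter, or
    None when every distance exceeds the (illustrative) threshold.
    """
    scored = {letter: hausdorff(sign, t) for letter, t in templates.items()}
    best = min(scored, key=scored.get)
    return best if scored[best] <= max_dist else None
```

Returning None for out-of-threshold probes is what lets the system ignore hand shapes that correspond to no stored letter, rather than forcing a wrong label.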
Speech Recognition using HMM & GMM Models: A Review on Techniques and Approaches (ijsrd.com)
Many modes of communication are used between humans and computers, and gesturing is considered one of the most natural in a virtual reality system. Gesture is a typical method of non-verbal communication for human beings: we naturally use various gestures to express our intentions in everyday life. Gesture recognizers are supposed to capture and analyze the information transmitted by the hands of a person who communicates in sign language. This is a prerequisite for automatic sign-to-spoken-language translation, which has the potential to support the integration of deaf people into society. This paper presents part of a literature review on ongoing research and findings on different techniques and approaches in gesture recognition using Hidden Markov Models for the vision-based approach.
Movement Tracking in Real-time Hand Gesture Recognition (Pranav Kulkarni)
To translate a gesture performed by the user in a video sequence into meaningful symbols or commands, feature extraction, which measures the detected hand positions and their movement track, is the first and most crucial step in such systems. We propose an efficient approach based on inter-frame difference (IDF) for hand movement tracking, which is shown to be more robust in accuracy than skin-colour-based approaches. Computational efficiency is another attractive property: our approach greatly improves the processing frame rate, fulfilling the demands of a real-time hand gesture recognition system.
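The inter-frame-difference idea is simple enough to sketch: threshold the absolute difference of consecutive grayscale frames to get a motion mask, then take the mask's centroid as the tracked hand position. The threshold value and function names below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh=25):
    """Binary motion mask from the absolute inter-frame difference.

    prev, curr: uint8 grayscale frames; thresh is an illustrative value.
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

def hand_centroid(mask):
    """Centroid (x, y) of the moving region, used as the hand position."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None          # no motion detected in this frame pair
    return float(xs.mean()), float(ys.mean())
```

Unlike skin-colour segmentation, this needs no colour model and costs only one subtraction and comparison per pixel, which is the source of the frame-rate advantage the abstract claims.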
A Framework For Dynamic Hand Gesture Recognition Using Key Frames Extraction (NEERAJ BAGHEL)
Abstract—Hand gesture recognition is one of the natural ways of human-computer interaction (HCI) and has a wide range of technological as well as social applications. A dynamic hand gesture can be characterized by its shape, position and movement. This paper presents a user-independent framework for dynamic hand gesture recognition in which a novel algorithm for the extraction of key frames is proposed. The algorithm is based on the change in hand shape and position: it finds the most important and distinguishing frames in the video of the hand gesture, using certain parameters and a dynamic threshold. For classification, a multiclass support vector machine (MSVM) is used. Experiments on videos of Indian Sign Language hand gestures show the effectiveness of the proposed system for various dynamic gestures. The key frame extraction algorithm speeds up the system by selecting essential frames and thereby eliminating extra computation on redundant frames.
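The key-frame idea, keeping a frame only when the hand's shape or position has changed enough since the last kept frame, can be sketched in a few lines. The paper derives its threshold dynamically; the fixed threshold and the use of a plain Euclidean distance over per-frame feature vectors here are simplifying assumptions.

```python
def extract_key_frames(frames, threshold):
    """Indices of frames that differ enough from the previous key frame.

    frames: list of per-frame feature tuples (e.g. hand shape + position);
    threshold: fixed here, while the paper computes it dynamically.
    """
    keys = [0]                                   # the first frame is always kept
    for i in range(1, len(frames)):
        change = sum((a - b) ** 2
                     for a, b in zip(frames[i], frames[keys[-1]])) ** 0.5
        if change >= threshold:                  # enough motion: keep this frame
            keys.append(i)
    return keys
```

On a sequence where the hand pauses between movements, the near-duplicate frames inside each pause are dropped, which is exactly where the claimed speed-up comes from.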
Hand gesture recognition has received great attention in recent years because of its manifold applications and its ability to let people interact with machines efficiently during human-computer interaction. This paper presents a survey on hand gesture recognition. Hand gestures provide a distinct, complementary modality to speech for expressing information; among body parts, the hand allows the freest expression in non-verbal communication. Hand gesture detection therefore has great significance in designing a competent human-computer interaction method. The paper covers different hand gesture approaches, technologies and applications.
Design and Development of a 2D-Convolution CNN model for Recognition of Handw... (CSCJournals)
Owing to the innumerable appearances caused by different writers, writing styles, technical environments and noise, handwritten character recognition has always been one of the most challenging tasks in pattern recognition. The emergence of deep learning has provided a new direction that breaks the limits of decades-old traditional methods. Many scripts are used by millions of people worldwide, and handwritten character recognition studies of several of these scripts appear in the literature, using different hand-crafted feature sets. Feature-based approaches derive important properties from the test patterns and employ them in a more sophisticated classification model. Feature extraction using Zernike moments and polar harmonic transformation techniques was also performed, achieving moderate classification accuracy. The problems faced with these techniques led us to a CNN-based recognition approach, which learns the feature vector from the training character image samples without any hand-crafting. This paper presents a deep learning paradigm using a Convolution Neural Network (CNN) implemented for handwritten Gurumukhi and Devanagari character recognition (HGDCR). In the present experiment, a 34-layer CNN was trained on a GPU (Graphics Processing Unit) machine for a 35-class self-generated handwritten Gurumukhi dataset and a 60-class (50 alphabets and 10 digits) handwritten Devanagari dataset. The experiment yielded an average recognition accuracy of more than 92% on the handwritten Gurumukhi dataset and 97.25% on the handwritten Devanagari dataset. Training and classification through our network design also ran about 10 times faster than on a moderately fast CPU.
The advantage of this framework is demonstrated by the experimental results.
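The core operation of the 2D-convolution CNN described above can be sketched as a single valid convolution layer followed by a ReLU nonlinearity. This is a minimal illustration of the layer type, not the paper's 34-layer network; note that, as in most deep learning frameworks, the sliding-window sum below is technically cross-correlation, conventionally called "convolution".

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation), a CNN's core operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the image patch under it.
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Standard rectified-linear activation applied after each conv layer."""
    return np.maximum(x, 0.0)
```

Stacking many such layers with learned kernels is what lets the network discover its own feature vectors from raw character images, replacing the hand-crafted Zernike or polar harmonic features.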
HABIT: Handwritten Analysis based Individualistic Traits Prediction (CSCJournals)
Handwriting Analysis is a scientific method of identifying, evaluating and understanding an individual’s personality based on handwriting. Each personality trait of a person is represented by a neurological brain pattern. Each of these neurological brain patterns produces a unique neuromuscular movement that is the same for every person who has that particular personality trait. When writing, these tiny movements occur unconsciously. Strokes, patterns and pressure applied while writing can reveal specific personality traits. The true personality including emotional outlay, fears, honesty and defenses are revealed. Professional handwriting examiners called graphologists analyze handwriting samples for this purpose. However, accuracy of the analysis depends on how skilled the analyst is. The analyst is also prone to fatigue. High cost incurred is yet another deterrent. This paper aims at implementing an off-line, writer-independent handwriting analysis system “HABIT” (Handwriting Analysis Based Individualistic Traits Prediction) which acts as a tool to predict the personality traits of a writer automatically from features extracted from a scanned image of the writer’s handwriting sample given as input. The features include slant of baseline, pen pressure, slant of letters and size of writing. The implementation uses Java and Eclipse-Indigo as tools.
Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user's perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances in the past ten years, gesture recognition has been lagging behind. Sign languages (SL) are the most accomplished forms of gestural communication, so their automatic analysis is a real challenge, one closely tied to their lexical and syntactic levels of organization. Statements dealing with sign language occupy a significant place in the Automatic Natural Language Processing (ANLP) domain. In this work, we deal with sign language recognition, in particular French Sign Language (FSL). FSL has its own specificities, such as the simultaneity of several parameters, the important role of facial expression and movement, and the use of space for proper utterance organization. Unlike speech, FSL events occur both sequentially and simultaneously, so the computational processing of FSL is more complex than that of spoken languages. We present a novel approach based on HMMs to reduce the recognition complexity.
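The HMM machinery underlying such recognizers can be sketched with the standard forward algorithm: each gesture class gets its own HMM, and recognition picks the model under which the observed feature sequence is most likely. The toy probability tables and function name below are illustrative; the paper's FSL-specific modelling (simultaneous parameters) is not reproduced here.

```python
def forward_likelihood(obs, start, trans, emit):
    """P(observation sequence | one gesture's HMM), via the forward algorithm.

    obs:   list of discrete observation symbols (indices).
    start: start[i]    = P(first state is i).
    trans: trans[i][j] = P(next state j | current state i).
    emit:  emit[i][o]  = P(observing symbol o | state i).
    """
    n = len(start)
    # alpha[i] = P(observations so far AND current state is i).
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)
```

Scoring a sequence against every gesture's HMM and taking the arg-max is what makes HMMs suit sequential sign events; handling FSL's simultaneous parameters is the extra complexity the paper's approach addresses.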
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam (ijsrd.com)
This paper presents a comparative study of existing hand gesture recognition systems and proposes a new approach to gesture recognition: an easy, cheaper alternative to input devices such as the mouse, using static and dynamic hand gestures for interactive computer applications. Despite growing attention to such systems, certain limitations remain in the literature. Most applications impose constraints such as particular lighting conditions, a specific camera, a multi-coloured glove worn by the user, or large amounts of training data. The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). The proposed interface is simple enough to run with an ordinary webcam and requires little training.
Gesture recognition using artificial neural network, a technology for identify... (NidhinRaj Saikripa)
This paper presents a technology for identifying body motions, commonly originating from the hand and face, using an artificial neural network. This includes identifying sign language, making the technology useful for speech-impaired individuals.
Basic Gesture Based Communication for Deaf and Dumb is an application that converts an input gesture into the corresponding text. People with speech or hearing disabilities face many communication problems when interacting with others, and it is not easy for people without such disabilities to understand what the other person wants to say through the gestures he or she shows. To overcome this barrier, we attempted to create an application that detects these gestures and provides textual output, enabling a smoother process of communication. There is a lot of research being done on gesture recognition; this project will help its users, deaf and hard-of-hearing people, communicate with others without barriers caused by their disability.
A mediator is normally required for communication between a deaf person and a second person, and the mediator must know the sign language the deaf person uses; this is not always possible, since there are multiple sign languages for multiple spoken languages. It is also difficult for a deaf person to understand what the second person is saying: the deaf person must track the second person's lip movements, but lip reading does not give good efficiency or accuracy, since facial expressions and speech might not match. To overcome these problems, we propose an Android application for recognizing sign language through hand gestures, with a facility for users to define and upload their own sign language into the system. The features of this system are the real-time conversion of gestures to text and speech. For two-way communication between the deaf person and the second person, the second person's speech is converted into text. The processing steps include gesture extraction, gesture matching, and conversion of text to speech and vice versa. The system is useful not only for the deaf community but also for people who migrate to different regions and do not know the local language.
A Framework For Dynamic Hand Gesture Recognition Using Key Frames ExtractionNEERAJ BAGHEL
Abstract—Hand Gesture Recognition is one of the natural
ways of human computer interaction (HCI) which has wide
range of technological as well as social applications. A dynamic
hand gesture can be characterized by its shape, position and
movement. This paper presents a user independent framework
for dynamic hand gesture recognition in which a novel algorithm
for extraction of key frames is proposed. This algorithm is based
on the change in hand shape and position, to find out the most
important and distinguishing frames from the video of the hand
gesture, using certain parameters and dynamic threshold. For
classification, Multiclass Support Vector Machine (MSVM) is
used. Experiments using the videos of hand gestures of Indian
Sign Language show the effectiveness of the proposed system for
various dynamic hand gestures. The use of key frame extraction
algorithm speeds up the system by selecting essential frames and
therefore eliminating extra computation on redundant frames.
Hand gesture recognition method arriving great consideration in latest few years since of its manifoldness application and facility to interrelate by machine efficiently during human computer interaction. This paper mainly focuses on the survey on Hand Gesture Recognition. The hand gestures give a divide complementary modality to speech for express ones data. Hand gesture is the method of non-verbal communiqué for human beings for its freer expressions much more other than the body parts. Hand gesture detection has greater significance in scheme a competent human computer interaction method. This paper emphasis on different hand gesture approaches, technologies and applications.
Computer Science
Active and Programmable Networks
Active safety systems
Ad Hoc & Sensor Network
Ad hoc networks for pervasive communications
Adaptive, autonomic and context-aware computing
Advance Computing technology and their application
Advanced Computing Architectures and New Programming Models
Advanced control and measurement
Aeronautical Engineering,
Agent-based middleware
Alert applications
Automotive, marine and aero-space control and all other control applications
Autonomic and self-managing middleware
Autonomous vehicle
Biochemistry
Bioinformatics
BioTechnology(Chemistry, Mathematics, Statistics, Geology)
Broadband and intelligent networks
Broadband wireless technologies
CAD/CAM/CAT/CIM
Call admission and flow/congestion control
Capacity planning and dimensioning
Changing Access to Patient Information
Channel capacity modelling and analysis
Civil Engineering,
Cloud Computing and Applications
Collaborative applications
Communication application
Communication architectures for pervasive computing
Communication systems
Computational intelligence
Computer and microprocessor-based control
Computer Architecture and Embedded Systems
Computer Business
Computer Sciences and Applications
Computer Vision
Computer-based information systems in health care
Computing Ethics
Computing Practices & Applications
Congestion and/or Flow Control
Content Distribution
Context-awareness and middleware
Creativity in Internet management and retailing
Cross-layer design and Physical layer based issue
Cryptography
Data Base Management
Data fusion
Data Mining
Data retrieval
Data Storage Management
Decision analysis methods
Decision making
Digital Economy and Digital Divide
Digital signal processing theory
Distributed Sensor Networks
Drives automation
Drug Design,
Drug Development
DSP implementation
E-Business
E-Commerce
E-Government
Electronic transceiver device for Retail Marketing Industries
Electronics Engineering,
Embeded Computer System
Emerging advances in business and its applications
Emerging signal processing areas
Enabling technologies for pervasive systems
Energy-efficient and green pervasive computing
Environmental Engineering,
Estimation and identification techniques
Evaluation techniques for middleware solutions
Event-based, publish/subscribe, and message-oriented middleware
Evolutionary computing and intelligent systems
Expert approaches
Facilities planning and management
Flexible manufacturing systems
Formal methods and tools for designing
Fuzzy algorithms
Fuzzy logics
GPS and location-based app
Design and Development of a 2D-Convolution CNN model for Recognition of Handw...CSCJournals
Owing to the innumerable appearances due to different writers, their writing styles, technical environment differences and noise, the handwritten character recognition has always been one of the most challenging task in pattern recognition. The emergence of deep learning has provided a new direction to break the limits of decades old traditional methods. There exist many scripts in the world which are being used by millions of people. Handwritten character recognition studies of several of these scripts are found in the literature. Different hand-crafted feature sets have been used in these recognition studies. Feature based approaches derive important properties from the test patterns and employ them in a more sophisticated classification model. Feature extraction using Zernike moment and Polar harmonic transformation techniques was also performed and a moderate classification accuracy was also achieved. The problems faced while using these techniques led us to use CNN based recognition approach which is capable of learning the feature vector from the training character image samples in an unsupervised manner in the sense that no hand-crafting is employed to determine the feature vector. This paper presents a deep learning paradigm using a Convolution Neural Network (CNN) which is implemented for handwritten Gurumukhi and devanagari character recognition (HGDCR). In the present experiment, the training of a 34-layer CNN for a 35 class self-generated handwritten Gurumukhi and 60 class (50 alphabet and 10 digits) handwritten Devanagari character dataset was performed on a GPU (Graphic Processing Unit) machine. The experiment resulted with an average recognition accuracy of more than 92% in case of Handwritten Gurumukhi Character dataset and 97.25% in case of Handwritten Devanagari Character dataset. It was also concluded that the training and classification through our network design performed about 10 times faster than on a moderately fast CPU. 
The advantage of this framework is demonstrated by the experimental results.
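The core operation the paper's network relies on can be illustrated in a few lines. Below is a minimal pure-Python sketch of the 2D convolution performed by a CNN layer; the tiny input image and the vertical-edge kernel are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of the 2D convolution at the heart of a CNN layer,
# in pure Python. The input "image" and kernel below are illustrative.

def conv2d(image, kernel):
    """Valid 2D cross-correlation (the 'convolution' used by CNN layers)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A vertical-edge kernel: responds strongly where intensity jumps left-to-right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
response = conv2d(image, kernel)  # 2x2 map of edge responses
```

A real CNN layer stacks many such kernels, learns their weights by backpropagation, and interleaves them with pooling and non-linearities, but the sliding-window sum above is the basic building block.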
HABIT: Handwritten Analysis based Individualistic Traits PredictionCSCJournals
Handwriting analysis is a scientific method of identifying, evaluating and understanding an individual's personality from handwriting. Each personality trait is represented by a neurological brain pattern, and each of these patterns produces a unique neuromuscular movement that is the same for every person who has that particular trait. When writing, these tiny movements occur unconsciously. The strokes, patterns and pressure applied while writing can thus reveal specific personality traits; the true personality, including emotional outlay, fears, honesty and defenses, is revealed. Professional handwriting examiners called graphologists analyze handwriting samples for this purpose. However, the accuracy of the analysis depends on how skilled the analyst is, the analyst is prone to fatigue, and the high cost incurred is yet another deterrent. This paper implements an off-line, writer-independent handwriting analysis system, HABIT (Handwriting Analysis Based Individualistic Traits Prediction), which predicts the personality traits of a writer automatically from features extracted from a scanned image of the writer's handwriting sample given as input. The features include the slant of the baseline, pen pressure, the slant of letters and the size of the writing. The implementation uses Java and Eclipse Indigo as tools.
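Two of the features listed above lend themselves to a compact illustration. The sketch below is an assumption of how such features might be computed, not HABIT's actual Java implementation: it estimates pen pressure as the mean darkness of ink pixels and writing size as the height of the ink bounding box, on a toy grayscale image.

```python
# Illustrative sketch (not the paper's implementation) of two HABIT-style
# features, computed from a small image stored as a 2D list.
# Convention assumed here: 0 = white background, higher values = darker ink.

def pen_pressure(gray):
    """Mean darkness of ink pixels. Heavier pressure leaves darker
    strokes, so a higher mean suggests stronger pen pressure."""
    ink = [p for row in gray for p in row if p > 0]
    return sum(ink) / len(ink) if ink else 0.0

def writing_size(binary):
    """Height of the ink bounding box, a crude proxy for writing size."""
    rows_with_ink = [r for r, row in enumerate(binary) if any(row)]
    return rows_with_ink[-1] - rows_with_ink[0] + 1 if rows_with_ink else 0

gray = [
    [0,   0,   0],
    [0, 200, 180],
    [0, 220,   0],
    [0,   0,   0],
]
binary = [[1 if p > 0 else 0 for p in row] for row in gray]
pressure = pen_pressure(gray)
size = writing_size(binary)
```

Baseline slant and letter slant would need line segmentation and regression over stroke angles, which is why a full system is considerably more involved than this sketch.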
Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user's perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances in the past ten years, gesture recognition has been lagging behind. Sign Languages (SL) are the most accomplished forms of gestural communication; their automatic analysis is therefore a real challenge, closely tied to their lexical and syntactic organization. Statements dealing with sign language occupy a significant place in the Automatic Natural Language Processing (ANLP) domain. In this work, we deal with sign language recognition, in particular of French Sign Language (FSL). FSL has its own specificities, such as the simultaneity of several parameters, the important role of facial expression and movement, and the use of space for proper utterance organization. Unlike speech, FSL events occur both sequentially and simultaneously, so the computational processing of FSL is more complex than that of spoken languages. We present a novel HMM-based approach to reduce the recognition complexity.
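The recognition step of an HMM-based approach boils down to finding the most probable hidden-state sequence for an observation sequence. The following generic Viterbi decoder illustrates this; the toy states ("rest"/"sign"), symbols and probabilities are invented for the example and are not taken from the FSL system.

```python
# Generic Viterbi decoding for a discrete HMM. All states, symbols and
# probabilities below are toy values invented for illustration.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for `obs`."""
    # Each entry maps state -> (best probability so far, best path so far).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o], V[-1][prev][1])
                for prev in states
            )
            layer[s] = (prob, path + [s])
        V.append(layer)
    return max(V[-1].values())[1]

states = ("rest", "sign")
start_p = {"rest": 0.8, "sign": 0.2}
trans_p = {"rest": {"rest": 0.7, "sign": 0.3},
           "sign": {"rest": 0.2, "sign": 0.8}}
emit_p = {"rest": {"still": 0.9, "moving": 0.1},
          "sign": {"still": 0.2, "moving": 0.8}}

path = viterbi(("still", "moving", "moving"), states, start_p, trans_p, emit_p)
```

A sign-language recognizer typically trains one HMM per sign and picks the model with the highest likelihood, but the same dynamic-programming recursion underlies both tasks.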
Hand Gesture Recognition System for Human-Computer Interaction with Web-Camijsrd.com
This paper presents a comparative study of existing hand gesture recognition systems and gives a new approach to gesture recognition that is an easy and cheap alternative to input devices such as the mouse, supporting static and dynamic hand gestures for interactive computer applications. Despite the increasing attention such systems receive, certain limitations remain in the literature: most applications impose constraints such as specific lighting conditions, use of a particular camera, making the user wear a multi-coloured glove, or needing large amounts of training data. The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). This interface is simple enough to run using an ordinary webcam and requires little training.
Gesture recognition using artificial neural network,a technology for identify...NidhinRaj Saikripa
This paper describes a technology for identifying body motions, commonly originating from the hand and face, using an artificial neural network. This includes identifying sign language, making the technology useful for speech-impaired individuals.
Basic Gesture Based Communication for Deaf and Dumb is an application which converts an input gesture into the corresponding text. People with a speech or hearing disability face many communication problems while interacting with other people, and it is not easy for people without such a disability to understand what the opposite person wants to say through the gesture he or she is showing. To overcome this barrier, we made an attempt to create an application which detects these gestures and provides textual output, enabling a smoother process of communication. There is a lot of research being done on gesture recognition. This project will help the users, i.e., deaf and dumb people, to communicate with other people without any barriers due to their disability.
A mediator is normally required for communication between a deaf person and a second person, but the mediator must know the sign language used by the deaf person, which is not always possible since there are multiple sign languages for different spoken languages. It is also difficult for a deaf person to understand what a second person speaks: the deaf person must keep track of the lip movements of the second person, but lip reading does not give good efficiency and accuracy since facial expressions and speech might not match. To overcome these problems we have proposed an Android application for recognizing sign language using hand gestures, with the facility for the user to define and upload their own sign language into the system. The features of this system are real-time conversion of gesture to text and speech. For two-way communication between the deaf person and a second person, the speech of the second person is converted into text. The processing steps include gesture extraction, gesture matching, and conversion of text to speech and vice versa. The system is not only useful for the deaf community but can also be used by people who migrate to different regions and do not know the local language.
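The "gesture matching" step described above can be sketched as a nearest-template search: an extracted feature vector is compared against the user-defined gesture templates and the closest one wins. The feature values and gesture labels below are hypothetical, not taken from the proposed Android system.

```python
# Hedged sketch of gesture matching by nearest Euclidean distance.
# Templates and feature values are invented for illustration.
import math

def match_gesture(features, templates):
    """Return the label of the template whose feature vector is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(features, templates[label]))

# User-defined templates: label -> feature vector (hypothetical values).
templates = {
    "hello":     [0.9, 0.1, 0.4],
    "thank_you": [0.2, 0.8, 0.5],
}
label = match_gesture([0.85, 0.15, 0.35], templates)
```

Letting users upload their own sign language then amounts to adding new entries to the template dictionary, which is one reason template matching suits user-extensible systems.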
Sign language (SL) is commonly considered the primary gesture-based language for deaf and dumb people and is their medium of communication. Image-based and sensor-based methods are the two important approaches to sign language recognition. Because of the difficulty of wearing complex devices like hand gloves, armbands and helmets in sensor-based approaches, much research by companies and researchers has focused on image-based approaches. These people use sign language to communicate with hearing people, but understanding this sign language is a difficult task for those unfamiliar with it. To address these difficulties, a real-time translator for sign language using deep learning (DL) is introduced. It reduces the limitations and drawbacks of other methods to a great extent; with the help of this real-time translator, communication is better and faster, without delay. Jeni Moni | Anju J Prakash "Real Time Translator for Sign Language" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-5, August 2020, URL: https://www.ijtsrd.com/papers/ijtsrd32915.pdf Paper Url: https://www.ijtsrd.com/computer-science/other/32915/real-time-translator-for-sign-language/jeni-moni
A gesture recognition system for the Colombian sign language based on convolu...journalBEEI
Sign languages (or signed languages) are languages that use visual techniques, primarily with the hands, to transmit information and enable communication with deaf people. This language is traditionally learned only by people with this limitation, which is why communication between deaf and non-deaf people is difficult. To address this problem we propose an autonomous model based on convolutional networks to translate the Colombian Sign Language (CSL) into normal Spanish text. The scheme uses characteristic images of each static sign of the language, within a base of 24,000 images (1,000 images per category, with 24 categories), to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover any angle of view. The performance evaluation showed that the system is capable of recognizing all 24 signs used, with an 88% recognition rate.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Sign language is one of the most common substitutes for oral communication. It is most commonly used by deaf and dumb people, who have difficulty hearing or speaking, to communicate among themselves or with ordinary people. Various sign-language programs have been developed by manufacturers around the world, but few are flexible and affordable for end users. This paper therefore presents software for a system that can automatically detect sign language, helping deaf and mute people communicate better with other people. Pattern recognition and hand gesture recognition are developing fields of research, and hand gestures, as an integral part of non-verbal communication, play a major role in our daily lives. A hand gesture system gives us a new, natural, easy-to-use way of communicating with a computer. Considering the structure of the human hand, with four fingers and one thumb, the software aims at a real-time hand recognition system based on structural features such as position, mass centroid, and finger position, including whether each finger or the thumb is raised or folded.
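One of the structural features mentioned, the mass centroid of the segmented hand, is straightforward to compute from a binary mask. The sketch below uses an invented toy mask; it illustrates the idea and is not the paper's code.

```python
# Sketch of the mass-centroid feature: the average (row, col) position of
# all foreground pixels in a binary hand mask (1 = hand). Toy mask below.

def mass_centroid(mask):
    """Return the (row, col) centroid of the foreground pixels."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
centroid = mass_centroid(mask)
```

In a fingertip-counting system, the centroid serves as a reference point: raised fingers appear as contour points whose distance from the centroid exceeds a threshold.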
Hand Gesture Recognition using OpenCV and Pythonijtsrd
Hand gesture recognition systems have developed rapidly in recent years, the reason being their ability to cooperate successfully with machines. Gestures are considered the most natural way of communication between humans and PCs in a virtual framework: we often use hand gestures to convey something, as they are a form of non-verbal communication independent of speech. In our system, we use background subtraction to extract the hand region. In this application the PC's camera records a live video, from which a frame is captured and processed. Surya Narayan Sharma | Dr. A Rengarajan "Hand Gesture Recognition using OpenCV and Python" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-2, February 2021, URL: https://www.ijtsrd.com/papers/ijtsrd38413.pdf Paper Url: https://www.ijtsrd.com/computer-science/other/38413/hand-gesture-recognition-using-opencv-and-python/surya-narayan-sharma
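The background-subtraction step mentioned above can be sketched even without OpenCV: subtract a stored background frame from the current frame and threshold the absolute difference to obtain a binary hand mask. Frames are plain 2D lists of grayscale values here, and the threshold of 30 is an assumption, not a value from the paper.

```python
# Minimal background subtraction: pixels that differ from the stored
# background by more than a threshold are marked as foreground (the hand).
# Frames are 2D lists of grayscale values; the threshold is illustrative.

def subtract_background(frame, background, threshold=30):
    """Return a binary mask: 1 where the frame differs from the background."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [[10, 10, 10],
              [10, 10, 10]]
frame = [[10, 200, 10],
         [10, 190, 10]]
mask = subtract_background(frame, background)
```

Real systems refine this with a running-average background model and morphological cleanup, but the thresholded difference is the core of the technique.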
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...Wasswaderrick3
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/ velocity and then from this we derive the Pouiselle flow equation, the transition flow equation and the turbulent flow equation. In the situations where there are no viscous effects , the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes equation of terminal velocity and turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
IJAAS ISSN: 2252-8814
Vision Based Approach to Sign Language Recognition (Himanshu Gupta)
IJAAS Vol. 7, No. 2, June 2018: 156 – 161
Computer Interaction (HCI). With the wide applications of HCI, it has nowadays become an active focus of research.
In recent years, vision-based automatic hand gesture recognition has been an active research topic, with many motivating applications such as sign language interpretation, robot control and human-computer interaction (HCI). The problem is challenging because of a number of issues, including the complicated nature of static and dynamic hand gestures, occlusions and complex backgrounds. Attacking the problem in full generality requires elaborate algorithms and very intensive computing resources. Our motivation for this work is a robot navigation problem: we are interested in controlling a robot by hand signs given by a human being. Because of real-time operational requirements, we need a computationally efficient algorithm.
Earlier approaches to hand gesture recognition used markers on the fingertips in a robot control context; an associated algorithm detected the colour and presence of the markers to identify which fingers are active in the gesture. Placing markers on the user's hand is very inconvenient, which makes this approach infeasible in practice. Recent methods use more sophisticated computer vision techniques and do not require markers. One approach performs hand gesture recognition through a curvature space method, which involves finding the boundary contours of the hand; it is robust and invariant to translation, scale and rotation of the hand pose, yet computationally demanding. Another vision-based hand pose recognition technique uses skeleton images: a multi-camera system picks the centre of gravity (COG) of the hand and the points farthest from the centre, giving the locations of the fingertips, which are then used to obtain a skeleton image and finally to recognise the gesture. Techniques have also been proposed specifically for sign language recognition. The computer vision tools used for 2D and 3D hand gesture recognition include particle filters, Fourier descriptors, principal component analysis, orientation histograms, neural networks and specialized mappings architectures.
Our focus is the recognition of a fixed set of manual commands by a robot, in a reasonably structured environment, in real time. Speed, and hence simplicity of the algorithm, is therefore important to us. In this work we develop and implement such a procedure. Our approach segments the hand based on skin colour statistics and size constraints. From the preprocessing steps, which are finding the centre of gravity (COG) of the hand region and the farthest point from it, we derive a signal that carries information on the activity of the fingers in the sign, and we identify the sign based on that signal. The algorithm is invariant to rotations, translations and scale of the hand. Moreover, this technique does not require storing a hand gesture database in the robot's memory.
2. RELATED WORK
Many scholarly articles have addressed this problem, with different authors using different techniques. In [1] the authors built two devices, one wearable and one desk based, to record video of a person signing in American Sign Language (ASL): the wearable device records video of the person wearing it, while the desk-based device records video of other people. Yuntao Cui and Juyang Weng [2] investigated discriminant analysis, which is characterised as follows: one has two types of multivariate observations. The first, called training samples, are those whose class identity is known; the second, referred to as test samples, consists of observations whose class identity is unknown and which have to be assigned to one of the classes. Another approach uses a series of hypotheses and tests, where at each step a hypothesis of model parameters is generated in the direction of the parameter space (from the previous hypothesis) that achieves the greatest decrease in mis-correspondence [3]. A further category is the rule-based approach, which consists of a set of manually encoded rules between feature inputs: given an input gesture, a set of features as in [4] is extracted and compared to the encoded rules, and the rule that matches the input is output as the gesture.
Many reviews have been written on hidden Markov models (HMMs), a very powerful technique for the detailed analysis of nondeterministic time signals. They are widely used in gesture recognition, speech recognition and sign language recognition, since they can handle spatio-temporal variations. In 2002 Suat Akyol [5] studied a special class of HMMs, the so-called linear models, which only have forward transitions, i.e. a transition never leads to another state that connects directly or indirectly back to the current state. This topology is supposed to model the causality condition of time signals.
A significant amount of research has been based on skin colour. The method in [6] extracts the skin, clothes and elbow regions by matching a template of the initial person's region, then tracks the face, hands and elbow at each frame using skin colour and a shape template. In the 1990s [7] the colour of human skin was described by a spherical influence field of colour prototypes, whose representative colour features are extracted using colour prototype density estimation in an RCE training procedure. In a paper
published in 2009, Liwicki and Everingham [8] classify pixels as hand or non-hand by a combination of three components: a signer-specific skin colour model, a spatially-varying "non-skin" colour model, and a spatial coherence prior. Emil M. Petriu et al. [9] use Haar-like features, which capture the ratio between dark and bright areas, can operate much faster than pixel-based systems, and are robust to noise and lighting changes.
However, it is difficult to separate the hand from the image when other skin-coloured regions are present, and several authors have investigated this. The method in [10] performs region-based segmentation of the hand, eliminating small false-alarm regions that were declared "hand-like" based on their colour statistics, then finds the centre of gravity (COG) of the hand region as well as the farthest distance in the hand region from the COG. As described in [11], Kinect sensors can acquire both a colour image input and a depth map input; with the help of both, the hand can be separated out. The colour image helps to locate the hand, while the depth image distinguishes the hand from other objects in the scene, removing the problem of cluttered backgrounds or other body parts mixing with the hands. Some authors have tried to solve the illumination problem by adding another level to hand detection, but [12] does not, so it can be considered the least effective of these methods; it uses the HTS algorithm for hand tracking and segmentation, together with an edge detection technique.
A number of authors have based their research on frames. In [13] a tracking system is used to obtain head and hand positions; these coordinates are then clustered to obtain a quantized description for each frame, and these descriptions are temporally concatenated to create biframe features. In 2013 one author [14] used video input from Microsoft Kinect and 3Gear technology, which senses the motion of the hand as skeletal frames and gives the coordinate values in each frame; the proposed method takes into account both the local and global features associated with a gesture. In [15], recognition of dynamic gestures implies the analysis of temporal sequences of frames, which contain the various poses assumed by the hand during its movement. The step-by-step process of the proposed work is shown in Figure 1.
Figure 1. Step by step process of proposed work
3. PROPOSED WORK
We ask the user to upload a video of his hand in which he is communicating using sign language. A video is a collection of frames played in sequence. We divide the video into frames and consider each frame one by one, processing each frame as a single image and moving on to the next until the end of the video; in this way every frame in the video is processed.
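The frame-by-frame pipeline described above can be sketched as follows. This is an illustrative sketch, not the authors' code: `process_frame` is a hypothetical placeholder for the segmentation and classification steps of the later subsections, and in practice the frames would come from a video reader such as OpenCV's `VideoCapture`.

```python
import numpy as np

def process_frame(frame):
    # Hypothetical placeholder: in the real pipeline this would run
    # hand segmentation, centroid extraction and gesture classification.
    return float(frame.mean())

def process_video(frames):
    # Treat the video as an ordered sequence of frames and process
    # each one independently, as described in Section 3.
    return [process_frame(f) for f in frames]

# Three dummy 4x4 grayscale "frames" standing in for decoded video
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 128, 255)]
print(process_video(frames))  # one result per frame
```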
3.1. Localising Hand Region
We assume that the user has uploaded the video in an acceptable format and of decent quality. Our first task is to segment the hand from the background. We achieve this in two steps. First, we find the pixels that belong to the hand region (refer to Figure 1).
It has been observed that the YCbCr colour space gives better clustering results and computational efficiency [16], so we convert our image into YCbCr colour space (refer to Eq. 1). In YCbCr colour space, Y is luminance, while Cb and Cr are the blue and red chromaticity components; Cb and Cr are two independent dimensions. Figure 2 demonstrates the conversion from RGB colour space to YCbCr colour space:
Y  =  0.2990 R + 0.5870 G + 0.1140 B
Cb = −0.1687 R − 0.3313 G + 0.5000 B + 128
Cr =  0.5000 R − 0.4187 G − 0.0813 B + 128
(1)
After converting the image, we extract the skin region. So that our system accepts people with different skin tones, we apply a skin threshold to separate out the skin [17]:
80 ≤ Cb ≤ 120 and 133 ≤ Cr ≤ 173
Figure 2. Converted image from RGB to YCbCr color space
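The colour conversion and the skin threshold above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; it uses the standard BT.601 conversion coefficients from Eq. (1) and the Cb/Cr thresholds from [17].

```python
import numpy as np

# RGB -> YCbCr conversion matrix and offset, as in Eq. (1)
M = np.array([[ 0.2990,  0.5870,  0.1140],
              [-0.1687, -0.3313,  0.5000],
              [ 0.5000, -0.4187, -0.0813]])
OFFSET = np.array([0.0, 128.0, 128.0])

def rgb_to_ycbcr(img):
    """img: array with R, G, B in [0, 255] along the last axis."""
    return img @ M.T + OFFSET

def skin_mask(img_rgb):
    """Boolean mask of skin pixels using 80<=Cb<=120 and 133<=Cr<=173."""
    ycbcr = rgb_to_ycbcr(img_rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= 80) & (cb <= 120) & (cr >= 133) & (cr <= 173)

# A 1x2 test image: one typical skin tone, one pure green pixel
img = np.array([[[200, 150, 120], [0, 255, 0]]], dtype=np.uint8)
print(skin_mask(img))  # skin pixel True, green pixel False
```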
3.2. Finding Centroid
After finding the skin-coloured regions in the image, we calculate the centroid, or centre of gravity (COG), using the following formula:
x̄ = (1/n) ∑_{j=1}^{n} x_j  and  ȳ = (1/n) ∑_{j=1}^{n} y_j
(2)
where x_j and y_j are the x and y coordinates of the j-th pixel in the hand area, and n denotes the number of pixels in the area.
After we obtain the centroid (refer to Eq. 2), we find the distance from the centroid to the most extreme point in the hand; normally this farthest distance is the distance from the centroid to the tip of the longest active finger in the particular gesture [4]. We then draw a circle whose radius is this farthest distance from the centroid; this circle contains the whole gesture.
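Eq. (2) and the farthest-point search can be sketched as follows. This is an illustrative NumPy version, assuming a boolean hand mask such as the one produced by the skin threshold of Section 3.1.

```python
import numpy as np

def centroid_and_radius(mask):
    """Given a boolean hand mask, return the COG (Eq. 2) and the
    distance from the COG to the farthest hand pixel, which is the
    radius of the circle enclosing the whole gesture."""
    ys, xs = np.nonzero(mask)          # coordinates of hand pixels
    xbar, ybar = xs.mean(), ys.mean()  # Eq. (2)
    radius = np.sqrt((xs - xbar) ** 2 + (ys - ybar) ** 2).max()
    return (float(xbar), float(ybar)), float(radius)

# A toy mask: a horizontal 5-pixel "finger" on row 2
mask = np.zeros((5, 5), dtype=bool)
mask[2, :] = True
print(centroid_and_radius(mask))  # COG at (2, 2), radius 2
```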
Figure 3. Expected images after conversion
3.3. Gesture Classification
Now we have to identify what the gesture means. For this we use template matching, due to its simplicity and predictable response with a limited training set. Template matching is essentially the two-dimensional cross-correlation of a grayscale image with a grayscale template, which estimates the degree of similarity between the two. We compare the unknown gesture with preset template models of the individual gestures.
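As a rough sketch of this matching step, the following computes the zero-mean normalized cross-correlation between a gesture image and each stored template, then picks the best match. The function names and the tiny templates are illustrative only; a real system would also normalize image sizes before comparing.

```python
import numpy as np

def ncc(image, template):
    """Zero-mean normalized cross-correlation between two equally
    sized grayscale arrays; +1 means a perfect match, -1 the opposite.
    Assumes neither array is constant (nonzero variance)."""
    a = image - image.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def classify(gesture, templates):
    """Return the name of the template most similar to the gesture."""
    return max(templates, key=lambda name: ncc(gesture, templates[name]))

# Two toy 2x2 templates and a gesture that is a brightened copy of "A"
templates = {"A": np.array([[1.0, 0.0], [0.0, 1.0]]),
             "B": np.array([[0.0, 1.0], [1.0, 0.0]])}
gesture = 5 * templates["A"] + 3
print(classify(gesture, templates))  # "A"
```

Because the correlation is zero-mean and normalized, uniform brightness and contrast changes between the gesture and the template do not affect the score.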
4. CONCLUSION
Deaf and mute people rely on sign language interpreters for communication. However, they cannot depend on interpreters in everyday life, mainly because of the high costs and the difficulty of finding and scheduling qualified interpreters. This system can help such people improve their quality of life significantly.
This paper proposes a hand gesture recognition method based on the YCbCr colour space, the COG and template matching. The proposed method can detect a hand gesture against the background, and it accepts different skin tones and different illumination conditions. The method uses chromaticity in the YCbCr colour space, avoiding illumination effects on the hand gesture, and uses the COG to segment the hand; it then recognises the gesture with the help of template matching. The method is robust to different backgrounds and provides good results. It will be a useful addition to ongoing research in the field of Human Computer Interaction (HCI).
ACKNOWLEDGEMENTS
We would like to thank all the researchers working in this field who in one way or another guided us towards achieving our goals. We would also like to express our appreciation to our guide at VIT University, who was kind enough to share her views with us and offered suggestions that made this project a success.
REFERENCES
[1] Starner, Thad, Joshua Weaver, and Alex Pentland. "Real-time american sign language recognition using desk and
wearable computer based video." IEEE Transactions on Pattern Analysis and Machine Intelligence 20.12 (1998):
1371-1375.
[2] Tanibata, Nobuhiko, Nobutaka Shimada, and Yoshiaki Shirai. "Extraction of hand features for recognition of sign
language words." International conference on vision interface. 2002.
[3] Cooper, Helen, and Richard Bowden. "Learning signs from subtitles: A weakly supervised approach to sign
language recognition." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE,
2009.
[4] Malima, Asanterabi Kighoma, Erol Özgür, and Müjdat Çetin. "A fast algorithm for vision-based hand gesture
recognition for robot control." (2006): 1-4.
[5] Chen, Qing, Nicolas D. Georganas, and Emil M. Petriu. "Real-time vision-based hand gesture recognition using
haar-like features." Instrumentation and Measurement Technology Conference Proceedings, 2007. IMTC 2007.
IEEE. IEEE, 2007.
6. IJAAS ISSN: 2252-8814
Vision Based Approach to Sign Language Recognition (Himanshu Gupta)
161
[6] Dong, Guo, Yonghua Yan, and M. Xie. "Vision-based hand gesture recognition for human-vehicle
interaction." Proc. of the International conference on Control, Automation and Computer Vision. Vol. 1. 1998.
[7] Ren, Zhou, Jingjing Meng, and Junsong Yuan. "Depth camera based hand gesture recognition and its applications
in human-computer-interaction." Information, Communications and Signal Processing (ICICS) 2011 8th
International Conference on. IEEE, 2011.
[8] Garg, Pragati, Naveen Aggarwal, and Sanjeev Sofat. "Vision based hand gesture recognition." World Academy of
Science, Engineering and Technology 49.1 (2009): 972-977.
[9] Porta, Marco. "Vision-based user interfaces: methods and applications." International Journal of Human-Computer
Studies 57.1 (2002): 27-73.
[10] Geetha, M., et al. "A vision based dynamic gesture recognition of indian sign language on kinect based depth
images." Emerging Trends in Communication, Control, Signal Processing & Computing Applications (C2SPCA),
2013 International Conference on. IEEE, 2013.
[11] Murthy, G. R. S., and R. S. Jadon. "A review of vision based hand gestures recognition." International Journal of
Information Technology and Knowledge Management 2.2 (2009): 405-410.
[12] Akyol, Suat, and Ulrich Canzler. "An information terminal using vision based sign language recognition." ITEA
Workshop on Virtual Home Environments, VHE Middleware Consortium. Vol. 12. 2002.
[13] Ghotkar, Archana S., and Gajanan K. Kharate. "Hand segmentation techniques to hand gesture recognition for
natural human computer interaction." International Journal of Human Computer Interaction (IJHCI) 3.1 (2012):
15.
[14] Cui, Yuntao, and Juyang Weng. "Appearance-based hand sign recognition from intensity image
sequences." Computer Vision and Image Understanding 78.2 (2000): 157-176.
[15] Liwicki, Stephan, and Mark Everingham. "Automatic recognition of fingerspelled words in british sign
language." Computer Vision and Pattern Recognition Workshops, 2009. CVPR Workshops 2009. IEEE Computer
Society Conference on. IEEE, 2009.
[16] Qiu-yu, Zhang, et al. "Hand gesture segmentation method based on YCbCr color space and k-means clustering."
International Journal of Signal Processing, Image Processing and Pattern Recognition 8.5 (2015): 105-116.
[17] Basilio, Jorge Alberto Marcial, et al. "Explicit image detection using YCbCr space color model as skin detection."
Applications of Mathematics and Computer Engineering (2011): 123-128.