Learning to speak in order to communicate with others is part of growing up. Like anyone else, deaf and mute individuals need to learn how to connect with the world they live in. For this purpose, an Electronic Glove, or E-Glove, was developed as a teaching aid for the hearing impaired, particularly children. The E-Glove uses American Sign Language (ASL) as the basis for recognizing hand gestures. It was designed using flex sensors and an accelerometer to detect the degree of bend of the fingers as well as the movement of the hand. The E-Glove transmits the data received from the sensors wirelessly to a computer, which then displays the letter or basic word that corresponds to the gesture made by the wearer. The E-Glove provides a simple, accurate, reliable, inexpensive, fast, and user-friendly gesture-recognition teaching aid for instructors who teach sign language to the deaf and mute community.
Mute people generally use sign language for communication, but they find it difficult to communicate with others who do not understand sign language. This project aims to lower this communication barrier. It is based on the need for an electronic device that can translate sign language into speech, so that communication between the mute community and the general public becomes possible. A wireless data glove is used: a normal cloth driving glove fitted with flex sensors along the length of each finger and the thumb. A mute person can use the glove to perform hand gestures, which are converted into speech so that others can understand the expression. Sign language is the language used by mute people; it is a communication skill that uses gestures instead of sound to convey meaning, simultaneously combining hand shapes, orientations, and movements of the hands, arms, or body with facial expressions to fluidly express a speaker's thoughts. Signs are used to communicate words and sentences to an audience.
An alter ego (Latin for "other I") is an alternative self, believed to be distinct from a person's normal or true personality. Finding one's alter ego requires finding one's other self, one with a different personality.
Electronic Hand Glove Through Gestures For Verbally Challenged Persons (IJERA Editor)
This paper presents the design of an electronic hand glove to facilitate easier and better communication through synthesized speech for verbally challenged people. Most often, a speechless person communicates through sign language, which is not understood by the majority of people. The proposed system is designed to solve this problem: the finger gestures of the person wearing the glove are converted into synthesized speech to convey an audible message to others. Speech is typically accompanied by manual gestures. Many earlier systems were designed to let deaf and mute people interact with others, but they had many drawbacks and interruptions. We are designing a system with which even deaf, mute, and blind people can communicate with each other without the help of others. This system will help them interact with the outside world.
IRJET- Hand Movement Recognition for a Speech Impaired Person (IRJET Journal)
This document describes a system to recognize hand gestures from a speech-impaired person and convert them to speech using a flex sensor glove and microcontroller. The system uses flex sensors attached to a glove to detect hand movements and gestures. The microcontroller matches the gestures to a database of templates and outputs the corresponding speech signal through a speaker. This allows speech-impaired individuals to communicate through natural hand gestures that are translated to audio speech in real-time. The system aims to help overcome communication barriers for those unable to speak.
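The gesture-matching step described above (sensor readings compared against a database of templates) can be sketched as a nearest-template lookup. This is a hedged illustration only: the sensor count, the template values, and the message strings are invented for the example, not taken from the paper.

```python
# Sketch of gesture template matching for a 5-flex-sensor glove.
# Each template is a tuple of expected sensor readings (0-1023 ADC range)
# paired with the message to output. All values here are made up for
# illustration; a real system would calibrate them per user.

GESTURE_TEMPLATES = {
    (900, 880, 870, 860, 850): "HELLO",   # all fingers straight
    (300, 880, 870, 860, 850): "YES",     # thumb bent
    (300, 310, 320, 860, 850): "WATER",   # thumb + two fingers bent
}

def classify(reading, max_distance=150):
    """Return the message whose template is closest to `reading`
    (Euclidean distance), or None if nothing is close enough."""
    best_msg, best_dist = None, float("inf")
    for template, msg in GESTURE_TEMPLATES.items():
        dist = sum((r - t) ** 2 for r, t in zip(reading, template)) ** 0.5
        if dist < best_dist:
            best_msg, best_dist = msg, dist
    return best_msg if best_dist <= max_distance else None
```

With these toy values, a reading such as `(310, 870, 860, 855, 845)` falls within the tolerance of the "YES" template, while a far-off reading returns `None`, which is why such systems need a rejection threshold rather than always taking the nearest template.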
IRJET- An Innovative Method for Communication Among Differently Abled Peo... (IRJET Journal)
This document describes a proposed system to help improve communication between disabled individuals, including those who are deaf, blind, or mute. The system uses a glove fitted with flex sensors that can detect hand gestures. When a gesture is made, the flex sensors trigger an Arduino microcontroller to play a pre-recorded audio message or display a message on an LCD screen. The system is designed so that deaf individuals can receive messages through visual display, blind individuals can receive messages through Braille or vibration, and mute individuals can communicate through gestures. The goal is to help overcome barriers to communication between disabled people and enable them to interact with others.
This document describes a smart glove system that translates sign language gestures into speech and text to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures, which are processed by an Arduino microcontroller. The Arduino identifies letters and words from the gestures and outputs them as speech from a connected speaker and as text on an Android phone app. The goal is to help deaf-mute individuals effectively convey information to people without sign language training by translating their gestures into audio and text in real-time.
AlterEgo is a headset being developed at MIT that allows silent communication with voice-controlled devices through interpreting neuromuscular signals in the jaw and face during internal speech. It uses bone conduction headphones to allow the user to hear responses without others hearing. The electrical impulses from internal verbalizations are classified into words by a neural network. AlterEgo aims to combine humans and computers such that computing augments human abilities in a discreet manner.
AlterEgo: A Personalized Wearable Silent Speech Interface (MIT)
Arnav Kapur
MIT Media Lab
Cambridge, USA
arnavk@media.mit.edu
Shreyas Kapur
MIT Media Lab
Cambridge, USA
shreyask@mit.edu
Pattie Maes
MIT Media Lab
Cambridge, USA
pattie@media.mit.edu
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Human Computer Interface Glove for Sign Language Translation (PARNIKA GUPTA)
A human computer interface glove was developed with the aim of translating sign language to text & speech. The glove utilizes five flex sensors and an inertial measurement unit to accurately capture hand gestures. All components were placed on the backside of the glove providing the user with full range of motion, and not restricting the user from performing other tasks while wearing the glove.
This document proposes a project to develop a sign language translator glove. The glove will use flex sensors, contact sensors, and accelerometers to detect finger positions and hand motions corresponding to letters, words, and sentences in American Sign Language. The detected signals will be sent to a detection unit and transmitted to a base station. The base station will display the signed letter on an LCD screen and pronounce it through speakers. The expected outcome is a portable glove that can translate signed letters, words, and sentences into text and speech. The proposed application is to help communication between deaf, mute, or physically impaired individuals and others.
Gesture Gloves - For Speechless Patients (Neha Udeshi)
Patients with speech disorders often find it difficult to communicate their needs to a general audience. These patients include the mute, senior citizens, the paralyzed, and patients with conditions such as dysarthria and aphasia, to name a few. To meet their requirements, the Gesture Gloves have been designed. These gloves ease communication by converting predefined gestures to voice. The input is in the form of hand gestures, which are converted to text and speech. The gloves are equipped with multiple flex sensors that produce a varying resistance for every gesture made by the wearer.
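The varying-resistance principle behind such flex sensors can be sketched as follows. A flex sensor is typically wired as one leg of a voltage divider and sampled by an ADC; the component values below (10 kOhm fixed resistor, roughly 25 kOhm flat to 100 kOhm fully bent, 5 V supply, 10-bit ADC) are typical datasheet figures assumed for the example, not values from the paper.

```python
# Sketch: converting a flex sensor's ADC reading back to resistance,
# then to a coarse bend state. The divider is assumed to place the flex
# sensor between VCC and the ADC pin, with the fixed resistor to ground:
#   V_out = VCC * R_fixed / (R_fixed + R_flex)

VCC = 5.0            # supply voltage (volts), assumed
R_FIXED = 10_000.0   # fixed divider resistor (ohms), assumed
ADC_MAX = 1023       # 10-bit ADC full scale

def adc_to_resistance(adc):
    """Invert the voltage divider to recover the sensor resistance.
    (adc must be > 0; a reading of 0 would mean an open circuit.)"""
    v_out = VCC * adc / ADC_MAX
    return R_FIXED * (VCC - v_out) / v_out

def bend_state(adc):
    """Coarse classification of finger bend from one ADC sample.
    Thresholds are illustrative, not calibrated."""
    r = adc_to_resistance(adc)
    if r < 40_000:
        return "straight"
    elif r < 70_000:
        return "half bent"
    return "fully bent"
```

For instance, with these assumed values a flat sensor (~25 kOhm) reads an ADC value near 292 and classifies as "straight", while a fully bent sensor (~100 kOhm) reads near 93 and classifies as "fully bent".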
This document summarizes a presentation on HandTalk, a technology that aims to help deaf and mute individuals communicate. HandTalk uses a virtual reality glove called the P5 glove that detects finger gestures and converts them to text using gesture recognition software. The text is then converted to speech so hearing individuals can understand the deaf or mute person. The goal is to create an accurate and inexpensive alternative to existing expensive gesture recognition systems. The presentation outlines the hardware, software, user interface, motivation, design, problems with other systems, and future enhancements of the HandTalk system.
An embedded module as "virtual tongue" (ijistjournal)
Among the many human disabilities, speech impairment makes it difficult for people to communicate with others and convey their messages. In this paper, to make such people self-reliant and independent using embedded systems technology, an embedded handheld icon-based assistive device, a "Virtual Tongue" for the voiceless, is proposed; it speaks for severely speech-disordered people when they press the appropriate icons to express their needs. The proposed module comprises a microcontroller-based player to play voice messages, a Secure Digital (SD) card reader, a Universal Serial Bus (USB) port, an icon-based remote keypad, an audio amplifier, and a speaker, with benefits such as portability, reliability, user-friendliness, affordable cost, low power consumption, and clear speech in a regional language. The proposed system produces speech of any length, audible to those nearby, on request from the user pressing the icons, and thereby serves inarticulate people. An extended version with a text-to-voice feature, in which an added circuit converts any text fed through a keyboard into speech, is also discussed.
IRJET- Smart Hand Gloves for Disable People (IRJET Journal)
1) The document describes a smart glove prototype designed to help disabled people communicate through hand gestures.
2) The glove uses flex sensors on the fingers to detect hand gestures and an Arduino microcontroller to convert the gestures to text or pre-recorded voices.
3) The glove has three modes - displaying gesture status, converting gestures to voices, and controlling home appliances wirelessly through hand gestures.
This document describes the development of an automatic language translation software to aid communication between Indian Sign Language and spoken English using LabVIEW. The software aims to translate one-handed finger spelling input in Indian Sign Language alphabets A-Z and numbers 1-9 into spoken English audio output, and 165 spoken English words input into Indian Sign Language picture display output. It utilizes the camera and microphone of the device for image and speech acquisition, and performs vision and speech analysis for translation. The software is intended to help communication between deaf or speech-impaired individuals and those who do not understand sign language.
Hand talk (assistive technology for dumb) - Sign language glove with voice (Vivekanand Gaikwad)
We propose a sign language glove that will assist people suffering from any kind of speech defect to communicate through gestures. The glove records all gestures made by the user and then translates these gestures into audio form.
This document describes a digital vocalizer system that uses a data glove with flex sensors and an accelerometer to detect hand gestures. The sensors detect finger bending and hand tilt/position. The Arduino UNO microcontroller converts these detected gestures into corresponding audio words or visual text displayed on an LCD screen. This system aims to help reduce communication barriers between deaf/mute/blind communities and others by translating gestures into audio and visual outputs.
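The tilt/position channel mentioned above can be sketched in a few lines: with the hand roughly at rest, gravity dominates a 3-axis accelerometer sample, so pitch and roll can be estimated from the axis ratios and combined with the flex readings to select a word. This is a hedged Python illustration (not Arduino C), and the gesture-to-word table is invented for the example.

```python
import math

# Sketch: estimating hand tilt from a static 3-axis accelerometer
# sample (in g units), then combining tilt with a bent-finger count to
# pick an output word. The word table is hypothetical.

def tilt_degrees(ax, ay, az):
    """Estimate pitch and roll (degrees) from a static accel sample."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

def select_word(fingers_bent, ax, ay, az):
    """Pick an output word from how many fingers are bent plus hand tilt."""
    pitch, _ = tilt_degrees(ax, ay, az)
    palm_down = abs(pitch) < 30  # hand roughly level
    if fingers_bent == 0:
        return "STOP" if palm_down else "HELLO"
    if fingers_bent == 5:
        return "YES" if palm_down else "NO"
    return "UNKNOWN"
```

Using the accelerometer this way doubles the vocabulary for the same set of finger bend patterns, which is the design motivation for adding tilt sensing on top of the flex sensors.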
Electronic hand glove for deaf and blind ppt (gtsooka)
This paper proposes a method to design an electronic hand glove that would help communication between deaf and blind people. There are around 285 million visually impaired people in the world, and 900,000 who are both deaf and blind.
The document discusses "Enable Talk Gloves", gloves equipped with sensors that recognize sign language and translate it into text-to-speech on a smartphone. A team of Ukrainian students developed the gloves to help deaf people communicate. The gloves measure finger bending and hand motion with sensors connected to a microcontroller and Bluetooth. This allows translation of signs into text then spoken words on a phone. While the gloves can currently translate a few phrases, the team aims to expand the sign library and improve accuracy and speed for conversation. Long-term, the technology could benefit other applications like interacting with interfaces and may become a mainstream computing method.
Recently, more and more hearing-impaired people have started using sign language. There are about 70 million people in the world who are not able to speak. A mute person communicates with other people using hand motions or expressions; sign language helps mute people communicate like everyone else. A previously developed sign language translator uses a glove fitted with sensors that can interpret 16 English letters in American Sign Language (ASL). That system uses accelerometers and flex sensors, which increase its overall cost. We propose a prototype called "Smart Glove for Speech Impaired People" that translates sign language into text, helping mute and deaf people express their thoughts in a more convenient way. As the sign language input, we use traditional finger movements with contact switches wrapped around the user's fingers. An IR transmitter-receiver pair, HT12E and HT12D ICs, and an Arduino (microcontroller) board transmit the data to a PC. Moreover, the use of contact switches reduces the system's overall cost.
Keywords: Arduino, HT12E & HT12D ICs, IR transmitter-receiver, contact switch.
A Translation Device for the Vision Based Sign Language (ijsrd.com)
Sign language is very important for people with hearing and speaking deficiencies, generally called deaf and mute. It is their only mode of communication for conveying messages, so it is very important that others understand their language. This paper proposes a method for an application that recognizes the different signs of Indian Sign Language. The images are of the palm side of the right and left hand and are loaded at runtime. The method has been developed for a single user. A real-time image is captured first and stored in a directory; feature extraction is then performed on the most recently captured image to identify which sign the user articulated, using the SIFT (Scale-Invariant Feature Transform) algorithm. The comparison is performed against the image stored for each specific letter in the directory or database, and the result is produced according to the keypoints matched between the input image and the stored images; the outputs can be seen in the sections below. There are 26 signs in Indian Sign Language, one for each alphabet letter, of which the proposed algorithm gave 95% accurate results for 9 alphabets, with their images captured at every possible angle and distance.
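The keypoint-comparison step described above can be sketched as descriptor matching with Lowe's ratio test: each stored letter image contributes a set of SIFT descriptors, and the letter whose descriptors attract the most unambiguous matches wins. A hedged illustration follows, using toy 4-dimensional descriptors in place of real 128-dimensional SIFT output (which a library such as OpenCV would produce); the database contents are invented.

```python
# Sketch: classify an input sign by counting descriptor matches that
# pass Lowe's ratio test against each letter's stored descriptors.

def ratio_test_matches(query, stored, ratio=0.75):
    """Count query descriptors whose best match in `stored` is clearly
    better than the second best (Lowe's ratio test)."""
    good = 0
    for q in query:
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(q, s)) ** 0.5 for s in stored
        )
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            good += 1
    return good

def classify_sign(query, database):
    """Return the letter whose stored descriptors match `query` best."""
    return max(database, key=lambda letter: ratio_test_matches(query, database[letter]))

# Toy database: two letters, three stub descriptors each.
db = {
    "A": [(1.0, 0.0, 0.0, 0.0), (0.9, 0.1, 0.0, 0.0), (5.0, 5.0, 5.0, 5.0)],
    "B": [(0.0, 1.0, 0.0, 0.0), (0.0, 0.9, 0.1, 0.0), (5.0, 5.0, 5.0, 5.0)],
}
```

With this toy data, a query descriptor close to the stored "A" descriptors, such as `(0.99, 0.01, 0.0, 0.0)`, classifies as "A"; the ratio test is what rejects ambiguous matches rather than counting every nearest neighbour.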
IRJET- IoT based Portable Hand Gesture Recognition System (IRJET Journal)
This document describes a portable hand gesture recognition system to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures. An Arduino microcontroller matches the gestures to a database and sends the output to a smartphone via Bluetooth. The smartphone app then converts the text to speech so others can understand the user's message. The system is meant to overcome communication barriers for those unable to speak or hear by translating sign language gestures into audible speech in real-time.
This document describes a seminar presentation on a sign language recognition system for deaf and dumb people. The system uses a microcontroller, flex sensors to detect hand gestures, an ADC to convert analog sensor signals to digital, and a voice processor and speakers to provide audio output of the recognized sign. It recognizes several letters and displays them on an LCD. Potential applications include improving communication for deaf individuals and future work could expand its capabilities.
This document describes a sign language translation project using a glove. The goal of the project is to bridge communication between deaf/mute people and others by translating sign language gestures into text and speech using an inexpensive electronic device. The glove will contain flex sensors and an accelerometer to capture hand movements and gestures, which will then be recognized, translated, and output as text on an LCD display and audio from a speaker. A block diagram shows the overall architecture of the glove unit, detection unit, and other components like the power supply. The document discusses the motivation, prime idea, content layout, advantages, and limitations of the project.
The document discusses the development of a tool to convert sign language gestures captured by a Kinect sensor into speech. The system is intended to help deaf or mute individuals communicate more easily by recognizing gestures and matching them to text which is then converted to speech. The proposed design includes modules for gesture input, gesture recognition matching to text, and text to speech conversion to provide an accessible communication system for the hearing impaired.
IRJET- Hand Talk - Assistant Technology for Deaf and Dumb (IRJET Journal)
This document describes a smart glove system that translates sign language gestures into speech to help deaf and mute people communicate. The glove uses flex sensors on each finger to detect finger bending motions. An Arduino microcontroller processes the sensor data and sends it wirelessly via Bluetooth to an Android app. The app displays the sign language gesture and converts it to speech output. The goal is to help deaf and mute individuals communicate with hearing people by interpreting their sign language gestures into audible speech in real-time. The system is intended to bridge communication between those who understand sign language and those who do not.
IRJET- Smart Speaking Glove for Speech Impaired People (IRJET Journal)
This document describes a smart speaking glove system for speech impaired people that uses flex sensors on a glove to detect gestures and convert them to synthesized speech output. The flex sensors detect finger bending and send signals to a microcontroller. The microcontroller matches the signals to predefined gestures and messages stored in its database and outputs the corresponding message to an LCD display and speaker. It also includes an emergency function using a GPS and GSM modules to track the user's location and send a message if they activate a panic switch.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Human Computer Interface Glove for Sign Language TranslationPARNIKA GUPTA
A human computer interface glove was developed with the aim of translating sign language to text & speech. The glove utilizes five flex sensors and an inertial measurement unit to accurately capture hand gestures. All components were placed on the backside of the glove providing the user with full range of motion, and not restricting the user from performing other tasks while wearing the glove.
This document proposes a project to develop a sign language translator glove. The glove will use flex sensors, contact sensors, and accelerometers to detect finger positions and hand motions corresponding to letters, words, and sentences in American Sign Language. The detected signals will be sent to a detection unit and transmitted to a base station. The base station will display the signed letter on an LCD screen and pronounce it through speakers. The expected outcome is a portable glove that can translate signed letters, words, and sentences into text and speech. The proposed application is to help communication between deaf, mute, or physically impaired individuals and others.
Gesture Gloves - For Speechless patientsNeha Udeshi
Patients with speech disorders often find it difficult to communicate their needs to the general audience. These patients include the mute, senior citizens, paralyzed, and patients with diseases such as dysarthria, aphasia, to name a few. To satisfy their requirements, the Gesture Gloves have been designed. These gloves ease the communication, without much ado, by engendering predefined gestures to voice. The input is in the form of hand gestures which are converted to text and speech. The gloves are equipped with multiple flex sensors that produce varying resistance for every gesture made by the person.
This document summarizes a presentation on HandTalk, a technology that aims to help deaf and mute individuals communicate. HandTalk uses a virtual reality glove called the P5 glove that detects finger gestures and converts them to text using gesture recognition software. The text is then converted to speech so hearing individuals can understand the deaf or mute person. The goal is to create an accurate and inexpensive alternative to existing expensive gesture recognition systems. The presentation outlines the hardware, software, user interface, motivation, design, problems with other systems, and future enhancements of the HandTalk system.
An embedded module as “virtual tongue”ijistjournal
There are several human disabilities in nature of which speech impaired people find difficulty in
communicating with others, which is very important to convey their messages without speech. In this
paper, to make them self reliable and independent, with the advent of embedded systems technology an
embedded handheld icon based assistive device as “Virtual Tongue” for Voiceless, which speaks for
severely speech disordered people by simply pressing icons appropriately to fulfill their needs, is proposed.
This proposed module comprises a microcontroller based player to play voice messages, Secure Digital
(SD) card reader, Universal Serial Bus (USB) port, icon based remote keypad, audio amplifier and speaker
along with the benefits like portable, reliable, user friendly, affordable cost, low power consumption and of
course speech in regional language with good clarity. The proposed system is designed to produce speech
regardless of time length, audible to the neighbors, based on the request from the user by pressing the icons
thereby this module deserves inarticulate people. An extended version with a feature of converting text into
voice by adding a circuit, with which any text fed through a keyboard can be converted into speech, is also
discussed.
IRJET- Smart Hand Gloves for Disable PeopleIRJET Journal
1) The document describes a smart glove prototype designed to help disabled people communicate through hand gestures.
2) The glove uses flex sensors on the fingers to detect hand gestures and an Arduino microcontroller to convert the gestures to text or pre-recorded voices.
3) The glove has three modes - displaying gesture status, converting gestures to voices, and controlling home appliances wirelessly through hand gestures.
This document describes the development of an automatic language translation software to aid communication between Indian Sign Language and spoken English using LabVIEW. The software aims to translate one-handed finger spelling input in Indian Sign Language alphabets A-Z and numbers 1-9 into spoken English audio output, and 165 spoken English words input into Indian Sign Language picture display output. It utilizes the camera and microphone of the device for image and speech acquisition, and performs vision and speech analysis for translation. The software is intended to help communication between deaf or speech-impaired individuals and those who do not understand sign language.
Hand talk (assistive technology for dumb)- Sign language glove with voiceVivekanand Gaikwad
We propose a sign language glove which will assist those people who are suffering for any kind of speech defect to communicate through gesture. The glove will record all the gesture made by the user & then it will translate these gesture into audio form.
This document describes a digital vocalizer system that uses a data glove with flex sensors and an accelerometer to detect hand gestures. The sensors detect finger bending and hand tilt/position. The Arduino UNO microcontroller converts these detected gestures into corresponding audio words or visual text displayed on an LCD screen. This system aims to help reduce communication barriers between deaf/mute/blind communities and others by translating gestures into audio and visual outputs.
Electronic hand glove for deaf and blindpptgtsooka
This paper propose a methode design an electronic hand glove which would help the communication between deaf and blind. There are around 285millions of visually impaired people in the world and 900,000 of deaf and blind.
The document discusses "Enable Talk Gloves", gloves equipped with sensors that recognize sign language and translate it into text-to-speech on a smartphone. A team of Ukrainian students developed the gloves to help deaf people communicate. The gloves measure finger bending and hand motion with sensors connected to a microcontroller and Bluetooth. This allows translation of signs into text then spoken words on a phone. While the gloves can currently translate a few phrases, the team aims to expand the sign library and improve accuracy and speed for conversation. Long-term, the technology could benefit other applications like interacting with interfaces and may become a mainstream computing method.
Recently, more and more hearing-impaired people have started using sign language. There are about 70 million people in the world who are not able to speak. A mute person communicates with other people using hand motions or expressions. Sign language helps mute people to communicate like everyone else. A sign language translator that has already been developed uses a glove fitted with sensors that can interpret 16 English letters in American Sign Language (ASL); the accelerometers and flex sensors used in that system increase its overall cost. We propose a prototype called the "smart glove for speech-impaired people" which will translate sign language into text, helping deaf and mute people express their thoughts in a more convenient way. As the sign language input we use traditional finger movements, with contact switches wrapped around the user's fingers. An IR transmitter-receiver pair, HT12E and HT12D ICs, and an Arduino (microcontroller) board transmit the data to a PC. Moreover, the use of contact switches reduces the system's overall cost.
Keywords: Arduino, HT12E & HT12D ICs, IR transmitter-receiver, contact switch.
A Translation Device for the Vision Based Sign Language — ijsrd.com
Sign language is very important for people who have hearing and speaking deficiencies, generally called deaf and mute. It is the only mode of communication for such people to convey their messages, so it becomes very important for others to understand their language. This paper proposes a method, or algorithm, for an application that helps recognize the different signs of Indian Sign Language. The images are of the palm side of the right and left hand and are loaded at runtime. The method has been developed with respect to a single user. The real-time images are captured first and stored in a directory; feature extraction then takes place on the most recently captured image to identify which sign has been articulated by the user, using the SIFT (scale-invariant feature transform) algorithm. The comparisons are performed afterwards, and the result is produced according to the key points matched between the input image and the image already stored for a specific letter in the directory or database; the outputs can be seen in the sections below. There are 26 signs in Indian Sign Language, one for each alphabet letter, of which the proposed algorithm gave 95% accurate results for 9 alphabets, with their images captured at every possible angle and distance.
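The keypoint-matching step such pipelines rely on can be illustrated with a toy ratio test: a query descriptor is matched to a stored template only when its nearest template descriptor is clearly closer than the second nearest. Real SIFT descriptors are 128-dimensional vectors extracted from image patches; the short two-dimensional vectors below are invented purely for illustration.

```python
# Toy sketch of descriptor matching with a ratio test, as used when
# comparing keypoints of a captured sign against stored letter templates.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(query, template, ratio=0.75):
    """Return (query_index, template_index) pairs passing the ratio test."""
    matches = []
    for i, q in enumerate(query):
        # distance to every template descriptor, closest first
        dists = sorted((euclidean(q, t), j) for j, t in enumerate(template))
        best, second = dists[0], dists[1]
        # keep only unambiguous matches
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

query    = [(0.0, 1.0), (5.0, 5.0)]            # illustrative descriptors
template = [(0.1, 1.1), (9.0, 9.0), (5.2, 5.1)]
print(match(query, template))  # [(0, 0), (1, 2)]
```

The number of surviving matches against each stored letter then decides which sign was articulated.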
IRJET- IoT based Portable Hand Gesture Recognition System — IRJET Journal
This document describes a portable hand gesture recognition system to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures. An Arduino microcontroller matches the gestures to a database and sends the output to a smartphone via Bluetooth. The smartphone app then converts the text to speech so others can understand the user's message. The system is meant to overcome communication barriers for those unable to speak or hear by translating sign language gestures into audible speech in real-time.
This document describes a seminar presentation on a sign language recognition system for deaf and dumb people. The system uses a microcontroller, flex sensors to detect hand gestures, an ADC to convert analog sensor signals to digital, and a voice processor and speakers to provide audio output of the recognized sign. It recognizes several letters and displays them on an LCD. Potential applications include improving communication for deaf individuals and future work could expand its capabilities.
This document describes a sign language translation project using a glove. The goal of the project is to bridge communication between deaf/mute people and others by translating sign language gestures into text and speech using an inexpensive electronic device. The glove will contain flex sensors and an accelerometer to capture hand movements and gestures, which will then be recognized, translated, and output as text on an LCD display and audio from a speaker. A block diagram shows the overall architecture of the glove unit, detection unit, and other components like the power supply. The document discusses the motivation, prime idea, content layout, advantages, and limitations of the project.
The document discusses the development of a tool to convert sign language gestures captured by a Kinect sensor into speech. The system is intended to help deaf or mute individuals communicate more easily by recognizing gestures and matching them to text which is then converted to speech. The proposed design includes modules for gesture input, gesture recognition matching to text, and text to speech conversion to provide an accessible communication system for the hearing impaired.
IRJET- Hand Talk- Assistant Technology for Deaf and Dumb — IRJET Journal
This document describes a smart glove system that translates sign language gestures into speech to help deaf and mute people communicate. The glove uses flex sensors on each finger to detect finger bending motions. An Arduino microcontroller processes the sensor data and sends it wirelessly via Bluetooth to an Android app. The app displays the sign language gesture and converts it to speech output. The goal is to help deaf and mute individuals communicate with hearing people by interpreting their sign language gestures into audible speech in real-time. The system is intended to bridge communication between those who understand sign language and those who do not.
IRJET- Smart Speaking Glove for Speech Impaired People — IRJET Journal
This document describes a smart speaking glove system for speech impaired people that uses flex sensors on a glove to detect gestures and convert them to synthesized speech output. The flex sensors detect finger bending and send signals to a microcontroller. The microcontroller matches the signals to predefined gestures and messages stored in its database and outputs the corresponding message to an LCD display and speaker. It also includes an emergency function using GPS and GSM modules to track the user's location and send a message if they activate a panic switch.
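The emergency path described above amounts to formatting the last GPS fix into an SMS-style alert for the GSM module to transmit. A minimal sketch, in which the phone number, coordinates, and message wording are all illustrative placeholders rather than anything specified by the paper:

```python
# Sketch of panic-switch handling: format the last GPS fix into an
# SMS-style alert a GSM module could send. All values are placeholders.

def format_alert(lat, lon, phone="+10000000000"):
    """Build an alert message dict from a GPS fix."""
    body = f"EMERGENCY: user needs help at https://maps.google.com/?q={lat},{lon}"
    return {"to": phone, "body": body}

msg = format_alert(19.0760, 72.8777)
print(msg["body"])
```

On the real device this string would be handed to the GSM module (typically via AT commands) rather than printed.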
Digital voice over is a social project aimed at helping speaking- and hearing-impaired people communicate better with the public. Millions of deaf and hard of hearing people worldwide encounter many problems while trying to communicate with society in daily life. Deaf and speech-impaired people often use sign language to communicate but have difficulty communicating with people who do not understand that language. Sign language relies on patterns such as body language, gestures, and movements of the arms and fingers to convey information. This project was designed to meet the need for electronic devices that can translate sign language into speech, to facilitate communication between the deaf and dumb and the public. Venkat P. Patil | Suyash Mali | Girish Ghadi | Chintamani Satpute | Amey Deshmukh, "Hand Gesture Vocalizer", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7, Issue-2, April 2023. URL: https://www.ijtsrd.com/papers/ijtsrd55157.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
Glove based wearable devices for sign language - GloSign — IAESIJAI
Loss of the capability to talk or hear has psychological and social effects on the affected individuals due to the absence of appropriate interaction. Sign language is used by such individuals to assist them in communicating with each other. This paper proposes a glove called GloSign that can convert American Sign Language to characters. The glove consists of flex and inertial measurement unit (IMU) sensors to identify gestures. The data from the glove is uploaded to an IoT platform, which makes the glove portable and wireless, and is then passed through a k-nearest neighbors (KNN) machine learning algorithm to improve the accuracy of the system. The system was able to achieve an accuracy of 96.8%. The glove can also be used to form sentences. The output is displayed on a screen or converted to speech. This glove can be used to communicate with people who don't know sign language.
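The KNN classification step GloSign describes can be sketched in a few lines: each training sample is a vector of flex/IMU readings with a known letter label, and a new reading takes the majority label among its k closest training samples. The readings and labels below are invented for illustration, not GloSign's data.

```python
# Minimal k-nearest-neighbors sketch for glove sensor readings.
from collections import Counter
import math

# Hypothetical training set: normalized flex readings -> letter label
TRAIN = [
    ([0.9, 0.9, 0.9, 0.9, 0.9], "A"),   # fist-like readings
    ([0.8, 0.9, 0.8, 0.9, 0.8], "A"),
    ([0.1, 0.1, 0.1, 0.1, 0.1], "B"),   # open-hand readings
    ([0.2, 0.1, 0.2, 0.1, 0.2], "B"),
]

def knn_predict(sample, k=3):
    """Label a new reading by majority vote among its k nearest neighbors."""
    dists = sorted((math.dist(sample, x), label) for x, label in TRAIN)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_predict([0.85, 0.9, 0.85, 0.9, 0.85]))  # "A"
```

A production system would use many more samples per gesture and tune k, but the voting logic is the same.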
HAND GESTURE BASED SPEAKING SYSTEM FOR THE MUTE PEOPLE — IRJET Journal
1) The document describes a hand gesture-based speaking system to help mute people communicate through converting hand gestures to audio messages.
2) The system uses flex sensors to detect finger movements and a Raspberry Pi microcontroller to identify predefined gestures and convert them to speech using text-to-speech.
3) The flex sensors are attached to gloves to allow mute users to easily convey common messages through natural hand gestures that are translated to audio by the system.
Communication among blind, deaf and dumb People — IJAEMSJORNAL
Nowadays science and technology have made the human world easy, but some physically and visually challenged people still struggle to communicate with others. In this project, we propose a new system prototype called communication among blind, deaf and dumb people. This will help disabled people overcome their difficulties in communicating with other people, whether disabled or not. Blind people will communicate through the speakers, while deaf and dumb people will see through the system and reply by typing in a terminal. All of this is done as an application, so that it will be easily understood by people with disabilities.
IRJET- A Review on Iot Based Sign Language Conversion — IRJET Journal
This document summarizes a research paper on an IoT-based sign language conversion system. The system uses a glove equipped with flex sensors, contact sensors and a gyroscope to capture the user's hand gestures. The glove's microcontroller analyzes the sensor readings to identify gestures from a library and transmits them via Bluetooth to a smartphone. The system aims to help deaf people communicate with others conveniently and affordably by translating sign language gestures to text displayed on a smartphone.
This document describes a hand gesture vocalizer system that aims to help deaf, blind, and speech impaired people communicate more easily. The system uses flex sensors on a glove to detect finger bending gestures and an accelerometer to detect hand tilting gestures. A microcontroller identifies the gestures and sends the output to an LCD display and via Bluetooth to an Android phone to vocalize the gesture as speech. The system was designed and developed by students to address communication barriers for people with disabilities by translating common sign language gestures into audio and text outputs. It achieved gesture detection and translation but had limitations in vocabulary size and accuracy. Future work could explore expanding its capabilities for more advanced communication.
Two Way Communication System with Binary Code Medium for People with Multiple... — IRJET Journal
The document describes a proposed communication system to help people with multiple disabilities like blindness, deafness, and being mute communicate effectively. The system uses binary code as the medium of communication. It involves wearable devices with sensors, vibration motors, microcontrollers and other components. The system allows disabled users to send and receive text and vibration messages for communication. It aims to reduce communication barriers between disabled individuals and enable them to connect with others.
Gesture is one of the most common forms of sign language used in place of oral communication. It is most commonly used by deaf and dumb people who have difficulty hearing or speaking, for communication among themselves or with ordinary people. Various sign-language programs have been developed by many manufacturers around the world, but few are flexible and affordable for end users. Therefore, this paper presents software introducing a type of system that can automatically detect sign language, to help deaf and mute people communicate better with other people. Pattern recognition and hand recognition are developing fields of research. As an integral part of nonverbal communication, hand gestures play a major role in our daily lives. A hand-gesture system gives us a new, natural, easy-to-use way of communicating with a computer that is familiar to humans. Considering the common shape of the human hand, with four fingers and one thumb, the software aims to introduce a real-time hand recognition system based on extracting structural features such as position, mass centroid, and finger position, together with whether the thumb or a finger is raised or folded.
IRJET - Android based Portable Hand Sign Recognition System — IRJET Journal
This document describes an Android-based portable hand sign recognition system. The system uses a custom-designed data glove with flex sensors to detect hand gestures. A microcontroller converts the analog sensor data to digital and sends it via Bluetooth to an Android mobile device. The goal is to help deaf and mute people communicate through sign language translation to text or voice on a mobile device. The system aims to improve on previous vision-based sign language recognition approaches by using a wearable sensor-based method.
This document summarizes a research paper on developing a real-time sign language detector using computer vision and machine learning techniques. The researchers created a dataset of hand gestures for letters, numbers, and common signs in Indian Sign Language (ISL) using webcam photos. They used a pre-trained SSD MobileNet V2 model with transfer learning to classify the gestures with 70-80% accuracy. Their goal was to build a free and user-friendly app to help deaf and hard of hearing people communicate through automated sign language detection and translation, with the aim of closing communication gaps. The technology identifies selected ISL signs in low light and uncontrolled backgrounds using image processing and human movement classification algorithms.
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR — IRJET Journal
This document summarizes research on a glove-based gesture recognition system using IR sensors. The system aims to help those who are deaf and mute communicate through hand gestures. An IR sensor and LED placed on a glove detect hand gestures based on the amount of light received by the sensor. The Arduino microcontroller recognizes the gestures and displays the meaning on an LCD screen while playing an audio message. The researchers claim this method is more accurate and has a lower error rate than conventional image processing approaches. It is intended to help address both safety and communication issues faced by those who are deaf or speech-impaired. Experimental results showed the system successfully recognized gestures and could help reduce the gap between those who are normal and speech-impaired.
Sign Language Recognition using Deep Learning — IRJET Journal
The document discusses using deep learning techniques like MobileNet V2 to develop a model for sign language recognition. It aims to classify sign language gestures to help communicate with deaf people. The model was trained on a dataset of sign language images and achieved an accuracy of 70% in recognizing letters, numbers, and gestures.
Survey Paper on Raspberry pi based Assistive Device for Communication between... — IRJET Journal
This document discusses several research papers on developing assistive devices to aid communication between blind, deaf, and mute individuals. It begins with an abstract describing the goal of converting sign language to voice and text using a Raspberry Pi, camera, speaker and LCD display by recognizing human gestures. It then summarizes 8 research papers on related topics, describing systems that use flex sensors on gloves to detect sign language and translate it to speech/text, or use image processing on hand gestures. The document concludes by outlining the proposed methodology, hardware and software requirements, and potential limitations and future work for a sign language translation system using a Raspberry Pi, camera and sensors.
Wireless and uninstrumented communication by gestures for deaf and mute based... — IOSR Journals
Abstract: Although technology is advancing as per Moore's law, not much hi-tech attention has been paid to deaf and mute individuals. The deaf and mute have to communicate through sign language even for pithy things, and many people do not understand this language. Nowadays gesture is becoming an increasingly popular means of interacting with computers. This paper sheds light on a proposed idea relying on a recent technology named Wi-See, developed in Washington, US. This technology uses conventional Wi-Fi signals for home automation by gesture recognition. Building on it, the modified application idea is aimed at the deaf and dumb, especially those who cannot speak but know the English language. Since wireless signals do not require line of sight and can traverse walls, the proposed idea can be very useful for speechless people to express their views without instrumenting the human body with sensing devices. The whole idea is based on the Doppler shift in the frequency of Wi-Fi signals. Instead of controlling home appliances as Wi-See does, this idea extends to producing speech or words through installed speakers. Each successive English-alphabet pattern generated via Doppler shift by gestures in the air can be recorded and matched with a predefined pattern, which, when processed, is output through the speaker as a combined-letter word, inspired by an English digital dictionary with prediction and correction algorithms. Keywords: Wi-Fi, Wi-See, Doppler shift, Gestures, Communication
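The Doppler shift the abstract relies on is tiny, which is worth a back-of-the-envelope check: a hand moving at speed v shifts a reflected carrier of frequency f by roughly f_d = 2vf/c (the factor 2 because the signal travels to the hand and back). The hand speed below is an illustrative value.

```python
# Rough magnitude of the Doppler shift a moving hand imposes on a
# reflected 2.4 GHz Wi-Fi carrier: f_d = 2 * v * f / c.

C = 3.0e8        # speed of light, m/s
F_WIFI = 2.4e9   # 2.4 GHz Wi-Fi carrier, Hz

def doppler_shift(hand_speed_mps):
    return 2 * hand_speed_mps * F_WIFI / C

# A gesture at ~0.5 m/s shifts the carrier by only a few hertz, which is
# why Wi-See-style systems need fine-grained frequency analysis.
print(round(doppler_shift(0.5), 2))  # 8.0 (Hz)
```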
Gestures Based Sign Interpretation System using Hand Glove — IRJET Journal
This document describes a glove-based sign language interpretation system that uses flex sensors and an Arduino Uno microcontroller. The system is intended to help those with speech impairments communicate by translating sign language gestures into text and speech output. The glove contains flex sensors that detect finger and hand movements, sending that data to the Arduino which interprets the gestures using machine learning algorithms and outputs the translation. The system aims to reduce communication barriers for the deaf and hard of hearing.
A review of factors that impact the design of glove based wearable devices — IAESIJAI
Loss of the capability to talk or hear has psychological and social effects on the affected individuals due to the absence of appropriate interaction. Sign language is used by such individuals to assist them in communicating with each other. The paper reports details of various aspects of wearable healthcare technologies designed in recent years, covering the aim of each study, the types of technologies being used, the accuracy of the system designed, data collection and storage methods, the technology used to accomplish the task, and the limitations and future research suggested by the study. The aim of the review is to compare the differences between the papers; the technologies used are also compared, with the help of accuracy, to determine which wearable device is better. The limitations and future research help in determining how the wearable devices can be improved. A systematic review was performed based on a search of the literature, and a total of 23 articles were retrieved. The articles study and design various wearable devices, mainly glove-based devices, to help users learn sign language.
This project aims to develop a smart glove interpreter to facilitate communication between deaf or impaired people and normal people using wireless data transmission. The glove is fitted with flex sensors that detect gestures which are processed by a microcontroller to provide voice outputs or messages based on the gesture. It allows impaired people to control devices or send alerts. The system works by mapping different finger flexing patterns detected by flex sensors to specific text messages or voice outputs. This provides an easier means of communication for impaired individuals.
IRJET- Hand Gesture Recognition for Deaf and Dumb — IRJET Journal
This document proposes a system for hand gesture recognition to help deaf and dumb individuals communicate. The system would use computer vision and machine learning techniques to recognize hand gestures from video input and translate them into text in real time. This would allow deaf and dumb people to communicate with others without needing an interpreter who understands sign language. The proposed system would segment the hand from each video frame, extract features of the hand pose, and classify the gesture by matching it to examples in a dataset. The goal is to provide deaf and dumb individuals a way to communicate independently through automatic translation of their sign language gestures into text.
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Int J Elec & Comp Eng, Vol. 8, No. 4, August 2018: 2290-2298, ISSN: 2088-8708
Electronic Glove: A Teaching Aid for the Hearing Impaired (Ertie Abana)
development of language and communication. However, ASL, which was the first language ever taught in the
BiBi approach, is said to be the native language of the Deaf and has historically been used in Deaf Education.
Keeping in mind the use of the BiBi approach in teaching hearing-impaired children, technology can be
integrated across the curriculum to assist professionals and teachers in Deaf Education and make learning
more interesting. Technology has been digitizing classrooms through digital learning tools that have
increased students' engagement and motivation towards learning.
This paper explored the development of a data glove as a digital learning tool for teaching
hearing-impaired children. The data glove was named Electronic Glove, or E-Glove. It was developed using a
general-purpose microcontroller board for processing input coming from a combination of flex sensors and
an accelerometer. The use of the accelerometer alone for recognizing ASL sign words is what separates the
E-Glove from other existing data gloves.
2. RESEARCH METHOD
The primary goal in the development of the E-Glove is to display the recognized ASL letters and words
through a software program installed on a computer. It facilitates teaching hearing-impaired children
by letting teachers demonstrate the proper hand gestures and then having the students perform them using the glove.
Figure 1 represents the block diagram of the E-Glove. Five flex sensors and one accelerometer were
connected to the microcontroller that contained the functionality of the device. UHF modules were used to
wirelessly connect the E-Glove to the computer. Any letter or word recognized using the E-Glove is
shown and read aloud on the computer.
Figure 1. The Block Diagram of E-Glove
2.1. General description
Figure 2 shows the connection of the different components used in the development of the
E-Glove. Once the microcontroller detects the sign language, it sends the letter or word that corresponds to
the hand gesture wirelessly to the computer using the UHF transmitter. The UHF receiver connected to the
UART acts as a bridge between the microcontroller and the computer. The computer then displays the
letters or words on the screen and also converts them into speech.
a. Arduino Mega
b. Flex sensor
c. Accelerometer
d. UHF Module
e. Software Program
2.2. Arduino Mega microcontroller
The Arduino is a long-established general-purpose microcontroller [11] used for
developing do-it-yourself electronic projects and numerous embedded systems. Arduino microcontrollers are
physically programmable circuit boards that can be programmed using the Arduino IDE (Integrated
Development Environment) [12], which is available on all platforms [13] that support the product. The IDE is
also used to compile and upload the written source code to the Arduino board. Arduino is both open-source
hardware and software [14].
There are many types of Arduino microcontroller boards. We used the Arduino Mega for the
E-Glove because it has many digital input/output pins and analog input pins, making it very convenient
for projects that require a lot of digital inputs and outputs. The Arduino Mega has the following specifications:
a. Microcontroller - ATmega1280
b. Operating Voltage - 5V
c. Recommended Input Voltage - 7-12V
d. Digital I/O Pins - 54
e. Analog Input Pins - 16
f. DC Current per I/O Pin - 40 mA
g. Clock Speed - 16 MHz
The Arduino Mega microcontroller was the most important component of the E-Glove because it
contained the whole functionality of the device; without it, the device cannot work. It holds the code for
the sensors as well as the database of the device. It also converts the analog signals coming from the sensors
into digital form so that the computer can read the data.
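As a rough illustration (not the authors' code), the analog-to-digital step can be mimicked in a few lines. The 0-1023 range reflects the Arduino's standard 10-bit ADC, and the integer re-mapping follows the well-known semantics of the Arduino map() function; the 0-90 degree bend range is an assumption for the sake of the example:

```python
def arduino_map(x, in_min, in_max, out_min, out_max):
    # Integer re-mapping with the same semantics as Arduino's map()
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

# A 10-bit ADC reading (0-1023) mapped to an assumed 0-90 degree bend range
adc_reading = 512
bend_deg = arduino_map(adc_reading, 0, 1023, 0, 90)
print(bend_deg)  # prints 45
```

A half-scale reading thus maps to roughly half of the assumed bend range, which is the kind of normalized value the recognition code can then compare against stored gesture values.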
The flex sensors were connected to the analog pins A0-A4, while the accelerometer used A5 of the
Arduino microcontroller. The first transceiver, which served as the transmitter, was connected to the Rx and Tx
communication pins of the Arduino microcontroller.
Figure 2. The circuit diagram of the device
2.3. Flex sensors
Flex sensors act as variable resistors mainly used to detect bending or flexing. The resistance of this
kind of sensor changes when bent. They are made of carbon on a strip of plastic whose resistance gets
higher as it bends in one direction. Flex sensors differ in length, but most have a resistance
ranging from about 10 kilo-ohms to 35 kilo-ohms.
In the E-Glove, the degree of bend of each flex sensor was compared to the mapped values in the code
to recognize a particular sign. Five flex sensors were used, one placed on each finger. Their values
are recorded and compared with the values that represent a particular sign; if the values
match, the sign is shown in the software program.
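The matching step described above can be sketched as follows. The template values and the tolerance window are illustrative assumptions, not the calibration actually used in the paper:

```python
# Hypothetical flex-sensor templates for a few ASL letters
# (five mapped bend readings, thumb to little finger)
TEMPLATES = {
    "A": (80, 75, 78, 77, 10),
    "B": (5, 4, 6, 5, 70),
    "L": (8, 5, 80, 82, 85),
}

def recognize_letter(readings, tolerance=5):
    """Return the letter whose template every sensor matches
    within the tolerance, or None when no template matches."""
    for letter, template in TEMPLATES.items():
        if all(abs(r - t) <= tolerance for r, t in zip(readings, template)):
            return letter
    return None  # nothing is shown; the user repeats the gesture
```

Returning None when no template matches mirrors the behavior reported later in the paper, where the software shows no letter at all on a conflicting reading rather than guessing incorrectly.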
2.4. Accelerometer
The accelerometer is an inertial sensor that is capable of dynamically sensing over a vast range.
Accelerometers are used to measure acceleration forces in one, two, or three orthogonal axes, depending on
their application in a certain device.
For the E-Glove to detect hand movements when performing a certain ASL sign word, the accelerometer
was used. The value read from the accelerometer was compared to the value saved in the software program to
determine whether a certain word corresponds to the hand movement.
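The paper compares accelerometer values against stored ones; one simple way to reduce raw (x, y, z) samples to a hand-movement feature is shown below, purely as an assumed illustration (the threshold and sample values are hypothetical):

```python
def dominant_motion(prev, curr, threshold=15):
    """Classify hand movement from two consecutive (x, y, z)
    accelerometer samples: return the axis with the largest
    change and its direction, or None if the hand is still."""
    deltas = [c - p for p, c in zip(prev, curr)]
    axis = max(range(3), key=lambda i: abs(deltas[i]))
    if abs(deltas[axis]) < threshold:
        return None  # movement too small to count as a gesture
    return ("x", "y", "z")[axis], "+" if deltas[axis] > 0 else "-"
```

A feature like ("x", "+") can then be matched against the stored hand-movement values for each sign word, in the same spirit as the flex-sensor template matching.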
2.5. UHF module
The UHF module was used in the project as a bridge between the E-Glove and the computer so they can
communicate wirelessly. The UHF module connected to the E-Glove sends the data to the computer. The
purpose of making the device wireless is to keep the user comfortable and not confined to just one area of the
room.
Specifically, a UHF-EX was used in the E-Glove. It is a wireless UART transceiver with 100
milliwatts (mW) of radio frequency output. Two UHF-EX modules were used in the E-Glove and served different
purposes: the first was used as a transmitter connected to the Arduino microcontroller, while the second
was used as a receiver connected to the computer. The radio frequency power of the UHF-EX allowed a
useful control range of more than 500 meters, while the unobstructed line-of-sight range can reach 700 meters.
a. Power Input - 5V/3.3V jumper selectable
b. Frequency Range - 431.1 MHz - 437.3 MHz
c. Channel Separation - 400 kHz
d. Modulation Type - FSK (Frequency-Shift Keying)
e. UART Baud Rate - 9600 bit/s
f. Current Consumption TX - 36 mA @ 5V
g. Current Consumption RX - 23 mA @ 5V
h. Transmit-to-Receive Latency - 20-30 milliseconds (ms)
2.6. Software program
The data coming from the two sensors attached to the E-Glove were wirelessly sent to the computer,
and a software program was developed to receive these data and show the corresponding sign language
equivalent. Moreover, the equivalent letter or word was also read aloud using a text-to-speech function.
The software program was developed using Microsoft Visual C# Express. Microsoft Visual C#
Express is a high-level programming language intended for building a variety of applications using the
.NET Framework. This programming language is simple, powerful, type-safe, and object-oriented. The
continuous development of this language enables rapid application development while retaining the
simplicity and elegance of a C-based programming language.
2.7. American sign language (ASL) alphabet and sign words
ASL is a natural language that serves as the first language taught in the BiBi approach of deaf
education, before the English language. One of the first things that any person should learn is spelling. In
deaf education, the ASL alphabet is taught to hearing-impaired children so that they can spell words and,
most especially, names.
Figure 3. The ASL alphabet from North Star Teacher Resources
Figure 3 represents the ASL alphabet from North Star Teacher Resources. ASL sign words were
also taught to hearing-impaired children, especially the important words that they need in everyday
conversation, such as Know, You, Good, Fine, Understand, Thank you, Again, Me, Hi, and Bad.
3. RESULTS AND ANALYSIS
3.1. The E-Glove
The E-Glove shown in Figure 4 was sewn with a flex sensor on each finger. The accelerometer was
positioned on top of the hand. Both the flex sensors and the accelerometer were connected to the Arduino
microcontroller, which was placed inside a box wrapped around the forearm.
Figure 4. The prototype glove
The E-Glove sends the data wirelessly to the computer. This feature is not exhibited by most of the
previous data gloves [2], [4], [6], [8]-[10] but has proven beneficial in other designs [1], [3], [5], [7]. The
wireless feature of the E-Glove should encourage healthy classroom interaction, in which the teacher can let the
students use the E-Glove at their seats and perform a particular hand gesture that may serve as a form of
recitation in Deaf Education.
3.2. Sensor values
Output data were obtained directly from the data glove, and each sensor produced a different resistance
value depending on the hand gesture. Signals produced by the sensors were translated by the Arduino
microcontroller into digital form, and the data were transferred to the Arduino software through serial
communication.
As seen in Figure 5, the sensors located at the fingers had different readings depending on the sign
language. Using the Arduino software, the sensor values were mapped to obtain the specific letter or word. The
letter or word corresponding to the values was sent to the software program. These letters or words
were then shown in the graphical user interface and, at the same time, read aloud by the system.
Figure 6 shows the readings of the accelerometer in the Arduino software. Each column contains the
readings of the x, y, and z-axis, respectively. The accelerometer values were also mapped according to the
movement of the hand.
Figure 5. The Flex Sensor values
Figure 6. The Accelerometer values
3.3. Software program
The software program developed in Visual C# is shown in Figure 7. It was designed to be
user-friendly and attractive for children. There were two textboxes, one large and one small. The large text box is
where the accumulated words or letters show up, while the smaller text box shows the last letter or word
recognized by the software program. Buttons were labeled concisely and clearly. The "Start" button enables
the software program to show the corresponding letter or word in the text boxes based on the gesture
exhibited by the user. The "Back Space" button deletes the last character in the large text box, while the
"Clear" button removes all characters in the large text box. The "Stop" button stops the program from
showing the letter or word recognized by the E-Glove.
Figure 7. The Graphical User Interface (GUI) of the software program
The design of the GUI enabled children in Deaf Education to spell out words shown in the large text
box, something that the previous data gloves lack since they can only show the last recognized letter [1], [4].
Moreover, the GUI design is forgiving because it lets the user correct misspelled words using the
"Back Space" button. The accumulation of letters in the large text box enables the teacher to evaluate and
determine the number of correct gestures the student has made.
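The button behavior described above amounts to a tiny text-buffer model. Since the GUI itself was written in C#, the following is only an assumed, language-neutral sketch of the logic:

```python
class GestureDisplay:
    """Models the large text box of the E-Glove GUI: recognized
    letters accumulate, Back Space drops the last one, and
    Clear empties the box."""

    def __init__(self):
        self.text = ""

    def recognize(self, letter):
        self.text += letter      # append the latest recognized letter

    def back_space(self):
        self.text = self.text[:-1]  # forgive a misspelled gesture

    def clear(self):
        self.text = ""           # start a fresh word or sentence
```

With this model, a student spelling C-A-R and then pressing Back Space once leaves "CA" on screen, matching the correction behavior the paper highlights.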
The letters and words recognized were also converted into speech so that teachers could check whether the
gesture was right even without looking at the GUI. Additionally, the size of the characters was large enough
to be seen by the students.
3.4. Testing of ASL alphabet and sign words
Each letter of the ASL alphabet was tested by performing its gesture once per trial to check the accuracy
of the device. Table 1 shows that in the second and fifth trials, one letter was not recognized. Calculating
the rate of accuracy, E-Glove is 98% accurate in detecting letters. The simple formula below was
used in determining the rate of accuracy of E-Glove in recognizing letters of the ASL alphabet:
Accuracy = (∑i Ri / ∑i Ni) × 100% (1)

where Ni is the number of gestures tested in trial i and Ri is the number of gestures recognized in that trial.
The device was able to recognize all the letters in most trials, but some letters went undetected
due to conflicts in the readings, since the mapped flex sensor values of those letters are very close to each
other. When such a conflict occurred, E-Glove did not show any letter in the software program and the user
had to repeat the gesture. This implies that E-Glove is reliable, because the software program will not show
incorrect letters.
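This reject-on-conflict behavior can be illustrated with a nearest-template matcher that refuses to answer when two letter templates are almost equally close to the reading. The distance metric, threshold values, and function names here are assumptions for illustration, not the authors' implementation:

```python
def classify_letter(reading, templates, max_dist=40.0, margin=15.0):
    """Nearest-template matching with conflict rejection (illustrative).

    reading   -- tuple of mapped flex-sensor values, one per finger
    templates -- {letter: tuple of stored flex values}
    Returns the best-matching letter, or None when no template is close
    enough or two templates conflict (are almost equally close)."""
    def dist(a, b):
        # Euclidean distance between two sets of finger readings
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    scored = sorted((dist(reading, t), letter) for letter, t in templates.items())
    best_d, best_letter = scored[0]
    if best_d > max_dist:
        return None  # no stored gesture matches at all
    if len(scored) > 1 and scored[1][0] - best_d < margin:
        return None  # conflicting readings: show nothing, user repeats the gesture
    return best_letter
```

Returning None instead of the nearest guess is what makes the glove conservative: an ambiguous bend pattern produces no output rather than a wrong letter.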
Table 1. The accuracy of E-Glove in recognizing letters of the ASL alphabet
Trial (i)   Number of Letters (Ni)   Number of Recognized Letters (Ri)
1           26                       26
2           26                       25
3           26                       26
4           26                       26
5           26                       25
As with the ASL alphabet, simple words were also tested. The accelerometer with gyroscope was
used in the detection of words. Table 2 shows the number of recognized words for every trial. The E-Glove
was not able to recognize all the words in the second and third trials; in each of these, one word was missed.
Using the same formula for accuracy, the device achieved an accuracy of 96% on words.
The device was only able to detect words whose hand orientation or position matched those fed into
the software program. Word recognition also exhibited the same behavior as letter recognition: no word is
shown if the gesture is undetected. This not only makes E-Glove reliable in recognizing word gestures but
also lets users practice the correct orientation or position of the hand when making them.
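The orientation gating can be sketched in the same spirit: a word is reported only when the measured hand orientation falls within a tolerance of a stored template. The angle representation (pitch and roll in degrees), the tolerance, and the word list below are hypothetical, chosen only to illustrate the behavior:

```python
def classify_word(orientation, word_templates, tol=15.0):
    """Orientation gating for word gestures (illustrative sketch).

    orientation    -- (pitch, roll) in degrees, from the accelerometer/gyroscope
    word_templates -- {word: (pitch, roll) of the stored hand position}
    Returns the matching word, or None if the hand orientation matches
    no stored template."""
    pitch, roll = orientation
    for word, (tp, tr) in word_templates.items():
        if abs(pitch - tp) <= tol and abs(roll - tr) <= tol:
            return word
    return None  # wrong hand orientation: no word is displayed
```

As with the letters, a non-matching orientation yields no output, which nudges the learner toward the correct hand position rather than rewarding an approximate one.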
Table 2. The accuracy of E-Glove in recognizing ASL sign words
Trial (i)   Number of Words (Ni)   Number of Recognized Words (Ri)
1           10                     10
2           10                     9
3           10                     9
4           10                     10
5           10                     10
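Equation (1) applied to the trial counts in the two tables can be verified quickly; this short Python sketch simply restates the arithmetic:

```python
def accuracy(trials):
    """Rate of accuracy per Equation (1): the sum of recognized gestures
    over the sum of gestures tested, across all trials, times 100."""
    tested = sum(n for n, _ in trials)
    recognized = sum(r for _, r in trials)
    return 100.0 * recognized / tested

# (tested, recognized) per trial, taken from Tables 1 and 2
letters = [(26, 26), (26, 25), (26, 26), (26, 26), (26, 25)]
words = [(10, 10), (10, 9), (10, 9), (10, 10), (10, 10)]
# accuracy(letters) -> about 98.5%; accuracy(words) -> 96.0%
```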
The acceptable rate of accuracy achieved by the E-Glove suggests that recognizing letters with only
one type of sensor, as opposed to multiple sensors [1], [4], is sufficient. Since less data has to be read and
processed, the device analyzes gestures faster, unlike previous studies using multiple sensors, in which
recognition speed was traded off against accuracy. Moreover, fewer flex sensors were used compared to
other gloves, which makes E-Glove cheaper to produce.
In recognizing ASL sign words, E-Glove also showed that an acceptable rate of accuracy can be
achieved with a single accelerometer. In previous studies, the accelerometer was used in combination with
flex sensors [1], [4] only to recognize letters.
4. CONCLUSION
In deaf education using the BiBi approach, children learn American Sign Language as a first
language and as their primary way of communicating with other people. E-Glove works as an
automated translator, converting sign language directly into vocal and textual form using flex sensors
and an accelerometer. Considering the accuracy it exhibited, the reduced number of sensors that improves
gesture recognition speed, and its cheaper production cost, it can very well be implemented as a teaching aid
for Deaf Education. Furthermore, the design of the GUI has an increased aesthetic appeal for children. This
educational tool for hearing-impaired children would encourage healthy classroom interaction. The E-Glove
can be further improved by adding more ASL sign words from the ASL dictionary to its pool of words. The
software program can also be extended to construct sentences from the detected words.
ACKNOWLEDGEMENTS
We thank the Heavenly Father for His endless blessings throughout the development of the device. We
are grateful to all the teachers and friends who assisted us in every step undertaken during the
development.
REFERENCES
[1] Bukhari J, Rehman M, Malik SI, Kamboh A, Salman A, “American Sign Language Translation through Sensory
Glove; Signspeak”, International Journal of u-and e-Service, Science and Technology, 2015, vol. 8, no. 1,
pp. 131-142.
[2] Pramada S, Saylee D, Pranita N, Samiksha N, Vaidya A, “Intelligent Sign Language Recognition Using Image
Processing”, IOSR Journal of Engineering, 2013, vol. 3, no. 2, pp. 45-51.
[3] Gunasekaran K, An R, “Sign language to speech translation system using PIC microcontroller”, International
Journal of Engineering and Technology, 2013, vol. 5 no. 2, pp. 1024-1028.
[4] Anetha K, Rejina Parvin J, “Hand Talk-A Sign Language Recognition Based On Accelerometer and EMG Data”,
International Journal of Innovative Research in Computer and Communication Engineering, 2014, vol. 2, no. 3,
pp. 206-215.
[5] Havalagi PS, Nivedita SU, “The amazing digital gloves that give voice to the voiceless”, International Journal of
Advances in Engineering & Technology, 2013, vol. 6, no. 1, pp. 471-480.
[6] Philomina S, Jasmin M, “Hand Talk: Intelligent Sign Language Recognition for Deaf and Dumb”, International
Journal of Innovative Research in Science, Engineering and Technology, 2015, vol. 4, no. 1, pp. 18785-18790.
[7] Dinesh S, “Talking Glove - A Boon for the Deaf, Dumb and Physically Challenged”, International Journal of
Advanced Research in Electronics and Communication Engineering, 2015, vol. 4, no. 5, pp. 1366-1369.
[8] Solanki Krunal M, “Indian Sign Languages using Flex Sensor Glove”, International Journal of Engineering Trends
and Technology, 2013, vol. 4, no. 6, pp. 2478-2480.
[9] Lokhande P, Prajapati R, Pansare S, “Data Gloves for Sign Language Recognition System”, National Conference
on Emerging Trends in Advanced Communication Technologies, 2015, pp. 11-14.
[10] Gowri D, Vidhubala D, “Sign Language Recognition for Deaf and Dumb People”, International Journal of
Research in Engineering and Technology, 2014, vol. 3, no. 7, pp. 797-799.
[11] Gunawan TS, Yaldi IRS, Kartiwi M, Mansor H, “Performance Evaluation of Smart Home System using Internet of
Things”, International Journal of Electrical and Computer Engineering, 2018, vol. 8, no. 1, pp. 400-411.
[12] Dessai S, Mahir MM, Mayur R, Singha N, Avaradhi V, “Design and Development of Low Cost Navigation and
Security System for Indian Fisherman Using Adrino Nano Platform”, International Journal of Reconfigurable and
Embedded Systems, 2015, vol. 4, no. 1, pp. 28-41.
[13] Mlakić D, Nikolovski S, Alibašić E, “Designing Automatic Meter Reading System Using Open Source Hardware
and Software”, International Journal of Electrical and Computer Engineering, vol. 7, no. 6, 2017, pp. 3282-3291.
[14] Gogineni VR, Matcha K and Rao R, “Real Time Domestic Power Consumption Monitoring using Wireless Sensor
Networks”, International Journal of Electrical and Computer Engineering, 2015, vol. 5, no. 4, pp. 685-694.
BIOGRAPHIES OF AUTHORS
Ertie Abana is currently the Head of the Center for Engineering Research and Technology Innovation at
the University of Saint Louis. He has been teaching research to Computer Engineering students for three
(3) years and is also a part-time professor in the Graduate School program of the University of Saint
Louis. He received his BS in Computer Engineering and Master in Information Technology degrees from
the same university in 2011 and 2016, respectively.
Kym Harris Bulauitan recently received his Bachelor’s degree in Computer Engineering from the
University of Saint Louis, Tuguegarao City. His areas of interest include wireless networks, embedded
systems, and software development. He attended various workshops on software development and
RFID technologies.
Ravy Kim Vicente recently received his Bachelor’s degree in Computer Engineering from the University
of Saint Louis, Tuguegarao City. His areas of interest include microprocessors, sensor technologies, and
embedded systems. He attended various workshops on microcontrollers and robotics.
Michelle Rafael recently received her Bachelor’s degree in Computer Engineering from the University of
Saint Louis, Tuguegarao City and is now working as an Associate Software Engineer in a professional
services company. Her areas of interest include microprocessors, software development, and
embedded systems. She attended various workshops on microcontrollers and programming.
Jay Boy Flores recently received his Bachelor’s degree in Computer Engineering from the University of
Saint Louis, Tuguegarao City. His areas of interest include microprocessors and embedded systems.
He attended various workshops on microcontrollers and wearable technologies.