This document describes a smart glove system that translates sign language gestures into speech to help deaf and mute people communicate. The glove uses flex sensors on each finger to detect finger bending. An Arduino microcontroller processes the sensor data and sends it wirelessly via Bluetooth to an Android app, which displays the recognized gesture and converts it to speech output. The goal is to bridge communication between deaf and mute individuals and hearing people who do not understand sign language by interpreting sign language gestures into audible speech in real time.
This document describes a smart glove system that translates sign language gestures into speech and text to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures, which are processed by an Arduino microcontroller. The Arduino identifies letters and words from the gestures and outputs them as speech from a connected speaker and as text on an Android phone app. The goal is to help deaf-mute individuals effectively convey information to people without sign language training by translating their gestures into audio and text in real-time.
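The glove-side logic these summaries describe can be sketched as a simple lookup: quantize each flex-sensor reading to bent or straight, then match the resulting finger pattern against a stored gesture table. The threshold value and the gesture table below are hypothetical examples, not values from any of the papers.

```python
# Sketch of the flex-sensor gesture lookup described above.
# The 0-1023 reading range, the 512 threshold, and the gesture
# table are illustrative assumptions, not values from the papers.

GESTURES = {
    # (thumb, index, middle, ring, pinky): 1 = bent, 0 = straight
    (0, 0, 0, 0, 0): "HELLO",   # open palm
    (1, 1, 1, 1, 1): "YES",     # closed fist
    (1, 0, 0, 1, 1): "PEACE",   # index and middle extended
}

def classify(readings, threshold=512):
    """Quantize raw ADC readings to bent/straight, then look up
    the resulting finger pattern in the gesture table."""
    pattern = tuple(1 if r > threshold else 0 for r in readings)
    return GESTURES.get(pattern, "UNKNOWN")

print(classify([900, 880, 910, 870, 860]))  # all fingers bent -> YES
print(classify([100, 120, 90, 110, 130]))   # all straight -> HELLO
```

In a real glove the recognized label would then be sent over the Bluetooth link to the phone app for speech output.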
IRJET- Hand Gesture Recognition for Deaf and Dumb (IRJET Journal)
This document proposes a system for hand gesture recognition to help deaf and dumb individuals communicate. The system would use computer vision and machine learning techniques to recognize hand gestures from video input and translate them into text in real-time. This would allow deaf and dumb people to communicate with others without needing an interpreter who understands sign language. The proposed system would segment the hand from each video frame, extract features of the hand pose, and classify the gesture by matching it to examples in a dataset. The goal is to provide deaf and dumb individuals a way to communicate independently through automatic translation of their sign language gestures into text.
Design of a Communication System using Sign Language aid for Differently Able... (IRJET Journal)
This document describes a proposed system to design a communication system using sign language to aid differently abled people. The system aims to use image processing and artificial intelligence techniques to recognize characters in sign language from video input and convert them to text and speech output. It discusses technologies like blob detection, skin color recognition and template matching that would be used for sign recognition. The system is intended to help deaf and mute people communicate by translating their sign language to a format understandable by others.
A Translation Device for the Vision Based Sign Language (ijsrd.com)
Sign language is very important for people with hearing and speaking deficiencies, generally called deaf and mute. It is the only mode of communication such people have to convey their messages, so it is important that others can understand their language. This paper proposes a method for an application that recognizes the different signs of Indian Sign Language. The images show the palm side of the right and left hand and are loaded at runtime. The method was developed for a single user. Real-time images are captured first and stored in a directory; feature extraction is then performed on the most recently captured image to identify which sign the user articulated, using the SIFT (Scale-Invariant Feature Transform) algorithm. The input image is compared against the stored images, and the result is produced from the key points matched between the input image and the image already stored in the directory or database for a specific letter; the outputs can be seen in the sections below. Indian Sign Language has 26 signs, one for each alphabet letter, of which the proposed algorithm gave 95% accurate results for 9 letters, with images captured at every possible angle and distance.
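The matching step this abstract describes reduces each image to a set of feature descriptors and picks the stored letter with the most matched key points. A minimal sketch of that idea, using plain number lists in place of real SIFT descriptors and Lowe's ratio test to reject ambiguous matches (the toy descriptors below are invented for illustration):

```python
# Toy sketch of descriptor matching: the stored letter whose
# descriptor set matches the most query descriptors wins.
# Descriptors here are invented 2-D points, not real SIFT output.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def count_matches(query, stored, ratio=0.8):
    """Count query descriptors whose best match in `stored` is
    clearly better than the second best (Lowe's ratio test)."""
    matches = 0
    for q in query:
        d = sorted(dist(q, s) for s in stored)
        if len(d) >= 2 and d[0] < ratio * d[1]:
            matches += 1
    return matches

def recognize(query, database):
    """Return the stored letter with the most matched key points."""
    return max(database, key=lambda letter: count_matches(query, database[letter]))

db = {
    "A": [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]],
    "B": [[5.0, 5.0], [5.0, -5.0], [-5.0, 5.0]],
}
query = [[0.1, 0.1], [0.9, 0.1]]
print(recognize(query, db))  # -> A
```

A real implementation would extract the descriptors with a SIFT detector and typically use a k-d tree or FLANN matcher rather than brute-force distances.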
IRJET- A Smart Glove for the Dumb and Deaf (IRJET Journal)
1) The document describes a smart glove that can translate sign language gestures into speech to help deaf people communicate.
2) The glove uses flex sensors to detect finger movements, and an accelerometer and gyroscope to detect hand movements.
3) The sensors' data is processed by a Raspberry Pi microprocessor which analyzes the gestures and outputs text on a screen and speech through a speaker to translate the sign language into a form hearing people can understand.
IRJET- Sign Language Converter for Deaf and Dumb People (IRJET Journal)
This document describes a proposed system to convert sign language gestures into text and speech to help facilitate communication between deaf or mute individuals and others who do not understand sign language. The system would use sensors in a glove or camera image processing to recognize hand gestures representing letters, words, or concepts. The gestures would be translated into text by a microcontroller or single-board computer like Raspberry Pi and then into speech by a text-to-speech module. This would allow deaf or mute people to communicate with others without requiring the other person to know sign language. The document discusses different techniques for recognizing gestures including glove-based, vision-based, and hybrid approaches.
IRJET- Smart Speaking Glove for Speech Impaired People (IRJET Journal)
This document describes a smart speaking glove system for speech impaired people that uses flex sensors on a glove to detect gestures and convert them to synthesized speech output. The flex sensors detect finger bending and send signals to a microcontroller. The microcontroller matches the signals to predefined gestures and messages stored in its database and outputs the corresponding message to an LCD display and speaker. It also includes an emergency function using a GPS and GSM modules to track the user's location and send a message if they activate a panic switch.
IRJET- An Innovative Method for Communication Among Differently Abled Peo... (IRJET Journal)
This document describes a proposed system to help improve communication between disabled individuals, including those who are deaf, blind, or mute. The system uses a glove fitted with flex sensors that can detect hand gestures. When a gesture is made, the flex sensors trigger an Arduino microcontroller to play a pre-recorded audio message or display a message on an LCD screen. The system is designed so that deaf individuals can receive messages through visual display, blind individuals can receive messages through Braille or vibration, and mute individuals can communicate through gestures. The goal is to help overcome barriers to communication between disabled people and enable them to interact with others.
The document discusses the development of a smart glove called the Palmify. It describes the motivation and hypothesis for creating a glove that can respond to body movement. The document outlines the key drivers of wearable technology like faster hardware, cloud storage, and location data. It then details the industries of fashion, technology, and medicine that wearables impact. The document provides an overview of two models of the Palmify glove created, their features and the process used to design and build the gloves. It concludes by discussing potential future projects like a wristband notification system.
IRJET- Hand Gesture based Recognition using CNN Methodology (IRJET Journal)
This document summarizes a research paper on hand gesture recognition using convolutional neural networks (CNN). The paper aims to develop a system to recognize American Sign Language (ASL) to help facilitate communication for deaf individuals. The system would capture hand gestures via video and translate them into text. The researchers conducted a literature review on previous work using CNNs and 3D convolutional models for sign language recognition. They intend to implement a 3D CNN model on ASL data and analyze the results to improve recognition accuracy for communicating via sign language.
IRJET- A System for Recognition of Indian Sign Language for Deaf People using ... (IRJET Journal)
Manisha D. Raut, Pallavi Dhok, Ketan Machhale, Jaspreet Manjeet Hora, "A System for Recognition of Indian Sign Language for Deaf People using Otsu's Algorithm", International Research Journal of Engineering and Technology (IRJET), Volume 2, Issue 01, April 2015. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net
Abstract
Sign language recognition is one of the important research areas in engineering today, and a number of methods have recently been developed to help deaf and dumb people convey their message to others. In this paper we propose methods that make recognition of the signs easier during communication: different sign symbols are used to convey meanings, and the recognized symbols are converted into text. In this project, hand gestures are captured through a webcam and the image is converted to grayscale. Segmentation of the grayscale gesture image is performed using Otsu's thresholding algorithm: the image's gray levels are divided into two classes, hand and background, and the optimal threshold value is determined by maximizing the ratio of between-class variance to total variance. The Canny edge detection technique is then used to find the boundary of the hand gesture in the image.
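Otsu's method as described here can be written compactly: for every candidate gray level, split the histogram into two classes and keep the level that maximizes the between-class variance. A pure-Python sketch (the tiny 8-pixel "image" is a made-up example, not data from the paper):

```python
# Otsu's threshold: pick the gray level that maximizes the
# between-class variance separating hand pixels from background.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # class-0 (background) pixel count so far
    sum0 = 0.0  # class-0 intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean0 = sum0 / w0
        mean1 = (total_sum - sum0) / w1
        between = w0 * w1 * (mean0 - mean1) ** 2  # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Dark background (values near 10) vs. bright hand (values near 200):
image = [10, 12, 11, 13, 200, 205, 198, 202]
t = otsu_threshold(image)
print(t)  # -> 13, the top of the dark cluster
binary = [1 if p > t else 0 for p in image]
print(binary)  # hand pixels become 1, background 0
```

On a real webcam frame the same loop runs over the 256-bin histogram of the grayscale image, and the resulting binary mask is what Canny edge detection is applied to.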
IRJET - Sign Language Recognition System (IRJET Journal)
1) The document describes a sign language recognition system that uses AI and a camera to detect hand signs and convert them to either voice commands or text display.
2) The system is intended to help the deaf and hearing impaired communicate more easily by recognizing common sign language gestures and converting them.
3) The proposed system uses a Raspberry Pi computer along with a camera for sign input detection and either a voice module or LCD display for output of the recognized sign as text or voice.
This document proposes a project to develop a sign language translator glove. The glove will use flex sensors, contact sensors, and accelerometers to detect finger positions and hand motions corresponding to letters, words, and sentences in American Sign Language. The detected signals will be sent to a detection unit and transmitted to a base station. The base station will display the signed letter on an LCD screen and pronounce it through speakers. The expected outcome is a portable glove that can translate signed letters, words, and sentences into text and speech. The proposed application is to help communication between deaf, mute, or physically impaired individuals and others.
IRJET - Sign Language Recognition using Neural Network (IRJET Journal)
This document presents a system for sign language recognition using neural networks. The system aims to recognize hand gestures in real-time and translate them into English words or sentences. It uses a convolutional neural network (CNN) algorithm to extract features from captured images of hand gestures and classify the gestures. The system was able to accurately recognize gestures with a classification rate of 92.4%. The system could help mute individuals communicate through translating sign language into text that could be read or understood by others. It may also assist blind individuals by allowing communication through speech recognition of the translated text.
Recently, more and more hearing-impaired people have started using sign language. There are about 70 million people in the world who are not able to speak (dumb). A dumb person communicates with other people using hand motions or expressions, and sign language helps dumb people communicate like other people. The sign language translator that has already been developed uses a glove fitted with sensors that can interpret 16 English letters in American Sign Language (ASL); accelerometers and flex sensors are used in that system, which increases its overall cost. We propose a prototype called the "smart glove for speech-impaired people" that translates sign language into text. It will help dumb and deaf people express their thoughts in a more convenient way. As the sign language we have used traditional finger movements, with contact switches wrapped around the user's fingers. An IR transmitter-receiver pair, HT12E and HT12D ICs, and an Arduino (microcontroller) board transmit the data to a PC. Moreover, the use of contact switches reduces the system's overall cost.
Keywords: Arduino, HT12E IC & HT12D IC, IR transmitter-receiver, contact switch.
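The contact-switch scheme above amounts to sending one bit per finger: the switch states form a short binary code, the HT12E/HT12D pair carries it over the IR link in hardware, and the PC side maps the received pattern to text. A sketch of that encoding and lookup (the pattern-to-message table is a hypothetical example, not taken from the paper):

```python
# Each finger's contact switch contributes one bit; the received
# 5-bit pattern is mapped to a message on the PC side.
# The message table below is an invented example.

MESSAGES = {
    0b00001: "WATER",
    0b00011: "FOOD",
    0b00111: "HELP",
    0b11111: "THANK YOU",
}

def encode(switches):
    """Pack a list of 5 switch states (0/1) into one integer,
    thumb first, as the parallel-input encoder would see them."""
    code = 0
    for bit in switches:
        code = (code << 1) | bit
    return code

def decode(code):
    """PC side: map the received pattern to its message."""
    return MESSAGES.get(code, "UNKNOWN")

print(decode(encode([0, 0, 1, 1, 1])))  # -> HELP
```

Because each gesture is a discrete switch pattern rather than an analog reading, no calibration or thresholding is needed, which is part of why the contact-switch design is cheaper than a flex-sensor glove.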
IRJET- Hand Movement Recognition for a Speech Impaired Person (IRJET Journal)
This document describes a system to recognize hand gestures from a speech-impaired person and convert them to speech using a flex sensor glove and microcontroller. The system uses flex sensors attached to a glove to detect hand movements and gestures. The microcontroller matches the gestures to a database of templates and outputs the corresponding speech signal through a speaker. This allows speech-impaired individuals to communicate through natural hand gestures that are translated to audio speech in real-time. The system aims to help overcome communication barriers for those unable to speak.
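The template-matching step this summary describes can be realized by comparing the live flex readings against stored gesture templates and picking the closest one, with a distance cutoff so that unrecognized hand shapes are rejected rather than misread. The template values and the cutoff below are invented for illustration:

```python
# Nearest-template gesture matching: compare live readings to the
# stored templates and pick the closest within a cutoff.
# Template values and max_dist are illustrative assumptions.

def nearest_gesture(reading, templates, max_dist=100.0):
    """Return the name of the closest template, or None if even the
    best match is farther than max_dist (unrecognized gesture)."""
    best_name, best_d = None, float("inf")
    for name, template in templates.items():
        d = sum((r - t) ** 2 for r, t in zip(reading, template)) ** 0.5
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= max_dist else None

templates = {
    "I NEED WATER": [820, 790, 150, 140, 160],
    "THANK YOU":    [150, 160, 830, 810, 800],
}
print(nearest_gesture([800, 800, 160, 150, 150], templates))  # -> I NEED WATER
print(nearest_gesture([500, 500, 500, 500, 500], templates))  # ambiguous -> None
```

The matched name would then be played back through the speaker as the corresponding pre-recorded or synthesized speech signal.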
This document describes a digital vocalizer system that uses a data glove with flex sensors and an accelerometer to detect hand gestures. The sensors detect finger bending and hand tilt/position. The Arduino UNO microcontroller converts these detected gestures into corresponding audio words or visual text displayed on an LCD screen. This system aims to help reduce communication barriers between deaf/mute/blind communities and others by translating gestures into audio and visual outputs.
This document is a project report submitted by three students - Ashwani Kumar, Ankit Raj, and Anand Abhishek - to Cochin University of Science & Technology in partial fulfillment of their Bachelor of Technology degree in Information Technology. The report describes a voice recognition mobile application called HandOVRS designed for physically handicapped users that can recognize common sounds in the home like doorbells, phones, and alarms and allow the user to select notification options like sending text messages.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
This document describes the development of an automatic language translation software to aid communication between Indian Sign Language and spoken English using LabVIEW. The software aims to translate one-handed finger spelling input in Indian Sign Language alphabets A-Z and numbers 1-9 into spoken English audio output, and 165 spoken English words input into Indian Sign Language picture display output. It utilizes the camera and microphone of the device for image and speech acquisition, and performs vision and speech analysis for translation. The software is intended to help communication between deaf or speech-impaired individuals and those who do not understand sign language.
The document discusses the development of a tool to convert sign language gestures captured by a Kinect sensor into speech. The system is intended to help deaf or mute individuals communicate more easily by recognizing gestures and matching them to text which is then converted to speech. The proposed design includes modules for gesture input, gesture recognition matching to text, and text to speech conversion to provide an accessible communication system for the hearing impaired.
The document discusses "Enable Talk Gloves", gloves equipped with sensors that recognize sign language and translate it into text-to-speech on a smartphone. A team of Ukrainian students developed the gloves to help deaf people communicate. The gloves measure finger bending and hand motion with sensors connected to a microcontroller and Bluetooth. This allows translation of signs into text then spoken words on a phone. While the gloves can currently translate a few phrases, the team aims to expand the sign library and improve accuracy and speed for conversation. Long-term, the technology could benefit other applications like interacting with interfaces and may become a mainstream computing method.
IRJET- Smart Hand Gloves for Disable People (IRJET Journal)
1) The document describes a smart glove prototype designed to help disabled people communicate through hand gestures.
2) The glove uses flex sensors on the fingers to detect hand gestures and an Arduino microcontroller to convert the gestures to text or pre-recorded voices.
3) The glove has three modes - displaying gesture status, converting gestures to voices, and controlling home appliances wirelessly through hand gestures.
IRJET- IoT based Portable Hand Gesture Recognition System (IRJET Journal)
This document describes a portable hand gesture recognition system to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures. An Arduino microcontroller matches the gestures to a database and sends the output to a smartphone via Bluetooth. The smartphone app then converts the text to speech so others can understand the user's message. The system is meant to overcome communication barriers for those unable to speak or hear by translating sign language gestures into audible speech in real-time.
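The glove-to-phone link in systems like this one is typically a plain serial stream over Bluetooth: the microcontroller sends each recognized label as a delimited line, and the app parses it before handing the text to its text-to-speech engine. A sketch of both ends of such a link (the `GESTURE:` framing format is an assumption, not taken from the paper):

```python
# Sketch of a newline-delimited framing for the Bluetooth serial
# link between glove and app. The "GESTURE:" prefix is an invented
# convention, not a format from the paper.

def frame(label):
    """Glove side: wrap a recognized gesture label for transmission."""
    return ("GESTURE:" + label + "\n").encode("ascii")

def parse(stream):
    """App side: recover labels from a buffer of received bytes,
    ignoring anything that is not a gesture line."""
    labels = []
    for line in stream.decode("ascii").splitlines():
        if line.startswith("GESTURE:"):
            labels.append(line[len("GESTURE:"):])
    return labels

buffer = frame("HELLO") + frame("THANK YOU")
print(parse(buffer))  # each label would then be spoken by the app's TTS
```

Delimited text frames keep the protocol robust to the byte-at-a-time arrival typical of Bluetooth serial modules such as the HC-05.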
A gesture recognition system for the Colombian sign language based on convolu... (journalBEEI)
Sign languages (or signed languages) are languages that use visual techniques, primarily with the hands, to transmit information and enable communication with deaf-mute people. This language is traditionally learned only by people with this limitation, which is why communication between deaf and non-deaf people is difficult. To solve this problem we propose an autonomous model based on convolutional networks to translate the Colombian Sign Language (CSL) into normal Spanish text. The scheme uses characteristic images of each static sign of the language within a base of 24000 images (1000 images per category, with 24 categories) to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover any angle of view. The performance evaluation showed that the system is capable of recognizing all 24 signs used with an 88% recognition rate.
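The 88% figure reported above is a plain recognition rate: correct classifications divided by total test samples. A toy evaluation helper showing the computation (the labels below are invented, not the paper's data):

```python
# Recognition rate = correct predictions / total test samples.
# The label lists are a made-up example.

def recognition_rate(predicted, actual):
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

actual    = ["A", "B", "C", "A", "B", "C", "A", "B"]
predicted = ["A", "B", "C", "A", "B", "B", "A", "C"]
print(recognition_rate(predicted, actual))  # 6 of 8 correct -> 0.75
```

Per-class rates computed the same way over each category's samples would show which of the 24 signs the network confuses most often.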
Dumb people generally use sign language for communication, but they find it difficult to communicate with others who do not understand sign language. This project aims to lower this barrier in communication. It is based on the need to develop an electronic device that can translate sign language into speech, making communication between the mute community and the general public possible. A wireless data glove is used: a normal cloth driving glove fitted with flex sensors along the length of each finger and the thumb. Mute people can use the glove to perform hand gestures, which are converted into speech so that other people can understand their expression. Sign language is the language used by mute people; it is a communication skill that uses gestures instead of sound to convey meaning, simultaneously combining hand shapes, orientations and movements of the hands, arms or body, and facial expressions to express a speaker's thoughts fluidly. Signs are used to communicate words and sentences to an audience.
This document summarizes a student project report on developing "talk gloves" that translate sign language gestures into speech. The report is dedicated to the students' teachers and families who supported them, and acknowledges those who helped with the project, including their supervisor Dr. Falah Mohammed. The report contains chapters on the project's constraints and standards, literature review, methodology, results and analysis, and conclusions. The talk gloves are intended to help solve communication barriers faced by deaf individuals by allowing translation of sign language gestures into spoken words using a smartphone. The gloves contain sensors on the fingers to detect hand movements, which are sent via Bluetooth to an Android app that converts the signals to voice. The project aims to give a voice to the 70 million deaf people around the world.
Digital voice over is a social project aimed at helping people with speaking and hearing impairments communicate better with the public. There are many millions of deaf and hard-of-hearing people worldwide, and they encounter many problems while trying to communicate with society in daily life. Deaf and speech-impaired people often use sign language to communicate but have difficulty communicating with people who do not understand the language. Sign language relies on patterns of body language, gestures, and movements of the arms and fingers to convey information. This project was designed to meet the need to create electronic devices that can translate sign language into speech, facilitating communication between the deaf and dumb and the public. Venkat P. Patil | Suyash Mali | Girish Ghadi | Chintamani Satpute | Amey Deshmukh, "Hand Gesture Vocalizer", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 7, Issue 2, April 2023. URL: https://www.ijtsrd.com/papers/ijtsrd55157.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
IRJET- Communication Aid for Deaf and Dumb People (IRJET Journal)
This document describes a communication aid system for deaf and mute people that translates sign language gestures to text and speech. The system uses a glove with flex sensors that detect hand gestures. When a gesture is made, the sensors produce a signal that is matched to stored gesture inputs to translate letters, words, and sentences to speech and text. This helps remove communication barriers for the deaf by allowing them to convey meanings through gestures that are automatically translated. The system aims to bridge the gap between those who can hear and those with speech and hearing impairments.
The document discusses the development of a smart glove called the Palmify. It describes the motivation and hypothesis for creating a glove that can respond to body movement. The document outlines the key drivers of wearable technology like faster hardware, cloud storage, and location data. It then details the industries of fashion, technology, and medicine that wearables impact. The document provides an overview of two models of the Palmify glove created, their features and the process used to design and build the gloves. It concludes by discussing potential future projects like a wristband notification system.
IRJET- Hand Gesture based Recognition using CNN MethodologyIRJET Journal
This document summarizes a research paper on hand gesture recognition using convolutional neural networks (CNN). The paper aims to develop a system to recognize American Sign Language (ASL) to help facilitate communication for deaf individuals. The system would capture hand gestures via video and translate them into text. The researchers conducted a literature review on previous work using CNNs and 3D convolutional models for sign language recognition. They intend to implement a 3D CNN model on ASL data and analyze the results to improve recognition accuracy for communicating via sign language.
IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...IRJET Journal
Manisha D.Raut, Pallavi Dhok, Ketan Machhale, Jaspreet Manjeet Hora "A System for Recognition of Indian Sign Language for Deaf People using Otsu’s Algorithm", International Research Journal of Engineering and Technology (IRJET), Volume2,issue-01 April 2015.e-ISSN:2395-0056, p-ISSN:2395-0072. www.irjet.net
Abstract
Sign Language Recognition System is one of the important researches today in engineering field. Number of methods are been developed recently in the field of Sign Language Recognition for deaf and dumb people. It is very useful to the deaf and dumb people to convey their message to other people. In this paper we proposed some methods, through which the recognition of the signs becomes easy for peoples while communication. We use the different symbols of signs to convey the meanings. And the result of those symbols signs will be converted into the text. In this project, we are capturing hand gestures through webcam and convert this image into gray scale image. The segmentation of gray scale image of a hand gesture is performed using Otsu thresholding algorithm.. Total image level is divided into two classes one is hand and other is background. The optimal threshold value is determined by computing the ratio between class variance and total class variance. To find the boundary of hand gesture in image Canny edge detection technique is used.
IRJET - Sign Language Recognition System (IRJET Journal)
1) The document describes a sign language recognition system that uses AI and a camera to detect hand signs and convert them to either voice commands or text display.
2) The system is intended to help the deaf and hearing impaired communicate more easily by recognizing common sign language gestures and converting them.
3) The proposed system uses a Raspberry Pi computer along with a camera for sign input detection and either a voice module or LCD display for output of the recognized sign as text or voice.
This document proposes a project to develop a sign language translator glove. The glove will use flex sensors, contact sensors, and accelerometers to detect finger positions and hand motions corresponding to letters, words, and sentences in American Sign Language. The detected signals will be sent to a detection unit and transmitted to a base station. The base station will display the signed letter on an LCD screen and pronounce it through speakers. The expected outcome is a portable glove that can translate signed letters, words, and sentences into text and speech. The proposed application is to help communication between deaf, mute, or physically impaired individuals and others.
IRJET - Sign Language Recognition using Neural Network (IRJET Journal)
This document presents a system for sign language recognition using neural networks. The system aims to recognize hand gestures in real-time and translate them into English words or sentences. It uses a convolutional neural network (CNN) algorithm to extract features from captured images of hand gestures and classify the gestures. The system was able to accurately recognize gestures with a classification rate of 92.4%. The system could help mute individuals communicate through translating sign language into text that could be read or understood by others. It may also assist blind individuals by allowing communication through speech recognition of the translated text.
Recently, more and more hearing-impaired people have started using sign language. About 70 million people in the world are unable to speak. A mute person communicates with other people using hand motions and expressions, and sign language lets such people communicate much as others do. An existing sign language translator uses a glove fitted with sensors that can interpret 16 English letters in American Sign Language (ASL); its accelerometers and flex sensors, however, increase the overall cost. We propose a prototype called the "smart glove for speech impaired people" that translates sign language into text, helping deaf and mute people express their thoughts more conveniently. As the sign language input we use traditional finger movements detected by contact switches wrapped around the user's fingers. An IR transmitter-receiver pair, HT12E and HT12D encoder/decoder ICs, and an Arduino microcontroller board transmit the data to a PC. Moreover, the use of contact switches reduces the system's overall cost.
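The HT12E/HT12D pair in this design carries four data bits per frame, so each gesture arrives at the PC as a 4-bit pattern of open and closed contact switches. A minimal decoding sketch might look like the following; the bit-to-letter mapping is hypothetical, not the one used in the paper.

```python
# Hypothetical lookup table: each 4-bit HT12D data frame (one bit per
# contact switch) maps to a letter. 16 patterns can cover 16 ASL letters.
GESTURE_TABLE = {
    (0, 0, 0, 1): "A",
    (0, 0, 1, 0): "B",
    (0, 0, 1, 1): "C",
    (0, 1, 0, 0): "D",
    # ... the remaining patterns would fill out the 16-letter alphabet
}

def decode_frame(bits):
    """Map one 4-bit data frame to a letter ('?' if unassigned)."""
    return GESTURE_TABLE.get(tuple(bits), "?")

def decode_message(frames):
    """Decode a sequence of frames into text, skipping unknown patterns."""
    return "".join(decode_frame(f) for f in frames if decode_frame(f) != "?")
```

On the receiving PC, `decode_message` would run over frames read from the serial port; here it simply stands in for that matching step.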
Keywords: Arduino, HT12E and HT12D ICs, IR transmitter-receiver, contact switch.
IRJET- Hand Movement Recognition for a Speech Impaired Person (IRJET Journal)
This document describes a system to recognize hand gestures from a speech-impaired person and convert them to speech using a flex sensor glove and microcontroller. The system uses flex sensors attached to a glove to detect hand movements and gestures. The microcontroller matches the gestures to a database of templates and outputs the corresponding speech signal through a speaker. This allows speech-impaired individuals to communicate through natural hand gestures that are translated to audio speech in real-time. The system aims to help overcome communication barriers for those unable to speak.
This document describes a digital vocalizer system that uses a data glove with flex sensors and an accelerometer to detect hand gestures. The sensors detect finger bending and hand tilt/position. The Arduino UNO microcontroller converts these detected gestures into corresponding audio words or visual text displayed on an LCD screen. This system aims to help reduce communication barriers between deaf/mute/blind communities and others by translating gestures into audio and visual outputs.
This document is a project report submitted by three students - Ashwani Kumar, Ankit Raj, and Anand Abhishek - to Cochin University of Science & Technology in partial fulfillment of their Bachelor of Technology degree in Information Technology. The report describes a voice recognition mobile application called HandOVRS designed for physically handicapped users that can recognize common sounds in the home like doorbells, phones, and alarms and allow the user to select notification options like sending text messages.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
This document describes the development of an automatic language translation software to aid communication between Indian Sign Language and spoken English using LabVIEW. The software aims to translate one-handed finger spelling input in Indian Sign Language alphabets A-Z and numbers 1-9 into spoken English audio output, and 165 spoken English words input into Indian Sign Language picture display output. It utilizes the camera and microphone of the device for image and speech acquisition, and performs vision and speech analysis for translation. The software is intended to help communication between deaf or speech-impaired individuals and those who do not understand sign language.
The document discusses the development of a tool to convert sign language gestures captured by a Kinect sensor into speech. The system is intended to help deaf or mute individuals communicate more easily by recognizing gestures and matching them to text which is then converted to speech. The proposed design includes modules for gesture input, gesture recognition matching to text, and text to speech conversion to provide an accessible communication system for the hearing impaired.
The document discusses "Enable Talk Gloves", gloves equipped with sensors that recognize sign language and translate it into text-to-speech on a smartphone. A team of Ukrainian students developed the gloves to help deaf people communicate. The gloves measure finger bending and hand motion with sensors connected to a microcontroller and Bluetooth. This allows translation of signs into text then spoken words on a phone. While the gloves can currently translate a few phrases, the team aims to expand the sign library and improve accuracy and speed for conversation. Long-term, the technology could benefit other applications like interacting with interfaces and may become a mainstream computing method.
IRJET- Smart Hand Gloves for Disable People (IRJET Journal)
1) The document describes a smart glove prototype designed to help disabled people communicate through hand gestures.
2) The glove uses flex sensors on the fingers to detect hand gestures and an Arduino microcontroller to convert the gestures to text or pre-recorded voices.
3) The glove has three modes - displaying gesture status, converting gestures to voices, and controlling home appliances wirelessly through hand gestures.
IRJET- IoT based Portable Hand Gesture Recognition System (IRJET Journal)
This document describes a portable hand gesture recognition system to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures. An Arduino microcontroller matches the gestures to a database and sends the output to a smartphone via Bluetooth. The smartphone app then converts the text to speech so others can understand the user's message. The system is meant to overcome communication barriers for those unable to speak or hear by translating sign language gestures into audible speech in real-time.
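The matching step this summary attributes to the Arduino can be illustrated as a nearest-template lookup over the five flex-sensor readings; the template values, gesture labels, and the 0-1023 ADC range below are hypothetical calibration data, not the paper's actual database.

```python
def closest_gesture(reading, templates):
    """Match a 5-finger flex reading to the nearest stored template.

    `reading` holds one ADC sample per finger; `templates` maps a
    gesture label to its reference readings. Nearest neighbour by
    squared Euclidean distance stands in for the matching step the
    paper describes on the microcontroller.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(reading, templates[name]))

# Hypothetical calibration values (0 = straight finger, 1023 = fully bent).
TEMPLATES = {
    "HELLO":  [900, 100, 100, 100, 100],
    "THANKS": [100, 900, 900, 100, 100],
    "YES":    [900, 900, 900, 900, 900],
}

word = closest_gesture([880, 130, 90, 120, 110], TEMPLATES)
```

The matched label is what would then be sent over Bluetooth to the phone app for text-to-speech output.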
A gesture recognition system for the Colombian sign language based on convolutional neural networks (journalBEEI)
Sign languages (or signed languages) are languages that use visual techniques, primarily with the hands, to transmit information and enable communication with deaf-mute people. This language is traditionally learned only by people with this limitation, which is why communication between deaf and non-deaf people is difficult. To solve this problem we propose an autonomous model based on convolutional networks to translate the Colombian Sign Language (CSL) into normal Spanish text. The scheme uses characteristic images of each static sign of the language from a base of 24,000 images (1,000 images per category, with 24 categories) to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover any angle of view. The performance evaluation showed that the system is capable of recognizing all 24 signs used with an 88% recognition rate.
Mute people generally use sign language for communication, but they find it difficult to communicate with others who do not understand sign language. This project aims to lower that communication barrier. It is based on the need for an electronic device that can translate sign language into speech, so that communication between the mute community and the general public becomes possible. A wireless data glove is used: a normal cloth driving glove fitted with flex sensors along the length of each finger and the thumb. Mute people can use the glove to perform hand gestures, which are converted into speech so that others can understand their expression. Sign language is a communication skill that uses gestures instead of sound to convey meaning, simultaneously combining hand shapes, orientations and movements of the hands, arms or body with facial expressions to express a speaker's thoughts fluidly. Signs are used to communicate words and sentences to an audience.
This document summarizes a student project report on developing "talk gloves" that translate sign language gestures into speech. The report is dedicated to the students' teachers and families who supported them. It acknowledges those who helped with the project, including their supervisor Dr. Falah Mohammed. The report contains chapters on the project's constraints and standards, literature review, methodology used, results and analysis, and conclusions. The talk gloves are intended to help solve communication barriers faced by deaf individuals by allowing translation of sign language gestures into spoken words using a smartphone. The gloves contain sensors on the fingers to detect hand movements, which are sent via Bluetooth to an Android app that converts the signals to voice. The project aims to give a voice to the roughly 70 million people worldwide who cannot speak.
Digital voice over is a social project aimed at helping speech- and hearing-impaired people communicate better with the public. Many millions of deaf and hard of hearing people worldwide encounter problems while trying to communicate with society in daily life. Deaf and speech-impaired people often use sign language to communicate, but have difficulty communicating with people who do not understand it. Sign language conveys information through body language, gestures, and movements of the arms and fingers. This project was designed to meet the need for electronic devices that can translate sign language into speech, facilitating communication between deaf and mute people and the public. Venkat P. Patil, Suyash Mali, Girish Ghadi, Chintamani Satpute, Amey Deshmukh, "Hand Gesture Vocalizer", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 7, Issue 2, April 2023. URL: https://www.ijtsrd.com/papers/ijtsrd55157.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
IRJET- Communication Aid for Deaf and Dumb People (IRJET Journal)
This document describes a communication aid system for deaf and mute people that translates sign language gestures to text and speech. The system uses a glove with flex sensors that detect hand gestures. When a gesture is made, the sensors produce a signal that is matched to stored gesture inputs to translate letters, words, and sentences to speech and text. This helps remove communication barriers for the deaf by allowing them to convey meanings through gestures that are automatically translated. The system aims to bridge the gap between those who can hear and those with speech and hearing impairments.
IRJET- Assisting System for Paralyzed and Mute People with Heart Rate Monitoring (IRJET Journal)
This document describes an assisting system for paralyzed and mute people that uses flex sensors and heart rate monitoring. The system includes a glove fitted with flex sensors to detect hand gestures which are then translated to synthesized speech by a voice module. It also monitors heart rate to detect potential heart attacks and alert doctors or emergency services if needed. The system aims to help paralyzed and mute individuals communicate their needs and also provide heart health monitoring for early detection of medical issues.
Hand Gesture Recognition and Translation Application (IRJET Journal)
The document describes a project to develop an Android application that can recognize American Sign Language (ASL) gestures in real-time using machine learning, translate the gestures to text, and translate the text to other languages. It discusses challenges faced by deaf people in communication and education. It then reviews different approaches to sign language recognition, including sensor-based methods using gloves or cameras, and vision-based methods using cameras and deep learning models. The goal of the project is to create a more accessible sign language translation tool without the need for specialized hardware.
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR (IRJET Journal)
This document summarizes research on a glove-based gesture recognition system using IR sensors. The system aims to help those who are deaf and mute communicate through hand gestures. An IR sensor and LED placed on a glove detect hand gestures based on the amount of light received by the sensor. An Arduino microcontroller recognizes the gestures and displays their meaning on an LCD screen while playing an audio message. The researchers claim this method is more accurate and has a lower error rate than conventional image processing approaches. It is intended to help address both safety and communication issues faced by those who are deaf or speech-impaired. Experimental results showed the system successfully recognized gestures and could help reduce the gap between speech-impaired people and others.
IRJET - Sign Language Text to Speech Converter using Image Processing and Convolutional Neural Networks (IRJET Journal)
This document describes a sign language text-to-speech converter system using image processing and convolutional neural networks (CNNs). The system captures images of hand gestures using a camera, applies image processing techniques like thresholding and blurring, and then uses a CNN model trained on a dataset of gestures to recognize the gestures and convert them to text and speech. The system was able to accurately recognize gestures for letters and numbers with about 85% accuracy. Future work may involve expanding the dataset to include more signs and working towards word and sentence recognition.
This document summarizes a research paper on developing a real-time sign language detector using computer vision and machine learning techniques. The researchers created a dataset of hand gestures for letters, numbers, and common signs in Indian Sign Language (ISL) using webcam photos. They used a pre-trained SSD MobileNet V2 model with transfer learning to classify the gestures with 70-80% accuracy. Their goal was to build a free and user-friendly app to help deaf and hard of hearing people communicate through automated sign language detection and translation, with the aim of closing communication gaps. The technology identifies selected ISL signs in low light and uncontrolled backgrounds using image processing and human movement classification algorithms.
While a hearing-impaired individual depends on sign language and gestures, a non-hearing-impaired person uses verbal language. Thus, some means of mediation is needed when a non-hearing-impaired individual who does not understand sign language wants to communicate with a hearing-impaired person. This paper is concerned with the development of a PC-based sign language translator to facilitate effective communication between hearing-impaired and non-hearing-impaired persons. A database of hand gestures in American Sign Language (ASL) is created using Python scripts. TensorFlow (TF) is used to create a pipeline-configuration model that learns from the annotated gesture images in the database and compares them with real-time gestures. The implementation is done in a Python software environment and runs on a PC equipped with a web camera that captures real-time gestures for comparison and interpretation. The developed sign language translator is able to translate ASL gestures into written text along with corresponding audio renderings in an average of about one second. In addition, the translator is able to match real-time gestures with the equivalent gesture images stored in the database even at 44% similarity.
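The "match even at 44% similarity" behaviour described above can be sketched as threshold-gated nearest-neighbour matching. Here cosine similarity is assumed as the measure, the 0.44 cut-off mirrors the 44% figure, and the feature vectors and database are invented stand-ins for whatever embedding the detection model actually produces.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def interpret(feature, database, threshold=0.44):
    """Return (best label, score), or (None, score) below the threshold.

    `database` maps gesture labels to stored reference feature vectors;
    a live gesture is accepted only if its best match clears the
    minimum-similarity cut-off.
    """
    best = max(database, key=lambda k: cosine_similarity(feature, database[k]))
    score = cosine_similarity(feature, database[best])
    return (best, score) if score >= threshold else (None, score)
```

In the translator, an accepted label would then be rendered as text and passed to a text-to-speech engine for the audio output.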
IRJET- Hand Gesture Recognition System using Convolutional Neural Networks (IRJET Journal)
The document presents a hand gesture recognition system using convolutional neural networks. The system aims to enable communication between deaf or mute individuals and those who do not understand sign language. It works by capturing an image of a hand gesture via camera, extracting features from the image, detecting the sign using a CNN model, and converting the sign to text or speech. The system can also convert text or speech to the corresponding sign. The CNN model achieves an accuracy of 95.6% for sign recognition, outperforming previous methods. A real-time prototype allows signing and two-way communication between individuals on different devices.
Survey Paper on Raspberry pi based Assistive Device for Communication between... (IRJET Journal)
This document discusses several research papers on developing assistive devices to aid communication between blind, deaf, and mute individuals. It begins with an abstract describing the goal of converting sign language to voice and text using a Raspberry Pi, camera, speaker and LCD display by recognizing human gestures. It then summarizes 8 research papers on related topics, describing systems that use flex sensors on gloves to detect sign language and translate it to speech/text, or use image processing on hand gestures. The document concludes by outlining the proposed methodology, hardware and software requirements, and potential limitations and future work for a sign language translation system using a Raspberry Pi, camera and sensors.
The document describes a hand gesture recognition system for deaf persons to communicate their thoughts to others. It aims to bridge the communication gap between deaf-mute people and the general public by converting gestures captured in real-time via camera, which are trained using a convolutional neural network (CNN), into text output. The system allows deaf-mute users to interact with computer applications using gestures detected by their webcam without needing to install additional applications. It discusses the background and relevance of the project, as well as objectives like designing the gesture training, extracting features from images, and recognizing gestures to translate them to text.
ASL Fingerspelling Recognition Using Hybrid Deep Learning Architecture (IRJET Journal)
This document presents a novel deep learning architecture for American Sign Language (ASL) fingerspelling recognition. The model utilizes multimodal hand and facial landmark coordinates extracted from videos as input. It incorporates convolutional blocks to capture local spatial relationships, transformer blocks to model global dependencies, and positional encoding. Sequence-level Connectionist Temporal Classification (CTC) loss is used for training. The architecture fuses diverse data sources and combines convolutional and attention mechanisms. It aims to advance assistive technology for the deaf community by accurately recognizing fingerspelling sequences. The ability to learn from large-scale real-world data signifies progress in gesture-based interfaces and enabling more inclusive communication.
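The CTC loss mentioned above lets the model emit one label (or a blank) per video frame, with decoding collapsing repeats and dropping blanks so a short fingerspelled word emerges from many frames. A minimal greedy-decoding sketch, not the paper's beam-search or model code:

```python
BLANK = "_"  # the CTC blank symbol

def ctc_greedy_decode(frame_labels):
    """Collapse a per-frame best-path labelling into an output string.

    Repeated labels are merged and blanks removed, which is how CTC
    reconciles a long frame sequence with a short letter sequence.
    """
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return "".join(out)

# Ten frames of per-frame argmax labels decode to the word "ASL".
word = ctc_greedy_decode(list("AA__SS_LL_"))
```

Note that a blank between two identical labels (e.g. `"LL_L"`) is what allows doubled letters in a fingerspelled word to survive decoding.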
Gestures are one of the most common components of sign language used in everyday communication. They are most commonly used by deaf and mute people, who have difficulty hearing or speaking, to communicate among themselves or with ordinary people. Various sign-language programs have been developed by manufacturers around the world, but few are flexible and affordable for end users. This paper therefore presents software for a system that can automatically detect sign language, helping deaf and mute people communicate better with others. Pattern recognition and hand recognition are developing fields of research, and hand gestures, as an integral part of everyday interaction, play a major role in our daily lives. A hand gesture recognition system gives us a new, natural, easy-to-use way of communicating with a computer. Taking into account the shape of the human hand, with four fingers and one thumb, the software introduces a real-time hand recognition system based on the extraction of structural features such as hand position, centroid of mass, and the state of each finger and thumb (raised or folded).
This document discusses the development of an Indian Sign Language recognition system called SignReco. It begins with an abstract describing the challenges faced by deaf individuals communicating with others without translation and the benefits of a system that can recognize sign language. The paper then provides background on sign language and the goals of the proposed system, which is to classify and recognize Indian Sign Language in real-time using CNN and neural networks. A literature review covers prior work on sign language recognition systems. The proposed system's workflow and modules for model creation, language translation and app development are described. It concludes that the survey helped in developing an effective approach for an Indian Sign Language recognition system using CNN to improve accuracy.
Electronic Glove: A Teaching AID for the Hearing Impaired (IJECEIAES)
Learning how to speak in order to communicate with others is part of growing up. Like anyone else, deaf and mute people need to learn how to connect to the world they live in. For this purpose, an Electronic Glove or E-Glove was developed as a teaching aid for the hearing impaired, particularly children. E-Glove makes use of the American Sign Language (ASL) as the basis for recognizing hand gestures. It was designed using flex sensors and an accelerometer to detect the degree of bend made by the fingers as well as movement of the hand. E-Glove transmits the data received from the sensors wirelessly to a computer and then displays the letter or basic word that corresponds to a gesture made by the individual wearing it. E-Glove provides a simple, accurate, reliable, cheap and speedy gesture recognition tool and a user-friendly teaching aid for instructors who are teaching sign language to the deaf and mute community.
Communication among blind, deaf and dumb People (IJAEMSJORNAL)
Nowadays science and technology have made the human world much easier, but some physically and visually challenged people still struggle to communicate with others. In this project we propose a new system prototype called Communication among Blind, Deaf and Dumb People. It helps disabled people overcome their difficulties in communicating with other people, whether disabled or not. Blind people communicate through the speakers, while deaf and mute people read the screen and reply by typing in a terminal. All of this is provided as an application, so that it is easily understood by people with disabilities.
Sign Language Recognition using Mediapipe (IRJET Journal)
This document summarizes a student research project that aims to develop a sign language recognition system using the Mediapipe framework. The system takes video input of signed letters from the American Sign Language alphabet and outputs the recognized letters in text format. The document provides background on sign language and gesture recognition, describes the Mediapipe framework and implementation methodology using KNN classification, and presents preliminary results of the system detecting hand positions and recognizing letters in real-time. The overall goal is to reduce communication barriers for deaf individuals by translating sign language to written text.
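The KNN classification step this summary mentions can be sketched as majority voting over the nearest flattened landmark vectors. The two-dimensional feature vectors and training data below are toy stand-ins for the 21 Mediapipe hand landmarks the real system would use.

```python
from collections import Counter

def knn_classify(sample, training, k=3):
    """Classify a flattened landmark vector by majority vote of the
    k nearest training samples (squared Euclidean distance).

    `training` is a list of (vector, label) pairs.
    """
    by_dist = sorted(
        training,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(sample, item[0])),
    )
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy training set: two well-separated letter clusters.
TRAIN = [
    ([0.1, 0.9], "A"), ([0.2, 0.8], "A"), ([0.15, 0.85], "A"),
    ([0.9, 0.1], "B"), ([0.8, 0.2], "B"), ([0.85, 0.15], "B"),
]

letter = knn_classify([0.12, 0.88], TRAIN)
```

In the full pipeline, Mediapipe would supply the landmark coordinates per video frame and the predicted letters would be accumulated into text output.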
IRJET- A Review on Iot Based Sign Language Conversion (IRJET Journal)
This document summarizes a research paper on an IoT-based sign language conversion system. The system uses a glove equipped with flex sensors, contact sensors and a gyroscope to capture the user's hand gestures. The glove's microcontroller analyzes the sensor readings to identify gestures from a library and transmits them via Bluetooth to a smartphone. The system aims to help deaf people communicate with others conveniently and affordably by translating sign language gestures to text displayed on a smartphone.
KANNADA SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING (IRJET Journal)
The document describes a proposed system for Kannada sign language recognition using machine learning. It begins with an abstract discussing sign language recognition and techniques like SIFT and LDA. It then discusses the existing problems with sign language recognition systems and the objectives of the proposed system. The proposed system uses techniques like SIFT to extract features from images and LDA for dimensionality reduction before classifying images with methods like SVM, KNN, and minimum distance. It shows results of classifying Kannada sign language letters with up to 90% accuracy. It concludes that the system helps communication for deaf people and future work will focus on recognizing signs with motion and converting multiple signs to text words and sentences.
Similar to IRJET- Hand Talk- Assistant Technology for Deaf and Dumb (20)
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-... (IRJET Journal)
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
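As a toy illustration of the graph representation described above, the sketch below runs one mean-aggregation message-passing round over a four-stop transit graph. The stop names, the passenger-load feature, and the simple averaging rule are all illustrative assumptions, not the paper's actual GNN architecture:

```python
def message_pass(adj, features):
    """One GNN-style update: each stop replaces its feature with the
    mean of its own value and those of its neighbouring stops."""
    updated = {}
    for stop, neighbours in adj.items():
        vals = [features[stop]] + [features[n] for n in neighbours]
        updated[stop] = sum(vals) / len(vals)
    return updated

# Stops A-D on a line; edges are direct transit connections.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
# Hypothetical per-stop feature: current passenger load.
load = {"A": 10.0, "B": 30.0, "C": 20.0, "D": 40.0}
smoothed = message_pass(adj, load)
```

Stacking several such rounds lets information from distant stops propagate through the network, which is the basic mechanism a transit-optimization GNN exploits; a real model would learn weighted aggregations rather than a plain mean.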
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape... (IRJET Journal)
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg... (IRJET Journal)
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and the relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, the effects of vertical irregularities, and the seismic performance of high-rise structures under linear static and dynamic analysis, studying response characteristics such as story drift, deflection, and shear. The document also surveys related literature investigating the effects of infill walls, soft stories, and plan irregularities.
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,... (IRJET Journal)
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average annual maximum and minimum temperatures have generally decreased over time, with trend slopes of -0.0341 for the maximum and -0.0152 for the minimum.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend that development planners account for rising summer precipitation and declining temperatures.
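The Mann-Kendall test cited in the findings is built on the sign statistic S summed over all pairs of observations; a minimal sketch follows (the normal-approximation significance test used to declare trends statistically significant is omitted here):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all i < j.

    S > 0 suggests an increasing trend, S < 0 a decreasing one;
    significance is then judged from the variance of S (not shown).
    """
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of the pairwise difference
    return s

# A strictly increasing 4-point series has C(4,2) = 6 concordant pairs.
s_up = mann_kendall_s([1, 2, 3, 4])
```

Applied to 30 annual values (1993 to 2022), a strongly negative S for maximum temperature and a strongly positive S for precipitation would correspond to the decreasing and increasing trends the study reports.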
P.E.B. Framed Structure Design and Analysis Using STAAD Pro (IRJET Journal)
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, noting that they are designed off-site, with trusses and beams fabricated in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre... (IRJET Journal)
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare System (IRJET Journal)
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridges (IRJET Journal)
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React-Based Full-Stack EdTech Web Application (IRJET Journal)
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ... (IRJET Journal)
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE (IRJET Journal)
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi... (IRJET Journal)
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic Design (IRJET Journal)
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India, according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and the response spectrum method, and its responses in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. STAAD Pro software was used for the analysis and design.
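For context, the equivalent static method of IS 1893 (Part 1):2002 computes the design base shear as V_B = A_h * W, with A_h = (Z/2)(I/R)(Sa/g). The sketch below uses illustrative inputs (zone IV, an ordinary moment frame building), not values taken from the reviewed design:

```python
def design_base_shear(Z, I, R, Sa_g, W):
    """Equivalent static base shear per IS 1893 (Part 1):2002.

    Z: zone factor, I: importance factor, R: response reduction factor,
    Sa_g: spectral acceleration coefficient Sa/g, W: seismic weight.
    """
    Ah = (Z / 2.0) * (I / R) * Sa_g  # design horizontal seismic coefficient
    return Ah * W

# Illustrative values: zone IV (Z = 0.24), ordinary building (I = 1.0),
# special moment-resisting frame (R = 5.0), Sa/g = 2.5 on the flat part
# of the design spectrum, seismic weight 10000 kN.
VB = design_base_shear(0.24, 1.0, 5.0, 2.5, 10000.0)  # base shear in kN
```

This base shear is then distributed over the storeys, whereas the response spectrum method combines modal responses; comparing the two is what the summarized study does.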
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr... (IRJET Journal)
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... (University of Maribor)
Slides from a talk:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presented at the IcETRAN 2024 session "Inter-Society Networking Panel GRSS/MTT-S/CIS, Panel Session: Promoting Connection and Cooperation", with IEEE Slovenia GRSS, IEEE Serbia and Montenegro MTT-S, and IEEE Slovenia CIS.
11th International Conference on Electrical, Electronic and Computing Engineering, 3-6 June 2024, Niš, Serbia.
ACEP Magazine 4th edition launched on 05.06.2024 (Rahul)
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing), Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repair, and strengthening. The document highlights ACEP's activities and provides a technical educational article for members.
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses of human lives, property, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. It is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions, and it effectively detects potential risks to help mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Data collection covered three key road events (normal driving on a normal street, speed bumps, and circular yellow speed bumps) and three aggressive driving actions (sudden start, sudden stop, and sudden entry). The gathered data is processed and analyzed using a machine learning system designed for devices with limited power and memory. The developed system achieved 91.9% accuracy, 93.6% precision, and 92% recall. The inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms, with 2.6 kB peak RAM and 139.9 kB program flash memory, making the system suitable for resource-constrained embedded devices.
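The reported accuracy, precision, and recall follow the usual confusion-matrix definitions; the small sketch below uses made-up event counts purely to illustrate the arithmetic, not the paper's data:

```python
def metrics(tp, fp, fn, tn):
    """Classification metrics from binary confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives (e.g. "aggressive driving" as the positive class).
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # of flagged events, how many were real
    recall = tp / (tp + fn)     # of real events, how many were flagged
    return accuracy, precision, recall

# Illustrative counts for 100 test windows.
acc, prec, rec = metrics(tp=46, fp=3, fn=4, tn=47)
```

For the paper's six-class setup (three road events plus three aggressive actions), the same quantities would typically be computed per class and then averaged.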
A review on techniques and modelling methodologies used for checking electrom... (nooriasukmaningtyas)
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today's integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, is confronting design issues such as susceptibility to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in automotive applications. In this paper, the authors non-exhaustively review research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.