This document describes an assisting system for paralyzed and mute people that uses flex sensors and heart rate monitoring. The system includes a glove fitted with flex sensors to detect hand gestures which are then translated to synthesized speech by a voice module. It also monitors heart rate to detect potential heart attacks and alert doctors or emergency services if needed. The system aims to help paralyzed and mute individuals communicate their needs and also provide heart health monitoring for early detection of medical issues.
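The core decision loop described above can be sketched in a few lines: quantize the flex-sensor readings, look the resulting pattern up in a phrase table, and compare the heart rate against an alert threshold. All names, thresholds, and gesture patterns below are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of the glove's decision logic. The ADC threshold,
# heart-rate limit, gesture table, and phrases are all assumed values.

BEND_THRESHOLD = 512   # ADC reading above which a finger counts as bent
HR_ALERT_BPM = 120     # heart-rate ceiling before alerting a doctor

GESTURE_PHRASES = {
    (1, 0, 0, 0, 0): "I need water",
    (1, 1, 0, 0, 0): "I need food",
    (0, 0, 0, 0, 1): "Call the doctor",
}

def classify_gesture(flex_readings):
    """Map five raw flex-sensor readings to a spoken phrase, or None."""
    pattern = tuple(1 if r > BEND_THRESHOLD else 0 for r in flex_readings)
    return GESTURE_PHRASES.get(pattern)

def check_heart_rate(bpm):
    """Return an alert string when the heart rate exceeds the threshold."""
    return "ALERT: notify emergency services" if bpm > HR_ALERT_BPM else None
```

In a real system the phrase would be passed to the voice module and the alert to a GSM or network link; here both are returned as strings for clarity.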
IRJET- Hand Gesture Recognition for Deaf and Dumb (IRJET Journal)
This document proposes a system for hand gesture recognition to help deaf and dumb individuals communicate. The system would use computer vision and machine learning techniques to recognize hand gestures from video input and translate them into text in real-time. This would allow deaf and dumb people to communicate with others without needing an interpreter who understands sign language. The proposed system would segment the hand from each video frame, extract features of the hand pose, and classify the gesture by matching it to examples in a dataset. The goal is to provide deaf and dumb individuals a way to communicate independently through an automatic translation of their sign language gestures into text.
IRJET- Hand Talk- Assistant Technology for Deaf and Dumb (IRJET Journal)
This document describes a smart glove system that translates sign language gestures into speech to help deaf and mute people communicate. The glove uses flex sensors on each finger to detect finger bending motions. An Arduino microcontroller processes the sensor data and sends it wirelessly via Bluetooth to an Android app. The app displays the sign language gesture and converts it to speech output. The goal is to help deaf and mute individuals communicate with hearing people by interpreting their sign language gestures into audible speech in real-time. The system is intended to bridge communication between those who understand sign language and those who do not.
IRJET - Gesture based Communication Recognition System (IRJET Journal)
This document describes a proposed gesture-based communication recognition system that aims to translate between finger spelling and speech to help facilitate communication between deaf and hearing individuals. It discusses using techniques like mel frequency cepstral coefficients (MFCCs) to extract features from speech for recognition purposes. The system architecture involves preprocessing and modeling input signals, extracting features, and performing feature matching. Challenges with vision-based hand motion recognition are also presented, and the motivation for the project is to help reduce dependence on sign language interpreters for deaf individuals.
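The feature-matching stage mentioned above can be sketched as a nearest-template search over feature vectors. A real system would first extract MFCCs from audio frames; here the vectors are assumed to be precomputed, and the template labels are illustrative.

```python
# Minimal sketch of the matching step: pick the stored template whose
# (assumed precomputed) feature vector is closest to the unknown one.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(unknown, templates):
    """Return the label of the template closest to the unknown vector."""
    return min(templates, key=lambda label: euclidean(unknown, templates[label]))

# Illustrative templates; real entries would be MFCC vectors per word.
TEMPLATES = {
    "hello":  [0.2, 1.1, -0.4],
    "thanks": [1.5, -0.3, 0.8],
}
```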
IRJET- Hand Movement Recognition for a Speech Impaired Person (IRJET Journal)
This document describes a system to recognize hand gestures from a speech-impaired person and convert them to speech using a flex sensor glove and microcontroller. The system uses flex sensors attached to a glove to detect hand movements and gestures. The microcontroller matches the gestures to a database of templates and outputs the corresponding speech signal through a speaker. This allows speech-impaired individuals to communicate through natural hand gestures that are translated to audio speech in real-time. The system aims to help overcome communication barriers for those unable to speak.
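The template-matching step described above can be sketched as follows: each stored gesture is a vector of expected flex readings, and a live reading is accepted only when every sensor falls within a tolerance of a template. The gesture names, template values, and tolerance are illustrative assumptions.

```python
# Sketch of matching live flex readings against a small template database.
# A rejection path (returning None) avoids speaking on ambiguous input.

TEMPLATES = {
    "yes":  [800, 200, 200, 200, 200],
    "no":   [200, 800, 800, 200, 200],
    "help": [800, 800, 800, 800, 800],
}
TOLERANCE = 150  # max per-sensor deviation still counted as a match

def match_gesture(reading):
    """Return the matching gesture name, or None if nothing is close enough."""
    for name, template in TEMPLATES.items():
        if all(abs(r - t) <= TOLERANCE for r, t in zip(reading, template)):
            return name
    return None
```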
IRJET - Sign Language Recognition using Neural Network (IRJET Journal)
This document presents a system for sign language recognition using neural networks. The system aims to recognize hand gestures in real-time and translate them into English words or sentences. It uses a convolutional neural network (CNN) algorithm to extract features from captured images of hand gestures and classify the gestures. The system was able to accurately recognize gestures with a classification rate of 92.4%. The system could help mute individuals communicate through translating sign language into text that could be read or understood by others. It may also assist blind individuals by allowing communication through speech recognition of the translated text.
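The CNN feature-extraction steps named above (convolution, ReLU activation, max pooling) can be illustrated on a tiny grayscale image in pure Python. A real recognizer would stack many learned filters and end in a classifier; the single hand-written filter here is only an illustrative assumption.

```python
# Toy forward pass of one CNN stage on an image stored as a list of rows.

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def relu(fmap):
    """Zero out negative activations."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling with stride 2."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]
```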
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O... (IJERA Editor)
This project presents a system based on inertial sensors and a gesture-recognition algorithm that lets elderly users send an SMS or place a call. Users hold the device and make hand gestures in their preferred handheld style. Hand motions generate inertial signals, which are transmitted wirelessly to a computer for recognition using a dynamic time warping (DTW) algorithm. Zigbee modules are used at the transmitting end of the inertial device to send the sensor values and at the PC end to receive them. The recognized gesture is sent to a microcontroller, which issues AT commands to a GSM module to select the SMS or calling option; the GSM module then carries out the SMS or call. The system is an accelerometer-based gesture recognizer that uses only a single 3-axis accelerometer, where the gestures are hand movements. The proposed DTW-based recognition pipeline comprises inertial signal acquisition, motion detection, template selection, and recognition. The letters 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'o', and 'v' are recognized in this system, which can be used by elderly or blind people for emergency calls or emergency SMS from home.
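The DTW recognition used above can be sketched as the classic dynamic program over two signal sequences: cell (i, j) holds the best cumulative distance aligning the first i samples of one sequence with the first j of the other. 1-D samples stand in for the 3-axis accelerometer data for brevity.

```python
# Dynamic time warping distance plus nearest-template recognition,
# a minimal stand-in for the paper's DTW-based gesture recognizer.

def dtw_distance(a, b):
    """DTW distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(signal, templates):
    """Return the template label with the smallest DTW distance."""
    return min(templates, key=lambda lbl: dtw_distance(signal, templates[lbl]))
```

DTW tolerates gestures performed at different speeds, which is why it suits inertial signals better than a plain sample-by-sample comparison.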
Basic Gesture Based Communication for Deaf and Dumb is an application which converts an input gesture to the corresponding text. People with speech or hearing disabilities face many communication problems while interacting with other people, and it is not easy for people without such disabilities to understand what the other person is trying to say through the gestures he or she is showing. To overcome this barrier, we attempted to create an application that detects these gestures and provides a textual output, enabling a smoother process of communication. There is a lot of research being done on gesture recognition. This project will help its users, i.e. deaf and dumb people, to communicate with other people without barriers arising from their disability.
A teaching system for non-disabled people who communicate with deafblind people (IAEME Publication)
This document describes a teaching system that allows non-disabled people to communicate with deafblind people using finger braille. The system uses speech recognition software to convert speech to braille code, which is then displayed on a braille typewriter. This allows a non-disabled person to communicate directly in finger braille without an interpreter. The goal is to improve communication and reduce the isolation of deafblind individuals.
This document summarizes a research paper on developing a real-time sign language detector using computer vision and machine learning techniques. The researchers created a dataset of hand gestures for letters, numbers, and common signs in Indian Sign Language (ISL) using webcam photos. They used a pre-trained SSD MobileNet V2 model with transfer learning to classify the gestures with 70-80% accuracy. Their goal was to build a free and user-friendly app to help deaf and hard of hearing people communicate through automated sign language detection and translation, with the aim of closing communication gaps. The technology identifies selected ISL signs in low light and uncontrolled backgrounds using image processing and human movement classification algorithms.
This document describes a smart glove system that translates sign language gestures into speech and text to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures, which are processed by an Arduino microcontroller. The Arduino identifies letters and words from the gestures and outputs them as speech from a connected speaker and as text on an Android phone app. The goal is to help deaf-mute individuals effectively convey information to people without sign language training by translating their gestures into audio and text in real-time.
IRJET- An Innovative Method for Communication Among Differently Abled Peo... (IRJET Journal)
This document describes a proposed system to help improve communication between disabled individuals, including those who are deaf, blind, or mute. The system uses a glove fitted with flex sensors that can detect hand gestures. When a gesture is made, the flex sensors trigger an Arduino microcontroller to play a pre-recorded audio message or display a message on an LCD screen. The system is designed so that deaf individuals can receive messages through visual display, blind individuals can receive messages through Braille or vibration, and mute individuals can communicate through gestures. The goal is to help overcome barriers to communication between disabled people and enable them to interact with others.
This project is about the development of a novel electronic speaking system for dumb and paralyzed persons. Due to their physical disability, an attender is always required to monitor them and help with their day-to-day activities; however, much of the time the attender is idle and that time is wasted. Hence, this project proposes an electronic system that helps dumb and paralyzed persons communicate their needs to the attender. The attender can then be entrusted with other work whenever the dumb and paralyzed person does not need support, which avoids continuous monitoring and lets attenders engage in other tasks.
Adopting progressed CNN for understanding hand gestures to native languages b... (IRJET Journal)
This document proposes adopting a convolutional neural network (CNN) to recognize static hand gestures in native languages like Telugu through both audio and text for easy understanding. The CNN architecture contains convolutional layers, ReLU activation layers, max pooling layers, a softmax output layer, and a fully connected classification layer. Test results show the CNN approach achieves around 94% accuracy in identifying gestures. The system is designed to improve communication between disabled and non-disabled individuals by translating gestures into native languages without requiring knowledge of other languages like English.
IRJET- Vision Based Sign Language by using Matlab (IRJET Journal)
This document discusses vision-based sign language translation using MATLAB. It describes a system that uses a camera to capture images of hand gestures representing letters or words in sign language. MATLAB is used to analyze the images, recognize the gestures, and translate them into spoken words that are output through a speaker. The system aims to help deaf, mute, and blind individuals communicate more easily. Several image processing and machine learning techniques for hand segmentation, feature extraction, and classification are reviewed from previous studies. The results suggest this type of system could accurately translate sign language in real-time.
Communication among blind, deaf and dumb People (IJAEMS Journal)
Nowadays science and technology have made the human world easy in many ways, but some physically and visually challenged people still struggle to communicate with others. In this project we propose a new system prototype called "communication among blind, deaf and dumb people". It will help disabled people overcome their difficulties in communicating, whether with other people with disabilities or with people without them. Blind users communicate through the speakers, while deaf and dumb users see the output and reply by typing in a terminal. All of this is delivered as an application, so it can be easily understood by people with disabilities.
Digital Voice Over is a social project aimed at helping people with speech and hearing impairments communicate better with the public. Deaf and hard-of-hearing people worldwide encounter many problems while trying to communicate with society in daily life. Deaf and speech-impaired people often use sign language, but have difficulty communicating with people who do not understand it. Sign language relies on patterns such as body language, gestures, and movements of the arms and fingers to convey information. This project was designed to meet the need for electronic devices that can translate sign language into speech, facilitating communication between the deaf and dumb and the public. Venkat P. Patil, Suyash Mali, Girish Ghadi, Chintamani Satpute, and Amey Deshmukh, "Hand Gesture Vocalizer", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 7, Issue 2, April 2023. URL: https://www.ijtsrd.com.com/papers/ijtsrd55157.pdf Paper URL: https://www.ijtsrd.com.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
IRJET- Hand Gesture Recognition System using Convolutional Neural Networks (IRJET Journal)
The document presents a hand gesture recognition system using convolutional neural networks. The system aims to enable communication between deaf or mute individuals and those who do not understand sign language. It works by capturing an image of a hand gesture via camera, extracting features from the image, detecting the sign using a CNN model, and converting the sign to text or speech. The system can also convert text or speech to the corresponding sign. The CNN model achieves an accuracy of 95.6% for sign recognition, outperforming previous methods. A real-time prototype allows signing and two-way communication between individuals on different devices.
IRJET - Sign Language Text to Speech Converter using Image Processing and... (IRJET Journal)
This document describes a sign language text-to-speech converter system using image processing and convolutional neural networks (CNNs). The system captures images of hand gestures using a camera, applies image processing techniques like thresholding and blurring, and then uses a CNN model trained on a dataset of gestures to recognize the gestures and convert them to text and speech. The system was able to accurately recognize gestures for letters and numbers with about 85% accuracy. Future work may involve expanding the dataset to include more signs and working towards word and sentence recognition.
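The two preprocessing steps named above, blurring and thresholding, can be sketched in pure Python on a grayscale image stored as a list of rows. Border pixels are skipped for brevity, and the threshold value is an illustrative assumption.

```python
# Minimal image-preprocessing sketch: 3x3 box blur, then binary threshold.

def box_blur(img):
    """Average each interior pixel with its 8 neighbours (integer math)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]   # borders copied unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + a][j + b]
                            for a in (-1, 0, 1) for b in (-1, 0, 1)) // 9
    return out

def threshold(img, t=128):
    """Binarize: pixels at or above t become 255, the rest 0."""
    return [[255 if v >= t else 0 for v in row] for row in img]
```

In the described pipeline the binarized image would then be fed to the CNN; blurring first suppresses sensor noise that thresholding would otherwise amplify.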
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR (IRJET Journal)
This document summarizes research on a glove-based gesture recognition system using IR sensors. The system aims to help those who are deaf and mute communicate through hand gestures. An IR sensor and LED placed on a glove detect hand gestures based on the amount of light received by the sensor. An Arduino microcontroller recognizes the gestures and displays the meaning on an LCD screen while playing an audio message. The researchers claim this method is more accurate and has a lower error rate than conventional image-processing approaches. It is intended to help address both the safety and communication issues faced by those who are deaf or speech-impaired. Experimental results showed the system successfully recognized gestures and could help reduce the gap between hearing people and those who are speech-impaired.
IRJET- Human Activity Recognition using Flex Sensors (IRJET Journal)
This document discusses a system for human activity recognition using flex sensors. Flex sensors are attached to the body and can detect movements. The flex sensor data is fed into a neural network model to recognize activities. The model is trained using flex sensor data from various human activities. The trained model can then accurately recognize activities based on new flex sensor input data. The system is meant to help elderly people or those with disabilities by allowing them to control devices with body movements detected by flex sensors. It aims to provide a modular system that can adapt to new users and disabilities. Flex sensors make the system customizable while neural networks enable accurate activity recognition.
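The activity-recognition step can be sketched, without a full neural network, as a nearest-centroid classifier over flex-sensor feature vectors; the network in the paper learns a richer mapping, but the interface is the same. The activity names and centroid values below are illustrative assumptions.

```python
# Nearest-centroid stand-in for the paper's neural-network classifier:
# assign a flex-sensor feature vector to the closest known activity.

CENTROIDS = {
    "sitting":     [0.1, 0.1, 0.1],
    "walking":     [0.6, 0.5, 0.7],
    "raising_arm": [0.9, 0.2, 0.1],
}

def nearest_centroid(sample, centroids=CENTROIDS):
    """Return the activity whose centroid is closest to the sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda act: sq_dist(sample, centroids[act]))
```

Retraining for a new user then amounts to recording fresh centroids, which mirrors the adaptability the system aims for.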
The document describes a hand gesture recognition system for deaf persons to communicate their thoughts to others. It aims to bridge the communication gap between deaf-mute people and the general public by converting gestures captured in real-time via camera, which are trained using a convolutional neural network (CNN), into text output. The system allows deaf-mute users to interact with computer applications using gestures detected by their webcam without needing to install additional applications. It discusses the background and relevance of the project, as well as objectives like designing the gesture training, extracting features from images, and recognizing gestures to translate them to text.
This document describes a digital vocalizer system that uses a data glove with flex sensors and an accelerometer to detect hand gestures. The sensors detect finger bending and hand tilt/position. The Arduino UNO microcontroller converts these detected gestures into corresponding audio words or visual text displayed on an LCD screen. This system aims to help reduce communication barriers between deaf/mute/blind communities and others by translating gestures into audio and visual outputs.
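The sensor-fusion idea above can be sketched as a lookup keyed on both finger state and hand tilt: the same finger pattern maps to different words depending on the accelerometer reading. The tilt bands and vocabulary are illustrative assumptions.

```python
# Sketch of combining flex-sensor finger states with accelerometer tilt
# to select an output word, as in the described data-glove vocalizer.

def tilt_band(accel_x):
    """Coarsely bin an accelerometer x-axis reading into a tilt label."""
    if accel_x > 0.5:
        return "up"
    if accel_x < -0.5:
        return "down"
    return "flat"

VOCAB = {
    (("bent", "straight"), "up"):   "hello",
    (("bent", "straight"), "down"): "goodbye",
}

def vocalize(fingers, accel_x):
    """Return the word for this finger pattern and tilt, or None."""
    return VOCAB.get((tuple(fingers), tilt_band(accel_x)))
```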
Survey Paper on Raspberry pi based Assistive Device for Communication between... (IRJET Journal)
This document discusses several research papers on developing assistive devices to aid communication between blind, deaf, and mute individuals. It begins with an abstract describing the goal of converting sign language to voice and text using a Raspberry Pi, camera, speaker and LCD display by recognizing human gestures. It then summarizes 8 research papers on related topics, describing systems that use flex sensors on gloves to detect sign language and translate it to speech/text, or use image processing on hand gestures. The document concludes by outlining the proposed methodology, hardware and software requirements, and potential limitations and future work for a sign language translation system using a Raspberry Pi, camera and sensors.
A review of factors that impact the design of glove-based wearable devices (IAES IJAI)
Loss of the capability to speak or hear has psychological and social effects on the affected individuals due to the absence of appropriate interaction, and such individuals use sign language to assist them in communicating with each other. This paper reports details of various wearable healthcare technologies designed in recent years: the aim of each study, the types of technologies used, the accuracy of the system designed, the data collection and storage methods, the technology used to accomplish the task, and the limitations and future research suggested. The aim of the study is to compare the differences between the papers, including a comparison of the technologies used, aided by reported accuracy, to determine which wearable device is better. The limitations and future research help in determining how the wearable devices can be improved. A systematic review was performed based on a search of the literature, and a total of 23 articles were retrieved. The articles study and design various wearable devices, mainly glove-based devices, to help users learn sign language.
Sign Language Detection and Classification using Hand Tracking and Deep Learn... (IRJET Journal)
The document presents a research paper on developing a real-time sign language detection system using hand tracking and deep learning. It aims to address the communication barrier faced by deaf individuals by automatically recognizing and interpreting sign language gestures. The researchers collected data, tracked hands using computer vision techniques, and classified gestures using machine learning models. Experimental results showed high confidence scores and real-time performance, demonstrating the potential of the system to facilitate independent communication for the deaf community. However, challenges remain around hand occlusion, lighting variations, and recognizing a wide range of sign language gestures.
The document describes a proposed smart glove system to help visually impaired people navigate safely. The system uses ultrasonic sensors, a microcontroller, and vibratory feedback to alert users to obstacles in front of them. It integrates these components into a glove, allowing blind users to detect obstacles from 2cm to 300cm away through vibrations in the glove. The goal is to provide a convenient and safe way for blind people to have independent mobility and explore their environment.
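The obstacle-feedback mapping described above can be sketched as a linear scaling from distance to vibration strength: nearer obstacles produce a stronger vibration, and anything outside the 2-300 cm sensing range produces none. The 0-255 PWM-style output range is an assumption, not stated in the summary.

```python
# Sketch of mapping an ultrasonic distance reading to a vibration motor
# duty value: full strength at the near limit, zero at the far limit.

MIN_CM, MAX_CM = 2, 300   # sensing range from the description

def vibration_level(distance_cm):
    """Map an obstacle distance in cm to a vibration duty value (0-255)."""
    if distance_cm < MIN_CM or distance_cm > MAX_CM:
        return 0
    # closer obstacle -> stronger vibration
    scale = (MAX_CM - distance_cm) / (MAX_CM - MIN_CM)
    return round(255 * scale)
```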
IRJET- Gesture Drawing Android Application for Visually-Impaired People (IRJET Journal)
The document describes a proposed Android application to help visually impaired people make phone calls and send messages with their current location independently. The application uses gesture drawing, haptic feedback, and audio feedback to allow users to store contacts along with assigned gestures and then make calls or send messages by drawing the gestures. When gestures are drawn correctly, haptic and audio feedback are provided to confirm the action to the user. The proposed application aims to provide an easier alternative to searching contact lists manually and does not require visual feedback.
This document summarizes a research paper on developing a real-time sign language detector using computer vision and machine learning techniques. The researchers created a dataset of hand gestures for letters, numbers, and common signs in Indian Sign Language (ISL) using webcam photos. They used a pre-trained SSD MobileNet V2 model with transfer learning to classify the gestures with 70-80% accuracy. Their goal was to build a free and user-friendly app to help deaf and hard of hearing people communicate through automated sign language detection and translation, with the aim of closing communication gaps. The technology identifies selected ISL signs in low light and uncontrolled backgrounds using image processing and human movement classification algorithms.
This document describes a smart glove system that translates sign language gestures into speech and text to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures, which are processed by an Arduino microcontroller. The Arduino identifies letters and words from the gestures and outputs them as speech from a connected speaker and as text on an Android phone app. The goal is to help deaf-mute individuals effectively convey information to people without sign language training by translating their gestures into audio and text in real-time.
IRJET- An Innovative Method for Communication Among Differently Abled Peo...IRJET Journal
This document describes a proposed system to help improve communication between disabled individuals, including those who are deaf, blind, or mute. The system uses a glove fitted with flex sensors that can detect hand gestures. When a gesture is made, the flex sensors trigger an Arduino microcontroller to play a pre-recorded audio message or display a message on an LCD screen. The system is designed so that deaf individuals can receive messages through visual display, blind individuals can receive messages through Braille or vibration, and mute individuals can communicate through gestures. The goal is to help overcome barriers to communication between disabled people and enable them to interact with others.
This project develops a novel electronic speaking system for dumb and paralyzed persons. Due to their physical disability, an attender is always required to monitor and help with their day-to-day activities. However, the attender is idle most of the time, and that time is wasted. Hence, this project proposes an electronic system that helps dumb and paralyzed persons communicate their needs to the attender. The attender may then be entrusted with other work whenever the dumb and paralyzed persons do not need support. This avoids continuous monitoring and lets attenders engage in other work.
Adopting progressed CNN for understanding hand gestures to native languages b... (IRJET Journal)
This document proposes adopting a convolutional neural network (CNN) to recognize static hand gestures in native languages like Telugu through both audio and text for easy understanding. The CNN architecture contains convolutional layers, ReLU activation layers, max pooling layers, a softmax output layer, and a fully connected classification layer. Test results show the CNN approach achieves around 94% accuracy in identifying gestures. The system is designed to improve communication between disabled and non-disabled individuals by translating gestures into native languages without requiring knowledge of other languages like English.
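The layers listed above (ReLU activation, max pooling, softmax output) can be sketched as plain NumPy operations. This is an illustrative forward-pass fragment, not the authors' implementation:

```python
import numpy as np

def relu(x):
    """Elementwise ReLU activation."""
    return np.maximum(0, x)

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on an (H, W) feature map."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    """Logits to a probability distribution over gesture classes."""
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()
```

In a full network these would be stacked after each convolution, with the softmax output feeding the final gesture-class decision.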
IRJET- Vision Based Sign Language by using Matlab (IRJET Journal)
This document discusses vision-based sign language translation using MATLAB. It describes a system that uses a camera to capture images of hand gestures representing letters or words in sign language. MATLAB is used to analyze the images, recognize the gestures, and translate them into spoken words that are output through a speaker. The system aims to help deaf, mute, and blind individuals communicate more easily. Several image processing and machine learning techniques for hand segmentation, feature extraction, and classification are reviewed from previous studies. The results suggest this type of system could accurately translate sign language in real-time.
Communication among blind, deaf and dumb People (IJAEMS Journal)
Nowadays science and technology have made the human world much easier, yet some physically and visually challenged people still struggle to communicate with others. This project proposes a new system prototype called Communication among Blind, Deaf and Dumb People. It helps disabled people overcome their difficulties in communicating with other people, whether disabled or not. Blind people communicate through speakers, while deaf and dumb people read the output and reply by typing in a terminal. All of this is delivered as an application, so that it is easily understood by people with disabilities.
Digital Voice Over is a social project aimed at improving communication between speech- and hearing-impaired people and the public. There are hundreds of millions of deaf and hard of hearing people worldwide, and they encounter many problems while trying to communicate with society in daily life. Deaf and speech-impaired people often use sign language to communicate, but have difficulty with people who do not understand it. Sign language relies on patterns of body language, gestures, and movements of the arms and fingers to convey information. This project was designed to meet the need for electronic devices that can translate sign language into speech, facilitating communication between the deaf and dumb and the public. Venkat P. Patil, Suyash Mali, Girish Ghadi, Chintamani Satpute, Amey Deshmukh, "Hand Gesture Vocalizer", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 7, Issue 2, April 2023. Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
IRJET- Hand Gesture Recognition System using Convolutional Neural Networks (IRJET Journal)
The document presents a hand gesture recognition system using convolutional neural networks. The system aims to enable communication between deaf or mute individuals and those who do not understand sign language. It works by capturing an image of a hand gesture via camera, extracting features from the image, detecting the sign using a CNN model, and converting the sign to text or speech. The system can also convert text or speech to the corresponding sign. The CNN model achieves an accuracy of 95.6% for sign recognition, outperforming previous methods. A real-time prototype allows signing and two-way communication between individuals on different devices.
IRJET- Sign Language Text to Speech Converter using Image Processing and... (IRJET Journal)
This document describes a sign language text-to-speech converter system using image processing and convolutional neural networks (CNNs). The system captures images of hand gestures using a camera, applies image processing techniques like thresholding and blurring, and then uses a CNN model trained on a dataset of gestures to recognize the gestures and convert them to text and speech. The system was able to accurately recognize gestures for letters and numbers with about 85% accuracy. Future work may involve expanding the dataset to include more signs and working towards word and sentence recognition.
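The preprocessing described (blurring followed by thresholding) can be sketched in NumPy; the kernel size and threshold value here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean blur of a grayscale image via edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def threshold(img, t=128):
    """Binarize: foreground (hand) pixels become 1, background 0."""
    return (img > t).astype(np.uint8)
```

The binarized hand silhouette would then be resized and fed to the CNN classifier.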
GLOVE BASED GESTURE RECOGNITION USING IR SENSOR (IRJET Journal)
This document summarizes research on a glove-based gesture recognition system using IR sensors. The system aims to help those who are deaf and mute communicate through hand gestures. An IR sensor and LED placed on a glove detect hand gestures based on the amount of light received by the sensor. The Arduino microcontroller recognizes the gestures and displays the meaning on an LCD screen while playing an audio message. The researchers claim this method is more accurate and has a lower error rate than conventional image processing approaches. It is intended to help address both safety and communication issues faced by those who are deaf or speech-impaired. Experimental results showed the system successfully recognized gestures and could help reduce the gap between those who are normal and speech-impaired.
IRJET- Human Activity Recognition using Flex Sensors (IRJET Journal)
This document discusses a system for human activity recognition using flex sensors. Flex sensors are attached to the body and can detect movements. The flex sensor data is fed into a neural network model to recognize activities. The model is trained using flex sensor data from various human activities. The trained model can then accurately recognize activities based on new flex sensor input data. The system is meant to help elderly people or those with disabilities by allowing them to control devices with body movements detected by flex sensors. It aims to provide a modular system that can adapt to new users and disabilities. Flex sensors make the system customizable while neural networks enable accurate activity recognition.
The document describes a hand gesture recognition system for deaf persons to communicate their thoughts to others. It aims to bridge the communication gap between deaf-mute people and the general public by converting gestures captured in real-time via camera, which are trained using a convolutional neural network (CNN), into text output. The system allows deaf-mute users to interact with computer applications using gestures detected by their webcam without needing to install additional applications. It discusses the background and relevance of the project, as well as objectives like designing the gesture training, extracting features from images, and recognizing gestures to translate them to text.
This document describes a digital vocalizer system that uses a data glove with flex sensors and an accelerometer to detect hand gestures. The sensors detect finger bending and hand tilt/position. The Arduino UNO microcontroller converts these detected gestures into corresponding audio words or visual text displayed on an LCD screen. This system aims to help reduce communication barriers between deaf/mute/blind communities and others by translating gestures into audio and visual outputs.
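Hand tilt is commonly derived from a static 3-axis accelerometer reading with a pair of arctangents; the exact formula this paper uses is not stated, so the following is a generic sketch of that computation:

```python
import math

def tilt_degrees(ax, ay, az):
    """Pitch and roll (degrees) from a static 3-axis accelerometer reading,
    where (ax, ay, az) is the measured gravity vector in g units."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll
```

A flat hand (gravity on the z axis) yields zero pitch and roll; tilting the hand forward drives the pitch toward 90 degrees, which the vocalizer could combine with the flex-sensor pattern to select a word.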
Survey Paper on Raspberry pi based Assistive Device for Communication between... (IRJET Journal)
This document discusses several research papers on developing assistive devices to aid communication between blind, deaf, and mute individuals. It begins with an abstract describing the goal of converting sign language to voice and text using a Raspberry Pi, camera, speaker and LCD display by recognizing human gestures. It then summarizes 8 research papers on related topics, describing systems that use flex sensors on gloves to detect sign language and translate it to speech/text, or use image processing on hand gestures. The document concludes by outlining the proposed methodology, hardware and software requirements, and potential limitations and future work for a sign language translation system using a Raspberry Pi, camera and sensors.
A review of factors that impact the design of glove-based wearable devices (IAES IJAI)
Loss of the ability to speak or hear has psychological and social effects on the affected individuals due to the absence of appropriate interaction, and such individuals use sign language to communicate with each other. This paper reports details of various wearable healthcare technologies designed in recent years: the aim of each study, the types of technologies used, the accuracy of the system designed, data collection and storage methods, the technology used to accomplish the task, and the limitations and future research suggested. The aim of the study is to compare the differences between the papers, including a comparison of the technologies used, with accuracy as the yardstick for determining which wearable device performs better. The limitations and future research help determine how the wearable devices can be improved. A systematic review was performed based on a search of the literature, and a total of 23 articles were retrieved. The articles study and design various wearable devices, mainly glove-based devices, to help users learn sign language.
Sign Language Detection and Classification using Hand Tracking and Deep Learn... (IRJET Journal)
The document presents a research paper on developing a real-time sign language detection system using hand tracking and deep learning. It aims to address the communication barrier faced by deaf individuals by automatically recognizing and interpreting sign language gestures. The researchers collected data, tracked hands using computer vision techniques, and classified gestures using machine learning models. Experimental results showed high confidence scores and real-time performance, demonstrating the potential of the system to facilitate independent communication for the deaf community. However, challenges remain around hand occlusion, lighting variations, and recognizing a wide range of sign language gestures.
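Hand-tracking pipelines of this kind typically crop the hand from tracked landmark points and discard low-confidence gesture predictions. A small sketch under assumed normalized image coordinates and an assumed 0.8 confidence cutoff (neither is specified by the paper):

```python
def hand_bbox(landmarks, margin=0.05):
    """Axis-aligned bounding box around tracked (x, y) hand landmarks,
    expanded by a small margin, as used to crop the hand for classification."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def confident(predictions, min_score=0.8):
    """Keep only gesture predictions at or above a confidence threshold."""
    return [(label, s) for label, s in predictions if s >= min_score]
```

Filtering by confidence is one simple way to trade coverage for the high confidence scores the experiments report.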
The document describes a proposed smart glove system to help visually impaired people navigate safely. The system uses ultrasonic sensors, a microcontroller, and vibratory feedback to alert users to obstacles in front of them. It integrates these components into a glove, allowing blind users to detect obstacles from 2cm to 300cm away through vibrations in the glove. The goal is to provide a convenient and safe way for blind people to have independent mobility and explore their environment.
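The obstacle distance comes from the ultrasonic echo time (the pulse travels out and back), and the sensed 2 cm to 300 cm range can be mapped to a vibration intensity. A sketch with an assumed linear closer-means-stronger mapping; the paper does not specify its mapping:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # approximate speed of sound at ~20 C

def echo_to_cm(echo_us):
    """Distance from ultrasonic echo time: sound travels out and back,
    so the one-way distance is half the round-trip path."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def vibration_level(distance_cm, min_cm=2, max_cm=300):
    """Map a distance in the sensed 2-300 cm range to a 0.0-1.0 vibration
    intensity: closer obstacles vibrate harder; out of range means off."""
    if distance_cm < min_cm or distance_cm > max_cm:
        return 0.0
    return (max_cm - distance_cm) / (max_cm - min_cm)
```

A 1000-microsecond echo corresponds to roughly 17 cm, near the strong end of the vibration scale.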
IRJET- Gesture Drawing Android Application for Visually-Impaired People (IRJET Journal)
The document describes a proposed Android application to help visually impaired people make phone calls and send messages with their current location independently. The application uses gesture drawing, haptic feedback, and audio feedback to allow users to store contacts along with assigned gestures and then make calls or send messages by drawing the gestures. When gestures are drawn correctly, haptic and audio feedback are provided to confirm the action to the user. The proposed application aims to provide an easier alternative to searching contact lists manually and does not require visual feedback.