This document describes a research project that aims to develop a sign language recognition system using convolutional neural networks. The system would use a CNN model trained on images of hand signs to detect and identify signs in real-time video from a camera. The goal is to help facilitate communication between deaf or mute individuals and others. Key steps include collecting a dataset of labeled hand sign images, preprocessing the images, developing and training a CNN model, and using the model to recognize signs from live video input. The researchers believe this tool could help boost employment opportunities for deaf and mute communities by enabling easier communication without an interpreter.
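The pipeline described above (convolve, activate, pool, classify) rests on a few core operations. As an illustrative sketch only, not the project's actual code, here is a minimal NumPy forward pass through one conv/ReLU/pool stage; the image and kernel values are made up:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    """Non-overlapping 2x2 max pooling (odd trailing rows/cols dropped)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Toy 6x6 "hand sign" image and a 3x3 vertical-edge kernel.
img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.]] * 3)
features = maxpool2x2(relu(conv2d(img, kernel)))
```

A real recognizer stacks several such stages and ends in a fully connected softmax layer; frameworks like TensorFlow implement the same operations with learned kernels.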
The document describes a hand gesture recognition system that lets deaf persons communicate their thoughts to others. It aims to bridge the communication gap between deaf-mute people and the general public by converting gestures captured in real time via camera into text output, using a convolutional neural network (CNN) trained on gesture images. The system allows deaf-mute users to interact with computer applications using gestures detected by their webcam, without needing to install additional applications. It discusses the background and relevance of the project, as well as objectives such as designing the gesture training, extracting features from images, and recognizing gestures to translate them into text.
IRJET- Hand Sign Recognition using Convolutional Neural Network (IRJET Journal)
1) The document presents a study on using a convolutional neural network (CNN) to recognize letters of the American Sign Language (ASL) alphabet captured in real time via a webcam.
2) The researchers trained a CNN model on 1600 images of five ASL letters (E, F, I, L, V) and tested it on 320 unlabeled images, achieving a validation accuracy of 74.8%.
3) While the model showed potential, the researchers acknowledged limitations such as overfitting due to the small dataset, and noted areas for improvement such as recognizing a broader range of ASL letters and full sentences.
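Overfitting on a dataset of 1600 images is commonly mitigated with data augmentation. A hedged NumPy sketch of the idea (not from the paper; the `augment` function, the shift range, and the jitter amount are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, max_shift=2):
    """Randomly translate and brightness-jitter one grayscale image.

    Horizontal flips are deliberately avoided: mirroring a hand sign
    can change its meaning or its handedness."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    jitter = 1.0 + rng.uniform(-0.1, 0.1)   # +/- 10% brightness
    return np.clip(shifted * jitter, 0.0, 1.0)

base = rng.random((32, 32))                    # stand-in for one sign image
augmented = [augment(base) for _ in range(4)]  # 4 extra training samples
```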
IRJET - Deep Learning Applications and Frameworks – A Review (IRJET Journal)
This document reviews deep learning applications and frameworks. It begins by defining deep learning and discussing how deep neural networks can be used to automatically identify patterns in large datasets. It then discusses several applications of deep learning, including self-driving cars, news aggregation, natural language processing, virtual assistants, and visual recognition. The document also describes artificial neural networks and deep neural networks. Finally, it reviews several popular deep learning frameworks, including TensorFlow, PyTorch, Keras, Caffe, and Chainer.
Sign Language Recognition using Machine Learning (IRJET Journal)
This document describes a study on sign language recognition using machine learning. The researchers developed a convolutional neural network model to detect hand movements and classify them as letters of the alphabet from sign language. They used a dataset of images of American Sign Language letters and trained their CNN model on this data. Their model was able to accurately recognize the letters in real-time using input from a webcam. The document also discusses using background subtraction and other techniques to improve the model's performance at sign language recognition.
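Background subtraction, as mentioned above, can be reduced to per-pixel differencing against a reference frame. A toy NumPy sketch, not the study's implementation (real systems typically use adaptive models such as OpenCV's MOG2):

```python
import numpy as np

def foreground_mask(frame, background, thresh=0.2):
    """Per-pixel background subtraction: mark pixels that differ
    from the reference background by more than `thresh`."""
    return np.abs(frame.astype(float) - background.astype(float)) > thresh

background = np.zeros((4, 4))          # empty scene
frame = background.copy()
frame[1:3, 1:3] = 0.9                  # a "hand" enters the frame
mask = foreground_mask(frame, background)
```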
IRJET- Chest Abnormality Detection from X-Ray using Deep Learning (IRJET Journal)
This document proposes using a convolutional neural network (CNN) to detect abnormalities in chest x-rays. It discusses developing a CNN model with an input of chest x-ray images labeled as normal or abnormal. The model would use techniques like pre-processing, data augmentation, and a network architecture with convolutional and pooling layers to classify images as normal or abnormal. The goal is to build an accurate system for detecting various chest diseases from x-ray images to help doctors with diagnosis.
Human Emotion Recognition using Machine Learning (ijtsrd)
Recognizing human emotions is an interesting problem in machine learning: a person's facial expression reveals their emotions or what they want to express, yet recognizing emotion reliably is quite challenging. Facial expressions convey emotions such as sadness, happiness, excitement, anger, frustration, and surprise. A few years ago, natural language processing was used to detect sentiment from text, and the field then took a step forward toward emotion detection; sentiments are positive, negative, or neutral, whereas emotions are more refined categories. Many techniques have been used to recognize emotions, and this paper reviews research work published in the field of human emotion recognition and the techniques it employs. Prof. Mrs. Dhanamma Jagli | Ms. Pooja Shetty, "Human Emotion Recognition using Machine Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd25217.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/25217/human-emotion-recognition-using-machine-learning/prof-mrs-dhanamma-jagli
A sign language recognition system is of major use to people who are deaf or mute. Such a system gives them a medium to communicate with other people and with their family members. Deaf and mute people are often far from the mainstream, frequently without a proper job or livelihood, and they spend years learning sign languages that most hearing people cannot understand. A sign language detection system plays a major role here by providing a bridge between deaf or mute people and the hearing public so that they can communicate with each other. Such systems can be set up at schools, hospitals, hotels, malls, etc., making communication much simpler. Hand gestures are one of the easiest forms of nonverbal communication and play a vital role in daily life. The proposed paper provides a user-friendly way of communicating with the help of a CNN algorithm. Taokeer Alam | Dr. Murugan R, "Sign Language Detector Using Cloud", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-3, April 2022. URL: https://www.ijtsrd.com/papers/ijtsrd49698.pdf Paper URL: https://www.ijtsrd.com/computer-science/speech-recognition/49698/sign-language-detector-using-cloud/taokeer-alam
This document presents a study on sign language recognition using computer vision techniques. It aims to develop a system that can identify characters and numbers in Indian Sign Language (ISL) using convolutional neural networks. ISL uses both hands to communicate unlike American Sign Language which uses a single hand. The system creates a dataset of ISL gestures and trains a CNN model on it. It then tests the ability of the trained model to accurately predict numbers from 1 to 10 and letters from A to Z when presented with new sign language inputs. The model achieves over 90% accuracy on test data, providing an effective way to translate ISL signs and bridge communication between deaf/mute and non-signing individuals.
Sign Language Detection using Action Recognition (IRJET Journal)
This document presents a sign language detection system using action recognition. It aims to enhance current systems' performance in terms of response time and accuracy. The proposed system uses machine learning algorithms like LSTM neural networks trained on data sets to classify sign language gestures in real-time video. It segments hand regions, extracts features, and recognizes signs with 98% accuracy for 26 gestures. The system is intended to help deaf individuals communicate through translating signs to text in real-world applications.
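An LSTM of the kind used above classifies a gesture by consuming a sequence of per-frame feature vectors while carrying state between frames. A minimal single-cell NumPy sketch of one time step; the weights here are random placeholders, not trained values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4n, d) input weights, U: (4n, n)
    recurrent weights, b: (4n,) bias; gates stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g          # update the cell memory
    h_new = o * np.tanh(c_new)     # emit the hidden state
    return h_new, c_new

rng = np.random.default_rng(1)
d, n = 6, 4                        # feature size, hidden size
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h = c = np.zeros(n)
for frame_features in rng.normal(size=(10, d)):   # a 10-frame gesture clip
    h, c = lstm_step(frame_features, h, c, W, U, b)
```

The final hidden state `h` would feed a softmax layer over the 26 gesture classes.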
SIGN LANGUAGE INTERFACE SYSTEM FOR HEARING IMPAIRED PEOPLE (IRJET Journal)
The document describes a proposed sign language interface system for hearing impaired people. The system aims to use machine learning algorithms like convolutional neural networks to classify hand gestures captured by a webcam into corresponding letters or words. The system would preprocess the images, extract features, then use a trained CNN model to predict the sign and output it as text and speech for better understanding by users. The goal is to help bridge communication between deaf/mute and normal people without requiring specialized gloves or sensors.
Sign Language Recognition using Deep Learning (IRJET Journal)
The document discusses using deep learning techniques like MobileNet V2 to develop a model for sign language recognition. It aims to classify sign language gestures to help communicate with deaf people. The model was trained on a dataset of sign language images and achieved an accuracy of 70% in recognizing letters, numbers, and gestures.
IRJET- Chest Abnormality Detection from X-Ray using Deep Learning (IRJET Journal)
This document summarizes a research paper that developed a deep learning system to detect abnormalities in chest x-rays. The system used a convolutional neural network (CNN) trained on 5000 chest x-ray images labeled as normal or abnormal. The CNN architecture included convolutional and pooling layers. Pre-processing techniques like histogram equalization and data augmentation like image flipping were used. The trained CNN could classify x-rays as normal, abnormal, or detect 14 specific diseases with 1% error rate. While large datasets improve accuracy, they also increase training time. The authors conclude CNN is effective for chest x-ray analysis but note room for improvements like including more diseases and optimized architectures.
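Histogram equalization, one of the pre-processing steps named above, remaps pixel intensities so the cumulative histogram becomes approximately linear, spreading a compressed intensity range over the full 0-255 scale. A small NumPy sketch on a synthetic low-contrast image:

```python
import numpy as np

def equalize(image):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # cdf at the darkest present value
    lut = np.round(
        np.clip(cdf - cdf_min, 0, None) / (cdf[-1] - cdf_min) * 255
    ).astype(np.uint8)
    return lut[image]                      # apply the lookup table

# A low-contrast "x-ray": intensities squeezed into [100, 120].
img = np.linspace(100, 120, 64, dtype=np.uint8).reshape(8, 8)
eq = equalize(img)
```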
Gesture Recognition System using Computer Vision (IRJET Journal)
This document presents a gesture recognition system using computer vision and convolutional neural networks. It discusses developing classifiers to recognize hand gestures and facial expressions. A dataset of 87,000 images is used to train models to classify the 26 letters of the American Sign Language alphabet, plus additional classes for space, delete, and nothing. The models are trained using transfer learning with MobileNet, achieving validation accuracies of over 90% for hand gesture classification, and a system is implemented that recognizes and translates gestures in real time. The paper concludes that it developed robust CNN models for American Sign Language translation and facial expression recognition.
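Transfer learning of the kind described above freezes the pretrained base (here MobileNet) and trains only a small classification head. A toy NumPy version in which a fixed random projection stands in for the frozen base; every shape and hyperparameter below is an illustrative choice, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Frozen base": a fixed random projection standing in for MobileNet's
# pretrained feature extractor (its weights are never updated).
base_W = rng.normal(size=(64, 16))
def base(x):
    return np.maximum(x @ base_W, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Trainable head: one softmax layer over the frozen features.
n_classes = 3
head_W = np.zeros((16, n_classes))
X = rng.normal(size=(90, 64))                # toy "images"
y = rng.integers(0, n_classes, size=90)      # toy labels
F = base(X)                                  # features computed once

for _ in range(200):                         # plain gradient descent
    p = softmax(F @ head_W)
    p[np.arange(len(y)), y] -= 1.0           # dL/dlogits for cross-entropy
    head_W -= 0.01 * F.T @ p / len(y)

acc = (softmax(F @ head_W).argmax(axis=1) == y).mean()
```

Because the base is fixed, its features can be precomputed once, which is what makes transfer learning cheap relative to training a full CNN.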
Indian sign language recognition system (IRJET Journal)
This document discusses the development of an Indian sign language recognition system using machine learning techniques. The system aims to identify different hand gestures used in Indian sign language for finger spelling. Convolutional neural network algorithms are applied to datasets for this purpose. The system is intended to help communication between deaf/mute and normal individuals by translating Indian sign language gestures into text in real-time. Pre-processing techniques like filtering and background subtraction are used before classification with CNN models. The CNN models are trained on datasets of sign language gestures to recognize characters, words, and phrases in Indian sign language.
This document describes a project to develop a hand gesture detection model using computer vision and machine learning. The model aims to recognize Indian sign language gestures from video input and output the corresponding text. The team has made progress training models to recognize alphabets with 80% accuracy and common phrases like "Hello" and "Welcome" with 85% accuracy. The final outcome will be a working gesture detection system to help communication for deaf or mute users.
IRJET- Survey on Text Error Detection using Deep Learning (IRJET Journal)
This document summarizes a survey on using deep learning for text error detection. It begins with an introduction to natural language processing and deep learning. Deep learning models like convolutional neural networks, recurrent neural networks, and recursive neural networks are effective for natural language tasks. The document then discusses several deep learning networks that are relevant for text error detection, including recursive neural networks, recurrent neural networks, convolutional neural networks, and generative models. It concludes that deep learning is well-suited for modeling complex language data through multiple representation layers, but requires large labeled datasets for training.
COMPUTER VISION-ENABLED GESTURE RECOGNITION FOR DIFFERENTLY-ABLED PEOPLE (IRJET Journal)
This document summarizes research on developing a computer vision and machine learning system for gesture recognition in sign language. The system aims to assist deaf and mute individuals communicate by recognizing signs and converting them to text or speech output. The document reviews various approaches to sign language recognition, including glove-based methods requiring sensors and vision-based methods using computer vision algorithms to recognize signs from video input without additional devices. Deep learning techniques are investigated for tasks like feature extraction, classification, and sequence recognition. The system is intended to help break down communication barriers for the deaf and hearing-impaired community.
A survey on Enhancements in Speech Recognition (IRJET Journal)
This document discusses enhancements in speech recognition and provides an overview of the history and basic model of speech recognition. It summarizes key enhancements researchers have made to improve speech recognition, especially in noisy environments. The basic model of speech recognition involves speech input, preprocessing using techniques like MFCCs, classification models like RNNs and HMMs, and output of a transcript. Researchers are working to develop robust speech recognition that can understand speech in any environment.
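The MFCC pre-processing mentioned above begins by framing the signal, windowing each frame, and taking its power spectrum; a mel filterbank and DCT would then complete the MFCCs. A NumPy sketch of that front end on a synthetic 440 Hz tone (frame length, hop size, and sample rate are illustrative values):

```python
import numpy as np

def frame_signal(signal, frame_len=256, hop=128):
    """Split audio into overlapping frames and apply a Hamming window,
    the first stage of an MFCC front end."""
    n = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    return signal[idx] * np.hamming(frame_len)

def power_spectrum(frames):
    """Magnitude-squared FFT of each frame: the spectrogram columns that
    a mel filterbank and DCT would reduce to MFCC coefficients."""
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

t = np.arange(4096) / 8000.0                 # 8 kHz sample rate
audio = np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone
spec = power_spectrum(frame_signal(audio))
```

At 8 kHz with 256-sample frames, each FFT bin spans 31.25 Hz, so the 440 Hz tone peaks near bin 14.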
IRJET - Mutecom using Tensorflow-Keras Model (IRJET Journal)
This document describes a system called MuteCom that uses machine learning and computer vision to enable communication between deaf or mute people and people who do not understand sign language. The system takes hand gestures as input from a webcam and uses a TensorFlow-Keras model trained on American Sign Language databases to recognize the gestures and output the predicted meaning in text and/or speech. The goal is to bridge the communication gap by allowing deaf/mute people to express themselves easily to others through a mobile application that recognizes their hand signs.
Handwritten Digit Recognition Using CNN (IRJET Journal)
This document discusses a research project on handwritten digit recognition using convolutional neural networks. The project aims to build a model that can recognize handwritten digits in images using the MNIST dataset to train a convolutional neural network. Specifically, it uses Keras and TensorFlow to create a 7-layer LeNet-5 CNN model on 70,000 MNIST images. The model is trained using stochastic gradient descent and backpropagation. Once trained, the model can be used to predict handwritten digits in new images. The document provides background on handwritten digit recognition and CNNs, describes the dataset and tools used, and outlines the methodology for building the recognition model.
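The layer sizes in a LeNet-5-style CNN like the one above follow directly from the conv/pool output-size formula floor((n - k + 2p)/s) + 1. A short trace of the spatial dimensions (feature-map counts omitted; MNIST digits are conventionally zero-padded from 28x28 to 32x32 for this architecture):

```python
def conv_out(n, k, s=1, p=0):
    """Spatial size after a conv or pool layer:
    floor((n - k + 2*p) / s) + 1."""
    return (n - k + 2 * p) // s + 1

n = 32                       # padded MNIST input
n = conv_out(n, k=5)         # C1: 5x5 conv        -> 28
n = conv_out(n, k=2, s=2)    # S2: 2x2 pool        -> 14
n = conv_out(n, k=5)         # C3: 5x5 conv        -> 10
n = conv_out(n, k=2, s=2)    # S4: 2x2 pool        -> 5
# The 5x5 maps are then flattened into the fully connected layers.
```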
This document summarizes a research paper that proposes using convolutional neural networks (CNNs) to detect criminal or suspicious human activity from live video surveillance feeds. It provides background on human activity analysis and how CNNs are well-suited for this task. The proposed system would take video input and trigger alerts for detected suspicious activity. The document reviews related work applying deep learning to human pose estimation and activity recognition. It outlines the proposed system architecture and algorithm, which would use a CNN trained on activity datasets to classify live video feeds in real-time. In conclusions, the document discusses potential applications and benefits of automated criminal activity detection systems.
Adopting progressed CNN for understanding hand gestures to native languages b... (IRJET Journal)
This document proposes adopting a convolutional neural network (CNN) to recognize static hand gestures in native languages like Telugu through both audio and text for easy understanding. The CNN architecture contains convolutional layers, ReLU activation layers, max pooling layers, a softmax output layer, and a fully connected classification layer. Test results show the CNN approach achieves around 94% accuracy in identifying gestures. The system is designed to improve communication between disabled and non-disabled individuals by translating gestures into native languages without requiring knowledge of other languages like English.
IRJET- Recognition of Handwritten Characters based on Deep Learning with Tens... (IRJET Journal)
This paper proposes a convolutional neural network model to recognize handwritten digits using the MNIST dataset. The model is built using TensorFlow and consists of convolutional, pooling and fully connected layers. The model is trained on 60,000 images and tested on 10,000 images, achieving 98% accuracy on the training set and classifying digits with low error of 0.03% on the test set. Previous methods for handwritten digit recognition are discussed and the CNN approach is shown to provide superior performance with faster training times compared to other models.
The document describes a proposed real-time sign language detection system using machine learning. The system would use images captured by a webcam to detect gestures in sign language and translate them to text in real-time. The proposed system would be built using the Single Shot Detection algorithm and TensorFlow Object Detection API. A dataset of images of 5 different signs would be created and labelled using LabelImg software. 13 images per sign would be used to train the model and 2 images per sign to test it. The system aims to help deaf people communicate without requiring an expensive human interpreter.
IRJET- Deep Learning Techniques for Object Detection (IRJET Journal)
The document discusses deep learning techniques for object detection in images. It provides an overview of convolutional neural networks (CNNs), the most popular deep learning approach for computer vision tasks. The document describes the basic architecture of CNNs, including convolutional layers, pooling layers, and fully connected layers. It then discusses several state-of-the-art CNN models for object detection, including ResNet, R-CNN, SSD, and YOLO. The document aims to help newcomers understand the key deep learning techniques and models used for object detection in computer vision.
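Detectors such as SSD and YOLO, discussed above, score candidate boxes with intersection-over-union (IoU), both to match predictions to ground truth during training and to suppress duplicate detections at inference. A self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)        # overlap / union
```

In non-maximum suppression, a detection is discarded when its IoU with a higher-scoring box of the same class exceeds a threshold (commonly around 0.5).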
Real Time Sign Language Translation Using Tensor Flow Object Detection (IRJET Journal)
This document describes a real-time sign language translation system developed using TensorFlow object detection. The system was able to detect Indian sign language alphabets in real-time with an average accuracy of 87.4% after training an SSD MobileNet v2 model on a dataset of 500 images containing signs for the English alphabet. Future work may focus on improving accuracy, reducing latency for real-time translation, and recognizing facial expressions in addition to hand gestures.
IRJET- Deep Learning Methods for Selecting Appropriate Cosmetic Products ... (IRJET Journal)
The document discusses using deep learning methods like deep neural networks to select appropriate cosmetic products for different skin types. It proposes a predictive system that takes input features of products like ingredients and suitability for skin types, and uses deep learning architectures to output the best composition of cosmetic products for a given skin type. Deep learning methods could ease the challenging task for the beauty industry of recommending personalized products compared to traditional recommendation systems.
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE... (IRJET Journal)
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE (IRJET Journal)
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models of varying heights (4, 8, and 12 stories) were analyzed in ETABS software under R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally as R factors increased from 1 to 5, and shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance, with less displacement and drift, and the displacement variations between different building heights were consistent across R factors.
Similar to SILINGO – SIGN LANGUAGE DETECTION/ RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS
This document presents a study on sign language recognition using computer vision techniques. It aims to develop a system that can identify characters and numbers in Indian Sign Language (ISL) using convolutional neural networks. ISL uses both hands to communicate unlike American Sign Language which uses a single hand. The system creates a dataset of ISL gestures and trains a CNN model on it. It then tests the ability of the trained model to accurately predict numbers from 1 to 10 and letters from A to Z when presented with new sign language inputs. The model achieves over 90% accuracy on test data, providing an effective way to translate ISL signs and bridge communication between deaf/mute and non-signing individuals.
Sign Language Detection using Action RecognitionIRJET Journal
This document presents a sign language detection system using action recognition. It aims to enhance current systems' performance in terms of response time and accuracy. The proposed system uses machine learning algorithms like LSTM neural networks trained on data sets to classify sign language gestures in real-time video. It segments hand regions, extracts features, and recognizes signs with 98% accuracy for 26 gestures. The system is intended to help deaf individuals communicate through translating signs to text in real-world applications.
SIGN LANGUAGE INTERFACE SYSTEM FOR HEARING IMPAIRED PEOPLEIRJET Journal
The document describes a proposed sign language interface system for hearing impaired people. The system aims to use machine learning algorithms like convolutional neural networks to classify hand gestures captured by a webcam into corresponding letters or words. The system would preprocess the images, extract features, then use a trained CNN model to predict the sign and output it as text and speech for better understanding by users. The goal is to help bridge communication between deaf/mute and normal people without requiring specialized gloves or sensors.
Sign Language Recognition using Deep LearningIRJET Journal
The document discusses using deep learning techniques like MobileNet V2 to develop a model for sign language recognition. It aims to classify sign language gestures to help communicate with deaf people. The model was trained on a dataset of sign language images and achieved an accuracy of 70% in recognizing letters, numbers, and gestures.
IRJET- Chest Abnormality Detection from X-Ray using Deep LearningIRJET Journal
This document summarizes a research paper that developed a deep learning system to detect abnormalities in chest x-rays. The system used a convolutional neural network (CNN) trained on 5000 chest x-ray images labeled as normal or abnormal. The CNN architecture included convolutional and pooling layers. Pre-processing techniques like histogram equalization and data augmentation like image flipping were used. The trained CNN could classify x-rays as normal, abnormal, or detect 14 specific diseases with 1% error rate. While large datasets improve accuracy, they also increase training time. The authors conclude CNN is effective for chest x-ray analysis but note room for improvements like including more diseases and optimized architectures.
Gesture Recognition System using Computer Vision – IRJET Journal
This document presents a gesture recognition system using computer vision and convolutional neural networks. It discusses developing classifiers to recognize hand gestures and facial expressions. A dataset of 87,000 images is used to train models to classify the 26 letters of the American Sign Language alphabet, plus additional classes for "space", "delete", and "nothing". The models are trained using transfer learning with MobileNet, achieving validation accuracies of over 90% for hand gesture classification, and a system is implemented that recognizes and translates gestures in real-time. The authors conclude that they developed robust CNN models for American Sign Language translation and facial expression recognition.
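MobileNet's suitability for transfer learning on datasets like this comes largely from its use of depthwise separable convolutions, which replace a standard convolution with a per-channel depthwise convolution followed by a 1×1 pointwise convolution. A small sketch of the parameter-count arithmetic (bias terms ignored; the layer sizes below are illustrative, not MobileNet's actual configuration):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depthwise (k x k per channel) plus 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 73728
sep = depthwise_separable_params(k, c_in, c_out)  # 8768
print(std, sep, round(std / sep, 1))
```

The roughly 8x reduction at this layer size is why MobileNet backbones train and run fast enough for real-time gesture recognition.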
Indian sign language recognition system – IRJET Journal
This document discusses the development of an Indian sign language recognition system using machine learning techniques. The system aims to identify different hand gestures used in Indian sign language for finger spelling. Convolutional neural network algorithms are applied to datasets for this purpose. The system is intended to help communication between deaf/mute and normal individuals by translating Indian sign language gestures into text in real-time. Pre-processing techniques like filtering and background subtraction are used before classification with CNN models. The CNN models are trained on datasets of sign language gestures to recognize characters, words, and phrases in Indian sign language.
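The background-subtraction pre-processing step mentioned above can be illustrated with a running-average background model (a minimal sketch, not the paper's pipeline; the update rate `alpha` and threshold values here are arbitrary):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Blend the current frame into the running-average background model."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Mark pixels that differ from the background by more than `thresh`."""
    return np.abs(frame.astype(float) - bg) > thresh

rng = np.random.default_rng(1)
bg = np.full((48, 64), 120.0)                 # grayscale background estimate
frame = bg + rng.normal(0, 2, size=bg.shape)  # camera noise only
frame[10:30, 20:40] += 80                     # a bright "hand" enters the scene
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)             # background adapts slowly
print(mask[15, 25], mask[0, 0])  # True False
```

The foreground mask would then be cropped and passed to the CNN classifier.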
This document describes a project to develop a hand gesture detection model using computer vision and machine learning. The model aims to recognize Indian sign language gestures from video input and output the corresponding text. The team has made progress training models to recognize alphabets with 80% accuracy and common phrases like "Hello" and "Welcome" with 85% accuracy. The final outcome will be a working gesture detection system to help communication for deaf or mute users.
IRJET- Survey on Text Error Detection using Deep Learning – IRJET Journal
This document summarizes a survey on using deep learning for text error detection. It begins with an introduction to natural language processing and deep learning, noting that models such as convolutional, recurrent, and recursive neural networks are effective for natural language tasks. It then discusses how these network families, along with generative models, apply to text error detection. It concludes that deep learning is well suited to modeling complex language data through multiple representation layers, but requires large labeled datasets for training.
COMPUTER VISION-ENABLED GESTURE RECOGNITION FOR DIFFERENTLY-ABLED PEOPLE – IRJET Journal
This document summarizes research on developing a computer vision and machine learning system for gesture recognition in sign language. The system aims to help deaf and mute individuals communicate by recognizing signs and converting them to text or speech output. The document reviews various approaches to sign language recognition, including glove-based methods that require sensors and vision-based methods that use computer vision algorithms to recognize signs from video input without additional devices. Deep learning techniques are investigated for tasks like feature extraction, classification, and sequence recognition. The system is intended to help break down communication barriers for the deaf and hearing-impaired community.
A survey on Enhancements in Speech Recognition – IRJET Journal
This document discusses enhancements in speech recognition and provides an overview of the history and basic model of speech recognition. It summarizes key enhancements researchers have made to improve speech recognition, especially in noisy environments. The basic model of speech recognition involves speech input, preprocessing using techniques like MFCCs, classification models like RNNs and HMMs, and output of a transcript. Researchers are working to develop robust speech recognition that can understand speech in any environment.
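The MFCC pre-processing mentioned above rests on the mel scale, which spaces filterbanks evenly in perceived pitch rather than in Hz. A short sketch using the common HTK-style conversion formula (the 10-filter bank over 0–8000 Hz below is an illustrative choice, not a standard configuration):

```python
import numpy as np

def hz_to_mel(f):
    """HTK-style mel scale commonly used when placing MFCC filterbanks."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Centre frequencies of a 10-filter mel bank over 0-8000 Hz:
# equally spaced in mel, hence logarithmically spaced in Hz.
edges_mel = np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 12)
centres_hz = mel_to_hz(edges_mel[1:-1])
print(np.round(centres_hz).astype(int))
```

The widening spacing at higher frequencies mirrors human hearing, which is part of why MFCC features remain robust front-ends for the classification models the survey covers.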
IRJET - Mutecom using Tensorflow-Keras Model – IRJET Journal
This document describes a system called MuteCom that uses machine learning and computer vision to enable communication between deaf or mute people and people who do not understand sign language. The system takes hand gestures as input from a webcam and uses a TensorFlow-Keras model trained on American Sign Language databases to recognize the gestures and output the predicted meaning in text and/or speech. The goal is to bridge the communication gap by allowing deaf/mute people to express themselves easily to others through a mobile application that recognizes their hand signs.
Handwritten Digit Recognition Using CNN – IRJET Journal
This document discusses a research project on handwritten digit recognition using convolutional neural networks. The project aims to build a model that can recognize handwritten digits in images using the MNIST dataset to train a convolutional neural network. Specifically, it uses Keras and TensorFlow to create a 7-layer LeNet-5 CNN model on 70,000 MNIST images. The model is trained using stochastic gradient descent and backpropagation. Once trained, the model can be used to predict handwritten digits in new images. The document provides background on handwritten digit recognition and CNNs, describes the dataset and tools used, and outlines the methodology for building the recognition model.
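The feature-map sizes in a LeNet-5-style CNN follow from the standard convolution/pooling output formula. A quick sketch checking the classic dimensions (in the original LeNet-5, MNIST's 28×28 digits are padded to 32×32):

```python
def conv_out(n, k, stride=1, pad=0):
    """Output width of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * pad - k) // stride + 1

n = 32                           # padded input width
n = conv_out(n, k=5)             # C1: 5x5 conv -> 28
n = conv_out(n, k=2, stride=2)   # S2: 2x2 pool -> 14
n = conv_out(n, k=5)             # C3: 5x5 conv -> 10
n = conv_out(n, k=2, stride=2)   # S4: 2x2 pool -> 5
print(n)  # 5
```

The resulting 5×5 maps are what the fully connected layers flatten before the final 10-way digit classification.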
This document summarizes a research paper that proposes using convolutional neural networks (CNNs) to detect criminal or suspicious human activity from live video surveillance feeds. It provides background on human activity analysis and how CNNs are well-suited for this task. The proposed system would take video input and trigger alerts for detected suspicious activity. The document reviews related work applying deep learning to human pose estimation and activity recognition. It outlines the proposed system architecture and algorithm, which would use a CNN trained on activity datasets to classify live video feeds in real-time. In its conclusions, the document discusses potential applications and benefits of automated criminal activity detection systems.
Adopting progressed CNN for understanding hand gestures to native languages b... – IRJET Journal
This document proposes adopting a convolutional neural network (CNN) to recognize static hand gestures in native languages like Telugu through both audio and text for easy understanding. The CNN architecture contains convolutional layers, ReLU activation layers, max pooling layers, a softmax output layer, and a fully connected classification layer. Test results show the CNN approach achieves around 94% accuracy in identifying gestures. The system is designed to improve communication between disabled and non-disabled individuals by translating gestures into native languages without requiring knowledge of other languages like English.
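The ReLU, max-pooling, and softmax stages named above are simple element-wise and window operations. A minimal NumPy sketch on a toy 4×4 feature map (illustrative only, not the paper's network):

```python
import numpy as np

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(0, x)

def maxpool2x2(x):
    """Non-overlapping 2x2 max pooling on an (H, W) feature map."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    """Normalize logits to a probability distribution (shifted for stability)."""
    e = np.exp(z - z.max())
    return e / e.sum()

fmap = np.array([[1., -2.,  3.,  0.],
                 [4.,  5., -1.,  2.],
                 [0.,  1.,  2., -3.],
                 [1.,  0., -2.,  4.]])
pooled = maxpool2x2(relu(fmap))
print(pooled)  # [[5. 3.] [1. 4.]]
probs = softmax(pooled.ravel())
print(np.argmax(probs))  # 0
```

In a full CNN these operations are stacked after each convolution, with the softmax applied only to the final fully connected layer's logits.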
IRJET- Recognition of Handwritten Characters based on Deep Learning with Tens... – IRJET Journal
This paper proposes a convolutional neural network model to recognize handwritten digits using the MNIST dataset. The model is built using TensorFlow and consists of convolutional, pooling, and fully connected layers. The model is trained on 60,000 images and tested on 10,000 images, achieving 98% accuracy on the training set and a reported error of 0.03% on the test set. Previous methods for handwritten digit recognition are discussed, and the CNN approach is shown to deliver superior performance with faster training than other models.
The document describes a proposed real-time sign language detection system using machine learning. The system would use images captured by a webcam to detect gestures in sign language and translate them to text in real-time. The proposed system would be built using the Single Shot Detection algorithm and TensorFlow Object Detection API. A dataset of images of 5 different signs would be created and labelled using LabelImg software. 13 images per sign would be used to train the model and 2 images per sign to test it. The system aims to help deaf people communicate without requiring an expensive human interpreter.
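LabelImg saves annotations in Pascal VOC XML by default; reading the labelled boxes back for training can be sketched with the standard library (the annotation below is a hypothetical example, not from the paper's dataset):

```python
import xml.etree.ElementTree as ET

# Hypothetical LabelImg output for one training image of the sign "hello".
VOC_XML = """
<annotation>
  <filename>hello_001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>hello</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>340</xmax><ymax>300</ymax></bndbox>
  </object>
</annotation>
"""

def parse_voc(xml_text):
    """Extract (label, (xmin, ymin, xmax, ymax)) pairs from a VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

print(parse_voc(VOC_XML))  # [('hello', (120, 80, 340, 300))]
```

These label/box pairs are what an SSD-style detector consumes during training.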
IRJET- Deep Learning Techniques for Object Detection – IRJET Journal
The document discusses deep learning techniques for object detection in images. It provides an overview of convolutional neural networks (CNNs), the most popular deep learning approach for computer vision tasks. The document describes the basic architecture of CNNs, including convolutional layers, pooling layers, and fully connected layers. It then discusses several state-of-the-art CNN models for object detection, including ResNet, R-CNN, SSD, and YOLO. The document aims to help newcomers understand the key deep learning techniques and models used for object detection in computer vision.
Real Time Sign Language Translation Using Tensor Flow Object Detection – IRJET Journal
This document describes a real-time sign language translation system developed using TensorFlow object detection. The system was able to detect Indian sign language alphabets in real-time with an average accuracy of 87.4% after training an SSD MobileNet v2 model on a dataset of 500 images containing signs for the English alphabet. Future work may focus on improving accuracy, reducing latency for real-time translation, and recognizing facial expressions in addition to hand gestures.
IRJET- Deep Learning Methods for Selecting Appropriate Cosmetic Products ... – IRJET Journal
The document discusses using deep learning methods, such as deep neural networks, to select appropriate cosmetic products for different skin types. It proposes a predictive system that takes product features like ingredients and skin-type suitability as input and uses deep learning architectures to output the best composition of cosmetic products for a given skin type. Compared with traditional recommendation systems, deep learning methods could ease the beauty industry's challenging task of recommending personalized products.