Hints of an outbreak are often detected through changed circumstances that favor it, such as warm weather contributing to epidermal outbreaks or a loss of sanitation leading to cholera. Detection typically relies on routine reports from healthcare facilities, yet secondary sources (attendance monitoring at workplaces and schools, the web, and the media) play a significant informational role, with more than 60% of initial outbreak reports coming from informal sources. Through the application of natural language processing and machine learning methods, a pipeline is developed that extracts the critical entities from an epidemiological article (country, confirmed case counts, disease, and case dates, which are treated as mandatory entities) and saves them in a database, thereby making data entry easier. The pipeline additionally scores articles for relevance so that the most relevant are shown first; the resulting web service, termed EventEpi, is integrated into Event Based Surveillance (EBS) workflows.
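A minimal sketch of this kind of key-entity extraction, assuming a spaCy pipeline; the disease keyword list, case-count regular expression, and example sentence are illustrative assumptions, not EventEpi's actual extraction rules:

```python
# Sketch: pull outbreak entities (country, date, case count, disease) from
# article text with spaCy. Requires `python -m spacy download en_core_web_sm`.
# The disease keywords and case-count regex are illustrative placeholders.
import re
import spacy

nlp = spacy.load("en_core_web_sm")
DISEASES = {"cholera", "measles", "ebola", "dengue"}  # illustrative keyword list

def extract_key_entities(text: str) -> dict:
    doc = nlp(text)
    entities = {"country": None, "disease": None, "date": None, "cases": None}
    for ent in doc.ents:
        if ent.label_ == "GPE" and entities["country"] is None:
            entities["country"] = ent.text          # first geopolitical entity
        elif ent.label_ == "DATE" and entities["date"] is None:
            entities["date"] = ent.text
    for token in doc:
        if token.lower_ in DISEASES:
            entities["disease"] = token.lower_
    match = re.search(r"(\d[\d,]*)\s+confirmed cases", text)
    if match:
        entities["cases"] = int(match.group(1).replace(",", ""))
    return entities

print(extract_key_entities(
    "Nigeria reported 1,200 confirmed cases of cholera on 3 May 2023."))
```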
A review of factors that impact the design of a glove-based wearable device (IAESIJAI)
Loss of the ability to speak or hear has psychological and social effects on the affected individuals due to the absence of appropriate interaction. Sign language is used by such individuals to assist them in communicating with each other. The paper aims to report details of various aspects of wearable healthcare technologies designed in recent years: the aim of each study, the types of technologies used, the accuracy of the system designed, data collection and storage methods, the technology used to accomplish the task, and the limitations and future research suggested. The aim of the study is to compare the differences between the papers; the technologies used are also compared, with the help of accuracy figures, to determine which wearable device performs better. The limitations and future research help in determining how the wearable devices can be improved. A systematic review was performed based on a search of the literature, and a total of 23 articles were retrieved. The articles cover the study and design of various wearable devices, mainly glove-based devices, to help users learn sign language.
A Deep Learning Model For Crime Surveillance In Phone Calls (vivatechijri)
Public surveillance systems are having a ground-breaking impact on securing lives and ensuring public safety through interventions that curb crime and improve the well-being of communities. Law enforcement agencies deploy tools like CCTV for video surveillance in banks, residential societies, shopping malls, markets, and roads, almost everywhere, to detect where an event such as robbery, rough driving, insolence, or murder occurred and who was responsible. Other surveillance tools, like phone tapping, are used by authorities to catch a suspected person who is threatening an innocent party or conspiring toward a violent act over a phone call. Analyzing both scenarios: in the case of CCTV, the crime (robbery, murder) has already occurred, and in the case of phone tapping, the authorities must have advance information about the suspected person who is going to commit a crime. To overcome this limitation, a system is proposed that automatically detects who is a suspect and what conspiracy is being planned over phone calls, and reports it to law enforcement authorities.
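A minimal sketch of the kind of two-stage pipeline such a system implies, transcribing call audio and then classifying the transcript; the training phrases, labels, and audio file are placeholders, and a classical classifier stands in here for the paper's deep learning model:

```python
# Sketch: transcribe a call recording, then flag suspicious transcripts.
# `speech_recognition` and scikit-learn are real libraries; the training
# phrases, labels, and "call.wav" path are illustrative placeholders.
import speech_recognition as sr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (a real system needs a labeled call corpus).
texts = ["meet me at the bank with the weapons",
         "happy birthday, see you at dinner",
         "transfer the money before the police arrive",
         "the meeting is moved to 3 pm"]
labels = [1, 0, 1, 0]  # 1 = suspicious, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

recognizer = sr.Recognizer()
with sr.AudioFile("call.wav") as source:            # hypothetical recording
    transcript = recognizer.recognize_google(recognizer.record(source))

if clf.predict([transcript])[0] == 1:
    print("ALERT: suspicious call content:", transcript)
```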
Abstract: The main communication method used by deaf people is sign language, but contrary to common belief, there is no single universal sign language: every country, or even regional group, uses its own set of signs. The use of sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs based on voice or text recognition, and sign language can be translated into text or sound based on image, video, and sensor input. The ultimate goal of this research is automatic interpretation of sign language; however, sign language is not a simple spelling of spoken language, so recognizing individual signs or letters of the alphabet (which has been a common approach) is not sufficient for its transcription and interpretation. This paper proposes an algorithm and method for an application that recognizes user-defined signs. The palm images of the right and left hand are loaded at runtime; these images are captured and stored in a directory. A technique called template matching is then used to find areas of an image that match (are similar to) a template image (patch), with the goal of detecting the highest-matching area. Two primary components are needed: A) the source image (I), the image in which we try to find a match, and B) the template image (T), the patch that is compared against the source image. In the proposed system, user-defined patterns achieve 60% accuracy while default patterns achieve 80% accuracy.
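A minimal sketch of this template-matching step with OpenCV; the image file names are placeholders:

```python
# Sketch: locate the best-matching region of a source image for a given
# template, as described above. File names are illustrative placeholders.
import cv2

source = cv2.imread("palm_source.png", cv2.IMREAD_GRAYSCALE)      # source image (I)
template = cv2.imread("sign_template.png", cv2.IMREAD_GRAYSCALE)  # template image (T)

# Normalized cross-correlation; values near 1.0 indicate a strong match.
result = cv2.matchTemplate(source, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = template.shape
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print(f"best match at {top_left}, score {max_val:.2f}")

annotated = cv2.cvtColor(source, cv2.COLOR_GRAY2BGR)
cv2.rectangle(annotated, top_left, bottom_right, (0, 255, 0), 2)
cv2.imwrite("match.png", annotated)
```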
This document discusses the development of an Indian Sign Language recognition system called SignReco. It begins with an abstract describing the challenges deaf individuals face when communicating with others without translation, and the benefits of a system that can recognize sign language. The paper then provides background on sign language and the goals of the proposed system: to classify and recognize Indian Sign Language in real time using convolutional neural networks (CNNs). A literature review covers prior work on sign language recognition systems, and the proposed system's workflow and its modules for model creation, language translation, and app development are described. It concludes that the survey helped in developing an effective approach for an Indian Sign Language recognition system using a CNN to improve accuracy.
Static-gesture word recognition in Bangla sign language using convolutional n... (TELKOMNIKA JOURNAL)
Sign language is the communication process of people with hearing impairments. For hearing-impaired communication in Bangladesh and parts of India, Bangla sign language (BSL) is the standard. While Bangla is one of the most widely spoken languages in the world, there is a scarcity of research in the field of BSL recognition. The few research works done so far have focused on detecting BSL alphabets; to the best of our knowledge, no work on detecting BSL words has been conducted until now, owing to the unavailability of a BSL word dataset. In this research, a small static-gesture word dataset has been developed, and a deep learning-based method has been introduced that can detect BSL static-gesture words from images. The dataset, "BSLword", contains 30 static-gesture BSL words with 1,200 images for training.
The training is done using a multi-layered convolutional neural network with the Adam optimizer. OpenCV is used for image processing and TensorFlow is used to build the deep learning models. This system can recognize BSL static-gesture words with 92.50% accuracy on the word dataset.
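A minimal sketch of a multi-layer CNN of the kind described, built with TensorFlow/Keras and compiled with the Adam optimizer; the input size and layer widths are assumptions, not the paper's exact architecture:

```python
# Sketch: a small CNN for 30 static-gesture word classes, trained with Adam.
# Input shape and layer sizes are assumptions, not the paper's architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(30, activation="softmax"),   # 30 BSL static-gesture words
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)
```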
This document describes a deep learning model called Deepsign that detects and recognizes Indian Sign Language (ISL) from video frames. Deepsign uses LSTM and GRU recurrent neural networks to recognize isolated signs from a custom dataset of 11 ISL signs called IISL2020. The best performing model architecture consisted of a single layer of LSTM followed by a GRU layer, which achieved around 97% accuracy on the IISL2020 dataset. The Deepsign model aims to help communication between deaf or mute individuals and those who do not understand sign language.
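A minimal sketch mirroring the single LSTM layer followed by a GRU layer described above, over sequences of per-frame features; the sequence length, feature size, and unit counts are assumptions:

```python
# Sketch: LSTM layer followed by a GRU layer over per-frame feature vectors,
# as in the architecture described above. Shapes and units are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SIGNS = 11          # IISL2020 contains 11 isolated ISL signs
model = models.Sequential([
    layers.Input(shape=(30, 126)),     # 30 frames x 126 keypoint features (assumed)
    layers.LSTM(64, return_sequences=True),
    layers.GRU(32),
    layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```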
A prior case study of natural language processing on different domain (IJECEIAES)
This document summarizes a prior case study on natural language processing across different domains. It begins with an introduction to natural language processing, describing how it is a branch of artificial intelligence that allows computers to understand human language. It then reviews several existing studies that applied natural language processing techniques such as named entity recognition and text mining to tasks like identifying technical knowledge in resumes, enhancing reading skills for deaf students, and predicting student performance. The document concludes by highlighting some of the challenges in developing new natural language processing models.
Live Sign Language Translation: A Survey (IRJET Journal)
The document discusses various approaches that have been used for live sign language translation. It reviews 20 research papers that used techniques such as convolutional neural networks, support vector machines, k-nearest neighbors, and LSTM networks to classify hand gestures and translate sign language into text, with accuracy ranging from 62.3% to 99.9%. Deep learning models using CNNs and LSTMs achieved the highest accuracy compared to traditional classifiers. The paper aims to help other researchers in the field understand past approaches and how sign language translation systems might be improved.
Sentiment analysis on Bangla conversation using machine learning approach (IJECEIAES)
Nowadays, online communication is more convenient and popular than face-to-face conversation, and people often prefer it over face-to-face meetings. An enormous number of people use online chatting systems to speak with their loved ones at any time, anywhere in the world, generating massive quantities of conversation every second. People's feelings during a conversation can be gleaned as useful information from these exchanges: sentiment analysis with natural language processing can analyze text and summarize the conclusions of any material. The use of conversations in customer service portals on e-commerce platforms, and in crime investigations based on digital evidence, is increasing the need for sentiment analysis of conversations. Languages such as English have well-developed libraries and resources for natural language processing, yet few studies have been conducted on Bangla. Extracting sentiment from Bangla conversational data is more challenging due to the language's grammatical complexity, which opens vast study opportunities. Accordingly, support vector machine, multinomial naïve Bayes, k-nearest neighbors, logistic regression, decision tree, and random forest classifiers were used, and the information extracted from the dataset was labeled as positive or negative.
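A minimal sketch comparing the six classifiers named above on TF-IDF features with scikit-learn; the repeated example sentences are placeholders for the labeled Bangla conversation dataset:

```python
# Sketch: compare the six classifiers named above on TF-IDF features.
# The example sentences are placeholders for the labeled Bangla dataset.
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

texts = ["a positive Bangla sentence", "a negative Bangla sentence"] * 20
labels = [1, 0] * 20

classifiers = {
    "SVM": SVC(), "MultinomialNB": MultinomialNB(),
    "KNN": KNeighborsClassifier(), "LogReg": LogisticRegression(),
    "DecisionTree": DecisionTreeClassifier(), "RandomForest": RandomForestClassifier(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, texts, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```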
Sign Language Recognition using Deep Learning (IRJET Journal)
The document discusses using deep learning techniques like MobileNet V2 to develop a model for sign language recognition. It aims to classify sign language gestures to help communicate with deaf people. The model was trained on a dataset of sign language images and achieved an accuracy of 70% in recognizing letters, numbers, and gestures.
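A minimal transfer-learning sketch with MobileNet V2 of the kind the summary describes; the input size and class count are assumptions:

```python
# Sketch: MobileNetV2 as a frozen feature extractor for sign-gesture classes.
# The class count and input size are assumptions, not the paper's setup.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(36, activation="softmax"),  # e.g. 26 letters + 10 digits (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```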
A Hybrid Method of Long Short-Term Memory and AutoEncoder Architectures for S... (AhmedAdilNafea)
Sarcasm detection is considered one of the most challenging tasks in sentiment analysis and opinion mining on social media, and sarcasm identification is therefore essential for sound public-opinion decisions. Some studies on sarcasm detection apply the standard word2vec model and show great performance with word-level analysis; however, once a sequence of terms is tackled, performance drops, because averaging the embeddings of every term in a sentence to obtain a general embedding discards the important embeddings of some terms. LSTM shows significant improvement in document embedding, but for classification an LSTM requires additional information in order to precisely classify a document as sarcastic or not. This study proposes two techniques, based on LSTM and an auto-encoder, for improving sarcasm detection. A benchmark dataset was used in the experiments, with several pre-processing operations applied, including stop-word removal, tokenization, and special-character removal; the LSTM is configured to produce the document embedding, and an auto-encoder acts as the classifier trained on the proposed LSTM's output. Results show that the proposed LSTM with auto-encoder outperformed the baseline, achieving an F-measure of 84% on the dataset. The main reason for this superiority is that the proposed auto-encoder takes the document embedding as input and attempts to output the same embedding vector, enabling the architecture to learn the embeddings that have a significant impact on sarcasm polarity.
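A minimal sketch of the two-stage idea, an LSTM producing a document embedding and an auto-encoder trained to reconstruct it; vocabulary size, sequence length, and dimensions are assumptions, not the paper's configuration:

```python
# Sketch: LSTM document embedding + auto-encoder over that embedding.
# Vocabulary size, sequence length, and dimensions are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

# 1) LSTM encoder: token ids -> fixed-size document embedding.
encoder = models.Sequential([
    layers.Input(shape=(100,)),                  # 100-token documents (assumed)
    layers.Embedding(input_dim=20000, output_dim=128),
    layers.LSTM(64),                             # document embedding
])

# 2) Auto-encoder trained to reconstruct the document embedding.
autoencoder = models.Sequential([
    layers.Input(shape=(64,)),
    layers.Dense(16, activation="relu"),         # bottleneck
    layers.Dense(64),                            # reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")
# Train the autoencoder on encoder(train_docs); at test time, a high
# reconstruction error signals an embedding unlike the training class.
```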
Sensing complicated meanings from unstructured data: a novel hybrid approach (IJECEIAES)
The majority of data on computers nowadays is unstructured data and unstructured text. The inherent ambiguity of natural language makes it incredibly difficult, but also highly profitable, to find hidden information or comprehend complex semantics in unstructured text. In this paper, we present a hybrid architecture combining natural language processing (NLP) and a convolutional neural network (CNN), called automated analysis of unstructured text using machine learning (AAUT-ML), for detecting complex semantics in unstructured data, enabling formal semantic knowledge to be extracted from an unstructured text corpus. AAUT-ML has been evaluated on three datasets, data mining (DM), operating systems (OS), and databases (DB), and compared with existing models, i.e., YAKE, term frequency-inverse document frequency (TF-IDF), and text-R. The results show better outcomes in terms of precision, recall, and macro-averaged F1-score. This work presents a novel method for identifying complex semantics in unstructured data.
IRJET- Hand Gesture based Recognition using CNN Methodology (IRJET Journal)
This document summarizes a research paper on hand gesture recognition using convolutional neural networks (CNN). The paper aims to develop a system to recognize American Sign Language (ASL) to help facilitate communication for deaf individuals. The system would capture hand gestures via video and translate them into text. The researchers conducted a literature review on previous work using CNNs and 3D convolutional models for sign language recognition. They intend to implement a 3D CNN model on ASL data and analyze the results to improve recognition accuracy for communicating via sign language.
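A minimal sketch of the 3D CNN the researchers intend to implement, operating on short gesture clips; the clip length, frame size, and class count are assumptions:

```python
# Sketch: a small 3D CNN over short video clips of ASL gestures.
# Clip length, frame size, and class count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(16, 64, 64, 3)),    # 16 frames of 64x64 RGB (assumed)
    layers.Conv3D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(26, activation="softmax"),  # e.g. ASL letters (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```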
Enhancing speaker verification accuracy with deep ensemble learning and inclu... (IJECEIAES)
Effective speaker identification is essential for achieving robust speaker recognition in real-world applications such as mobile devices, security, and entertainment while ensuring high accuracy. However, deep learning models trained on large datasets with diverse demographic and environmental factors may lead to increased misclassification and longer processing times. This study proposes incorporating ethnicity and gender information as critical parameters in a deep learning model to enhance accuracy. Two convolutional neural network (CNN) models classify gender and ethnicity, followed by a Siamese deep learning model trained with critical parameters and additional features for speaker verification. The proposed model was tested using the VoxCeleb 2 database, which includes over one million utterances from 6,112 celebrities. In an evaluation after 500 epochs, equal error rate (EER) and minimum decision cost function (minDCF) showed notable results, scoring 1.68 and 0.10, respectively. The proposed model outperforms existing deep learning models, demonstrating improved performance in terms of reduced misclassification errors and faster processing times.
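A minimal sketch of a Siamese verification head of the kind described, comparing two utterance embeddings; the embedding network, input shape, and absolute-difference distance are simplified assumptions, not the paper's architecture:

```python
# Sketch: a Siamese verification head over speaker embeddings.
# Input shape and layer sizes are simplified assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def embedding_net():
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),      # e.g. a spectrogram patch (assumed)
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128),                    # speaker embedding
    ])

base = embedding_net()
in_a, in_b = layers.Input((64, 64, 1)), layers.Input((64, 64, 1))
emb_a, emb_b = base(in_a), base(in_b)

# Distance between the two embeddings -> same/different speaker score.
dist = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
out = layers.Dense(1, activation="sigmoid")(dist)

siamese = models.Model([in_a, in_b], out)
siamese.compile(optimizer="adam", loss="binary_crossentropy")
```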
An Overview Of Natural Language Processing (Scott Faria)
This document provides an overview of natural language processing (NLP). It discusses the history and evolution of NLP. It describes common NLP tasks like part-of-speech tagging, parsing, named entity recognition, question answering, and text summarization. It also discusses applications of NLP like sentiment analysis, chatbots, and targeted advertising. Major approaches to NLP problems include supervised and unsupervised machine learning using neural networks. The document concludes that NLP has many applications and improving human-computer interaction through voice is an important area of future work.
An optimal unsupervised text data segmentation 3 (prj_publication)
This document summarizes a research paper that proposes an unsupervised text data segmentation model using genetic algorithms (OUTDSM) to improve clustering accuracy. The OUTDSM uses an encoding strategy, fitness function, and genetic operators to evolve optimal text clusters. Experimental results presented in the research paper demonstrate that OUTDSM can arrive at global optima and prevent stagnation at local optima due to its biologically diverse population. Key areas of related work discussed include text mining techniques, clustering methods, and prior uses of optimization algorithms like genetic algorithms for text mining tasks.
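A minimal sketch of a genetic algorithm over cluster-label encodings, illustrating the encoding/fitness/operator loop the summary names; the toy corpus, fitness function, and GA parameters are assumptions, not OUTDSM's exact design:

```python
# Sketch: a tiny genetic algorithm over cluster-label assignments.
# Encoding: one cluster label per document. Fitness: negative within-cluster
# scatter on TF-IDF vectors. Operators: selection, crossover, mutation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["cats purr", "dogs bark", "stocks fell", "markets rose", "kittens meow"]
X = TfidfVectorizer().fit_transform(docs).toarray()
K, POP, GENS = 2, 30, 100
rng = np.random.default_rng(0)

def fitness(labels):
    # Higher is better: negative sum of squared distances to cluster means.
    cost = 0.0
    for k in range(K):
        pts = X[labels == k]
        if len(pts):
            cost += ((pts - pts.mean(axis=0)) ** 2).sum()
    return -cost

pop = rng.integers(0, K, size=(POP, len(docs)))          # label-string encoding
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]        # selection: top half
    cut = rng.integers(1, len(docs))
    children = np.concatenate(                           # one-point crossover
        [parents[::-1][:, :cut], parents[:, cut:]], axis=1)
    mutate = rng.random(children.shape) < 0.05           # mutation
    children[mutate] = rng.integers(0, K, mutate.sum())
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(dict(zip(docs, best)))
```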
The Identification of Depressive Moods from Twitter Data by Using Convolution... (IRJET Journal)
The document presents research on identifying depressive moods on Twitter using a convolutional neural network model. Key points:
- A CNN model was developed using text and emoji representations from Twitter data to predict depressive moods.
- The model achieved 88% accuracy on the test data, demonstrating it could correctly classify sentiments most of the time.
- The model was more successful at identifying negative sentiments than positive ones, based on higher precision and recall rates for negative classes.
- Integrating both text and emoji data enhanced the model's performance, showing potential for more nuanced sentiment analysis.
Dialectal Arabic sentiment analysis based on tree-based pipeline optimizatio... (IJECEIAES)
This document summarizes a research paper that proposes using a tree-based pipeline optimization tool (TPOT) to improve sentiment classification of dialectal Arabic texts. The paper provides background on sentiment analysis and challenges in analyzing informal Arabic texts. It then discusses related work applying TPOT and AutoML techniques to optimize machine learning for various tasks. The proposed approach uses TPOT for sentiment analysis of three Arabic dialect datasets to automatically optimize hyperparameters and improve over similar prior work.
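A minimal sketch of TPOT's classic API applied to a stand-in feature matrix; the synthetic features here are a placeholder for TF-IDF features of the Arabic dialect datasets:

```python
# Sketch: TPOT searching model pipelines and hyperparameters automatically.
# The random matrix stands in for TF-IDF features of the dialect datasets.
import numpy as np
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 50))       # stand-in for TF-IDF features
y = rng.integers(0, 2, 200)     # stand-in sentiment labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
tpot = TPOTClassifier(generations=5, population_size=20,
                      cv=5, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)
print("held-out accuracy:", tpot.score(X_test, y_test))
tpot.export("best_sentiment_pipeline.py")  # emits the winning pipeline as code
```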
TUNING DARI SPEECH CLASSIFICATION EMPLOYING DEEP NEURAL NETWORKS (IJCI JOURNAL)
Recently, many researchers have focused on building and improving speech recognition systems to facilitate and enhance human-computer interaction. Today, Automatic Speech Recognition (ASR) systems have become important and common tools in everything from games to translation systems and robots. However, there is still a need for research on speech recognition for low-resource languages. This article deals with isolated-word recognition for the Dari language, using the Mel-frequency cepstral coefficients (MFCCs) feature extraction method and three different deep neural networks, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Multilayer Perceptron (MLP), plus two hybrid CNN-RNN models. We evaluate our models on our self-built isolated Dari word corpus, which consists of 1000 utterances of 20 short Dari terms. The study obtained an impressive average accuracy of 98.365%.
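A minimal sketch of the MFCC extraction step, assuming librosa; the audio path, sampling rate, and padding length are placeholders:

```python
# Sketch: MFCC feature extraction for an isolated-word utterance, padded or
# truncated to a fixed frame count for a CNN/RNN. Path/settings are placeholders.
import librosa
import numpy as np

signal, sr = librosa.load("dari_word.wav", sr=16000)   # hypothetical utterance
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
print(mfcc.shape)            # (13, n_frames)

target_frames = 100          # fixed input width for the network (assumed)
if mfcc.shape[1] < target_frames:
    mfcc = np.pad(mfcc, ((0, 0), (0, target_frames - mfcc.shape[1])))
else:
    mfcc = mfcc[:, :target_frames]
```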
This paper proposes a system to enable communication between hearing impaired individuals and others using sign language conversion. The system uses motion capture to detect hand gestures and convert them to text. Natural language processing is then used to match the text to predefined sign language datasets. For the matched dataset, a voiceover is provided. The system also allows converting voice to text and displaying corresponding sign language images. This two-way translation aims to serve as an interpreter between deaf and hearing users to facilitate basic communication.
Sentiment analysis is context-based mining of text, which extracts and identifies subjective information from a given text or sentence. The main concept here is extracting the sentiment of text using machine learning techniques such as LSTM (long short-term memory). This text classification method analyses incoming text and determines whether the underlying emotion is positive or negative, along with a probability for that judgment: the probability depicts the strength of the sentiment, so a probability close to 0 implies a strongly negative sentiment, and a probability close to 1 implies a strongly positive one. A web application is created to deploy this model using Flask, a Python-based micro framework. Other methods, such as RNN and CNN, proved inefficient compared to LSTM. Dirash A R | Dr. S K Manju Bargavi, "LSTM Based Sentiment Analysis", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-4, June 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42345.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-processing/42345/lstm-based-sentiment-analysis/dirash-a-r
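A minimal sketch of serving a trained LSTM sentiment model behind a Flask endpoint, as the abstract describes; the model file, saved tokenizer, and sequence length are hypothetical artifacts of a prior training run:

```python
# Sketch: a Flask endpoint serving a trained Keras LSTM sentiment model.
# "sentiment_lstm.h5" and "tokenizer.pkl" are hypothetical training artifacts.
import pickle
from flask import Flask, request, jsonify
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences

app = Flask(__name__)
model = tf.keras.models.load_model("sentiment_lstm.h5")
with open("tokenizer.pkl", "rb") as f:        # tokenizer saved at training time
    tokenizer = pickle.load(f)

def preprocess(text: str):
    seq = tokenizer.texts_to_sequences([text])
    return pad_sequences(seq, maxlen=100)     # same maxlen as training (assumed)

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json()["text"]
    prob = float(model.predict(preprocess(text))[0][0])
    label = "positive" if prob >= 0.5 else "negative"
    return jsonify({"label": label, "probability": prob})

if __name__ == "__main__":
    app.run(port=5000)
```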
Adopting progressed CNN for understanding hand gestures to native languages b... (IRJET Journal)
This document proposes adopting a convolutional neural network (CNN) to recognize static hand gestures in native languages like Telugu through both audio and text for easy understanding. The CNN architecture contains convolutional layers, ReLU activation layers, max pooling layers, a softmax output layer, and a fully connected classification layer. Test results show the CNN approach achieves around 94% accuracy in identifying gestures. The system is designed to improve communication between disabled and non-disabled individuals by translating gestures into native languages without requiring knowledge of other languages like English.
Pakistan sign language to Urdu translator using Kinect (CSITiaesprime)
The lack of a standardized sign language, and the inability to communicate with the hearing community through sign language, are the two major issues confronting Pakistan's deaf community. In this research, we have proposed an approach to help eradicate one of these issues: using the proposed framework, the deaf community can communicate with hearing people. The purpose of this work is to reduce the struggles of hearing-impaired people in Pakistan. A Kinect-based Pakistan sign language (PSL) to Urdu translator is being developed to accomplish this. The system's dynamic sign language segment works in three phases: acquiring key points from the dataset, training a long short-term memory (LSTM) model, and making real-time predictions on sequences through OpenCV integrated with the Kinect device. The system's static sign language segment works in three phases: acquiring an image-based dataset, training a model garden, and making real-time predictions using OpenCV integrated with the Kinect device. It also allows the hearing user to input Urdu audio through the Kinect microphone. The proposed sign language translator can detect and predict the PSL performed in front of the Kinect device and produce translations in Urdu.
A Deep Learning Model to Predict Congressional Roll Call Votes from Legislati... (mlaij)
This document describes a deep learning model called the Predict Text Classification Network (PTCN) that was developed to predict the outcome (pass/fail) of congressional roll call votes based solely on the text of legislation. The PTCN uses a hybrid convolutional and long short-term memory neural network architecture to analyze legislative texts and predict whether a vote will pass or fail. The model was tested on legislative texts from 2000-2019 and achieved an average prediction accuracy of 67.32% using 10-fold cross-validation, suggesting it can recognize patterns in language that correlate with congressional voting behaviors.
KANNADA SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING (IRJET Journal)
The document describes a proposed system for Kannada sign language recognition using machine learning. It begins with an abstract discussing sign language recognition and techniques like SIFT and LDA. It then discusses the existing problems with sign language recognition systems and the objectives of the proposed system. The proposed system uses techniques like SIFT to extract features from images and LDA for dimensionality reduction before classifying images with methods like SVM, KNN, and minimum distance. It shows results of classifying Kannada sign language letters with up to 90% accuracy. It concludes that the system helps communication for deaf people and future work will focus on recognizing signs with motion and converting multiple signs to text words and sentences.
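A minimal sketch of the SIFT-to-LDA-to-classifier pipeline the summary outlines; averaging each image's SIFT descriptors into one vector is a simplification, and `load_dataset` is a hypothetical loader, not the paper's code:

```python
# Sketch: SIFT features -> LDA dimensionality reduction -> SVM classifier.
# Averaging descriptors is a simplification; load_dataset() is hypothetical.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def image_feature(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc.mean(axis=0)          # one 128-d vector per image

paths, labels = load_dataset()        # hypothetical: image paths + letter labels
X = np.stack([image_feature(p) for p in paths])

lda = LinearDiscriminantAnalysis(n_components=10)   # dimensionality reduction
X_reduced = lda.fit_transform(X, labels)

clf = SVC().fit(X_reduced, labels)    # KNN or minimum-distance would slot in here
```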
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS (ijseajournal)
Abstract: In this paper we propose a novel method to cluster categorical data while retaining its context. Typically, clustering is performed on numerical data; however, it is often useful to cluster categorical data as well, especially when dealing with data in real-world contexts. Several methods exist that can cluster categorical data, but our approach is unique in that we use recent text-processing and machine-learning advancements such as GloVe and t-SNE to develop a context-aware clustering approach using pre-trained word embeddings. We encode words or categorical data into numerical, context-aware vectors and use them to cluster the data points with common clustering algorithms like k-means.
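A minimal sketch of the approach, assuming gensim's downloadable GloVe vectors and scikit-learn's k-means; the category list is an illustrative placeholder:

```python
# Sketch: cluster categorical values via pre-trained GloVe vectors + k-means.
# "glove-wiki-gigaword-50" is gensim's downloadable 50-d GloVe model; the
# category list is an illustrative placeholder.
import gensim.downloader as api
import numpy as np
from sklearn.cluster import KMeans

glove = api.load("glove-wiki-gigaword-50")

categories = ["dog", "cat", "hamster", "car", "truck", "bicycle"]
vectors = np.array([glove[word] for word in categories])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for word, label in zip(categories, kmeans.labels_):
    print(word, "-> cluster", label)
```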
Introduction: e-waste – definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal and treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution – global scenario of e-waste – e-waste in India – case studies.
ACEP Magazine edition 4th launched on 05.06.2024 (Rahul)
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Batteries: Introduction – types of batteries – discharging and charging of a battery – characteristics of a battery – battery rating – various tests on a battery – primary battery: silver button cell – secondary battery: Ni-Cd battery – modern battery: lithium-ion battery – maintenance of batteries – choice of batteries for electric vehicle applications.
Fuel Cells: Introduction – importance and classification of fuel cells – description, principle, components, and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell, and direct methanol fuel cells.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. Our experiments show that the CNN-LSTM method substantially outperforms other deep learning classification algorithms at detecting smart grid intrusions. In addition, the proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy of 99.50%.
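A hedged sketch of such a hybrid CNN-LSTM classifier in Keras. The layer sizes, window length, feature count, and binary output are illustrative assumptions; the paper's exact architecture and DNP3 preprocessing are not reproduced here.

```python
from tensorflow import keras
from tensorflow.keras import layers

TIMESTEPS, FEATURES, CLASSES = 50, 30, 2  # assumed flow-window shape

model = keras.Sequential([
    keras.Input(shape=(TIMESTEPS, FEATURES)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),   # local patterns
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.LSTM(64),                                       # temporal dependencies
    layers.Dropout(0.3),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```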
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
The International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, and Machine Learning. The conference seeks substantial contributions across all key domains of these fields, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control scheme for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used in wind power conversion systems. First, a doubly fed induction generator model is constructed. A control law is then formulated to govern the flow of energy between the stator of the DFIG and the grid using three types of controllers: proportional-integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). The controllers are compared in terms of power reference tracking, response to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter variations. MATLAB/Simulink was used to conduct the simulations. Multiple simulations show very satisfying results and demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
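As a rough illustration of the two simplest control laws compared there, the following toy sketch applies a PI law and a first-order sliding mode law to a scalar first-order plant rather than a full DFIG model; the plant constants, gains, and reference are assumptions made up for the example, not the paper's Simulink setup.

```python
import numpy as np

a, b, dt = 1.0, 2.0, 1e-3   # toy plant dx/dt = -a*x + b*u
r = 1.0                     # reference to track

def simulate(control, steps=5000):
    x, integ, hist = 0.0, 0.0, []
    for _ in range(steps):
        e = r - x
        integ += e * dt
        u = control(e, integ)
        x += (-a * x + b * u) * dt   # Euler step of the plant
        hist.append(x)
    return np.array(hist)

pi  = lambda e, integ: 5.0 * e + 10.0 * integ   # proportional-integral law
smc = lambda e, integ: 2.0 * np.sign(e)          # sliding mode: u = K*sign(s)

print("PI  final error:", abs(r - simulate(pi)[-1]))
print("SMC final error:", abs(r - simulate(smc)[-1]))
```

The sliding mode law drives the error to the surface s = e = 0 and then chatters around it, which is the behavior higher-order methods such as SOSMC are designed to smooth.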
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing transmission time into many segments, each of very short duration. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
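A small Python sketch of this multiplexer/demultiplexer pairing for synchronous TDM (the three streams and their contents are made up for illustration):

```python
def multiplex(streams):
    """Build frames: slot j of frame i carries sample i of stream j."""
    return list(zip(*streams))

def demultiplex(frames, n_streams):
    """Recover stream j by reading slot j of every frame."""
    return [[frame[j] for frame in frames] for j in range(n_streams)]

voice = ["v0", "v1", "v2"]
video = ["d0", "d1", "d2"]
data  = ["x0", "x1", "x2"]

frames = multiplex([voice, video, data])
print(frames)                  # [('v0','d0','x0'), ('v1','d1','x1'), ...]
print(demultiplex(frames, 3))  # the original three streams
```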
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM allows multiple signals to share a single transmission medium, so the channel's capacity is not left idle between transmissions.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
NATURAL LANGUAGE PROCESSING
International Journal on Cybernetics & Informatics (IJCI), Vol. 12, No. 2, April 2023, pp. 57-61. David C. Wyld et al. (Eds): DBML, CITE, IOTBC, EDUPAN, NLPAI - 2023. DOI: 10.5121/ijci.2023.120205
NATURAL LANGUAGE PROCESSING
Arvind Chandrasekaran
Graduate Student, Colorado Technical University, Colorado, USA
ABSTRACT
Hints of an outbreak can be detected through changed circumstances known to favor outbreaks, such as warm weather contributing to epidemic outbreaks or a loss of sanitation leading to cholera outbreaks. While outbreak detection typically relies on routine reports from healthcare facilities, secondary data such as attendance monitoring at workplaces and schools, the web, and the media serve as significant information sources, with more than 60% of initial outbreak reports coming from informal sources. Using natural language processing methods and machine learning technologies, a pipeline is developed that extracts critical entities such as country, confirmed case counts, disease, and case dates, which are mandatory entities in an epidemiological article, and saves them in a database, thereby easing data entry. A further advantage is that articles with relevant scores are shown first; the resulting web service, termed EventEpi, is integrated into Event-Based Surveillance (EBS) workflows.
KEYWORDS
Event-based surveillance; Contribution; Methods; Information extraction; Entity filtering; Scoring;
Evaluation.
1. INTRODUCTION
The goal of public health surveillance is the timely and effective detection and containment of disease outbreaks, minimizing the health consequences and the public health burden. Data acquisition under traditional reporting systems is a passive process whose routine is established and followed by public health institutes and legislators (Abbood; Busche; Ghozzi et al., Nov 2020). This process is termed indicator-based surveillance. Outbreak hints can be detected through changed circumstances known to favor outbreaks, for instance warm weather contributing to more epidemic outbreaks or a loss of sanitation leading to cholera outbreaks (Abbood; Busche; Ghozzi et al., Nov 2020). Filtering massive amounts of data requires proper criteria for which information to consider and which to discard. The task is complex because the filter must not miss important events while still excluding, with confidence, what is irrelevant (Abbood; Busche; Ghozzi et al., Nov 2020). Without such filters, performing EBS on extensive data is not feasible. Natural language processing algorithms are well suited to capturing informal sources and to guiding information filtering and structuring systematically and automatically.
2. SIGN LANGUAGE
Sign language helps create an inclusive society in which people with disabilities have equal chances for development and growth and can live safe, productive, and dignified lives. Sign language is often not included in teaching materials, and the parents of hard-of-hearing children are unaware of its value for bridging communication gaps (Sharma; Tulsian; Verma et al., 2022). The body responsible for Indian Sign Language encourages hard-of-hearing students in primary, intermediate, and higher education to use Indian Sign Language as a form of instruction. It trains and educates diverse groups such as teachers, government officials,
professionals, the public, and community teachers (Sharma; Tulsian; Verma et al., 2022), and promotes sign language in collaboration with hard-of-hearing groups and institutions for people with disabilities. HamNoSys is to sign language what the International Phonetic Alphabet is to spoken language: a notation intended to transcribe signs consistently for any sign language. Rather than reusing a well-known alphabet, its parameters use newly developed glyphs from which a symbol's meaning can be deduced. HamNoSys covers gestures, pointers, and handshapes. Sign language uses hand gestures to express words or sentences, conveying their implications in depth, and these are described under three broad categories: one-handed, non-manual, and two-handed.
3. PROPOSED SYSTEM
The proposed system uses natural language processing capabilities to recognize, interpret, and express dynamic hand gestures per the ASL standards for ten languages (Deshpande; Halgekar; Kulkarni et al., 2021). The system can adjust to dynamic gestures so that letters are recognized properly, preventing the same letter from being recognized as different letters (Deshpande; Halgekar; Kulkarni et al., 2021). It can recognize dynamic hand movements, accurately associate static hand gestures within a sentence for conversion to other languages, and detect the sentiment behind them. The conceptual model for the ER diagram is shown below.
Figure 1. ER Diagram for the system
The entity relationship diagram above is a data modeling technique that graphically illustrates the system's entities: users with disabilities, users without disabilities, and the laptop webcam.
3.1. Sequence Diagram
The sequence diagram, which arranges object interactions as a time sequence, is shown below. It depicts the classes and objects involved in the scenario (Deshpande; Halgekar; Kulkarni et al., 2021) and the message sequences exchanged between the objects needed to carry out the scenario's functionality.
Figure 2. Sequence Diagram for User Module
3.2. Implementation
A convolutional neural network is developed for recognizing ASL gestures using pre-existing American Sign Language datasets. Image frames captured from hand gestures are processed for prediction (Deshpande; Halgekar; Kulkarni et al., 2021). An algorithm is applied to translate the predicted data into speech, and VADER sentiment analysis is applied to the output text to find the sentiment conveyed by the statement, in percentage format.
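A minimal sketch of this predict-then-analyze step, assuming a Keras CNN trained on ASL letter images and the vaderSentiment package; the model file, input size, and label set are hypothetical placeholders, not artifacts of the cited system.

```python
import cv2
import numpy as np
from tensorflow import keras
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed label set
model = keras.models.load_model("asl_cnn.h5")             # hypothetical file

def predict_letter(frame):
    """Classify one BGR webcam frame as an ASL letter."""
    img = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    return LABELS[int(np.argmax(probs))]

# After predicted letters are assembled into a sentence, score its sentiment.
sentence = "hello i am happy to meet you"
scores = SentimentIntensityAnalyzer().polarity_scores(sentence)
print({k: f"{v:.0%}" for k, v in scores.items() if k != "compound"})
```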
4. TEXT ALIGNMENT
In natural language processing, text alignment is the process of determining the minimal text elements that match between two languages in a translated text set. Extra-linguistic factors such as the text data's provenance, topic, and technical structuring influence text alignment (Manjula; Shivamurthaiah et al., 2021). If the texts are close translations of each other, sentences are the smallest alignment units (AUs) of the text. For translations of introductory textbooks, where the organization comprises whole chapters, the parallel text units would be proportionally huge (Manjula; Shivamurthaiah et al., 2021). The initial alignment challenge lies in developing parallel corpora linked with the related documents.
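As a toy illustration of sentence-level alignment under the close-translation assumption, the following sketch pairs sentences in order and flags pairs whose length ratio looks implausible. This is a crude, length-based heuristic in the spirit of Gale-Church, not the cited paper's method; the example sentences are made up.

```python
def align(src_sentences, tgt_sentences, max_ratio=1.8):
    """Pair sentences one-to-one; mark pairs with plausible length ratios."""
    pairs = []
    for s, t in zip(src_sentences, tgt_sentences):
        ratio = max(len(s), len(t)) / max(1, min(len(s), len(t)))
        pairs.append((s, t, ratio <= max_ratio))
    return pairs

src = ["The weather is warm.", "Cholera spreads without sanitation."]
tgt = ["Il fait chaud.", "Le cholera se propage sans assainissement."]
for s, t, ok in align(src, tgt):
    print("OK " if ok else "?? ", s, "<->", t)
```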
5. DEEP LEARNING
Through advances in computer technology, natural language processing has evolved into an interdisciplinary research field covering wide areas such as human-computer interaction, artificial intelligence, quantitative linguistics, corpus linguistics, and machine translation (Feng; Shi et al., May 2021).
NLP modeling research has generally passed through three phases: empiricism, rationalism, and deep learning. Deep learning is a robust development in NLP and has breathed new life into both AI and quantitative language studies (Feng; Shi et al., May 2021). Deep Learning in Natural Language Processing is an interesting and timely publication, as it reports deep learning studies across independent areas of NLP such as conversational language understanding, lexical parsing, and analysis.
5.1. Dataset Extraction
Before training a deep learning model, the dataset is established according to the subject matter, and a customized corpus is generated to fit the research needs (Andrade-Segarra; León-Paredes, 2021). Twitter's API is used to generate the corpus: a script extracts, processes, and analyzes tweets, and the result is used subsequently for model training.
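A hedged sketch of corpus extraction with Twitter's v2 recent-search endpoint via tweepy. The bearer token, query terms, and output file name are placeholder assumptions; the cited paper does not publish its script, so this only illustrates the idea.

```python
import csv
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # hypothetical credential

resp = client.search_recent_tweets(
    query="cyberbullying lang:es -is:retweet",  # assumed query terms
    max_results=100,
    tweet_fields=["created_at"],
)

with open("corpus.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["created_at", "text"])
    for tweet in resp.data or []:
        writer.writerow([tweet.created_at, tweet.text])
```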
5.2. Training
The proposed system is developed from two methodologies: Recurrent Neural Network (RNN) + Long Short-Term Memory (LSTM), and Bidirectional Encoder Representations from Transformers (BERT) (Andrade-Segarra; León-Paredes, 2021). These are the deep learning models with the highest accuracy on prediction and classification tasks in the sentiment analysis area. Sequence classification is a predictive modeling problem in which the input is a sequence over time or space (Andrade-Segarra; León-Paredes, 2021); the goal is to predict the category to assign to the sequence. The classification model starts with the development of a block for text preprocessing, so that deep cleaning is applied to the original content. Dataset preprocessing is performed using a stopwords technique and by removing special characters and punctuation symbols irrelevant to the sentence meaning (Andrade-Segarra; León-Paredes, 2021). BERT has a library, the BERT Tokenizer, that performs the tokenization and preprocessing tasks. It splits the text into tokens and converts the tokens into vocabulary indexes (Andrade-Segarra; León-Paredes, 2021). Sentences are padded or truncated to equal length, and an attention mask is created; this last parameter is a required component for model training.
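A minimal sketch of the preprocessing just described, assuming the Hugging Face transformers and NLTK packages; the checkpoint, tweet language, and maximum length are illustrative choices, not taken from the cited paper.

```python
import re
import nltk
from nltk.corpus import stopwords
from transformers import BertTokenizer

nltk.download("stopwords", quiet=True)
stop = set(stopwords.words("spanish"))  # assumed language of the tweets
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

def clean(text):
    """Deep cleaning: drop punctuation/special characters and stopwords."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(w for w in text.split() if w not in stop)

enc = tokenizer(
    clean("¡Este comentario es un ejemplo!"),
    padding="max_length",   # equalize sentence lengths
    truncation=True,
    max_length=64,
    return_tensors="pt",    # yields input_ids and attention_mask
)
print(enc["input_ids"].shape, enc["attention_mask"].shape)
```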
6. CONCLUSION
Two NLP deep learning models are used to detect cyberbullying on the Twitter social network. The proposed models achieve a 5% accuracy improvement. The RNN+LSTM model is the balanced option, with an accuracy of 91.82% and an execution time of 78 minutes. RNN+LSTM is not the most effective option for cyberbullying classification, because BERT has better criteria for detecting harassment: BERT's performance is improved by 20% compared with RNN+LSTM. BERT is a pre-trained model, and when an analyzed account does not explicitly contain offensive comments, predicting harassment remains a challenge for the models.
REFERENCES
[1] Abbood, Auss; Ullrich, Alexander; Busche, Rüdiger; Ghozzi, Stéphane. "EventEpi: A natural language processing framework for event-based surveillance." PLoS Computational Biology, San Francisco, Vol. 16, Iss. 11 (Nov 2020): e1008277. DOI: 10.1371/journal.pcbi.1008277. https://www.proquest.com/compscijour/docview/2479465271/4B2F377419C1445DPQ/1?accountid=144789
[2] Andrade-Segarra, Diego A.; León-Paredes, Gabriel A. "Deep Learning-based Natural Language Processing Methods Comparison for Presumptive Detection of Cyberbullying in Social Networks." International Journal of Advanced Computer Science and Applications, West Yorkshire, Vol. 12, Iss. 5 (2021). DOI: 10.14569/IJACSA.2021.0120592. https://www.proquest.com/compscijour/docview/2655118531/BBC286109F22434CPQ/9?accountid=144789
[3] Feng, Haoda; Shi, Feng. "Deep Learning in Natural Language Processing." Natural Language Engineering, Cambridge, Vol. 27, Iss. 3 (May 2021): 373-375. DOI: 10.1017/S1351324919000597. https://www.proquest.com/compscijour/docview/2534572957/BBC286109F22434CPQ/6?accountid=144789
[4] Kulkarni, Aishwarya; Halgekar, Pranav; Deshpande, Girish R.; Rao, Anagha; Dinni, Aishwarya. "Dynamic sign language translating system using deep learning and natural language processing." Turkish Journal of Computer and Mathematics Education, Trabzon, Vol. 12, Iss. 10 (2021): 129-137. https://www.proquest.com/compscijour/docview/2623612530/EF7302645A024D66PQ/4?accountid=144789
[5] Manjula, S.; Shivamurthaiah, M. "Identification of Languages from the Text Document Using Natural Language Processing System." Turkish Journal of Computer and Mathematics Education, Trabzon, Vol. 12, Iss. 13 (2021): 2465-2472. https://www.proquest.com/compscijour/docview/2623930195/79824294D4E347C4PQ/5?accountid=144789
[6] Sharma, Purushottam; Tulsian, Devesh; Verma, Chaman; Sharma, Pratibha; Nancy, Nancy. "Translating Speech to Indian Sign Language Using Natural Language Processing." Future Internet, Basel, Vol. 14, Iss. 9 (2022): 253. DOI: 10.3390/fi14090253. https://www.proquest.com/compscijour/docview/2716521607/AA2ACB9B4AB64EF5PQ/2?accountid=144789
AUTHOR
Arvind Chandrasekaran is from Texas, USA, and has been working for PPG Healthcare for the past six years. He has also been pursuing a Doctorate in Computer Science (Big Data Analytics) at Colorado Technical University for the past two years, having completed 54 credits, and is looking to complete the degree this year.