The lack of a standardized sign language and the inability to communicate with the hearing community through sign language are the two major issues confronting Pakistan's deaf community. In this research, we propose an approach that addresses the second of these issues: using the proposed framework, the deaf community can communicate with hearing people. The purpose of this work is to reduce the struggles of hearing-impaired people in Pakistan. To accomplish this, a Kinect-based Pakistan Sign Language (PSL) to Urdu translator is being developed. The system's dynamic sign language segment works in three phases: acquiring key points from the dataset, training a long short-term memory (LSTM) model, and making real-time predictions on sequences through OpenCV integrated with the Kinect device. The system's static sign language segment likewise works in three phases: acquiring an image-based dataset, training a Model Garden model, and making real-time predictions using OpenCV integrated with the Kinect device. The system also allows the hearing user to speak Urdu into the Kinect microphone. The proposed sign language translator can detect and predict the PSL performed in front of the Kinect device and produce translations in Urdu.
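The dynamic segment described above feeds per-frame keypoint vectors into an LSTM. As a rough sketch of what one LSTM time step over such a sequence does (the dimensions, weight layout, and names below are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step over a keypoint vector x.
    W, U, b hold the stacked input/forget/output/candidate parameters."""
    z = W @ x + U @ h_prev + b          # shape (4H,)
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])                 # input gate
    f = sigmoid(z[H:2*H])               # forget gate
    o = sigmoid(z[2*H:3*H])             # output gate
    g = np.tanh(z[3*H:4*H])             # candidate cell state
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

# tiny demo: 6-dim "keypoint" frames, hidden size 4, 10-frame sequence
rng = np.random.default_rng(0)
D, H = 6, 4
W = rng.normal(size=(4*H, D)); U = rng.normal(size=(4*H, H)); b = np.zeros(4*H)
h = np.zeros(H); c = np.zeros(H)
for x in rng.normal(size=(10, D)):
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

In a real pipeline the final hidden state (or the full sequence of states) would be passed to a softmax layer over the PSL sign vocabulary; frameworks such as TensorFlow or PyTorch provide this cell as a built-in layer.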
Abstract: Sign language is the main communication method used by deaf people, but contrary to common belief, there is no universal sign language: every country, or even regional group, uses its own set of signs. Sign language in digital systems can enhance communication in both directions: animated avatars can synthesize signs from voice or text recognition, and sign language can be translated into text or sound from image, video, and sensor input. The ultimate goal of this research is the transcription and automatic interpretation of sign language; since sign language is not a simple spelling-out of spoken language, recognizing individual signs or letters of the alphabet (which has been a common approach) is not sufficient. This paper proposes an algorithm and method for an application that recognizes various user-defined signs. Palm images of the right and left hand are loaded at runtime: these images are first captured and stored in a directory, and then template matching is used to find areas of an image that match (are similar to) a template image (patch). The goal is to detect the highest-matching area, which requires two primary components: A) the source image (I), the image in which we try to find a match; and B) the template image (T), the patch that is compared against the source. In the proposed system, user-defined patterns achieve 60% accuracy while default patterns achieve 80% accuracy.
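The template-matching step described above can be sketched directly. OpenCV's `cv2.matchTemplate` implements it efficiently for several similarity measures; the minimal NumPy version below uses sum-of-squared-differences (SSD) and is an illustrative sketch, not the paper's code:

```python
import numpy as np

def match_template_ssd(image, template):
    """Slide the template over the source image; return the (row, col)
    of the best (lowest sum-of-squared-differences) match."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r+th, c:c+tw]
            ssd = np.sum((patch - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# synthetic check: plant the template at (3, 5) in an otherwise-empty image
img = np.zeros((12, 12))
tpl = np.array([[1.0, 2.0], [3.0, 4.0]])
img[3:5, 5:7] = tpl
print(match_template_ssd(img, tpl))  # (3, 5)
```

In practice a normalized measure (e.g. normalized cross-correlation) is preferred over raw SSD, since it is less sensitive to lighting changes between the stored palm image and the live frame.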
This project concerns the design and implementation of a smart hybrid system for street sign board recognition and text-to-speech conversion through character extraction and symbol matching. The default language used to pronounce signs on street boards is English; here we propose a novel method to convert the identified character or symbol into multiple languages such as Hindi, Marathi, and Urdu. The project is helpful to the visually impaired, tourists, people who cannot read, and all who travel. The system produces speech pronunciation in different languages and displays the result on screen. The project takes a multidisciplinary approach spanning computer vision, speech processing, and the Google Cloud Platform (GCP): computer vision is used for character and symbol extraction from sign boards, speech processing for text-to-speech conversion, and GCP for converting the extracted text into multiple languages. Further programming handles real-time pronunciation and display of the desired output.
A Real-Time Letter Recognition Model for Arabic Sign Language Using Kinect an... (INFOGAIN PUBLICATION)
The objective of this research is to develop a supervised machine learning hand-gesture model to recognize Arabic Sign Language (ArSL), using two sensors: Microsoft's Kinect and a Leap Motion Controller. The proposed model relies on supervised learning to predict a hand pose from two depth images, and defines a classifier algorithm that dynamically transforms gestural interactions, based on the 3D positions and directions of hand joints, into their corresponding letters, so that live gestures can be compared and letters displayed in real time. This research is motivated by the need to increase the opportunity for the Arabic hearing-impaired to communicate with ease using ArSL, and is the first step towards building a full communication system for the Arabic hearing-impaired that can improve the interpretation of detected letters using fewer calculations. To evaluate the model, participants were asked to gesture the 28 letters of the Arabic alphabet multiple times each, creating an ArSL letter dataset of gestures built from the depth images retrieved by these devices. Participants were later asked to gesture letters to validate the classifier algorithm. The results indicated that using both devices was essential: the ArSL model detected and recognized 22 of the 28 Arabic alphabet letters with 100% accuracy.
Static-gesture word recognition in Bangla sign language using convolutional n... (TELKOMNIKA JOURNAL)
Sign language is the communication process of people with hearing impairments. For hearing-impaired communication in Bangladesh and parts of India, Bangla sign language (BSL) is the standard. While Bangla is one of the most widely spoken languages in the world, there is a scarcity of research in the field of BSL recognition. The few research works done so far focused on detecting BSL alphabets; to the best of our knowledge, no work on detecting BSL words has been conducted to date, owing to the unavailability of a BSL word dataset. In this research, a small static-gesture word dataset has been developed, and a deep learning-based method has been introduced that can detect BSL static-gesture words from images. The dataset, "BSLword", contains 30 static-gesture BSL words with 1200 images for training.
The training is done using a multi-layered convolutional neural network with the Adam optimizer. OpenCV is used for image processing and TensorFlow is used to build the deep learning models. This system can recognize BSL static-gesture words with 92.50% accuracy on the word dataset.
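The abstract names the Adam optimizer but not the network's architecture, so rather than guess at the TensorFlow model, the sketch below shows the Adam update rule itself (Kingma & Ba) driving a toy quadratic loss to its minimum, which is the mechanism used when training the CNN:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates with bias correction."""
    m = b1 * m + (1 - b1) * grad           # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad**2        # second moment (uncentered variance)
    m_hat = m / (1 - b1**t)                # bias-corrected estimates
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# minimize ||w - target||^2 as a stand-in for the CNN's training loss
target = np.array([1.0, -2.0, 0.5])
w = np.zeros(3); m = np.zeros(3); v = np.zeros(3)
for t in range(1, 5001):
    grad = 2.0 * (w - target)
    w, m, v = adam_step(w, grad, m, v, t)
print(np.round(w, 3))
```

In TensorFlow/Keras this whole loop is a single `optimizer='adam'` argument to `model.compile`; the point of the sketch is only to show what that choice computes per step.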
While a hearing-impaired individual depends on sign language and gestures, a non-hearing-impaired person uses verbal language. Thus, there is a need for a means of mediation when a non-hearing-impaired individual who does not understand sign language wants to communicate with a hearing-impaired person. This paper concerns the development of a PC-based sign language translator to facilitate effective communication between hearing-impaired and non-hearing-impaired persons. A database of hand gestures in American Sign Language (ASL) is created using Python scripts. TensorFlow (TF) is used to create a pipeline-configuration model for machine learning that matches the annotated gesture images in the database against real-time gestures. The implementation is done in Python and runs on a PC equipped with a web camera to capture real-time gestures for comparison and interpretation. The developed sign language translator is able to translate ASL gestures to written text, along with corresponding audio renderings, in an average duration of about one second. In addition, the translator is able to match real-time gestures with the equivalent gesture images stored in the database even at 44% similarity.
GRS - Gesture based Recognition System for Indian Sign Language Recognition ... (ijtsrd)
Sign recognition systems are developed to improve communication for challenged people. Sign languages combine hand gestures, movement, arm posture, and facial expressions to convey words and thoughts, and are as rich and complex as spoken languages. As technology advances rapidly, systems are being built to recognize human sign languages, improving accuracy and extending coverage to more sign languages and newer forms. To improve accuracy in detecting an input sign, a model has been proposed consisting of three phases: a training phase, a testing phase, and an output storage phase. A gesture is extracted from the given input picture, and the extracted image is processed to remove background noise using a threshold on pixel values. After noise removal, the trained model is tested with user input and the detection accuracy is measured. A total of 50 sign gestures were loaded into the training model; the trained model's accuracy is measured and the output is produced as a symbol of the target language. The detection mechanism of the proposed model is compared with other detection methods such as the Hidden Markov Model (HMM) and Convolutional Neural Networks (CNN). Classification is done by means of a Support Vector Machine (SVM), which classifies at higher accuracy: the accuracy obtained was 99 percent, in comparison with the other detection methods. D. Anbarasan | R. Aravind | K. Alice, "GRS - Gesture based Recognition System for Indian Sign Language Recognition System for Deaf and Dumb People", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2, Issue-2, February 2018. URL: http://www.ijtsrd.com/papers/ijtsrd9638.pdf http://www.ijtsrd.com/engineering/computer-engineering/9638/grs--gesture-based-recognition-system-for-indian-sign-language-recognition-system-for-deaf-and-dumb-people/d-anbarasan
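The threshold-based background removal mentioned in the GRS abstract can be sketched in a few lines; the frame contents and threshold value below are hypothetical, chosen only to illustrate the step:

```python
import numpy as np

def remove_background(gray, thresh=128):
    """Zero out pixels below a threshold, keeping the (brighter) hand region.
    Returns the masked image and the boolean foreground mask."""
    mask = gray >= thresh
    return gray * mask, mask

# synthetic 8x8 grayscale "frame": dim background, bright 3x3 "hand"
frame = np.full((8, 8), 40, dtype=np.uint8)
frame[2:5, 3:6] = 200
fg, mask = remove_background(frame, thresh=128)
print(mask.sum())  # 9 foreground pixels survive
```

In the described pipeline, features of the surviving foreground region (shape, contour, or raw pixels) would then be fed to the SVM classifier; a fixed global threshold like this assumes controlled lighting, which is why real systems often use adaptive or Otsu thresholding instead.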
Hand gesture recognition has received great attention in recent years because of its manifold applications and its ability to support efficient human-computer interaction. This paper presents a survey on hand gesture recognition. Hand gestures provide a distinct modality, complementary to speech, for expressing one's ideas; of all the body's means of non-verbal communication, the hand allows the freest expression. Hand gesture detection is therefore of great significance in designing a competent human-computer interaction method. The paper covers different hand gesture approaches, technologies, and applications.
Text Detection and Recognition with Speech Output for Visually Challenged Per... (IJERA Editor)
Reading text from scenes, images, and text boards is a demanding task for visually challenged persons. This task is proposed to be carried out with the help of image processing, which has long supported object recognition and remains an active area of research. The proposed system reads the text encountered in images and on text boards with the aim of supporting visually challenged persons. Text detection and recognition in natural scenes can provide valuable information for many applications. In this work, an approach is presented to extract and recognize text from scene images and convert the recognized text into speech. Such a capability can be an empowering force in a visually challenged person's life, relieving the frustration of not being able to read whatever they want and thus enhancing quality of life.
A gesture recognition system for the Colombian sign language based on convolu... (journalBEEI)
Sign languages (or signed languages) are languages that use visual techniques, primarily with the hands, to transmit information and enable communication for deaf people. Such a language is traditionally learned only by those who need it, which is why communication between deaf and non-deaf people is difficult. To solve this problem we propose an autonomous model based on convolutional networks to translate Colombian Sign Language (CSL) into standard Spanish text. The scheme uses characteristic images of each static sign of the language, drawn from a base of 24,000 images (1,000 images per category, with 24 categories), to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover a range of viewing angles. The performance evaluation showed that the system is capable of recognizing all 24 signs with an 88% recognition rate.
Hand Gesture Vocalizer is a social project aimed at helping speech- and hearing-impaired people communicate better with the public. Many deaf and hard-of-hearing people worldwide encounter problems while trying to communicate with society in daily life: deaf and speech-impaired people often use sign language but have difficulty communicating with people who do not understand it. Sign language relies on patterns of body language, gestures, and movements of the arms and fingers to convey information. This project was designed to meet the need for electronic devices that can translate sign language into speech, facilitating communication between the deaf and speech-impaired and the public. Venkat P. Patil | Suyash Mali | Girish Ghadi | Chintamani Satpute | Amey Deshmukh, "Hand Gesture Vocalizer", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7, Issue-2, April 2023. URL: https://www.ijtsrd.com.com/papers/ijtsrd55157.pdf Paper URL: https://www.ijtsrd.com.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATION (ijnlc)
Sign language is a visual-gestural language used by deaf people for communication. As hearing people are generally unfamiliar with sign language, hearing-impaired people find it difficult to communicate with them; this communication gap can be bridged by means of human-computer interaction. The objective of this paper is to convert Dravidian (Tamil) sign language into text. The proposed method recognizes 12 vowels, 18 consonants, and the special character "Aytham" of the Tamil language by a vision-based approach. In this work, static images of the hand signs are obtained with a web/digital camera. The hand region is segmented by a threshold applied to the hue channel of the input image. Then the region of interest (from wrist to fingers) is segmented using the reversed horizontal projection profile, and a discrete cosine transformed signature is extracted from the boundary of the hand sign. These features are invariant to translation, scale, and rotation. A sparse representation classifier is used to recognize the 31 hand signs. The proposed method attained a maximum recognition accuracy of 71% against a uniform background.
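The DCT signature step above can be illustrated on a boundary signature. The sketch below implements an unnormalized DCT-II over a centroid-distance signature (the circle test signal and coefficient count are illustrative assumptions; the paper's exact signature construction and normalization are not specified here):

```python
import numpy as np

def dct2_signature(signal, n_coeffs=8):
    """Unnormalized DCT-II of a boundary signature. Magnitudes of the
    low-order coefficients serve as a compact shape descriptor."""
    x = np.asarray(signal, dtype=float)
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N))
                     for k in range(n_coeffs)])

# centroid-distance signature of a toy boundary: a circle is constant,
# so all energy should land in coefficient 0
circle = np.ones(64)
c = dct2_signature(circle)
print(c[0], np.abs(c[1:]).max())
```

Invariance as claimed in the abstract requires extra steps on top of the raw transform: translation invariance comes from using centroid distances, scale invariance from dividing coefficients by `c[0]`, and rotation invariance from discarding phase (here implicit, since DCT coefficients are real).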
A mediator is required for communication between a deaf person and a second person, but the mediator must know the sign language used by the deaf person, which is not always possible since different languages have different sign languages. It is also difficult for a deaf person to understand what a second person speaks: the deaf person must keep track of the second person's lip movements, which gives poor efficiency and accuracy since facial expressions and speech might not match. To overcome these problems we have proposed an Android application for recognizing sign language from hand gestures, with a facility for users to define and upload their own sign language into the system. The features of this system are the real-time conversion of gestures to text and speech; for two-way communication between the deaf person and the second person, the speech of the second person is converted into text. The processing steps include gesture extraction, gesture matching, and conversion of text to speech and vice versa. The system is useful not only for the deaf community but also for people who migrate to different regions and do not know the local language.
Sign language (SL) is commonly considered the primary gesture-based language of deaf people and is their medium of communication. Image-based and sensor-based approaches are the two main sign language recognition methods; because of the inconvenience of wearing complex devices such as gloves, armbands, and helmets in sensor-based approaches, much research by companies and academics has focused on image-based approaches. Deaf people use sign language to communicate with hearing people, but understanding this sign language is a difficult task for hearing people. To address these difficulties, a real-time translator for sign language using deep learning (DL) is introduced. It reduces the limitations and drawbacks of other methods to a great extent; with this real-time translator, communication becomes better and faster without delay. Jeni Moni | Anju J Prakash, "Real Time Translator for Sign Language", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-5, August 2020. URL: https://www.ijtsrd.com/papers/ijtsrd32915.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/32915/real-time-translator-for-sign-language/jeni-moni
Vision Based Approach to Sign Language Recognition (IJAAS Team)
We propose an algorithm for automatically recognizing a certain set of gestures from hand movements to help deaf and hard-of-hearing people. Hand gesture recognition is a challenging problem in its own right. We consider a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detecting, tracking, and recognizing. The algorithm is invariant to rotations, translations, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
Hybrid model for detection of brain tumor using convolution neural networks (CSITiaesprime)
The development of aberrant brain cells, some of which may turn cancerous, is known as a brain tumor. Magnetic resonance imaging (MRI) scans are the most common technique for finding brain tumors, and the aberrant tissue growth in the brain is discernible from them. In numerous research papers, machine learning and deep learning algorithms are used to detect brain tumors: applied to MRI images, these algorithms predict a brain tumor in very little time, and better accuracy makes it easier to treat patients and lets the radiologist make speedy decisions. The proposed work creates a hybrid model using a convolutional neural network (CNN) for feature extraction and logistic regression (LR) for classification. The pre-trained visual geometry group 16 (VGG16) model is used for feature extraction; to reduce the complexity and the number of parameters to train, the last eight layers of VGG16 are eliminated. From this transformed model the features are extracted in the form of a vector array and fed into different machine learning classifiers, such as support vector machine (SVM), naive Bayes (NB), LR, extreme gradient boosting (XGBoost), AdaBoost, and random forest, for training and testing. Comparing the performance of the classifiers, the CNN-LR hybrid combination outperformed the rest: the recall, precision, F1-score, and accuracy of the proposed CNN-LR model are 94%, 94%, 94%, and 91% respectively.
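The classification half of the CNN-LR hybrid above can be sketched without the VGG16 front end. The features below are synthetic stand-ins for the truncated-VGG16 feature vectors (the blob means, dimensions, and learning rate are illustrative assumptions, not the paper's data):

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression via batch gradient descent,
    operating on pre-extracted feature vectors."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30, 30)      # avoid overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))         # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

# synthetic stand-in "feature vectors": two well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.r_[np.zeros(50), np.ones(50)]
w, b = train_logreg(X, y)
p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
acc = ((p > 0.5).astype(float) == y).mean()
print(acc)
```

In the actual pipeline, `X` would be the flattened activations from the truncated VGG16, and a library implementation such as scikit-learn's `LogisticRegression` would typically replace the hand-rolled training loop.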
Implementing Lee's model to apply fuzzy time series in forecasting bitcoin price (CSITiaesprime)
Over time, cryptocurrencies like Bitcoin have attracted investors' and speculators' interest. Bitcoin's dramatic rise in value in recent years has caught the attention of many who see it as a promising investment asset; Bitcoin investment, after all, is inseparable from the price volatility that investors must mitigate. This research uses Lee's fuzzy time series approach to forecast the price of Bitcoin. Lee's fuzzy time series is a time series analysis method for handling ambiguity and uncertainty in time series data, first introduced by Ching-Cheng Lee in his research on time series prediction; it develops several earlier fuzzy time series (FTS) models, namely those of Song and Chissom and of Cheng and Chen. According to most previous studies, Lee's model conveys more precise forecasting results than the classic FTS models. This study used first and second orders, obtaining error values of 5.419% for the first order and 4.042% for the second, which means the forecasting results are excellent. Of the two orders, however, only the first order can be used to predict the next period's Bitcoin price: in the second order, the relations arising in the next period have no groups in their fuzzy logical relationship group (FLRG), so the price in the next period cannot be predicted. This study contributes by helping investors and the general public decide whether to keep, sell, or purchase cryptocurrencies.
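The first-order FTS mechanics referenced above (partition the universe of discourse, fuzzify observations, build fuzzy logical relationship groups, defuzzify with interval midpoints) can be sketched generically. This is a minimal first-order FTS, not Lee's specific model, and the price series is invented for illustration:

```python
import numpy as np

def fts_forecast(series, n_intervals=5):
    """Minimal first-order fuzzy time series forecast:
    partition, fuzzify, build FLRGs, defuzzify via interval midpoints."""
    lo, hi = min(series), max(series)
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2

    def fuzzify(v):  # index of the interval containing v
        return min(int(np.searchsorted(edges, v, side='right')) - 1,
                   n_intervals - 1)

    states = [fuzzify(v) for v in series]
    flrg = {}                               # antecedent -> set of consequents
    for a, b in zip(states, states[1:]):
        flrg.setdefault(a, set()).add(b)
    last = states[-1]
    nxt = flrg.get(last)
    if not nxt:                             # empty FLRG: fall back to midpoint
        return float(mids[last])
    return float(np.mean([mids[s] for s in sorted(nxt)]))

prices = [10, 12, 13, 12, 15, 16, 18, 17, 19, 21]
print(round(fts_forecast(prices), 2))
```

The `if not nxt` branch is exactly the failure mode the abstract describes for its second-order model: when the last observed state (or state pair) has no FLRG, no forecast can be produced without a fallback rule.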
More Related Content
Similar to Pakistan sign language to Urdu translator using Kinect
While a hearing-impaired individual depends on sign language and gestures, non-hearing-impaired person uses verbal language. Thus, there is need for means of arbitration to forestall situation when a non-hearing-impaired individual who does not understand the sign language wants to communicate with a hearing-impaired person. This paper is concerned with the development of a PC-based sign language translator to facilitate effective communication between hearing-impaired and non-hearing-impaired persons. Database of hand gestures in American sign language (ASL) is created using Python scripts. TensorFlow (TF) is used in the creation of a pipeline configuration model for machine learning of annotated images of gestures in the database with the real time gestures. The implementation is done in Python software environment and it runs on a PC equipped with a web camera to capture real time gestures for comparison and interpretations. The developed sign language translator is able to translate ASL/gestures to written texts along with corresponding audio renderings at an average duration of about one second. In addition, the translator is able to match real time gestures with the equivalent gesture images stored in the database even at 44% similarity.
GRS '“ Gesture based Recognition System for Indian Sign Language Recognition ...ijtsrd
Recognition languages are developed for the better communication of the challenged people. The recognition signs include the combination of various with hand gestures, movement, arms and facial expressions to convey the words thought. The languages used in sign are rich and complex as equal as to languages that are spoken. As the technological world is growing rapidly, the sign languages for human are made to recognised by systems in order to improve the accuracy and the multiply the various sign languages with newer forms. In order to improve the accuracy in detecting the input sign, a model has been proposed. The proposed model consists of three phases a training phase, a testing phase and a storage output phase. A gesture is extracted from the given input picture. The extracted image is processed to remove the background noise data with the help of threshold pixel image value. After the removal of noise from the image and the filtered image to trained model is tested with a user input and then the detection accuracy is measured. A total of 50 sign gestures were loaded into the training model. The trained model accuracy is measured and then the output is extracted in the form of the mentioned language symbol. The detection mechanism of the proposed model is compared with the other detection methods such as Hidden Markov Model(HMM), Convolutional Neural Networks(CNN) and Support Vector Machine(SVM). The classification is done by means of a Support Vector Machine(SVM) which classifies at a higher accuracy. The accuracy obtained was 99 percent in comparison with the other detection methods. D. Anbarasan | R. Aravind | K. 
Alice"GRS “ Gesture based Recognition System for Indian Sign Language Recognition System for Deaf and Dumb People" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-2 , February 2018, URL: http://www.ijtsrd.com/papers/ijtsrd9638.pdf http://www.ijtsrd.com/engineering/computer-engineering/9638/grs--gesture-based-recognition-system-for-indian-sign-language-recognition-system-for-deaf-and-dumb-people/d-anbarasan
Hand gesture recognition methods have received great attention in recent years because of their manifold applications and their ability to let humans interact with machines efficiently during human-computer interaction. This paper mainly focuses on a survey of hand gesture recognition. Hand gestures provide a distinct complementary modality to speech for expressing one's ideas; hand gesture is a method of non-verbal communication for human beings, freer in expression than other body parts. Hand gesture detection has great significance in designing an efficient human-computer interaction method. This paper covers different hand gesture approaches, technologies, and applications.
Text Detection and Recognition with Speech Output for Visually Challenged Per... (IJERA Editor)
Reading text from scenes, images, and text boards is an exigent task for visually challenged persons. This task is proposed to be carried out with the help of image processing, which has long aided the field of object recognition and remains an emerging area of research. The proposed system reads the text encountered in images and on text boards with the aim of supporting visually challenged persons. Text detection and recognition in natural scenes can provide valuable information for many applications. In this work, an approach is presented to extract and recognize text from scene images and convert the recognized text into speech. This capability can be an empowering force in a visually challenged person's life, helping to relieve the frustration of not being able to read whatever they want and thus enhancing their quality of life.
A gesture recognition system for the Colombian sign language based on convolu... (journalBEEI)
Sign languages (or signed languages) are languages that use visual techniques, primarily with the hands, to transmit information and enable communication with deaf-mute people. These languages are traditionally learned only by people with this limitation, which is why communication between deaf and non-deaf people is difficult. To solve this problem, we propose an autonomous model based on convolutional networks to translate the Colombian Sign Language (CSL) into normal Spanish text. The scheme uses characteristic images of each static sign of the language within a base of 24,000 images (1,000 images per category, with 24 categories) to train a deep convolutional network of the NASNet type (Neural Architecture Search Network). The images in each category were taken from different people with positional variations to cover any angle of view. The performance evaluation showed that the system is capable of recognizing all 24 signs used, with an 88% recognition rate.
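The 88% figure above is a recognition rate over held-out samples. As a small illustrative sketch of how such a rate is computed (the labels and predictions below are made up for the example, not taken from the paper):

```python
import numpy as np

def recognition_rate(y_true, y_pred):
    """Fraction of test samples whose predicted category matches the
    true category (overall recognition rate)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

# Toy evaluation: categories are integer sign labels; two mistakes in eight.
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 1, 1, 1, 2, 2, 3, 0]
print(recognition_rate(y_true, y_pred))  # 0.75
```

A per-category breakdown (recognition rate restricted to each of the 24 sign classes) is what would reveal which static signs the network confuses.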
Digital voice over is a social project aimed at helping speech- and hearing-impaired people communicate better with the public. There are approximately 9.1 million deaf and hard-of-hearing people worldwide, and they encounter many problems while trying to communicate with society in daily life. Deaf and speech-impaired people often use sign language to communicate but have difficulty communicating with people who do not understand that language. Sign language relies on patterns such as body language, gestures, and movements of the arms and fingers to convey information. This project was designed to meet the need for electronic devices that can translate sign language into speech, facilitating communication between the deaf and dumb and the public.
Venkat P. Patil | Suyash Mali | Girish Ghadi | Chintamani Satpute | Amey Deshmukh, "Hand Gesture Vocalizer", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7, Issue-2, April 2023. URL: https://www.ijtsrd.com/papers/ijtsrd55157.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/55157/hand-gesture-vocalizer/venkat-p-patil
A Signature Based Dravidian Sign Language Recognition by Sparse Representation (ijnlc)
Sign language is a visual-gestural language used by deaf-dumb people for communication. As normal people are unfamiliar with sign language, hearing-impaired people find it difficult to communicate with them. The communication gap between normal and deaf-dumb people can be bridged by means of human-computer interaction. The objective of this paper is to convert the Dravidian (Tamil) sign language into text. The proposed method recognizes 12 vowels, 18 consonants, and a special character, "Aytham", of the Tamil language by a vision-based approach. In this work, static images of the hand signs are obtained using a web/digital camera. The hand region is segmented by a threshold applied to the hue channel of the input image. Then the region of interest (i.e., from wrist to fingers) is segmented using the reversed horizontal projection profile, and the Discrete Cosine transformed signature is extracted from the boundary of the hand sign. These features are invariant to translation, scale, and rotation. A sparse representation classifier is incorporated to recognize the 31 hand signs. The proposed method has attained a maximum recognition accuracy of 71% against a uniform background.
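The abstract above builds a boundary signature and takes its Discrete Cosine Transform to get translation- and scale-invariant features. A minimal sketch of that idea, assuming a centroid-distance signature normalized by its maximum (the exact signature definition in the paper may differ), with a plain DCT-II written out to avoid a SciPy dependency:

```python
import numpy as np

def boundary_signature(points):
    """Centroid-distance signature of a boundary: distances from the shape
    centroid to each boundary point, normalized by the maximum. Subtracting
    the centroid gives translation invariance; dividing by the maximum gives
    scale invariance."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return d / d.max()

def dct2(x):
    """DCT-II of a 1-D signal (no library dependency)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

square  = [(0, 0), (2, 0), (2, 2), (0, 2)]
shifted = [(5, 7), (7, 7), (7, 9), (5, 9)]   # translated copy
scaled  = [(0, 0), (4, 0), (4, 4), (0, 4)]   # scaled copy

f1 = dct2(boundary_signature(square))
f2 = dct2(boundary_signature(shifted))
f3 = dct2(boundary_signature(scaled))
print(np.allclose(f1, f2), np.allclose(f1, f3))  # True True
```

The toy shapes show translation and scale invariance directly; rotation invariance, also claimed in the paper, additionally depends on how the boundary is sampled.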
A mediator is required for communication between a deaf person and a second person, but the mediator must know the sign language used by the deaf person, which is not always possible since there are multiple sign languages for multiple spoken languages. It is difficult for a deaf person to understand what a second person speaks, so the deaf person must track the lip movements of the second person to know what is being said. Lip movements, however, do not give proper efficiency and accuracy, since facial expressions and speech might not match. To overcome these problems, we have proposed an Android application for recognizing sign language using hand gestures, with the facility for the user to define and upload their own sign language into the system. The features of this system are real-time conversion of gestures to text and speech. For two-way communication between the deaf person and the second person, the speech of the second person is converted into text. The processing steps include gesture extraction, gesture matching, and conversion of text to speech and vice versa. The system is not only useful for the deaf community but can also be used by people who migrate to different regions and do not know the local language.
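The gesture-matching step described above is commonly implemented with template matching, i.e., sliding the stored gesture template over the frame and scoring each window by normalized cross-correlation. A self-contained sketch of that scoring loop in NumPy (an illustration of the general technique, not the application's actual code):

```python
import numpy as np

def match_template(image, template):
    """Return the top-left corner of the window in `image` that best matches
    `template` under normalized cross-correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

img = np.zeros((6, 6))
img[2:4, 3:5] = [[1, 2], [3, 4]]        # embed the pattern at (2, 3)
tmpl = np.array([[1., 2.], [3., 4.]])
print(match_template(img, tmpl))        # (2, 3)
```

In practice a library routine such as OpenCV's `cv2.matchTemplate` does the same scan far faster; the loop above just makes the scoring explicit.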
Sign language (SL) is commonly considered the primary gesture-based language for deaf and dumb people, and it is their medium of communication. Image-based and sensor-based approaches are the two important sign language recognition methods. Because of the difficulty of wearing complex devices such as hand gloves, armbands, and helmets in sensor-based approaches, much research by companies and researchers has focused on image-based approaches. Sign language is used by these people to communicate with normal people, but understanding it is a difficult task for normal people. To address these difficulties, a real-time translator for sign language using deep learning (DL) is introduced. It reduces the limitations and drawbacks of other methods to a great extent; with the help of this real-time translator, communication becomes better and faster without delay.
Jeni Moni | Anju J Prakash, "Real Time Translator for Sign Language", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-5, August 2020. URL: https://www.ijtsrd.com/papers/ijtsrd32915.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/32915/real-time-translator-for-sign-language/jeni-moni
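A real-time image-based translator like the one above typically buffers the most recent frames and classifies each full window of extracted keypoints, as in the dynamic-sign pipeline described earlier. A minimal sketch of that buffering logic (the sequence length of 30, the 63-value keypoint vector, and `classify` are hypothetical stand-ins, not details from the paper):

```python
from collections import deque

SEQ_LEN = 30  # assumed number of frames the sequence model consumes

def classify(sequence):
    """Hypothetical stand-in for the trained sequence model's prediction."""
    return "hello"

window = deque(maxlen=SEQ_LEN)   # keeps only the last SEQ_LEN frames
predictions = []
for frame_id in range(45):       # simulated stream of video frames
    keypoints = [0.0] * 63       # e.g. 21 hand landmarks x 3 coordinates
    window.append(keypoints)
    if len(window) == SEQ_LEN:   # predict only once the window is full
        predictions.append(classify(list(window)))

print(len(predictions))          # 16
```

Because the deque discards the oldest frame automatically, the translator produces a prediction on every frame after warm-up, which is what keeps the output delay low.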
Vision Based Approach to Sign Language Recognition (IJAAS Team)
We propose an algorithm for automatically recognizing a certain set of gestures from hand movements to help deaf, dumb, and hard-of-hearing people. Hand gesture recognition is quite a challenging problem in its own right. We consider a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detecting, tracking, and recognizing. The algorithm is invariant to rotations, translations, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
Antispoofing in face biometrics: A comprehensive study on software-based tech...CSITiaesprime
The vulnerability of the face recognition system to spoofing attacks has piqued the biometric community's interest, motivating them to develop anti-spoofing techniques to secure it. Photo, video, or mask attacks can compromise face biometric systems (types of presentation attacks). Spoofing attacks are detected using liveness detection techniques, which determine whether the facial image presented at a biometric system is a live face or a fake version of it. We discuss the classification of face anti-spoofing techniques in this paper. Anti-spoofing techniques are divided into two categories: hardware and software methods. Hardware-based techniques are summarized briefly. A comprehensive study on software-based countermeasures for presentation attacks is discussed, which are further divided into static and dynamic methods. We cited a few publicly available presentation attack datasets and calculated a few metrics to demonstrate the value of anti-spoofing techniques.
The antecedent e-government quality for public behaviour intention, and exten...CSITiaesprime
An The main objective of the study is to identify the antecedent of leadership quality, public satisfaction and public behaviour intention of e-government service. Also, this study integrated e-government quality to expectation-confirmation model. In order to achieve these goals, observational research was then carried out to collect primary information, using the method of data dissemination and obtaining the opinion of 360 from the public using the e-government service and some of the e-government and software quality experts. The results of the study show that the positive association among the e-government services quality and public perceived usefulness, public expectation confirmation, leadership quality and public satisfaction that also play a positive role on the public behavior intention.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Pakistan sign language to Urdu translator using Kinect
Computer Science and Information Technologies
Vol. 3, No. 3, November 2022, pp. 186~193
ISSN: 2722-3221, DOI: 10.11591/csit.v3i3.pp186-193 186
Journal homepage: http://iaesprime.com/index.php/csit
Pakistan sign language to Urdu translator using Kinect
Saad Ahmed, Hasnain Shafiq, Yamna Raheel, Noor Chishti, Syed Muhammad Asad
Department of Computer Science, IQRA University, Karachi, Pakistan
Article Info ABSTRACT
Article history:
Received Mar 10, 2022
Revised Oct 27, 2022
Accepted Nov 4, 2022
The lack of a standardized sign language, and the inability to communicate
with the hearing community through sign language, are the two major issues
confronting Pakistan's deaf and dumb society. In this research, we have
proposed an approach to help eradicate one of the issues. Now, using the
proposed framework, the deaf community can communicate with normal
people. The purpose of this work is to reduce the struggles of hearing-
impaired people in Pakistan. A Kinect-based Pakistan sign language (PSL)
to Urdu language translator is being developed to accomplish this. The
system’s dynamic sign language segment works in three phases: acquiring
key points from the dataset, training a long short-term memory (LSTM)
model, and making real-time predictions using sequences through openCV
integrated with the Kinect device. The system’s static sign language segment
works in three phases: acquiring an image-based dataset, training a model
garden, and making real-time predictions using openCV integrated with the
Kinect device. It also allows the hearing user to input Urdu audio to the
Kinect microphone. The proposed sign language translator can detect and
predict the PSL performed in front of the Kinect device and produce
translations in Urdu.
Keywords:
Kinect
Long short-term memory
Model garden
Object detection
OpenCV
Sign language translator
This is an open access article under the CC BY-SA license.
Corresponding Author:
Saad Ahmed
Department of Computer Science, IQRA University
Karachi, Pakistan
Email: saadahmed@iqra.edu.pk
1. INTRODUCTION
Communication is the foundation upon which people understand each other. There are different
types of communication, such as verbal communication where people engage with each other face-to-face,
using their devices, and using applications such as Zoom. Another type of communication is nonverbal
communication, which includes facial expressions, body poses, eye contact, hand movements, and touch.
Sign languages are a nonverbal type of communication: they convey meaning through
simultaneous hand motions, the orientation of the fingers and hands, arm or body movements, and facial
expressions. They are mainly used by the hearing-impaired.
There are many kinds of sign languages. Sign language is not universal like spoken languages; it is
unique and different in every country, even in countries with the same spoken language. Sign language is not
an interpretation of a spoken language; it is a deaf person’s native or local language. It is a natural and
complete language with its own grammatical structure. In Pakistan, there are many different sign languages
in the different provinces, cities, and villages, and due to this, people from different cities or villages can’t
communicate with each other through sign language. This proposed method focuses on Pakistan sign
language (PSL) and Urdu language [1].
This sign-based communication can't be perceived by everybody; therefore, we have developed a
system that will act as a bridge between the deaf and hearing people to fill the communication gap that lies
between the hearing-impaired and hearing people of Pakistan [2]. The majority of the working community
currently cannot use sign language. It is essential for the growth of the country and its workplaces that deaf adults
are employed and given the chance to work alongside hearing colleagues. This will help both communities come
together and improve current standards [3].
We have designed a PSL interpreter that performs immediate sign language translations and
audio-to-text translations [4]. The system's sign language module is trained on key points extracted from
multiple frames using MediaPipe Holistic; the key points are collected from video captured through a
Kinect device [5]. An LSTM model is then built and trained on the key points gathered for dynamic sign
language [6]. For static sign language, an image-based dataset is created and trained using an object
detection model garden. After successful training, PSL is detected in real time to produce Urdu translations.
The audio-to-Urdu module uses the Kinect's microphone to capture Urdu audio, which is then transcribed to
Urdu text.
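As an illustration of the key-point stage, the sketch below flattens one frame's MediaPipe Holistic results into a fixed-length vector. The landmark counts (33 pose with visibility, 468 face, 21 per hand) follow the library's documented output; the zero-fill fallback for missing groups and the function name are our assumptions, not details taken from the paper.

```python
import numpy as np

# Landmark counts published for MediaPipe Holistic (pose: 33 with a
# visibility value, face: 468, each hand: 21). The resulting vector has
# 33*4 + 468*3 + 21*3 + 21*3 = 1662 values per frame.
POSE, FACE, HAND = 33, 468, 21

def extract_keypoints(results):
    """Flatten one frame's Holistic landmarks into a fixed-length vector.

    Missing landmark groups (e.g. a hand out of view) are zero-filled so
    every frame yields the same 1662-value vector for the LSTM.
    """
    def flat(group, count, dims):
        if group is None:
            return np.zeros(count * dims)
        return np.array(
            [[getattr(lm, d) for d in ("x", "y", "z", "visibility")[:dims]]
             for lm in group.landmark]
        ).flatten()

    pose = flat(getattr(results, "pose_landmarks", None), POSE, 4)
    face = flat(getattr(results, "face_landmarks", None), FACE, 3)
    lh = flat(getattr(results, "left_hand_landmarks", None), HAND, 3)
    rh = flat(getattr(results, "right_hand_landmarks", None), HAND, 3)
    return np.concatenate([pose, face, lh, rh])  # shape: (1662,)
```

Collecting one such vector per frame over a 30-frame clip gives the per-sequence input the dynamic-sign model trains on.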
A Chinese sign language recognition system incorporates a specific hand shape (SHS)
descriptor and an encoder-decoder long short-term memory (LSTM) structure for recognizing isolated Chinese
sign words. The Microsoft Kinect 2.0 device is used for data input, and the database is designed around an
SHS descriptor built with a convolutional neural network (CNN). The recognition system first captures the
color image, depth map, and skeletal image. After data pre-processing, the hand regions and skeletal joint
locations of every isolated Chinese sign word are extracted from the database. The system then extracts two
kinds of features, the specific hand shape (SHS) and the trajectory. In the final stage, an encoder-decoder
LSTM network is trained on the SHS and trajectory features and applied to sign recognition [7].
In another study, a Kinect-based Taiwanese sign-language recognition system presented a solution
using hidden Markov models (HMMs) to recognize the direction of the hands and an SVM to recognize the
hand shape. Skeletal data is extracted from 20 joints: hip center, spine, shoulder center, head, left shoulder,
left elbow, left wrist, left hand, right shoulder, right elbow, right wrist, right hand, left hip, left knee, left
ankle, left foot, right hip, right knee, right ankle, and right foot. Each joint's data includes the X, Y, and Z
position values. The positions of the wrist, shoulder, spine, and hip are used to localize the hands, and the
wrist positions are recorded as a gesture trajectory over a certain time interval. The trajectory's velocity,
angle, distance, and the distance between the two hands are extracted as features, and HMMs are used to
recognize the hand directions from them. The recognition pipeline covers six aspects: spine position,
trajectory, palm segmentation, direction recognition, hand position, and handshape. The experiments yield
an accuracy of around 84% [8].
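The trajectory features mentioned above can be computed directly from a sequence of wrist positions. The sketch below is a minimal NumPy illustration; the function names are ours, not from [8].

```python
import numpy as np

def trajectory_features(points):
    """Per-step distance and movement angle for a 2-D wrist trajectory
    given as a sequence of (x, y) positions. Velocity follows by dividing
    the per-step distance by the frame interval."""
    pts = np.asarray(points, dtype=float)
    deltas = np.diff(pts, axis=0)                    # displacement per frame
    distance = np.linalg.norm(deltas, axis=1)        # step length
    angle = np.arctan2(deltas[:, 1], deltas[:, 0])   # movement direction
    return distance, angle

def hand_separation(left, right):
    """Frame-wise Euclidean distance between the two wrists."""
    return np.linalg.norm(
        np.asarray(left, float) - np.asarray(right, float), axis=1)
```

Stacking these per-frame values over a gesture gives the kind of feature sequence an HMM can be trained on.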
Another approach, hierarchical LSTM (HLSTM) for sign language, aims to interpret video into
understandable text and thus address vision-based sign language translation (SLT). To solve the problem of
continuous sign language translation (CSLT), a hierarchical LSTM encoder-decoder model with visual
content and word embeddings was developed. It tackles different granularities by conveying spatio-temporal
transitions among frames, clips, and viseme units. First, it uses a 3D CNN to investigate the spatio-temporal
cues of video clips and then packs appropriate visual themes using online key clip mining with adaptive
variable length. After pooling the recurrent outputs of the top layer of the HLSTM, a temporal
attention-aware weighting mechanism is proposed to balance the intrinsic relationship among viseme source
positions. Lastly, two further LSTM layers are used to separately retrieve verb vectors and translate
semantics. By preserving the original visual content with the 3D CNN and the top layer of the HLSTM, the
model shortens the encoding time steps of the bottom two LSTM layers, reducing computational complexity
while attaining more nonlinearity. The model performs well, particularly in independent tests on seen
sentences with discriminative capability [9].
Another proposed technique is a hybrid deep architecture consisting of a temporal convolution
module (TCOV), a bidirectional gated recurrent unit module (BGRU), and a fusion layer module (FL) to
address the CSLT problem. The design is an end-to-end trainable network that benefits from both the TCOV
and BGRU modules: BGRU keeps the long-term temporal context transition pattern (global pattern),
while TCOV focuses on the short-term temporal pattern (local pattern) in adjacent clip features. A fusion
layer with an MLP is proposed that integrates the different feature embedding representations to learn their
complementary relationship; it measures the extent to which TCOV and BGRU accommodate each other. With
CTC constraints, the model's performance is about the same as that of other methods requiring multiple
iterations [10].
An overview of sign language and hand gesture recognition techniques describes how signs are
recognized. Many methods rely on image processing, computer vision, and machine learning. Sign
language mostly involves the upper body, from the waist up. One gesture-based approach initially yields 94%
accuracy, but when the signer changes, accuracy drops to 40%, so it is abandoned in favour of alternative
approaches. Hand gesture recognition methods are either 3D-model-based or appearance-based, and in both
cases feature extraction and classification are essential. Dynamic sign languages are recognized from video,
while static gesture recognition uses single image frames. Vision-based methods differ mainly in how data is
gathered: the data consists of camera frames, and devices such as the Kinect and the LMC are
depth-sensitive 3D cameras. During pre-processing, image and video inputs are modified to improve
performance; segmentation depends on the image's background and on skin tone, which makes it unreliable.
IMU sensors such as gyroscopes and accelerometers are used in data gloves for gesture and sign language
detection, and Wi-Fi-based sensing has also been used for gesture recognition. Many new works are being
developed using these methods [11]. In an approach to isolated sign language recognition with depth
cameras, data provided by a depth camera sensor is used. Sequences of depth maps of dynamic sign
language gestures are divided into smaller regions (cells), which are then described using statistical
information. Since gesture executions have different lengths, the dynamic time warping (DTW) technique
with the nearest-neighbour rule is employed for their comparison. However, its time-consuming
computations limit the usability of the DTW-based classifier [12]–[15].
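For concreteness, the DTW-plus-nearest-neighbour comparison described above can be sketched as follows for 1-D feature sequences. This is a minimal illustration of the technique, not the classifier of [12]–[15].

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two 1-D
    sequences, using absolute difference as the local cost. The quadratic
    table fill is what makes DTW expensive for long gestures."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_neighbour(query, templates):
    """Return the label of the template sequence with the smallest DTW
    distance to the query: the 1-NN rule mentioned above."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))
```

Because DTW warps the time axis, two executions of the same gesture at different speeds can still match with near-zero cost.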
2. METHOD
Pakistan sign language (PSL) to the Urdu language consists of letters, words, and sentence-level
translation which is then distributed into static and dynamic sign language. Static sign language translation is
achieved using tensor flow object detection model garden. We collected an image-based dataset using Kinect
which was distributed into 34 classes that include Urdu letters. This data was labelled and then distributed
into a set of test and train data. The model garden [16] was used for the training purpose and real-time sign
language translation was performed using openCv2 which was integrated with Kinect as shown in Figure 1.
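A simple way to realize the labelled train/test split described above is sketched below. The dataset/&lt;class&gt;/&lt;image&gt;.jpg directory layout, the 80/20 ratio, and the function name are illustrative assumptions, not details from the paper.

```python
import random
from pathlib import Path

def split_dataset(root, test_ratio=0.2, seed=42):
    """Split labelled sign images into train/test lists of (path, label)
    pairs, stratified per class so each of the 34 Urdu-letter classes
    contributes to both sets. Expects root/<class_name>/*.jpg."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    train, test = [], []
    for class_dir in sorted(Path(root).iterdir()):
        images = sorted(class_dir.glob("*.jpg"))
        rng.shuffle(images)
        cut = int(len(images) * test_ratio)
        test.extend((p, class_dir.name) for p in images[:cut])
        train.extend((p, class_dir.name) for p in images[cut:])
    return train, test
```

Stratifying the split per class avoids leaving any of the rarer letters entirely out of the test set.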
[Figure omitted: block diagram of the sign language translator. Kinect captures static Pakistan sign language; object detection feeds the PSL to Urdu translator, producing Urdu text; Urdu audio input goes through the Google Speech-to-Text API.]
Figure 1. Static sign language translation
Dynamic sign language translation is achieved using Google's MediaPipe Holistic library, through
which we extracted the key points of both hands, the face, and the shoulders. We created a dataset of 4
dynamic signs using Kinect, extracting key points from 60 sequences of 30 video frames each.
This dataset was split into test and train sets and trained using a recurrent neural network (RNN)
architecture, the LSTM, consisting of 3 LSTM layers and 3 dense layers as shown in Figure 2 [17], [18].
The categorical accuracy of the model is shown as a graph in Figure 3; the model was trained for around
2000 epochs, and the epoch loss is shown in Figure 4.
The trained model was saved and evaluated using a confusion matrix, achieving an accuracy of 1.0
as shown in Figure 5. Real-time dynamic sign language translation, shown in Figure 6, was performed using
OpenCV and the trained model integrated with Kinect [16], [18]–[22]. Urdu audio to Urdu text translation
is performed using the Google speech-to-text service, which transcribes complete letters, words, and
sentences, as shown in Figure 7.
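The dynamic-sign model described above (3 LSTM layers followed by 3 dense layers over 30-frame sequences) could be defined in Keras roughly as follows. The layer widths and the 1662-value key-point vector per frame are assumptions for illustration; the paper's Figure 2 holds the actual model summary.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

NUM_SIGNS = 4           # the four dynamic signs in the dataset
FRAMES, KEYPOINTS = 30, 1662  # frames per sequence, key points per frame

# Hypothetical reconstruction: 3 stacked LSTM layers, then 3 dense layers,
# ending in a softmax over the sign classes.
model = Sequential([
    LSTM(64, return_sequences=True, activation="relu",
         input_shape=(FRAMES, KEYPOINTS)),
    LSTM(128, return_sequences=True, activation="relu"),
    LSTM(64, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(32, activation="relu"),
    Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])
```

With one-hot labels and sequences shaped (samples, 30, 1662), such a model can be fit directly with `model.fit` for the roughly 2000 epochs reported.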
Figure 2. Model summary
Figure 3. Categorical accuracy
Figure 4. Epoch loss
Figure 5. Confusion matrix
[Figure omitted: block diagram of the sign language translator. Kinect captures dynamic Pakistan sign language; the LSTM feeds the PSL to Urdu translator, producing Urdu text; Urdu audio input goes through the Google Speech-to-Text API.]
Figure 6. Dynamic sign language translation
[Figure omitted: block diagram of the audio translator. The Kinect microphone captures Urdu audio, the Google Speech-to-Text API transcribes it, and the PSL to Urdu translator outputs Urdu text.]
Figure 7. Audio to Urdu translator
The text gathered from sign language is then converted into audio using the Google Text-to-Speech
service, which converts Urdu text into Urdu audio, forming the complete PSL to Urdu translator
shown in Figure 8. This is a very user-friendly system [23] that allows a hearing-impaired
person [24] and a hearing person to communicate with each other without facing any challenges
[25].
[Figure omitted: block diagram of the complete system. Kinect captures Pakistan sign language; dynamic signs go to the LSTM and static signs to the object detection model garden, both feeding the PSL to Urdu translator, which produces Urdu text and Urdu audio; Urdu audio captured by Kinect is transcribed to Urdu text by the Google Speech-to-Text API.]
Figure 8. PSL to Urdu translator
3. RESULTS AND DISCUSSION
This system attempts to address the communication barrier faced by Pakistan's deaf
community by developing a sign language translator application with a user-friendly GUI and
improved functionality. It uses a novel approach for PSL recognition using Kinect sensors. A large number of
videos, sequences, and frames are used to acquire the key points that make up the dataset for training and
testing. The dataset can be built in real time, or existing videos and datasets can be used. The key-point
dataset for training is then transformed into NumPy arrays.
The dataset is then split into test and train sets, and the data is fed to an LSTM network for
training. After successful training, sign language predictions can be made. Some of the real-time inputs and
results achieved are shown in Figure 9 and Table 1. The same key point extraction technique is used
afterwards to make real-time sign language predictions: the Kinect sensors capture sequences of facial, hand,
and pose landmarks, which are processed frame by frame and matched against the training data. After the
user has successfully performed signs, real-time sign language predictions are made. These predictions are
then presented to the user in the form of Urdu text and Urdu audio.
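The frame-by-frame prediction loop described above can be sketched as a sliding window over incoming key-point vectors. The sign labels, the 0.7 confidence threshold, and the model interface below are hypothetical stand-ins for the trained LSTM.

```python
from collections import deque

import numpy as np

SIGNS = ["salam", "shukriya", "haan", "nahin"]  # hypothetical sign labels
WINDOW = 30  # frames per sequence, matching the training clips

def predict_stream(frames, model, threshold=0.7):
    """Slide a 30-frame window over incoming key-point vectors and emit a
    sign label whenever the model is confident enough. `model` is any
    object with a predict(batch) -> class-probability method."""
    buffer = deque(maxlen=WINDOW)  # keeps only the most recent 30 frames
    detected = []
    for kp in frames:
        buffer.append(kp)
        if len(buffer) == WINDOW:
            probs = model.predict(np.expand_dims(np.array(buffer), axis=0))[0]
            if probs.max() >= threshold:
                label = SIGNS[int(probs.argmax())]
                # suppress consecutive duplicates of the same sign
                if not detected or detected[-1] != label:
                    detected.append(label)
    return detected
```

In the live system, each `kp` would come from the key-point extraction step applied to one Kinect frame, and each emitted label would be rendered as Urdu text and audio.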
Figure 9. Real-time input sign language to the model
Table 1. Accuracy achieved
Sign language alphabet Result in %
ا 80%
ب 85%
ت 63%
خ 65%
س 87%
ص 79%
ز 64%
ض 64%
م 63%
ل 85%
4. CONCLUSION
In this research work, we have proposed a methodology to help the deaf community in Pakistan: we
have designed and developed a framework to solve the problem hearing-impaired people face when
communicating with hearing people. The purpose is to help reduce the struggles of the hearing-impaired
people of Pakistan and make them a more integrated part of society. The solution is simple,
effective, and affordable. The proposed system was tested in the Computer Science laboratory of IQRA
University. Experimental results show that this Kinect-based system produces promising results
and reliably meets the requirements for solving the communication problem faced by hearing-impaired
people.
BIOGRAPHIES OF AUTHORS
Saad Ahmed received an MS degree in Computer Science from Hamdard University, Karachi, Pakistan, in 2012 and a Ph.D. degree in Computer Science from the NED University of Engineering and Technology, Karachi, Pakistan, in 2019. He currently works as an assistant professor at the Department of Computer Science, IQRA University, Karachi, Pakistan. His current research interests include natural language processing, data mining, and big data analysis and their applications in interdisciplinary domains. He can be contacted at email: saadahmed@iqra.edu.pk.
Comput Sci Inf Technol ISSN: 2722-3221
Pakistan sign language to Urdu translator using Kinect (Saad Ahmed)
Hasnain Shafiq holds a BS degree in Computer Science from IQRA University. He is currently working in the software industry and pursuing certifications in the domain of data science from platforms such as DataCamp and Coursera. He can be contacted at email: hasnainshafeeq@gmail.com.
Yamna Raheel holds a BS degree in Computer Science from IQRA University. She is pursuing teaching as a profession and is currently enrolled in various technology-related courses. She can be contacted at email: yamnaraheel02@gmail.com.
Noor Chishti holds a BS degree in Computer Science from IQRA University. She is currently pursuing AWS certification in the domain of cloud computing. She can be contacted at email: noorrchishti@gmail.com.
Syed Muhammad Asad holds a BS degree in Computer Science from IQRA University. He is currently working in the software industry as an Apex developer. He can be contacted at email: asadsyed924@gmail.com.