Bulletin of Electrical Engineering and Informatics
Vol. 9, No. 5, October 2020, pp. 2082~2089
ISSN: 2302-9285, DOI: 10.11591/eei.v9i5.2440
Journal homepage: http://beei.org
A gesture recognition system for the Colombian sign language
based on convolutional neural networks
Fredy H. Martínez S.¹, Faiber Robayo Betancourt², Mario Arbulú³
¹Universidad Distrital Francisco José de Caldas, Facultad Tecnológica, Colombia
²Universidad Surcolombiana, Facultad de Ingeniería, Departamento de Electrónica, Colombia
³Universidad Nacional Abierta y a Distancia (UNAD), Colombia
Article history: received Jan 20, 2020; revised Mar 6, 2020; accepted Apr 4, 2020.

ABSTRACT
Sign languages (or signed languages) are languages that use visual
techniques, primarily hand gestures, to transmit information and enable
communication with deaf-mute people. Such a language is traditionally
learned only by people with this limitation, which is why communication
between deaf and non-deaf people is difficult. To address this problem we
propose an autonomous model based on convolutional networks to translate
the Colombian Sign Language (CSL) into standard Spanish text. The scheme
uses characteristic images of each static sign of the language from a base of
24000 images (1000 images per category, with 24 categories) to train a deep
convolutional network of the NASNet type (Neural Architecture Search
Network). The images in each category were taken from different people
with positional variations to cover multiple viewing angles. The performance
evaluation showed that the system is capable of recognizing all 24 signs
used, with an 88% recognition rate.
Keywords:
Convolutional network
Deaf-mute people
NASNet
Sign language
Spanish text
Visual techniques
This is an open access article under the CC BY-SA license.
Corresponding Author:
Fredy H. Martínez S.,
Universidad Distrital Francisco José de Caldas,
Cl. 68d Bis ASur #49F-70, Bogotá, Colombia.
Email: fhmartinezs@udistrital.edu.co
1. INTRODUCTION
The term deaf-mute describes people who are deaf from birth and who have great difficulty
communicating by voice [1, 2]. It was formerly thought that deaf-mute people could not communicate;
this is no longer the case, as many sign languages have been developed [3].
Many countries have their own sign language, which is learned by their deaf community [4, 5]. The problem
is that this language is normally unknown to non-deaf people, which makes communication outside the group
impossible. Yet, under current precepts of equality and social inclusion, it is clear that these sign
languages are as important as spoken language [6, 7]. It is estimated that 5% of the world's population
has disabling hearing loss; in Colombia, by 2019 this corresponded to just over 2.4 million people [8].
Sign language is a language formed by gestures that can be visually identified [9].
Normally, postures of the hands and fingers, and even facial expressions, are used, all with a specific
meaning to form symbols according to a code [10]. Each sign language is made up of tens or even hundreds
of different signs, and in many cases the differences between them are just small changes of the hand.
In addition, as in written language, each person leaves a personal mark on the representation of these
signs [9]. Sign language recognition systems develop schemes and algorithms that allow these symbols to be
correctly identified and translated into text recognizable by the community at large. There are two strategies
for the development of these systems: schemes based on artificial vision and schemes based on sensors.
In the first case, artificial vision systems apply digital processing to images (or video frames)
captured by digital cameras [11, 12]. The algorithms usually rely on filters and digital image processing,
or on pattern recognition schemes using bio-inspired techniques [13-15]. In the second case, sensors are used to
detect specific elements on the hands, such as gloves, colors, flex sensors or textures [16, 17]. This scheme
requires modifying the normal conditions of language communication, which makes it a less
attractive solution.
Among the strategies implemented in these recognition systems, those with the highest performance
and impact on development and research should be highlighted, particularly those that can be used without
modifying the natural behavior of the person. A widely used strategy, both for digital image
processing (which is, in fact, where it is most used) and for detection with specific sensors,
is to segment the region of the image containing the sign (the hand) and then extract characteristics from
each segment for subsequent classification [11, 18]. One of the most widely used tools for the classification
process is neural networks and, more recently, deep learning [19, 20]. These tools have the advantage
of being able to generalize specific characteristics of the signs and detect them in different hands and people,
in a similar way to the human brain [21].
It should also be noted that there are three translation strategies a sign
language translation system can perform: spelling (letter by letter), isolated words, and continuous
sentences separated by signs [16]. Fingerspelling is used in situations where new words, names of people and places,
or words without known signs are spelled in the air by hand movements [22, 23]. This is the most basic
scheme, and the one where we find the most research. Isolated word recognition analyzes a sequence of images or
hand movement signals that together represent a complete sign [24, 25]. In this scheme, letter recognition
should be supported by a written-language dictionary to identify complete words. Finally, the most complex
scheme is that of continuous sentences, which integrates grammatical strategies and is, in fact,
the most important for real-life communication [26, 27].
This article proposes a strategy for the recognition of static symbols of the Colombian Sign
Language (CSL) for their translation into Spanish letters (fingerspelling). The system uses a deep neural
network trained and tuned with a database of 24000 images distributed along 24 categories. The performance
of the scheme was measured using traditional machine learning metrics. The rest of this article is structured
as follows. Section 2 presents preliminary concepts and the problem formulation. Section 3 illustrates the design
profile and development methodology. Section 4 presents the preliminary results. Finally,
Section 5 presents our conclusions.
2. PROBLEM FORMULATION
For some years now, our research group has been carrying out a project aimed at
developing a robotic platform (an assistive robot) for the care of children, sick people and the elderly.
Within this project, specific works are framed on path planning in observable dynamic environments,
direct human-machine interaction, automatic identification of emotions in humans, and the problem
posed here of identifying visual signs of the CSL. Each of these is an open research problem in robotics,
which the group seeks to solve in order to implement complex tasks in service robotics.
In the specific case of the visual sign recognition system, the aim is to adapt the robotic platform for
direct interaction with deaf-mute people. The robot has two cameras in the head to capture images with signs
and has a touch screen to display information in response to the user. Therefore, the proposed identification
system should use as an input mechanism the digital image of the sign represented by the user, and as
verification and response mechanism the representation of the identified letter on the screen. The idea is to
develop a recognition method that can be scaled by increasing the size of the vocabulary, and that would be
robust to the variations of the angle of image capture, and those due to the specific characteristics of
the people (type of hand, skin color, clothing, height or variability in the position of the fingers).
Different people have hands of different dimensions and geometry; for this reason, the same
sign made by two different people can produce different measurements when analyzing landmarks in
images. These variations should not alter the operation of the classifier.
To represent a letter in the CSL, one of the hands is used (the right hand is assumed,
but the scheme can equally be trained to identify either hand, making it insensitive to the dominant hand)
together with all its fingers. Most signs correspond to static positions of the fingers; however, some letters
require movement of the fingers and hand (the letters Ñ, S, and Z). This first model uses only static positions
of the hand, so these three letters are excluded, which means the classifier has a total of 24
categories, as shown in Figure 1.
Figure 1. CSL alphabet used in the proposed recognition system
The robot must be in front of the user to capture the image corresponding to the sign, but there is no
fixed correct distance for the capture. Many sign language identification systems are distance-dependent,
so the same sign represented by the same person but captured at different distances can produce different
results. This is a problem for real systems, since from the user's perspective the sign does not depend
on distance. However, the distance to the robot (and therefore to the image capture camera) must be
consistent with the distance between two people talking, i.e. about 0.2 to 0.7 m. The background of
the image is also not restricted to a specific type or color, since in real operation the user can
wear any clothing and be located in any space of the house, school or place of interaction with the robot.
The system must be able to identify the parts of the foreground and the information encoded there.
3. RESEARCH METHOD
To implement the identification algorithm we use the NASNet (Neural Architecture Search
Network) deep neural network. This convolutional network was introduced in early 2018 by the Google
Brain team; in its design, they sought to find a building block with high performance in
the categorization of a small image dataset (CIFAR-10) and then generalized the block to a larger dataset
(ImageNet). In this way, the architecture achieves high classification capacity with a reduced number of
parameters, as shown in Figure 2.
Figure 2. ImageNet architecture (NASNet)
We built our own training database, samples of which can be seen in Figure 3. The dataset was built by
capturing images for each of the 24 categories in Figure 1. In total, we used 1000 images per category,
for a total of 24000 images. The same number of images was kept in each category to achieve class balance
in the trained model. To separate the images by category, the images corresponding to each class
were placed together in a folder labeled with the letter c, an order number, and the corresponding letter.
Different people were used to capture the images, in different locations, viewing angles, distances to
the camera and lighting conditions. This was done so that the classification algorithm would identify
the information of common interest in each category, and to increase robustness against changes in
the environment.
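As a concrete illustration of this folder convention, the short sketch below enumerates one folder per class. The exact folder names (e.g. "c01_A") and the choice of 24 letters (the Spanish alphabet without the three dynamic signs Ñ, S and Z) are assumptions for illustration; the text only specifies "the letter c, an order number, and the corresponding letter".

```python
# Hypothetical reconstruction of the category folder layout described above.
# The folder name pattern "c01_A" is an assumption, not taken from the paper.
# The 24 static letters: A-Z without S and Z (the dynamic Ñ is also excluded).
LETTERS = [c for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ" if c not in ("S", "Z")]

def category_folders(letters=LETTERS):
    """Map each class index to a folder name like 'c01_A'."""
    return {i: f"c{i + 1:02d}_{letter}" for i, letter in enumerate(letters)}

folders = category_folders()
print(len(folders))   # 24 categories, 1000 images each -> 24000 images
print(folders[0])     # 'c01_A'
```

Keeping exactly 1000 images in each of these folders is what gives the class balance mentioned above.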
To improve the performance of the network we randomly shuffle the images during training.
With the same intention, we scale all images to a size of 256x256 pixels. The model does not
preserve the aspect ratio of the images; we believe the information in the images is not altered by this
change, and a fixed image size is important for training the network. For training purposes we
normalize the value of each RGB color matrix of the images to the range 0 to 1, which corresponds to
the operating range of the neural network. The database was randomly divided into two groups, one for
training and one for testing: 75% of the data for training and 25% for performance evaluation.
In the network design, the number of nodes in the input layer is defined by the size to which
the images are resized, i.e. 256x256x3 considering the three color matrices. The number of nodes in
the output layer is defined by the number of categories for the classification of the images, i.e. 24 nodes.
As optimization and cost metrics in training we use stochastic gradient descent, categorical cross-entropy,
accuracy, and MSE.
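The normalization, shuffling and 75/25 split described above can be sketched with NumPy alone; this is an illustrative reconstruction, not the authors' code, and it assumes the images have already been resized to 256x256 upstream (e.g. with Pillow or OpenCV).

```python
import numpy as np

def preprocess_and_split(images, labels, test_frac=0.25, seed=0):
    """Normalize uint8 RGB images to [0, 1] and make a shuffled
    75/25 train/test split, as described in the text. Arrays are
    assumed already resized to (N, 256, 256, 3)."""
    x = images.astype(np.float32) / 255.0          # RGB values into range 0-1
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))                  # random shuffle of indices
    n_test = int(len(x) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return (x[train_idx], labels[train_idx]), (x[test_idx], labels[test_idx])

# Tiny synthetic stand-in for the real 24000-image dataset
imgs = np.random.randint(0, 256, size=(40, 256, 256, 3), dtype=np.uint8)
labs = np.random.randint(0, 24, size=40)
(train_x, train_y), (test_x, test_y) = preprocess_and_split(imgs, labs)
print(train_x.shape, test_x.shape)   # (30, 256, 256, 3) (10, 256, 256, 3)
```

The fixed seed only makes the example reproducible; in training the shuffle would vary per run.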
Figure 3. Sample database of CSL images used for model training
The neural network had a total of 4,295,084 parameters, of which 4,258,346 were adjusted
(trained). The neural model training code was developed in Python 3.7.3 within the Keras framework (using
the TensorFlow backend). As libraries, we used Scikit-learn 0.20.3, Pandas 0.24.2, NLTK 3.4, NumPy 1.16.2,
and SciPy 1.2.1.
4. RESULTS AND DISCUSSION
The training was performed during 10 epochs, and as metrics of adjustment during the training,
the function of loss of cross-entropy (categorical cross-entropy), the accuracy (or success rate) and the MSE
(Mean Squared Error) were calculated. The results during the training of these three metrics, for both training
and validation data, are shown in Figures 4, 5 and 6. As can be seen, the reduction of error is constant for
both training and validation data, with equivalent behavior for accuracy, which follows correct learning
without problems of over-or under-adjustment.
After the model was trained, its performance was checked against the validation data by means of its
confusion matrix, the precision, recall and F1-score metrics, and the ROC
(receiver operating characteristic) curve. Figures 7, 8 and 9 show the results achieved. The diagonal in
Figure 7 shows a very good classification hot zone, which coincides with the per-category data of the metrics
summarized in Figure 8.
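How these per-class figures derive from a confusion matrix can be shown with a small NumPy sketch; this is an illustrative computation of the reported metric types, not the authors' evaluation code.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """Confusion matrix plus per-class precision, recall and F1-score
    (the metric types shown in Figures 7 and 8)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                # rows: actual, cols: predicted
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # column sums: predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # row sums: actual counts
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return cm, precision, recall, f1

# Toy example with 3 classes; the paper's case has 24
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm, p, r, f1 = per_class_metrics(y_true, y_pred, 3)
print(cm.trace() / cm.sum())   # overall accuracy: 4 of 6 correct
```

The "hot zone" on the diagonal of Figure 7 corresponds to large values of `tp` relative to the off-diagonal entries.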
Figure 4. Training and validation loss
Figure 5. Training and validation accuracy
Figure 6. Training and validation MSE (Mean Squared Error)
Figure 7. Confusion matrix
Figure 8. Precision, recall and F1-score metrics
Figure 9. ROC curve and ROC area for each class
The ROC (receiver operating characteristic) curves in Figure 9 graphically represent, through the areas
under the curves, the true positive and false positive rates of the model's classification; the best
categorization is achieved with the largest areas (blue).
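The area under a one-vs-rest ROC curve can be computed directly as a rank statistic: the probability that a random positive example receives a higher score than a random negative one. The sketch below illustrates this for a single class; it is an assumption-labeled illustration, not the code used to produce Figure 9.

```python
import numpy as np

def auc_binary(scores, labels):
    """Area under the ROC curve for one class (one-vs-rest), computed
    as the Mann-Whitney rank statistic. Illustrative only; not the
    authors' evaluation code."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Count pairs where a positive outscores a negative (ties count half)
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# A perfectly separated class gives AUC = 1.0
print(auc_binary([0.9, 0.8, 0.2, 0.1], [True, True, False, False]))  # 1.0
```

An average per-class area near 0.99, as reported below for this model, means positives are almost always ranked above negatives for every letter.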
5. CONCLUSION
In this paper, we present the development of a system for visually recognizing the CSL and translating it
into normal Spanish text using a NASNet-type convolutional network. The CSL has static symbols that represent
each of the letters used in the Spanish language, Colombia's mother tongue. The model was developed to be
integrated into an assistive robot. The images used in training (1000 in each of the 24 categories) were
randomly shuffled and resized to 256x256 pixels. For the network training, we used categorical cross-entropy
loss, stochastic gradient descent, accuracy, and MSE. The design allows for scaling to new
symbols. The results showed high performance: 88% accuracy measured on images unknown to
the network (images not used in training, with unstructured environments). The model's confusion matrix
shows a very low percentage of false positives, which is supported by the ROC curve of each category, with
average values of 0.99, indicating very high accuracy. In addition, these values are consistent with
the recall and F1-score metrics. This denotes an important contribution of the model, since it uses
semi-structured images without previous processing, very close to real conditions, unlike previous studies
where the images are conditioned in position, background and lighting and require pre-processing.
ACKNOWLEDGEMENTS
This work was supported by the Universidad Distrital, the Universidad Surcolombiana,
and the Universidad Nacional Abierta y a Distancia (UNAD). The views expressed in this paper are not
necessarily endorsed by the above-mentioned universities. The authors thank the ARMOS research group for
the evaluation carried out on prototypes.
REFERENCES
[1] N. Nagori and V. Malode, "Communication interface for deaf-mute people using Microsoft Kinect,"
International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT 2016), 2016,
doi: 10.1109/ICACDOT.2016.7877664.
[2] A. R. Hasdak, et al., "Deaf-Vibe: A Vibrotactile Communication Device Based on Morse Code for Deaf-Mute
Individuals," 2018 9th IEEE Control and System Graduate Research Colloquium (ICSGRC), pp. 39-44, 2018, doi:
10.1109/ICSGRC.2018.8657547.
[3] A. B. Jani, N. A. Kotak and A. K. Roy, "Sensor Based Hand Gesture Recognition System for English Alphabets
Used in Sign Language of Deaf-Mute People," 2018 IEEE SENSORS, pp. 1-4, 2018, doi:
10.1109/ICSENS.2018.8589574.
[4] E. S. Haq, D. Suwardiyanto, and M. Huda, "Indonesian sign language recognition application for two-way
communication with deaf-mute people," 3rd International Conference on Information Technology, Information
System and Electrical Engineering (ICITISEE 2018), pp. 313-318, 2018, doi: 10.1109/ICITISEE.2018.8720982.
[5] Y. F. Admasu and K. Raimond, "Ethiopian sign language recognition using Artificial Neural Network," 2010 10th
International Conference on Intelligent Systems Design and Applications, pp. 995-1000, 2010, doi:
10.1109/ISDA.2010.5687057.
[6] L. Peluso, J. Larrinaga, and A. Lodi, "Public policies on deaf education: Comparative analysis between
Uruguay and Brazil," Second International Handbook of Urban Education, vol. 2, no. 1, pp. 613-626, 2017, doi:
10.1007/978-3-319-40317-5_33.
[7] F. Lan, "The role of art education in the improvement of employment competitiveness of deaf college students,"
2nd International Conference on Art Studies: Science, Experience, Education (ICASSEE 2018), pp. 879-883,
2018, doi: 10.2991/icassee-18.2018.181.
[8] L. Vallejo, “El censo de 2018 y sus implicaciones en Colombia,” Apuntes del Censo, vol. 38, no. 67, pp.9–12,
2019, doi: https://doi.org/10.19053/01203053.v36.n64.2017.6511.
[9] S. Goldin and D. Brentari, “Gesture, sign, and language: The coming of age of sign language and gesture studies,”
Behavioral and Brain Sciences, vol. 40, no. 46, pp. 1–60, 2017, doi: https://doi.org/10.1017/S0140525X15001247.
[10] Y. Motamedi, et al., “The cultural evolution of complex linguistic constructions in artificial sign languages,”
In 39th Annual Meeting of the Cognitive Science Society (CogSci 2017), pp. 2760–2765, 2017.
[11] J. Guerrero and W. Pérez, “FPGA-based translation system from colombian sign language to text,” DYNA, vol. 82,
no. 189, pp. 172–181, 2015, doi: http://dx.doi.org/10.15446/dyna.v82n189.43075.
[12] B. Villa, V. Valencia, and J. Berrio, "Digital image processing applied on static sign language recognition
system," Prospectiva, vol. 16, no. 2, pp. 41-48, 2018, doi: 10.15665/rp.v16i2.1488.
[13] P. Kumar, et al., “Coupled hmm-based multi-sensor data fusion for sign language recognition,” Pattern
Recognition Letters, vol. 86, pp. 1–8, 2017, doi: https://doi.org/10.1016/j.patrec.2016.12.004.
[14] S. Hore, et al., “Indian sign language recognition using optimized neural networks,” Advances in Intelligent
Systems and Computing, vol. 455, no. 1, pp. 553–563, 2017, doi: https://doi.org/10.1007/978-3-319-38771-0_54.
[15] C. López, “Evaluación de desempeño de dos técnicas de optimización bio-inspiradas: Algoritmos genéticos y
enjambre de partículas,” Tekhnê, vol.11, no. 1, pp. 49–58, 2014.
[16] S. M. Kamal, et al., "Technical Approaches to Chinese Sign Language Processing: A Review," in IEEE Access,
vol. 7, pp. 96926-96935, 2019, doi: 10.1109/ACCESS.2019.2929174.
[17] J. Gałka, et al., "Inertial Motion Sensing Glove for Sign Language Gesture Acquisition and Recognition," in IEEE
Sensors Journal, vol. 16, no. 16, pp. 6310-6316, Aug.15, 2016, doi: 10.1109/JSEN.2016.2583542.
[18] S. Chand, A. Singh, and R. Kumar, “A survey on manual and non-manual sign language recognition for isolated
and continuous sign,” International Journal of Applied Pattern Recognition, vol. 3, no. 2, pp. 99–134, 2016,
doi: 10.1504/IJAPR.2016.079048.
[19] F. Martínez, C. Penagos, and L. Pacheco, ”Deep regression models for local interaction in multi-agent robot tasks,”
Lecture Notes in Computer Science, vol. 10942, pp. 66–73, 2018, doi: 10.1007/978-3-319-93818-9_7.
[20] F. Martínez, A. Rendón, and M. Arbulú, “A data-driven path planner for small autonomous robots using deep
regression models,” Lecture Notes in Computer Science, vol. 10943, pp. 596–603, 2018,
doi: https://doi.org/10.1007/978-3-319-93803-5_56.
[21] E. B. Villagomez, et al., "Hand Gesture Recognition for Deaf-Mute using Fuzzy-Neural Network," 2019 IEEE
International Conference on Consumer Electronics - Asia (ICCE-Asia), pp. 30-33, 2019, doi: 10.1109/ICCE-
Asia46551.2019.8942220.
[22] J. Scott, S. Hansen, and A. Lederberg, “Fingerspelling and print: Understanding the word reading of deaf children,”
American Annals of the Deaf, vol. 164, no. 4, pp. 429–449, 2019, doi: 10.1353/aad.2019.0026.
[23] A. Lederberg, L. Branum, and M. Webb., “Modality and interrelations among language, reading, spoken
phonological awareness, and fingerspelling,” Journal of deaf studies and deaf education, vol. 24, no. 4,
pp. 408-423, 2019, doi: 10.1093/deafed/enz011.
[24] Q. Zhang, et al., "MyoSign: enabling end-to-end sign language recognition with wearables," 24th International
Conference on Intelligent User Interfaces, pp. 650-660, 2019, doi:
10.1145/3301275.3302296.
[25] S. Fakhfakh and Y. Jemaa, “Gesture recognition system for isolated word sign language based on key-point
trajectory matrix,” Computación y Sistemas, vol. 22, no. 4, pp. 1415–1430, 2018, doi: 10.13053/CyS-22-4-3046.
[26] P. V. V. Kishore, et al., "Optical Flow Hand Tracking and Active Contour Hand Shape Features for Continuous
Sign Language Recognition with Artificial Neural Networks," 2016 IEEE 6th International Conference on
Advanced Computing (IACC), pp. 346-351, 2016, doi: 10.1109/IACC.2016.71.
[27] K. Li, Z. Zhou, and C. Lee, “Sign transition modeling and a scalable solution to continuous sign language
recognition for real-world applications,” ACM Transactions on Accessible Computing, vol. 8, no. 2, pp. 1–23, 2016,
doi: https://doi.org/10.1145/2850421.
Hand Gesture Recognition System for Human-Computer Interaction with Web-Camijsrd.com
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbRSIS International
 
Real-Time Sign Language Detector
Real-Time Sign Language DetectorReal-Time Sign Language Detector
Real-Time Sign Language DetectorIRJET Journal
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language DetectionIRJET Journal
 
electronics-11-01780-v2.pdf
electronics-11-01780-v2.pdfelectronics-11-01780-v2.pdf
electronics-11-01780-v2.pdfNaveenkushwaha18
 
Communication among blind, deaf and dumb People
Communication among blind, deaf and dumb PeopleCommunication among blind, deaf and dumb People
Communication among blind, deaf and dumb PeopleIJAEMSJORNAL
 
IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...
IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...
IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...IRJET Journal
 
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATION
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATIONA SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATION
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATIONijnlc
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Languageijtsrd
 
Pakistan sign language to Urdu translator using Kinect
Pakistan sign language to Urdu translator using KinectPakistan sign language to Urdu translator using Kinect
Pakistan sign language to Urdu translator using KinectCSITiaesprime
 
Ijaia040203
Ijaia040203Ijaia040203
Ijaia040203ijaia
 
Review Paper on Two Way Communication System with Binary Code Medium for Peop...
Review Paper on Two Way Communication System with Binary Code Medium for Peop...Review Paper on Two Way Communication System with Binary Code Medium for Peop...
Review Paper on Two Way Communication System with Binary Code Medium for Peop...IRJET Journal
 
IRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET Journal
 
Identifier of human emotions based on convolutional neural network for assist...
Identifier of human emotions based on convolutional neural network for assist...Identifier of human emotions based on convolutional neural network for assist...
Identifier of human emotions based on convolutional neural network for assist...TELKOMNIKA JOURNAL
 

Similar to A gesture recognition system for the Colombian sign language based on convolutional neural networks (20)

183
183183
183
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Sign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeSign Language Recognition using Mediapipe
Sign Language Recognition using Mediapipe
 
Vision Based Approach to Sign Language Recognition
Vision Based Approach to Sign Language RecognitionVision Based Approach to Sign Language Recognition
Vision Based Approach to Sign Language Recognition
 
Sign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture AnalysisSign Language Recognition with Gesture Analysis
Sign Language Recognition with Gesture Analysis
 
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-CamHand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
 
[IJET-V1I5P9] Author: Prutha Gandhi, Dhanashri Dalvi, Pallavi Gaikwad, Shubha...
[IJET-V1I5P9] Author: Prutha Gandhi, Dhanashri Dalvi, Pallavi Gaikwad, Shubha...[IJET-V1I5P9] Author: Prutha Gandhi, Dhanashri Dalvi, Pallavi Gaikwad, Shubha...
[IJET-V1I5P9] Author: Prutha Gandhi, Dhanashri Dalvi, Pallavi Gaikwad, Shubha...
 
Basic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and DumbBasic Gesture Based Communication for Deaf and Dumb
Basic Gesture Based Communication for Deaf and Dumb
 
Real-Time Sign Language Detector
Real-Time Sign Language DetectorReal-Time Sign Language Detector
Real-Time Sign Language Detector
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language Detection
 
electronics-11-01780-v2.pdf
electronics-11-01780-v2.pdfelectronics-11-01780-v2.pdf
electronics-11-01780-v2.pdf
 
Communication among blind, deaf and dumb People
Communication among blind, deaf and dumb PeopleCommunication among blind, deaf and dumb People
Communication among blind, deaf and dumb People
 
IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...
IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...
IRJET-A System for Recognition of Indian Sign Language for Deaf People using ...
 
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATION
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATIONA SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATION
A SIGNATURE BASED DRAVIDIAN SIGN LANGUAGE RECOGNITION BY SPARSE REPRESENTATION
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Language
 
Pakistan sign language to Urdu translator using Kinect
Pakistan sign language to Urdu translator using KinectPakistan sign language to Urdu translator using Kinect
Pakistan sign language to Urdu translator using Kinect
 
Ijaia040203
Ijaia040203Ijaia040203
Ijaia040203
 
Review Paper on Two Way Communication System with Binary Code Medium for Peop...
Review Paper on Two Way Communication System with Binary Code Medium for Peop...Review Paper on Two Way Communication System with Binary Code Medium for Peop...
Review Paper on Two Way Communication System with Binary Code Medium for Peop...
 
IRJET - Paint using Hand Gesture
IRJET - Paint using Hand GestureIRJET - Paint using Hand Gesture
IRJET - Paint using Hand Gesture
 
Identifier of human emotions based on convolutional neural network for assist...
Identifier of human emotions based on convolutional neural network for assist...Identifier of human emotions based on convolutional neural network for assist...
Identifier of human emotions based on convolutional neural network for assist...
 

More from journalBEEI

Square transposition: an approach to the transposition process in block cipher
Square transposition: an approach to the transposition process in block cipherSquare transposition: an approach to the transposition process in block cipher
Square transposition: an approach to the transposition process in block cipherjournalBEEI
 
Hyper-parameter optimization of convolutional neural network based on particl...
Hyper-parameter optimization of convolutional neural network based on particl...Hyper-parameter optimization of convolutional neural network based on particl...
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
 
Supervised machine learning based liver disease prediction approach with LASS...
Supervised machine learning based liver disease prediction approach with LASS...Supervised machine learning based liver disease prediction approach with LASS...
Supervised machine learning based liver disease prediction approach with LASS...journalBEEI
 
A secure and energy saving protocol for wireless sensor networks
A secure and energy saving protocol for wireless sensor networksA secure and energy saving protocol for wireless sensor networks
A secure and energy saving protocol for wireless sensor networksjournalBEEI
 
Plant leaf identification system using convolutional neural network
Plant leaf identification system using convolutional neural networkPlant leaf identification system using convolutional neural network
Plant leaf identification system using convolutional neural networkjournalBEEI
 
Customized moodle-based learning management system for socially disadvantaged...
Customized moodle-based learning management system for socially disadvantaged...Customized moodle-based learning management system for socially disadvantaged...
Customized moodle-based learning management system for socially disadvantaged...journalBEEI
 
Understanding the role of individual learner in adaptive and personalized e-l...
Understanding the role of individual learner in adaptive and personalized e-l...Understanding the role of individual learner in adaptive and personalized e-l...
Understanding the role of individual learner in adaptive and personalized e-l...journalBEEI
 
Prototype mobile contactless transaction system in traditional markets to sup...
Prototype mobile contactless transaction system in traditional markets to sup...Prototype mobile contactless transaction system in traditional markets to sup...
Prototype mobile contactless transaction system in traditional markets to sup...journalBEEI
 
Wireless HART stack using multiprocessor technique with laxity algorithm
Wireless HART stack using multiprocessor technique with laxity algorithmWireless HART stack using multiprocessor technique with laxity algorithm
Wireless HART stack using multiprocessor technique with laxity algorithmjournalBEEI
 
Implementation of double-layer loaded on octagon microstrip yagi antenna
Implementation of double-layer loaded on octagon microstrip yagi antennaImplementation of double-layer loaded on octagon microstrip yagi antenna
Implementation of double-layer loaded on octagon microstrip yagi antennajournalBEEI
 
The calculation of the field of an antenna located near the human head
The calculation of the field of an antenna located near the human headThe calculation of the field of an antenna located near the human head
The calculation of the field of an antenna located near the human headjournalBEEI
 
Exact secure outage probability performance of uplinkdownlink multiple access...
Exact secure outage probability performance of uplinkdownlink multiple access...Exact secure outage probability performance of uplinkdownlink multiple access...
Exact secure outage probability performance of uplinkdownlink multiple access...journalBEEI
 
Design of a dual-band antenna for energy harvesting application
Design of a dual-band antenna for energy harvesting applicationDesign of a dual-band antenna for energy harvesting application
Design of a dual-band antenna for energy harvesting applicationjournalBEEI
 
Transforming data-centric eXtensible markup language into relational database...
Transforming data-centric eXtensible markup language into relational database...Transforming data-centric eXtensible markup language into relational database...
Transforming data-centric eXtensible markup language into relational database...journalBEEI
 
Key performance requirement of future next wireless networks (6G)
Key performance requirement of future next wireless networks (6G)Key performance requirement of future next wireless networks (6G)
Key performance requirement of future next wireless networks (6G)journalBEEI
 
Noise resistance territorial intensity-based optical flow using inverse confi...
Noise resistance territorial intensity-based optical flow using inverse confi...Noise resistance territorial intensity-based optical flow using inverse confi...
Noise resistance territorial intensity-based optical flow using inverse confi...journalBEEI
 
Modeling climate phenomenon with software grids analysis and display system i...
Modeling climate phenomenon with software grids analysis and display system i...Modeling climate phenomenon with software grids analysis and display system i...
Modeling climate phenomenon with software grids analysis and display system i...journalBEEI
 
An approach of re-organizing input dataset to enhance the quality of emotion ...
An approach of re-organizing input dataset to enhance the quality of emotion ...An approach of re-organizing input dataset to enhance the quality of emotion ...
An approach of re-organizing input dataset to enhance the quality of emotion ...journalBEEI
 
Parking detection system using background subtraction and HSV color segmentation
Parking detection system using background subtraction and HSV color segmentationParking detection system using background subtraction and HSV color segmentation
Parking detection system using background subtraction and HSV color segmentationjournalBEEI
 
Quality of service performances of video and voice transmission in universal ...
Quality of service performances of video and voice transmission in universal ...Quality of service performances of video and voice transmission in universal ...
Quality of service performances of video and voice transmission in universal ...journalBEEI
 

More from journalBEEI (20)

Square transposition: an approach to the transposition process in block cipher
Square transposition: an approach to the transposition process in block cipherSquare transposition: an approach to the transposition process in block cipher
Square transposition: an approach to the transposition process in block cipher
 
Hyper-parameter optimization of convolutional neural network based on particl...
Hyper-parameter optimization of convolutional neural network based on particl...Hyper-parameter optimization of convolutional neural network based on particl...
Hyper-parameter optimization of convolutional neural network based on particl...
 
Supervised machine learning based liver disease prediction approach with LASS...
Supervised machine learning based liver disease prediction approach with LASS...Supervised machine learning based liver disease prediction approach with LASS...
Supervised machine learning based liver disease prediction approach with LASS...
 
A secure and energy saving protocol for wireless sensor networks
A secure and energy saving protocol for wireless sensor networksA secure and energy saving protocol for wireless sensor networks
A secure and energy saving protocol for wireless sensor networks
 
Plant leaf identification system using convolutional neural network
Plant leaf identification system using convolutional neural networkPlant leaf identification system using convolutional neural network
Plant leaf identification system using convolutional neural network
 
Customized moodle-based learning management system for socially disadvantaged...
Customized moodle-based learning management system for socially disadvantaged...Customized moodle-based learning management system for socially disadvantaged...
Customized moodle-based learning management system for socially disadvantaged...
 
Understanding the role of individual learner in adaptive and personalized e-l...
Understanding the role of individual learner in adaptive and personalized e-l...Understanding the role of individual learner in adaptive and personalized e-l...
Understanding the role of individual learner in adaptive and personalized e-l...
 
Prototype mobile contactless transaction system in traditional markets to sup...
Prototype mobile contactless transaction system in traditional markets to sup...Prototype mobile contactless transaction system in traditional markets to sup...
Prototype mobile contactless transaction system in traditional markets to sup...
 
Wireless HART stack using multiprocessor technique with laxity algorithm
Wireless HART stack using multiprocessor technique with laxity algorithmWireless HART stack using multiprocessor technique with laxity algorithm
Wireless HART stack using multiprocessor technique with laxity algorithm
 
Implementation of double-layer loaded on octagon microstrip yagi antenna
Implementation of double-layer loaded on octagon microstrip yagi antennaImplementation of double-layer loaded on octagon microstrip yagi antenna
Implementation of double-layer loaded on octagon microstrip yagi antenna
 
The calculation of the field of an antenna located near the human head
The calculation of the field of an antenna located near the human headThe calculation of the field of an antenna located near the human head
The calculation of the field of an antenna located near the human head
 
Exact secure outage probability performance of uplinkdownlink multiple access...
Exact secure outage probability performance of uplinkdownlink multiple access...Exact secure outage probability performance of uplinkdownlink multiple access...
Exact secure outage probability performance of uplinkdownlink multiple access...
 
Design of a dual-band antenna for energy harvesting application
Design of a dual-band antenna for energy harvesting applicationDesign of a dual-band antenna for energy harvesting application
Design of a dual-band antenna for energy harvesting application
 
Transforming data-centric eXtensible markup language into relational database...
Transforming data-centric eXtensible markup language into relational database...Transforming data-centric eXtensible markup language into relational database...
Transforming data-centric eXtensible markup language into relational database...
 
Key performance requirement of future next wireless networks (6G)
Key performance requirement of future next wireless networks (6G)Key performance requirement of future next wireless networks (6G)
Key performance requirement of future next wireless networks (6G)
 
Noise resistance territorial intensity-based optical flow using inverse confi...
Noise resistance territorial intensity-based optical flow using inverse confi...Noise resistance territorial intensity-based optical flow using inverse confi...
Noise resistance territorial intensity-based optical flow using inverse confi...
 
Modeling climate phenomenon with software grids analysis and display system i...
Modeling climate phenomenon with software grids analysis and display system i...Modeling climate phenomenon with software grids analysis and display system i...
Modeling climate phenomenon with software grids analysis and display system i...
 
An approach of re-organizing input dataset to enhance the quality of emotion ...
An approach of re-organizing input dataset to enhance the quality of emotion ...An approach of re-organizing input dataset to enhance the quality of emotion ...
An approach of re-organizing input dataset to enhance the quality of emotion ...
 
Parking detection system using background subtraction and HSV color segmentation
Parking detection system using background subtraction and HSV color segmentationParking detection system using background subtraction and HSV color segmentation
Parking detection system using background subtraction and HSV color segmentation
 
Quality of service performances of video and voice transmission in universal ...
Quality of service performances of video and voice transmission in universal ...Quality of service performances of video and voice transmission in universal ...
Quality of service performances of video and voice transmission in universal ...
 

Recently uploaded

SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)Suman Mia
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAbhinavSharma374939
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 

Recently uploaded (20)

9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog Converter
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 

A gesture recognition system for the Colombian sign language based on convolutional neural networks

Email: fhmartinezs@udistrital.edu.co
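A minimal sketch of how each of the 24-category sign images described above might be prepared for a NASNet-style network. NumPy only; the 224×224 input resolution and the [-1, 1] pixel scaling are assumptions based on common NASNet conventions, not details stated in the paper:

```python
import numpy as np

NUM_CLASSES = 24   # static signs in the CSL set used by the authors
INPUT_SIZE = 224   # assumed NASNet input resolution (not stated in the paper)

def nearest_resize(img, size):
    """Nearest-neighbour resize of an HxWx3 uint8 image using index maps."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def preprocess(img):
    """Resize and scale pixel values to [-1, 1], the range NASNet-style
    Keras models conventionally expect."""
    small = nearest_resize(img, INPUT_SIZE).astype(np.float32)
    return small / 127.5 - 1.0

def one_hot(label):
    """Encode a category index (0..23) as a one-hot target vector."""
    v = np.zeros(NUM_CLASSES, dtype=np.float32)
    v[label] = 1.0
    return v

# Example: a dummy 480x640 RGB frame labelled as category 3
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
x, y = preprocess(frame), one_hot(3)
print(x.shape, y.shape)  # → (224, 224, 3) (24,)
```

In a real pipeline the resize and normalization would come from the deep-learning framework itself; this sketch only makes the tensor shapes and value ranges concrete.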
1. INTRODUCTION

The term deaf-mute is used to describe people who are deaf from birth and who have great difficulty communicating by voice [1, 2]. It was formerly thought that deaf-mute people could not communicate; however, this is no longer the case, as many visual sign languages have been developed, both written and spoken [3]. Many countries have their own sign language, which is learned by their deaf community [4, 5]. The problem is that this language is normally unknown to non-deaf people, which makes communication outside the group impossible. However, in light of current precepts of equality and social inclusion, it is clear that these sign languages are as important as spoken language [6, 7]. It is estimated that 5% of the world's population has disabling hearing loss; in Colombia, by 2019 this corresponded to just over 2.4 million people [8].

Sign language is a language formed by gestures that can be visually identified [9]. Normally, postures of the hands and fingers, and even facial expressions, are used, all with a certain meaning, to form symbols according to a code [10]. Each sign language is made up of tens or even hundreds of different signs, and in many cases the differences between them are just small changes of the hand. In addition, as with written language, each person puts a personal mark on the representation of these signs [9]. Sign language recognition systems develop schemes and algorithms that allow these symbols to be correctly identified and translated into text recognizable by the community at large. There are two strategies for the development of these systems: schemes based on artificial vision and schemes based on sensors.
In the first case, these are artificial systems that use digital processing of images (or video frames) captured from digital cameras [11, 12]. The algorithms usually use filters and digital image processing, or pattern recognition schemes using bio-inspired techniques [13-15]. In the second case, sensors are used to detect specific elements on the hands, such as gloves, colors, flex sensors, or textures [16, 17]. This scheme requires modifying the normal conditions of language communication, which is why it is a less attractive solution. Among the strategies implemented in these recognition systems, those with the highest performance and impact on development and research should be highlighted, particularly those that can be used without modifying the natural behavior of the person. A widely used strategy, both in digital image processing (where it is, in fact, most used) and in detection with specific sensors, is the segmentation of the visual field information (the hand), with subsequent extraction of characteristics from each segment for classification [11, 18]. One of the most widely used tools for the classification process is neural networks and, more recently, deep learning [19, 20]. These tools have the advantage of being able to generalize specific characteristics of the signs and detect them in different hands and people, in a similar way to the human brain [21]. It should also be noted that there are three translation strategies that a sign language translation system can perform: spelling (letter by letter), isolated words, and continuous sentences separated by signs [16]. Fingerspelling is used in situations where new words, names of people and places, or words without known signs are spelled in the air by hand movements [22, 23].
This is the most basic scheme, and the one where we find the most research. Isolated word recognition analyzes a sequence of images or hand movement signals that together represent a complete sign [24, 25]. In this scheme, letter recognition should be supported by a written language dictionary to identify complete words. Finally, the most complex scheme is that of continuous sentences, which integrates grammatical strategies and is, in fact, the most important for real-life communication situations [26, 27].

This article proposes a strategy for the recognition of static symbols of the Colombian Sign Language (CSL) and their translation into Spanish letters (fingerspelling). The system uses a deep neural network trained and tuned with a database of 24,000 images distributed over 24 categories. The performance of the scheme was measured using traditional machine learning metrics. The rest of this article is structured as follows. Section 2 presents preliminary concepts and the problem formulation. Section 3 illustrates the design profile and development methodology. Section 4 presents the preliminary results. Finally, Section 5 presents our conclusions.

2. PROBLEM FORMULATION

For some years now, the research group has been carrying out a research project aimed at developing a robotic platform (assistive robot) for the care of children, sick people, and the elderly. Within this project, specific works are framed on path planning in observable dynamic environments, direct human-machine interaction, automatic identification of emotions in humans, and the problem posed here of identifying visual signs corresponding to the CSL. Each of these problems corresponds to an open research problem in robotics, which the research group is trying to solve in order to implement complex tasks in service robotics. In the specific case of the visual sign recognition system, the aim is to adapt the robotic platform for direct interaction with deaf-mute people.
The robot has two cameras in the head to capture images with signs, and a touch screen to display information in response to the user. Therefore, the proposed identification system should use as an input mechanism the digital image of the sign represented by the user, and as a verification and response mechanism the representation of the identified letter on the screen. The idea is to develop a recognition method that can be scaled by increasing the size of the vocabulary, and that is robust to variations in the angle of image capture and to those due to the specific characteristics of people (type of hand, skin color, clothing, height, or variability in the position of the fingers). Different people have different hands, with different dimensions and geometry; for this reason, the same sign made by two different people can produce different measurements when analyzing landmarks in images. These are variations that should not alter the operation of the classifier. To represent a letter in the CSL, one of the hands and all its fingers are used (the use of the right hand is assumed, but the scheme can equally be trained to identify either hand, making it insensitive to the dominant hand). Most signs correspond to static positions of the fingers; however, some letters require movement of the fingers and hand (the letters Ñ, S, and Z). This first model uses only static positions of the hand, so these three letters are excluded, which means that the classifier has a total of 24 categories, as shown in Figure 1.
Figure 1. CSL alphabet used in the proposed recognition system

The robot must be in front of the user to capture the image corresponding to the sign, but there is no single correct distance for the capture. Many sign language identification systems are distance-dependent, so the same sign represented by the same person but captured at different distances can produce different results. This is a problem for real systems, since the sign is not distance-dependent from the user's perspective. However, the distance to the robot (and therefore to the image capture camera) must be consistent with the distance between two people talking, i.e., between about 0.2 and 0.7 m. The background of the image is also not restricted to a specific type or color, since in real operation of the system the user can wear any clothing and be located in any space of the house, school, or place of interaction with the robot. The system must be able to identify the parts of the foreground and the information encoded there.

3. RESEARCH METHOD

To implement the identification algorithm we use the NASNet (Neural Architecture Search Network) deep neural network. This convolutional network was introduced in early 2018 by the Google Brain team. In its design, they sought to define a building block with high performance in the categorization of a small image dataset (CIFAR-10) and then generalized the block to a larger dataset (ImageNet). In this way, the architecture achieves a high classification capacity with a reduced number of parameters, as shown in Figure 2.

Figure 2. ImageNet architecture (NASNet)
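As an illustration of this kind of setup (not the authors' exact code), a NASNet-based classifier with a 24-way softmax head can be assembled in Keras as sketched below. The NASNetMobile variant, the plain SGD settings, and training from scratch are assumptions made for the sketch:

```python
# Sketch of a NASNet-based classifier for the 24 CSL categories.
# NASNetMobile stands in for the NASNet variant actually used;
# hyperparameters are illustrative, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.NASNetMobile(
    input_shape=(256, 256, 3),   # resized RGB images
    include_top=False,           # drop the 1000-class ImageNet head
    weights=None)                # train from scratch (no weight download)

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(24, activation="softmax")  # one output node per CSL letter
])

model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss="categorical_crossentropy",
              metrics=["accuracy", "mse"])
```

Training would then proceed with `model.fit` on the one-hot-encoded image batches described in the next section.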
We built our own training database, which can be seen in Figure 3. The dataset was built by capturing images for each of the 24 categories in Figure 1. In total, we used 1,000 images per category, for a total of 24,000 images. The same number of images was maintained in each category to achieve class balance in the trained model. To separate the images within each category, the images corresponding to each class were placed together in a folder labeled with the letter c, an order number, and the corresponding letter. Different people were used to capture the images, in different locations, viewing angles, distances to the camera, and lighting conditions. This was done so that the classification algorithm would identify the information of common interest in each category, and to increase robustness against changes in the environment. To improve the performance of the network, we randomly shuffled the images during training. With the same intention, we scaled all images to a size of 256x256 pixels. The model does not preserve the aspect ratio of the images; we believe the information in the images is not altered by this change, and a fixed image size is important for the training of the network. For training purposes, we normalized the color value of each RGB matrix of the images to the range 0 to 1, which corresponds to the operating range of the neural network. The database was randomly divided into two groups, one for training and one for testing: 75% of the data for training and 25% for performance evaluation.

In the network design, the number of nodes in the input layer is defined by the size to which the images are resized, i.e., 256x256x3 considering the three color matrices. The number of nodes in the output layer is defined by the number of categories for the classification of the images, i.e., 24 nodes. For optimization and cost metrics in training, we use stochastic gradient descent, categorical cross-entropy, accuracy, and MSE.

Figure 3. Sample database of CSL images used for model training

The neural network had a total of 4,295,084 parameters, of which 4,258,346 were adjusted (trained). The neural model training code was developed in Python 3.7.3 within the Keras framework (using the TensorFlow backend). As libraries, we used Scikit-Learn 0.20.3, Pandas 0.24.2, NLTK 3.4, NumPy 1.16.2, and SciPy 1.2.1.
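The normalization, shuffling, and 75/25 split described above can be sketched in NumPy as follows (function and variable names are illustrative, not taken from the authors' code; a toy 8-image array stands in for the 24,000-image database):

```python
import numpy as np

def prepare_dataset(images, labels, train_frac=0.75, seed=0):
    # Normalize 8-bit RGB values to the [0, 1] range used by the network
    x = images.astype(np.float32) / 255.0
    # Shuffle images and labels together before splitting
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    x, y = x[idx], labels[idx]
    # 75% of the data for training, 25% for performance evaluation
    cut = int(train_frac * len(x))
    return (x[:cut], y[:cut]), (x[cut:], y[cut:])

# Toy stand-in for the database: 8 images already resized to 256x256x3
imgs = np.random.randint(0, 256, size=(8, 256, 256, 3), dtype=np.uint8)
labs = np.arange(8)
(xt, yt), (xv, yv) = prepare_dataset(imgs, labs)
```

Shuffling the pair with a single permutation keeps each label attached to its image, and drawing the split after shuffling gives the random 75/25 division the text describes.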
4. RESULTS AND DISCUSSION

The training was performed over 10 epochs, and as adjustment metrics during the training, the categorical cross-entropy loss function, the accuracy (or success rate), and the MSE (mean squared error) were calculated. The results of these three metrics during training, for both training and validation data, are shown in Figures 4, 5 and 6. As can be seen, the error reduction is steady for both training and validation data, with equivalent behavior for accuracy, which indicates correct learning without problems of over- or under-fitting. After the model was trained, its performance was checked against the validation data by means of its confusion matrix, the precision, recall, and F1-score metrics, and the ROC (Receiver Operating Characteristic) curve. Figures 7, 8 and 9 show the results achieved. The diagonal in Figure 7 shows a very good classification hot zone, which coincides with the per-category data of the metrics summarized in Figure 8.

Figure 4. Training and validation loss

Figure 5. Training and validation accuracy
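For reference, the three training metrics can be computed directly from the network's softmax outputs and one-hot labels, as in this minimal NumPy sketch (function names are illustrative; Keras computes these internally):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean negative log-likelihood of the true class
    return float(-np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1)))

def accuracy(y_true, y_pred):
    # Fraction of samples whose highest-scoring class is the true class
    return float(np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1)))

def mse(y_true, y_pred):
    # Mean squared error between one-hot labels and softmax outputs
    return float(np.mean((y_true - y_pred) ** 2))

y_true = np.eye(3)[[0, 1, 2]]          # one-hot labels for 3 samples
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.3, 0.3, 0.4]])   # softmax outputs

cce = categorical_cross_entropy(y_true, y_pred)
acc = accuracy(y_true, y_pred)
err = mse(y_true, y_pred)
```

All three samples are classified correctly here (accuracy 1.0), yet the cross-entropy and MSE remain nonzero, which is why loss and accuracy curves carry complementary information during training.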
Figure 6. Training and validation MSE (Mean Squared Error)

Figure 7. Confusion matrix

Figure 8. Precision, recall and F1-score metrics
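The confusion matrix and the per-class precision, recall, and F1-score of Figures 7 and 8 follow the standard definitions, sketched below in NumPy (function names are illustrative; the paper's stack includes scikit-learn, whose implementations would typically be used instead):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[i, j] counts samples of true class i predicted as class j
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_metrics(cm):
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # column sums: predicted per class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # row sums: actual per class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

# Toy 3-class example
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
precision, recall, f1 = per_class_metrics(cm)
```

A strong diagonal in `cm`, as in Figure 7, translates directly into precision and recall values near 1 for every class.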
Figure 9. ROC curve and ROC area for each class

The ROC (Receiver Operating Characteristic) curve in Figure 9 graphically represents, through the areas under the curves, the true positive and false positive rates of the model's classification; the best categorization is achieved with the largest areas (blue).

5. CONCLUSION

In this paper, we show the development of a system for visual recognition of the CSL into normal Spanish text using a NASNet-type convolutional network. The CSL has static symbols that represent each of the letters used in the Spanish language, Colombia's mother tongue. The model was developed to be integrated into an assistive robot. The images used in the training (1,000 in each of the 24 categories) were randomly shuffled and resized to 256x256 pixels. For the network training, we used categorical cross-entropy loss, stochastic gradient descent, accuracy, and MSE. The design allows for the possibility of scaling to new symbols. The results showed high performance: 88% accuracy measured on images unknown to the network (images not used in training, with unstructured environments). The model's confusion matrix shows a very low percentage of false positives, which is supported by the ROC curve of each category, with average values of 0.99, indicating very high accuracy. In addition, these values are consistent with the recall and F1-score metrics, which denotes an important contribution of the model, since it uses semi-structured images without prior processing, very close to real conditions, unlike previous studies where the images are conditioned in position, background, and lighting, and use pre-processing.

ACKNOWLEDGEMENTS

This work was supported by the Universidad Distrital, the Universidad Surcolombiana, and the Universidad Nacional Abierta y a Distancia (UNAD).
The views expressed in this paper are not necessarily endorsed by the above-mentioned universities. The authors thank the ARMOS research group for the evaluation carried out on prototypes.

REFERENCES

[1] N. Nagori and V. Malode, "Communication interface for deaf-mute people using Microsoft Kinect," 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), 2016, doi: 10.1109/ICACDOT.2016.7877664.
[2] A. R. Hasdak, et al., "Deaf-Vibe: A vibrotactile communication device based on Morse code for deaf-mute individuals," 2018 9th IEEE Control and System Graduate Research Colloquium (ICSGRC), pp. 39-44, 2018, doi: 10.1109/ICSGRC.2018.8657547.
[3] A. B. Jani, N. A. Kotak, and A. K. Roy, "Sensor based hand gesture recognition system for English alphabets used in sign language of deaf-mute people," 2018 IEEE SENSORS, pp. 1-4, 2018, doi: 10.1109/ICSENS.2018.8589574.
[4] E. S. Haq, D. Suwardiyanto, and M. Huda, "Indonesian sign language recognition application for two-way communication deaf-mute people," 2018 3rd International Conference on Information Technology, Information System and Electrical Engineering (ICITISEE), pp. 313-318, 2018, doi: 10.1109/ICITISEE.2018.8720982.
[5] Y. F. Admasu and K. Raimond, "Ethiopian sign language recognition using artificial neural network," 2010 10th International Conference on Intelligent Systems Design and Applications, pp. 995-1000, 2010, doi: 10.1109/ISDA.2010.5687057.
[6] L. Peluso, J. Larrinaga, and A. Lodi, "Public policies on deaf education: Comparative analysis between Uruguay and Brazil," Second International Handbook of Urban Education, vol. 2, no. 1, pp. 613-626, 2017, doi: 10.1007/978-3-319-40317-5_33.
[7] F. Lan, "The role of art education in the improvement of employment competitiveness of deaf college students," 2nd International Conference on Art Studies: Science, Experience, Education (ICASSEE 2018), pp. 879-883, 2018, doi: 10.2991/icassee-18.2018.181.
[8] L. Vallejo, "El censo de 2018 y sus implicaciones en Colombia," Apuntes del Censo, vol. 38, no. 67, pp. 9-12, 2019, doi: 10.19053/01203053.v36.n64.2017.6511.
[9] S. Goldin and D. Brentari, "Gesture, sign, and language: The coming of age of sign language and gesture studies," Behavioral and Brain Sciences, vol. 40, no. 46, pp. 1-60, 2017, doi: 10.1017/S0140525X15001247.
[10] Y. Motamedi, et al., "The cultural evolution of complex linguistic constructions in artificial sign languages," 39th Annual Meeting of the Cognitive Science Society (CogSci 2017), pp. 2760-2765, 2017.
[11] J. Guerrero and W. Pérez, "FPGA-based translation system from Colombian sign language to text," DYNA, vol. 82, no. 189, pp. 172-181, 2015, doi: 10.15446/dyna.v82n189.43075.
[12] B. Villa, V. Valencia, and J. Berrio, "Digital image processing applied on static sign language recognition system," Prospectiva, vol. 16, no. 2, pp. 41-48, 2018, doi: 10.15665/rp.v16i2.1488.
[13] P. Kumar, et al., "Coupled HMM-based multi-sensor data fusion for sign language recognition," Pattern Recognition Letters, vol. 86, pp. 1-8, 2017, doi: 10.1016/j.patrec.2016.12.004.
[14] S. Hore, et al., "Indian sign language recognition using optimized neural networks," Advances in Intelligent Systems and Computing, vol. 455, no. 1, pp. 553-563, 2017, doi: 10.1007/978-3-319-38771-0_54.
[15] C. López, "Evaluación de desempeño de dos técnicas de optimización bio-inspiradas: Algoritmos genéticos y enjambre de partículas," Tekhnê, vol. 11, no. 1, pp. 49-58, 2014.
[16] S. M. Kamal, et al., "Technical approaches to Chinese sign language processing: A review," IEEE Access, vol. 7, pp. 96926-96935, 2019, doi: 10.1109/ACCESS.2019.2929174.
[17] J. Gałka, et al., "Inertial motion sensing glove for sign language gesture acquisition and recognition," IEEE Sensors Journal, vol. 16, no. 16, pp. 6310-6316, 2016, doi: 10.1109/JSEN.2016.2583542.
[18] S. Chand, A. Singh, and R. Kumar, "A survey on manual and non-manual sign language recognition for isolated and continuous sign," International Journal of Applied Pattern Recognition, vol. 3, no. 2, pp. 99-134, 2016, doi: 10.1504/IJAPR.2016.079048.
[19] F. Martínez, C. Penagos, and L. Pacheco, "Deep regression models for local interaction in multi-agent robot tasks," Lecture Notes in Computer Science, vol. 10942, pp. 66-73, 2018, doi: 10.1007/978-3-319-93818-9_7.
[20] F. Martínez, A. Rendón, and M. Arbulú, "A data-driven path planner for small autonomous robots using deep regression models," Lecture Notes in Computer Science, vol. 10943, pp. 596-603, 2018, doi: 10.1007/978-3-319-93803-5_56.
[21] E. B. Villagomez, et al., "Hand gesture recognition for deaf-mute using fuzzy-neural network," 2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia), pp. 30-33, 2019, doi: 10.1109/ICCE-Asia46551.2019.8942220.
[22] J. Scott, S. Hansen, and A. Lederberg, "Fingerspelling and print: Understanding the word reading of deaf children," American Annals of the Deaf, vol. 164, no. 4, pp. 429-449, 2019, doi: 10.1353/aad.2019.0026.
[23] A. Lederberg, L. Branum, and M. Webb, "Modality and interrelations among language, reading, spoken phonological awareness, and fingerspelling," Journal of Deaf Studies and Deaf Education, vol. 24, no. 4, pp. 408-423, 2019, doi: 10.1093/deafed/enz011.
[24] Q. Zhang, et al., "MyoSign: Enabling end-to-end sign language recognition with wearables," 24th International Conference on Intelligent User Interfaces, pp. 650-660, 2019, doi: 10.1145/3301275.3302296.
[25] S. Fakhfakh and Y. Jemaa, "Gesture recognition system for isolated word sign language based on key-point trajectory matrix," Computación y Sistemas, vol. 22, no. 4, pp. 1415-1430, 2018, doi: 10.13053/CyS-22-4-3046.
[26] P. V. V. Kishore, et al., "Optical flow hand tracking and active contour hand shape features for continuous sign language recognition with artificial neural networks," 2016 IEEE 6th International Conference on Advanced Computing (IACC), pp. 346-351, 2016, doi: 10.1109/IACC.2016.71.
[27] K. Li, Z. Zhou, and C. Lee, "Sign transition modeling and a scalable solution to continuous sign language recognition for real-world applications," ACM Transactions on Accessible Computing, vol. 8, no. 2, pp. 1-23, 2016, doi: 10.1145/2850421.