MACHINE LEARNING BASED SIGN LANGUAGE INTERPRETER
University College of Engineering Kakinada, JNTUK Kakinada
Department of Electronics and Communication Engineering
Under the guidance of
Dr. P. Pushpalatha
Assistant Professor
Department of ECE
Presented by
V. Padmaja (20021A0449)
B. Sandeep Singh (20021A0405)
L. Sai Rohith (20021A0447)
D. Shiva Lalith (20021A0438)
Contents
• Abstract
• Introduction
• Objective
• Flowchart for sign language recognition system
• Non-vision based approach
• Proposed smart glove approach using ML
• Interfacing flex sensors and accelerometer with ESP32 microcontroller
- Components required and their description
- Connections
- Procedure
• Results
• Machine Learning (ML) Algorithms
• Dataset Description
- Data collection
- Data pre-processing
- Training and testing sets
• Model Evaluation
• Model Deployment
• Transmission of data from ESP32 through Wi-Fi
• Results
• Conclusion
• Future Scope
• References
Abstract
• Traditional methods for sign language recognition rely heavily on visual recognition systems, often overlooking
the potential of tactile feedback.
• The main aim of our project is to design a hand glove that makes sign language understandable to all by
converting it into speech and text messages.
• The proposed system integrates flex sensors embedded along the length of each finger within the glove. These
sensors capture the fine details of hand gestures and transmit the data to an ESP32 microcontroller for
processing.
• Machine Learning (ML) classifiers such as Support Vector Machines (SVM), Multi-Layer Perceptron (MLP),
Random Forest (RF), and Logistic Regression (LR) serve as the backbone of this recognition system.
• By training these classifiers on a comprehensive dataset comprising many hand gestures, the smart glove
achieves high accuracy and reliability in understanding sign language.
Introduction
• Of the world’s population, approximately 72 million people are deaf-mute.
• To bridge the gap between the deaf and hearing communities, a smart glove is an excellent solution. Smart
gloves can be implemented in either a vision-based or non-vision-based manner.
• In a vision-based approach, a camera reads the sign and deep learning algorithms identify the sign and
convert it to speech.
• However, this approach can be computationally complex and slower in converting the signs to speech.
• Therefore, we propose a non-vision-based approach where sensors (usually flex sensors and an
accelerometer) are used to gather information about the signs, and traditional Machine Learning (ML)
algorithms estimate the sign and convert it to speech.
Objective
• Design and integrate flex sensors, an accelerometer, and an ESP32 microcontroller to create a
comprehensive sensory input system.
• Employ machine learning techniques to process data from the flex sensors and accelerometer connected to
the ESP32 platform.
• Implement real-time text and speech conversion on the ESP32 platform to translate sensory input into
understandable speech output.
Fig: Flowchart for the sign language recognition system. Raw data is obtained either through computer
vision (MATLAB AA with a camera) or using the ESP32 and flex sensors (MATLAB AA without a camera),
converted into a .csv file through a Python script, and fed into the ML model. The model’s prediction goes
through text-to-speech translation to the speaker output.
Non-vision based approach
• The glove uses ML algorithms to translate sign language into speech. The system makes use of five flex
sensors, an MPU6050 accelerometer, and an ESP32 microcontroller.
• The flex sensors were used to determine the bending angles of the fingers forming the signs, and the
accelerometer for the position/orientation of the sign.
• The sensor data is read by the microcontroller and then sent to the user’s PC, which runs a Python script
to figure out the corresponding output, as sketched below.
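As a rough illustration of that script, the PC side might read the comma-separated sensor values from the
serial port and hand them to a trained classifier. This is a minimal sketch, not our exact code; the port
name, baud rate, and model file are assumptions.

import serial  # pyserial, for reading the ESP32's serial output
import joblib  # for loading a previously trained scikit-learn model
import numpy as np

ser = serial.Serial('COM3', 115200, timeout=1)  # assumed port and baud rate
model = joblib.load('glove_classifier.joblib')  # hypothetical trained model file

while True:
    line = ser.readline().decode('utf-8').strip()
    if not line:
        continue
    # expected format: flex_1,flex_2,flex_3,flex_4,flex_5,ACCx,ACCy,ACCz
    values = np.array([float(v) for v in line.split(',')]).reshape(1, -1)
    print('Predicted sign:', model.predict(values)[0])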
Fig: Proposed smart glove approach for ASL using ML. The user wears the smart glove and makes an ASL
gesture; data collected from the flex sensors and accelerometer is fed to the ML model, which predicts the
letter; the letter is displayed in the Arduino IDE and spoken on the speaker.
Interfacing flex sensors and accelerometer with ESP32
The hardware components required for interfacing are:
1. ESP32 microcontroller
2. Flex sensors
3. Jumper wires
4. Breadboard
ESP32 microcontroller
• The ESP32 comes with an on-chip 32-bit microcontroller with integrated Wi-Fi + Bluetooth + BLE
features that targets a wide range of applications.
• It is a series of low-power, low-cost microcontrollers developed by Espressif Systems.
• The processor has a clock rate of up to 240 MHz, which makes for a relatively high data processing
speed.
• ESP32 pins are numbered from GPIO0 to GPIO39.
• Its low-power capabilities make it suitable for battery-operated and energy-efficient applications.
Flex sensor
Fig: Flex sensors
• Flex sensors are a type of variable resistor whose resistance increases as they are bent.
• A flex sensor is a thin-strip bend sensor that comes in different lengths (2.2 or 4.5 in), has a tolerance of
±30%, a power rating of 0.5 W, and operates at low voltage levels.
• They measure the bending angle in one direction: the resistance changes according to the bend of the
sensor.
• We use five flex sensors, one positioned on each finger of the glove, in order to read the signs.
Circuit Connections
• Connect one terminal of the flex sensor to GND.
• Connect the other terminal of the flex sensor to a fixed resistor (e.g., 100 kΩ).
• Connect the other end of the fixed resistor to 3.3 V.
• Connect the junction between the flex sensor and the resistor to a GPIO pin on the ESP32 (e.g., GPIO34);
this forms a voltage divider (see the sketch after this list).
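To make the divider concrete, the ESP32’s ADC reads the junction voltage, from which the flex sensor’s
resistance can be recovered. The following is a minimal sketch assuming a 12-bit ADC (0–4095) and the
100 kΩ fixed resistor above; real calibration constants would come from your own measurements.

VCC = 3.3        # supply voltage in volts
R_FIXED = 100e3  # fixed resistor in the divider, in ohms
ADC_MAX = 4095   # full-scale count of the ESP32's 12-bit ADC

def flex_resistance(adc_count):
    """Recover the flex sensor's resistance from an ADC reading.

    With the fixed resistor on the 3.3 V side and the flex sensor to GND:
        V_out = VCC * R_flex / (R_flex + R_FIXED)
    so  R_flex = R_FIXED * V_out / (VCC - V_out)
    """
    v_out = VCC * adc_count / ADC_MAX
    return R_FIXED * v_out / (VCC - v_out)

print(flex_resistance(2048))  # roughly 100 kOhm at mid-scale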
Procedure
1. Write your Arduino code in the Arduino IDE's editor and save it.
2. Click on the "Verify" button in the toolbar, or go to "Sketch" > "Verify/Compile", to compile your code.
3. Connect the ESP32 microcontroller to your computer.
4. Select the ESP32 board and the COM port, then upload the code to the ESP32.
5. Open the Serial Monitor to view the sensor readings.
NOTE:
• Make sure to install the necessary libraries for the MPU6050 accelerometer. You can find the MPU6050
library in the Arduino Library Manager.
• Adjust the pin assignments according to the actual hardware connections.
Machine Learning (ML) Algorithms
Support Vector Machines (SVM) are capable of performing both linear and non-linear classification tasks.
• In a linear SVM, the algorithm finds the hyperplane that best separates the classes in the feature space.
• If the data is not linearly separable, the kernel trick, a key concept in SVM, allows it to handle the data by
implicitly mapping the input features into a higher-dimensional space.
from sklearn.svm import SVC

svm_model = SVC(kernel='linear')  # create an SVM model with a linear kernel
Multi-Layer Perceptron (MLP) is a type of artificial neural network.
• It is a complex supervised ML algorithm that relies on neural network connections to map the data and
make a prediction. MLP is used when data belong to a specific label or class.
• MLP is typically trained using the backpropagation algorithm. During training, the network adjusts its
weights based on the error between the predicted output and the true output.
from sklearn.neural_network import MLPClassifier

# create an MLP model with 100 neurons in a single hidden layer
mlp_model = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000)
Random Forest (RF) is an ensemble of multiple decision trees.
• It operates by constructing a multitude of decision trees in a random fashion and combining them to build
a forest of trees.
• The advantage of RF is that it minimizes overfitting problems in the trained model by incorporating a
bagging (bootstrap aggregation) step.
from sklearn.ensemble import RandomForestClassifier

rf_model = RandomForestClassifier(n_estimators=100)  # create a Random Forest model with 100 trees
Logistic Regression (LR) performs classification for discrete data types. LR can be of three types:
• Binary logistic regression provides a binary outcome of two class values, for instance, A or B, yes or no.
• Multinomial logistic regression extends binary logistic regression to cases where the outcome variable has
more than two categories.
• Ordinal logistic regression predicts multi-class values whose categories have a natural order.
from sklearn.linear_model import LogisticRegression

lr_model = LogisticRegression()  # create a Logistic Regression model
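Putting the four classifiers together, the sketch below shows how they might be trained and compared on
the glove data. It assumes X_train, X_test, y_train, and y_test from the 80/20 split described in a later
section, and reuses the model objects created above.

from sklearn.metrics import accuracy_score

models = {'SVM': svm_model, 'MLP': mlp_model, 'RF': rf_model, 'LR': lr_model}

# train each classifier on the same training split and compare test accuracy
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f'{name}: {accuracy_score(y_test, y_pred):.4f}')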
Dataset Description
The processes of data collection, pre-processing, training, and testing are described below.
Data Collection:
• Since the effectiveness of machine learning (ML) hinges greatly on both the quantity and quality of data,
the collection phase stands as a pivotal step in our methodology.
• It's crucial to ensure that the data faithfully represents the system variables, as the classification accuracy of
ML algorithms heavily relies on this factor.
• Our dataset consisted of 1,500 instances for each letter and word.
• Each instance encompassed a total of eight features sourced from the flex sensors and the accelerometer.
• The features derived from the flex sensors were denoted as flex_1, flex_2, flex_3, flex_4, and flex_5, while
those derived from the accelerometer were labeled as ACCx, ACCy, and ACCz.
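For illustration, one recorded instance could be stored as a CSV row with those eight columns. A minimal
sketch, assuming pandas; the sensor values below are made up.

import pandas as pd

columns = ['flex_1', 'flex_2', 'flex_3', 'flex_4', 'flex_5', 'ACCx', 'ACCy', 'ACCz']
row = [[1850, 2210, 2105, 1990, 2300, 0.12, -0.98, 0.05]]  # hypothetical readings

df = pd.DataFrame(row, columns=columns)
df.to_csv('hello.csv', mode='a', index=False, header=False)  # append to the gesture's file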
Data Pre-processing:
• In this stage, we transformed the raw data into comprehensible information and conducted data cleaning
procedures.
• This step holds significant importance in machine learning (ML) as it aims to reduce errors in the data and
handle any missing information.
• Through the labeling process, we assigned suitable class labels to each data instance, enabling supervised
learning.
• Moreover, we determined the anticipated output word using the labeled data, which acts as the target variable
for the ML model to understand and forecast.
• The file name was taken as the target label in the ML model, as sketched below.
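As a rough sketch of that labeling step, each gesture’s CSV can be loaded and given its file name as the
class label. The directory layout and file names here are hypothetical.

import glob
import os
import pandas as pd

columns = ['flex_1', 'flex_2', 'flex_3', 'flex_4', 'flex_5', 'ACCx', 'ACCy', 'ACCz']

frames = []
for path in glob.glob('data/*.csv'):  # e.g., data/hello.csv, data/goodbye.csv
    df = pd.read_csv(path, names=columns)
    df['label'] = os.path.splitext(os.path.basename(path))[0]  # file name -> class label
    frames.append(df)

dataset = pd.concat(frames, ignore_index=True)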
Training and Testing Sets:
• Following the data collection and labeling stages, the subsequent step entailed training the model.
• This instructs the model on how to forecast an output given a specific set of inputs.
• To facilitate this, the complete dataset was divided into two portions: 80% for training, encompassing 1200
instances, and the remaining 20% for testing, constituting 300 instances.
• Python code was crafted utilizing open-source libraries like NumPy and Pandas to extract data from .csv
files into arrays for further processing; a sketch of the 80/20 split follows.
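A minimal sketch of that 80/20 split, assuming the labeled dataset DataFrame from the previous step:

from sklearn.model_selection import train_test_split

X = dataset[['flex_1', 'flex_2', 'flex_3', 'flex_4', 'flex_5',
             'ACCx', 'ACCy', 'ACCz']].values
y = dataset['label'].values

# 80% training / 20% testing, stratified so every sign keeps the same ratio
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)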
Transmitting data from ESP32 to an API over Wi-Fi
1. Set up the ESP32 Wi-Fi connection: configure the ESP32 to connect to your Wi-Fi network, providing
the SSID and password of your Wi-Fi network.
2. Read sensor data: interface with the flex sensors and accelerometer connected to the ESP32 and read
the data from each sensor.
3. Format data into JSON: construct a JSON object to encapsulate the sensor readings.
4. Send an HTTP POST request to the API: utilize the ESP32's networking capabilities to send an HTTP
POST request to the API endpoint.
5. Handle the API response (if necessary): process any response received from the API, such as parsing the
response and taking appropriate actions.
A sketch of a PC-side endpoint that could receive these POSTs is shown below.
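As one possibility for the receiving side, a small Flask app on the PC could accept the JSON, run the
classifier, and speak the result with pyttsx3. The route name, JSON keys, and model file below are
assumptions for illustration.

import joblib
import pyttsx3
from flask import Flask, request, jsonify

app = Flask(__name__)
model = joblib.load('glove_classifier.joblib')  # hypothetical trained model
engine = pyttsx3.init()                         # offline text-to-speech engine

FEATURES = ['flex_1', 'flex_2', 'flex_3', 'flex_4', 'flex_5', 'ACCx', 'ACCy', 'ACCz']

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json()             # JSON object posted by the ESP32
    sample = [[data[f] for f in FEATURES]]
    sign = model.predict(sample)[0]
    engine.say(sign)                      # speak the predicted sign
    engine.runAndWait()
    return jsonify({'prediction': str(sign)})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)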
Results
When the thumb is bent, according to our dataset of flex sensor values, we got the output “hello” in the
Arduino IDE terminal as well as speech output from the laptop speaker.
When all the fingers are bent at the same time, according to our dataset of flex sensor values, we got the
output “goodbye” in the Arduino IDE terminal as well as speech output from the laptop speaker.
Conclusion
• Hence, we designed and implemented a non-vision-based smart glove using ML classifiers.
• The corresponding output for each sign was displayed in the Arduino IDE terminal.
• Among the four ML classifiers (LR, SVM, MLP, and RF), the RF classifier performed the best with a
classification accuracy of 99.7%, followed by MLP with 99.08%.
• The hardware involved an ESP32 microcontroller, five flex sensors, and an accelerometer to provide sign
language recognition and conversion of signs to speech and text.
Future Scope
• Furthermore, we see the potential to adapt this sign language interpreter for deployment on mobile devices
or wearable platforms.
• This adaptation would empower users to access sign language translation services conveniently while on
the go, further enhancing communication accessibility.
References
1. Smart Glove for Bi-lingual Sign Language Recognition using Machine Learning.
2. Flex sensors. https://components101.com/sensors/flex-sensor-working-circuit-datasheet [Last accessed on
21 Nov. 2022].
3. Ahmad Sami Al-Shamayleh, Rodina Ahmad, Mohammad AM Abushariah, Khubaib Amjad Alam, and
Nazean Jomhari. A systematic literature review on vision based gesture recognition techniques. Multimedia
Tools and Applications, 77(21):28121–28184, 2018.
4. N Arun, R Vignesh, B Madhav, Arun Kumar, and S Sasikala. Flex sensor dataset: Towards enhancing the
performance of sign language detection system. In 2022 International Conference on Computer
Communication and Informatics (ICCCI), pages 01–05. IEEE, 2022.
5. Bijay Sapkota, Mayank K Gurung, Prabhat Mali, and Rabin Gupta. Smart glove for sign language
translation using arduino. In 1st KEC Conference Proceedings, volume 1, pages 5–11, 2018.
6. Dhawal L Patel, Harshal S Tapase, Paraful A Landge, Parmeshwar P More, and AP Bagade. Smart hand
gloves for disable people. Int. Res. J. Eng. Technol.(IRJET), 5:1423–1426, 2018.
7. C. Dong, M. C. Leu, Z. Yin, “American Sign Language alphabet recognition using Microsoft Kinect,” in
Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, USA,
2015, pp. 44–52.
8. Firas A Raheem and Hadeer A Raheem. American sign language recognition using sensory glove and neural
network. AL-yarmouk J, 11:171–182, 2019.
9. Lane, H.L.; Grosjean, F. Recent Perspective on American Sign Language; Psychology Press Ltd: New York,
NY, USA, 2017.
10. Mohammad A Alzubaidi, Mwaffaq Otoom, and Areen M Abu Rwaq. A novel assistive glove to convert
arabic sign language into speech. Transactions on Asian and Low-Resource Language Information
Processing, 2022.
11. Tariq Jamil. Design and implementation of an intelligent system to translate arabic text into arabic sign
language. In 2020 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pages 1–
4. IEEE, 2020.
12. Tushar Chouhan, Ankit Panse, Anvesh Kumar Voona, and SM Sameer. Smart glove with gesture recognition
ability for the hearing and speech impaired. In 2014 IEEE Global Humanitarian Technology Conference
South Asia Satellite (GHTC-SAS), pages 105–110. IEEE, 2014.
13. Wu, J.; Sun, L.; Jafari, R. A Wearable System for Recognizing American Sign Language in Real-Time Using
IMU and Surface EMG Sensors. IEEE J. Biomed. Health Inf. 2016, 20, 1281–1290.