Intelligent Prosthetic Hand
Master's Thesis
Presented by: Aly Moslhi
1. Literature Review
Review of the Previous Works
1. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning
IEEE Xplore, 2019, Norway
PAPER OVERVIEW - STATISTICS -
Year of publication: 2019
Paper citations: 237
Full-text views: 7,445
Paper: 18
Contents
Abstract
I. Introduction
II. Related Work
III. sEMG Datasets
IV. Classic EMG Classification
V. Deep Learning Classifiers Overview
VI. Transfer Learning
VII. Classifier Comparison
VIII. Real-Time Classification and Medium-Term Performance
IX. Discussion
X. Conclusion
XI. Acknowledgement
ABSTRACT
• In recent years, deep learning has proven effective at learning features from large amounts of data. Within the field of electromyography-based gesture recognition, however, it has seldom been employed, because generating tens of thousands of examples demands an unreasonable amount of effort from a single person.
• This work assumes that general, informative features can be learned by aggregating signals from multiple users, thereby reducing the recording burden while enhancing gesture recognition, using transfer learning.
• Datasets: two datasets comprising 19 and 17 able-bodied participants respectively (the first is employed for pre-training) were recorded for this work using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and comprises 10 able-bodied participants.
• Three deep learning networks employing three different input modalities (raw EMG, spectrograms and the Continuous Wavelet Transform (CWT)) are tested on the second and third datasets.
• They achieve an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet, and 68.98% for 18 gestures over 10 participants for the raw-EMG-based ConvNet.
• Finally, a use-case study with eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy, which reduces the degradation in accuracy normally experienced over time.
I. INTRODUCTION
• Covers the importance of prosthetics, what an EMG signal is, and the fact that these signals can be recorded and then fed to AI to control bionic limbs.
• The literature review showed that earlier papers extracted hand-crafted features from the signals to predict the gesture; recently deep learning was introduced, shifting the paradigm from feature engineering to feature learning. One of the most important parameters for successful prediction with AI is the amount of data. The traditional approach was to collect a very large amount of data from a single user so that the deep learning model could predict that user's gestures, but this paper proposes aggregating readings from multiple users to generalize the prediction to everyone. Consequently, deep learning offers a particularly attractive context from which to develop a Transfer Learning (TL) algorithm that leverages inter-user data by pre-training a model on multiple subjects before training it on a new participant.
• As such, the main contribution of this work is a new TL scheme employing a convolutional network (ConvNet) to leverage inter-user data within the context of sEMG-based gesture recognition.
• Another paper [7] followed this approach before. The present paper continues that work, improving the TL algorithm to reduce its computational load and improve its performance. Additionally, three new ConvNet architectures, employing three different input modalities and specifically designed for the robust and efficient classification of sEMG signals, are presented. The raw signal, the short-time-Fourier-transform-based spectrogram and the Continuous Wavelet Transform (CWT) are considered for characterizing the sEMG signals fed to these ConvNets. Another major contribution of this article is the publication of a new sEMG-based gesture classification dataset comprising 36 able-bodied participants.
I. INTRODUCTION
• This dataset and the implementation of the ConvNets, along with their TL-augmented versions, are made readily available. Finally, this paper further expands the aforementioned conference paper by proposing a use-case experiment on the effect of real-time feedback on the online performance of a classifier, without recalibration, over a period of fourteen days.
• Paper organization:
• Overview of related work on gesture recognition using deep learning and transfer learning.
• Presentation of the new datasets being used, with the data acquisition and preprocessing, as well as the NinaPro DB5 dataset.
• Presentation of the different state-of-the-art feature sets employed in this work.
• The networks' architectures.
• Presentation of the TL model used.
• Comparison with the state of the art in gesture recognition.
• A real-time use case.
• Results and discussion.
II. RELATED WORK
• EMG signals vary between subjects even with precise placement of the electrodes. That is why a classifier trained on one user performs only slightly better than chance when someone else uses it, nowhere near useful precision. Therefore, several sophisticated techniques have been applied to exploit inter-user information. For example, research has been done on the common features between the original subjects and a new user, and other work has sought a pre-trained model that removes the need to work with data from multiple subjects. These non-deep-learning approaches showed considerably better results than their non-augmented versions.
• The short-time Fourier transform (STFT) was not used much in past decades for the classification of EMG data. A likely reason is the large number of features it produces, which is computationally expensive. Moreover, STFT features have also been shown to be less accurate than wavelet transforms on their own for classifying EMG signals. Recently, however, STFT features in the form of spectrograms were applied as the input feature space for sEMG classification by leveraging ConvNets.
• Continuous wavelet transform (CWT) features have been used for electroencephalography and EMG signal analysis, but mainly for the lower limbs. Wavelet-based features have also been used in the past for sEMG-based hand gesture classification; however, that work relied on the discrete wavelet transform and the wavelet packet transform rather than the continuous wavelet transform, and the authors give some reasons why those were used instead of the CWT. This paper introduces, for the first time, the CWT for sEMG-based gesture recognition.
• Recently, ConvNets have started to be used for gesture recognition using both single arrays and matrices of electrodes. To the best of the authors' knowledge, this paper, which is an extension of [7], is the first time inter-user data is leveraged through TL for training deep learning algorithms on sEMG data.
III. sEMG Datasets
• A) Myo Dataset:
• One of the main contributions of this paper is a new publicly available sEMG-based hand gesture recognition dataset, referred to as the Myo Dataset. The dataset consists of two sub-datasets, one for pre-training and one for evaluation. The first sub-dataset, composed of 19 able-bodied participants, is used to build, validate and optimize the networks, while the second, composed of 17 able-bodied participants, is used for the final testing. According to the authors, this is the largest such myoelectric dataset available.
• The data acquisition protocol was approved by the Comités d'Éthique de la Recherche avec des êtres humains de l'Université Laval (approbation number: 2017-026/21-02-2016) and informed consent was obtained from all participants.
1. Recording hardware:
• The dataset was recorded using the Myo armband: 8 channels, dry electrodes, a low sampling rate (200 Hz), a low-cost consumer-grade device.
• The armband is simply slipped onto the forearm, which is far easier than gel-based electrodes, which require shaving the skin to obtain optimal contact between the subject's skin and the electrodes.
• The armband still has limitations: the dry electrodes are less accurate and less robust to motion artifacts than gel-based ones.
• The recommended EMG bandwidth is 5-500 Hz, which requires a sampling frequency of at least 1000 Hz, while the Myo armband is limited to 200 Hz (see the note below). This loss of information impacts the ability of various classifiers, so robust and adequate classification techniques are needed to process the collected signals accurately.
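As a reminder of the sampling-rate arithmetic behind this point (standard Nyquist reasoning, not a figure taken from the paper): capturing the full 5-500 Hz EMG band requires

$$f_s \ge 2 f_{\max} = 2 \times 500\ \text{Hz} = 1000\ \text{Hz},$$

whereas the Myo's $f_s = 200$ Hz only preserves content up to roughly 100 Hz.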
III. sEMG Datasets
• 2) Time Window Length:
• For real-time control in a closed loop, latency is an important factor to consider. One paper recommended a latency of 300 ms, and other papers recommended values between 100 and 250 ms. Because classifier performance matters more here than raw speed, a window size of 260 ms was selected, which yields a reasonable number of samples per prediction given the armband's low sampling frequency.
• 3) Labeled Data Acquisition Protocol:
• The seven hand gestures considered in the work are shown here. Labeled data were recorded by asking the user to hold each gesture for five seconds; recording only began once the user was familiar with the gestures. Five seconds of rest were given between gestures, and this rest time was not recorded. Recording the seven gestures for five seconds each is called a cycle (35 s), and four cycles form a round. For the pre-training dataset, a single round is available per subject (140 s per participant). For the evaluation dataset, three rounds are available, with the first round used for training (140 s per participant) and the last two for testing (280 s per participant).
• During the recordings, the subjects were asked to stand and hold their forearm parallel to the floor, and the armband was tightened and worn in the same orientation for all subjects to avoid bias. The dataset consists of the raw EMG signals.
• Signal processing must be applied to efficiently train a classifier on the data recorded by the Myo armband. The data are first segmented by applying sliding windows of 52 samples (260 ms) with an overlap of 235 ms (i.e. 190 windows per 5 s gesture hold, or 7 x 190 windows per cycle). Employing windows of 260 ms leaves 40 ms for preprocessing and classification while still staying within the 300 ms target. The sketch after this slide illustrates the segmentation.
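The windowing described above can be reproduced with a short segmentation routine. The sketch below is my own illustration (not the authors' code) and assumes raw EMG stored as a NumPy array of shape (channels, samples) at 200 Hz; with 52-sample windows and a 47-sample (235 ms) overlap, the stride is 5 samples (25 ms).

```python
import numpy as np

def sliding_windows(emg, window=52, overlap=47):
    """Segment EMG of shape (channels, samples) into overlapping windows.

    At the Myo's 200 Hz sampling rate, 52 samples = 260 ms and an overlap
    of 47 samples = 235 ms, i.e. a new window every 25 ms.
    Returns an array of shape (n_windows, channels, window).
    """
    step = window - overlap
    n_windows = (emg.shape[1] - window) // step + 1
    return np.stack([emg[:, i * step:i * step + window]
                     for i in range(n_windows)])

# Example: one 5 s gesture hold of 8-channel Myo data (8 x 1000 samples).
hold = np.random.randn(8, 1000)
windows = sliding_windows(hold)
print(windows.shape)  # (190, 8, 52) -> 190 windows per 5 s of data
```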
IV. Classic EMG Classification
• Before deep learning was used for classification, researchers relied on feature engineering: manually identifying trends in the EMG signal for each gesture. Over the years, many efficient feature sets were proposed in both the time and frequency domains.
• The paper also tests four feature sets proposed in the literature (hand-crafted features for identifying hand gestures) on five different classifiers (SVM, ANN, RF, K-NN and LDA) and then compares them with the proposed TL deep learning approach. The hyperparameters for each classifier were selected by employing three-fold cross-validation alongside random search, testing 50 different combinations of hyperparameters for each participant's dataset for each classifier. The hyperparameters considered for each classifier are presented in Appendix D.
• The authors also applied dimensionality reduction and showed that in most cases it reduced the computational cost and increased performance.
• The four feature sets selected for the comparison (a sketch of the basic TD features follows this list):
• Time Domain Features (TD)
• Enhanced TD
• NinaPro Features
• SampEn pipeline
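As a point of reference, the commonly used TD features (MAV, zero crossings, slope sign changes, waveform length) can be computed with a few lines of NumPy. This is my own minimal sketch; the exact feature definitions and thresholds in the reviewed paper may differ.

```python
import numpy as np

def td_features(window, eps=1e-8):
    """Classic time-domain features for one EMG channel.

    window: 1-D array of raw EMG samples (e.g. 52 samples at 200 Hz).
    Returns [MAV, ZC, SSC, WL]. The zero-crossing / slope-sign-change
    thresholds are illustrative placeholders.
    """
    mav = np.mean(np.abs(window))                        # mean absolute value
    diff = np.diff(window)
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(diff) > eps))                    # zero crossings
    ssc = np.sum((diff[:-1] * diff[1:] < 0) &
                 ((np.abs(diff[:-1]) > eps) | (np.abs(diff[1:]) > eps)))
    wl = np.sum(np.abs(diff))                            # waveform length
    return np.array([mav, zc, ssc, wl])

# 8 channels x 4 features = 32-dimensional TD vector per 260 ms window.
window = np.random.randn(8, 52)
features = np.concatenate([td_features(ch) for ch in window])
```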
V. Deep Learning Classifiers Overview
• As mentioned before, because of the limited data available from a single individual, the authors address overfitting with Monte Carlo dropout, batch normalization and early stopping.
• A) Batch normalization: each batch of examples is normalized separately using its mean and variance (see the formula after this slide).
• B) Proposed ConvNet architecture
• A slow-fusion ConvNet architecture is used in this work because of the similarities between video data and the proposed characterization of sEMG signals: both representations have analogous structures (i.e. Time x Spatial x Spatial for videos) and can describe non-stationary information. Three architectures are used.
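For reference, the per-batch normalization referred to above is the standard batch-normalization transform (textbook form, not something specific to this paper):

$$\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \varepsilon}}, \qquad y_i = \gamma\,\hat{x}_i + \beta,$$

where $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}^2$ are the mean and variance of the current batch and $\gamma$, $\beta$ are learned per-feature parameters.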
V. Deep Learning Classifiers Overview
• 1) ConvNet for Spectrograms: the spectrograms fed to the ConvNet were calculated using Hann windows of length 28 with an overlap of 20, yielding a 4x15 matrix per channel. The first frequency band was removed in an effort to reduce baseline drift and motion artifacts. As the armband features eight channels, eight such spectrograms were calculated, yielding a final matrix of 4x8x14 (Time x Channel x Frequency). A short sketch of this computation follows this slide.
• The ConvNet was implemented in the Python framework Theano.
• Adam optimizer, learning rate (alpha) = 0.00681, MC dropout = 0.5, batch size = 128, early stop = 10%.
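The quoted spectrogram shapes can be checked with SciPy. This is a hedged sketch of the described preprocessing, not the authors' Theano code: with a Hann window of 28 samples and an overlap of 20 applied to a 52-sample window, SciPy returns 15 frequency bins x 4 time segments per channel; dropping the first (DC) band and stacking the 8 channels gives 4 x 8 x 14.

```python
import numpy as np
from scipy.signal import spectrogram

def emg_spectrograms(window, fs=200):
    """window: (8, 52) raw EMG -> (4, 8, 14) Time x Channel x Frequency."""
    specs = []
    for ch in window:
        f, t, sxx = spectrogram(ch, fs=fs, window='hann',
                                nperseg=28, noverlap=20)
        # sxx has shape (15 freqs, 4 segments); drop the first frequency
        # band (baseline drift / motion artefact) and put time first.
        specs.append(sxx[1:, :].T)          # (4, 14)
    return np.stack(specs, axis=1)          # (4, 8, 14)

print(emg_spectrograms(np.random.randn(8, 52)).shape)  # (4, 8, 14)
```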
V. Deep Learning Classifiers Overview
• 2) ConvNet for Continuous Wavelet Transforms:
• Built in a similar fashion to the previous one. Both the Morlet and the Mexican Hat wavelets were considered for this work, and the Mexican Hat performed better.
• The CWTs were calculated with 32 scales, yielding a 32x52 matrix per channel, which is then downsampled (sketched after this slide). Similarly to the spectrogram, the last row of the calculated CWT was removed to reduce motion artifacts. Additionally, the last column was removed to provide an even number of time columns from which to perform the slow-fusion process. The final matrix shape is thus 12x8x7 (i.e. Time x Channel x Scale).
• All hyperparameters stayed the same except the learning rate, alpha = 0.08799.
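The CWT dimensions can be checked with PyWavelets (my own choice of library; the paper does not state its implementation). The Mexican-hat ('mexh') transform at 32 scales of a 52-sample window yields a 32 x 52 matrix per channel, which the paper then downsamples and trims to the final 12 x 8 x 7 input.

```python
import numpy as np
import pywt

def emg_cwt(window, n_scales=32):
    """window: (8, 52) raw EMG -> list of 8 matrices of shape (32, 52).

    The paper's subsequent downsampling/trimming to 12 x 8 x 7 is not
    reproduced here because its exact parameters are not given in the text.
    """
    scales = np.arange(1, n_scales + 1)
    cwts = []
    for ch in window:
        coeffs, _ = pywt.cwt(ch, scales, 'mexh')   # (32, 52)
        cwts.append(coeffs)
    return cwts

print(emg_cwt(np.random.randn(8, 52))[0].shape)    # (32, 52)
```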
V. Deep Learning Classifiers Overview
• 3) ConvNet for raw EMG:
• This network helps assess whether employing time-frequency features leads to sufficient gains in accuracy to justify the increase in computational cost.
• The architecture was taken from another paper performing the same classification, implemented in PyTorch v0.4.1, with a change in learning rate obtained through cross-validation.
• Finally, the authors made some enhancements to the raw-EMG ConvNet, as shown in the figure.
VI. Transfer Learning
• TL is the usual solution when only a small amount of data is available. The authors also perform an automatic alignment step to address the sensor misalignment problem from one subject to another.
• A) Progressive Neural Networks: fine-tuning is the most widely used TL technique. It trains a model on a source domain and, when a new task arrives, uses the trained weights as the initial weights for the new task. However, fine-tuning suffers from several problems.
• PNNs are used to address these problems: the model is pre-trained on the source domain and its weights are frozen. When a new task appears, a new network with random initialization is created and connected in a layer-wise fashion to the original network.
• B) Adaptive Batch Normalization: in opposition to the PNN architecture, which uses different networks for the source and the target, AdaBatch employs the same network for both tasks. The TL occurs by freezing all the network's weights (learned during pre-training) when training on the target, except for the parameters associated with batch normalization. The hypothesis behind this technique is that the label-related information is stored in the network's weights, while the domain-related information is captured by the BN statistics (a minimal sketch of this freezing scheme follows).
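A minimal PyTorch sketch of the AdaBatch idea described above (freeze every pre-trained weight except the batch-normalization parameters); this is an illustration of the technique, not the authors' implementation.

```python
import torch.nn as nn

def freeze_all_but_batchnorm(model: nn.Module):
    """AdaBatch-style transfer: keep only BatchNorm parameters trainable.

    After pre-training on the source subjects, all convolutional and
    fully-connected weights are frozen; only the BatchNorm gamma/beta
    (and the running statistics) adapt to the target subject.
    """
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
            for p in module.parameters():
                p.requires_grad = True
        else:
            for p in module.parameters(recurse=False):
                p.requires_grad = False
```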
VI. Transfer Learning
• C) Proposed Transfer Learning Architecture
The main tenet behind TL is that similar tasks can be completed in similar ways. The difficulty in this paper's context is therefore to learn a mapping between the source and target tasks so as to leverage the information learned during pre-training.
• The TL scheme works as follows: the ConvNet from Section V is trained on the pre-training data (this is the source network), and its weights are then frozen except for the BN parameters. A second ConvNet with a nearly identical architecture (the secondary network) is created with a different initialization. The two networks are connected as shown in the figure below, and the whole system is then trained together on the new participant's training data.
• The whole system is referred to as the target network.
VII. Classifier Comparison
(These slides contain the classifier-comparison figures and tables from the paper; no text content.)
VIII. Case Study
• This last experimental section proposes a use-case study of the real-time performance of the classifier over a period of 14 days with eight able-bodied participants. The main goal of this use-case experiment is to evaluate whether users can self-adapt and improve the way they perform gestures based on visual feedback from complex classifiers.
• The eight participants were randomly separated into two equal groups: with and without feedback.
• The details of the test (period, duration, accuracy, etc.) are described.
• Participants reported muscle fatigue, which is why the test duration was limited.
IX. Discussion
• The TL-augmented ConvNets significantly outperformed their non-augmented versions regardless of the number of training cycles.
• Increasing the number of training cycles increased performance.
• Overall, the proposed TL-augmented ConvNets were competitive with the current state of the art.
• Furthermore, the TL method outperformed the non-augmented ConvNets in the out-of-sample experiment. This suggests that the proposed TL algorithm enables the network to learn features that generalize not only across participants but also to never-before-seen gestures. As such, the weights learned from the pre-training dataset can easily be re-used in other work that employs the Myo Armband with different gestures.
• The paper proposed an architecture in which two nearly identical networks each perform a different task, so further differentiation of the two networks might lead to increased performance. A possible future work is therefore to change the second network's architecture.
• The real-time experiment accuracy was lower than the Evaluation Dataset accuracy (95.42% vs 98.31%, respectively). This is likely due to the reaction delay of the participants, but more importantly to the transitions between gestures. These transitions are not part of the training dataset because they are too time-consuming to record, given the number of possible transitions ... Consequently, the classifier's predictive power on transition data is expected to be poor in these circumstances. As such, being able to accurately detect such transitions in an unsupervised way might have a greater impact on the system's responsiveness than simply reducing the window size. This and the aforementioned point will be investigated in future work.
IX. Discussion
• The main limitation of this study is the absence of tests with amputees. Additionally, the issue of electrode shift has not been explicitly studied, and the variability introduced by different limb positions was not considered when recording the dataset.
• A limitation of the proposed TL scheme is its difficulty adapting when the new user cannot wear the same number of electrodes as the group used for pre-training. This is because changing the number of channels changes the representation of the phenomenon (i.e. muscle contraction) being fed to the algorithm. The most straightforward way of addressing this would be to numerically remove the relevant channels from the pre-training dataset and then re-run the proposed TL algorithm on an architecture adapted to the new input representation. Another solution is to treat the EMG channels in a similar way to color channels in an image; this type of architecture, however, seems to perform worse than the ones presented in this paper.
X. Conclusion
• This paper presented three novel ConvNets that are competitive with current classifiers.
• On the newly proposed evaluation dataset, the TL-augmented ConvNet achieves an average accuracy of 98.31% over 17 participants.
• On the NinaPro DB5 dataset (18 hand/wrist gestures), the proposed classifier achieved an average accuracy of 68.98% over 10 participants using a single Myo Armband.
• This dataset showed that the proposed TL algorithm learns sufficiently general features to significantly enhance the performance of ConvNets on out-of-sample gestures.
• Future work will focus on adapting and testing the proposed TL algorithm on upper-extremity amputees. This will bring additional challenges due to the greater muscle variability across amputees and the decrease in classification accuracy compared to able-bodied participants [35].
PAPER END
PAPER OVERVIEW
2. Intelligent Human-Computer Interaction Based on Surface EMG Gesture Recognition
IEEE Xplore, 2019, China
PAPER OVERVIEW - STATISTICS -
Year of publication: 2019
Paper citations: 64
Full-text views: 2,970
Paper: 10
Contents
Abstract
I. Introduction
II. Related Work
III. Method
IV. Result Analysis of Gesture Recognition
V. Conclusion
Abstract
• Urban intelligence is an emerging concept that guides a series of infrastructure developments in modern smart cities. Human-computer interaction (HCI) is the interface between residents and the smart city, and it plays a key role in bridging the gap in applying information technologies in modern cities.
• Hand gestures have been widely acknowledged as a promising HCI method.
• State-of-the-art signal processing technologies are not robust in feature extraction and pattern recognition with sEMG signals.
• In this paper, linear discriminant analysis (LDA) and the extreme learning machine (ELM) are implemented in a hand gesture recognition system.
• The characteristic map slope (CMS) is extracted using a feature re-extraction method, because CMS can strengthen the relationship of features across the time domain and enhance the feasibility of cross-time identification.
• This study focuses on optimizing for time differences in sEMG pattern recognition.
I. Introduction
• HCI is important and can be used for IoT, IT, cloud computing, and so on.
• Discusses EMG signals and the fact that they can be used for gesture recognition.
• At present, research on the sEMG signal mainly focuses on its spatial non-stationarity, but the time non-stationarity of sEMG has always been a big challenge.
• In this paper, LDA is used to reduce the dimensionality of the high-dimensional signals and eliminate redundant information in the sEMG.
• A feature re-extraction method is adopted to extract the characteristic slope value, and the ELM is optimized by a genetic algorithm to establish a gesture recognition system across the time dimension. The signal is studied in the time dimension.
• Paper overview:
• Section II reviews the basic methods of gesture recognition, including feature extraction, feature reduction and pattern recognition based on neural networks.
• Section III elaborates the whole experimental process, including the sEMG signal collection plan, the proposed feature extraction method for reducing time non-stationarity, the GA-ELM optimized recognition network, and an analysis of the algorithm at each step.
• Section IV: results and conclusion.
II. Related Work
• Describes the generation of the EMG signal ("interesting").
• Pattern recognition consists of three stages: signal detection, signal representation and signal classification. The accuracy is usually 50-70%.
• There has been wide research on feature extraction, feature dimensionality reduction and pattern recognition, such as time-domain processing methods (RMS, zero crossings, waveform length, ...), followed by a discussion of frequency-domain processing methods.
• These two families of methods nevertheless ignore the chaotic and unstable characteristics of the EMG. During movement the EMG is non-linear, so non-linear dynamic methods can construct a multi-dimensional dynamic model from the one-dimensional time series to extract more hidden information. The main non-linear characteristics include correlation dimension, entropy, complexity and the Lyapunov exponent.
• For several reasons, when performing pattern recognition it is necessary to adopt feature dimension reduction.
• Two approaches to feature dimension reduction are explained: feature selection and feature transformation.
• At present, the widely used pattern recognition methods for sEMG mainly include support vector machines (SVM), the radial basis function (RBF) algorithm, artificial neural networks (ANN), hidden Markov models (HMM) and linear discriminant classifiers. ANNs perform best, but this paper uses the simpler extreme learning machine, optimized with a genetic algorithm.
III. Method
• A) Acquisition: the nine selected gestures involve moving the fingers and the wrist.
• The data size is not discussed; it is only mentioned that acquisition took place over three days.
• (Block diagram: acquisition -> preprocessing -> classifier.)
• B) Dimension Reduction and Feature Fusion:
1) Preliminary feature extraction: window functions are used to segment the continuous signals into appropriately sized pieces. Three features are captured: two in the time domain, RMS and waveform length (WL), and one in the frequency domain, the median amplitude spectrum (MAS). These three parameters are the input to the identification network.
2) Feature fusion and dimension reduction: the directly extracted features (RMS, WL and MAS) live in a high-dimensional feature space, which is not suitable for classification. To improve the accuracy of gesture recognition and to generalize the classifier, reducing the dimension of the feature space is very important (roughly, a kind of normalization). The output is the matrix X3.
3) Feature re-extraction:
Here the features that will be used as input to the ANN are computed; the time-domain signal alone would not be enough, and the paper explains why. The matrix X3 is then mapped to a new eigenvector (feature) matrix X4. The purpose of the feature matrix is to map the values of each gesture to a region separate from the other gestures (an equation is used for the mapping).
• C) Classifier design and parameter optimization: the genetic algorithm in GA-ELM is designed to optimize the initial weights and thresholds of the network.
• Extreme learning machine and genetic algorithm:
The ELM is a single-hidden-layer feedforward neural network (a minimal sketch follows this slide).
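Since the classifier is an extreme learning machine (a single-hidden-layer feedforward network with random hidden weights and output weights solved in closed form), a minimal NumPy sketch of the basic ELM is given below. This is my own illustration; the GA optimization of the random weights described in the paper is not reproduced.

```python
import numpy as np

class ELM:
    """Basic extreme learning machine: random hidden layer, least-squares output."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                         # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T                # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Usage: X is the reduced feature matrix (e.g. after LDA), y the gesture labels.
```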
IV. Results
• To illustrate the advantages of the method, the data before and after dimensionality reduction are used as input to the ELM classifier and the results are compared.
• Analysis of the two experimental results before GA optimization:
• The average recognition accuracy of the ELM network after LDA dimension reduction is 67.18%, and that of the ELM network after feature re-extraction is 75.74%.
• Experimental results after GA optimization:
• The results show that the accuracy reaches 79.3%.
V. Conclusion
• This paper studied the secondary (re-)extraction of features for static gestures based on sEMG. Theoretical and experimental results were obtained; however, there are still problems that need to be explored further. The extraction of the eigenvalue slope improves the recognition accuracy in this work, and defining new features or feature selection methods is a promising research direction for the future.
PAPER END
PAPER OVERVIEW
3. EMG-Based Gesture Recognition: Is It Time to Change Focus from the Forearm to the Wrist?
IEEE Xplore, 2022, Canada
PAPER OVERVIEW - STATISTICS -
Year of publication: 2020
Paper citations: 4
Full-text views: 837
Paper: 10
Contents
Abstract
I. Introduction
II. Methodology
III. Results
IV. Discussion
V. Conclusion
Abstract
• This study presents a comprehensive and systematic investigation of the feasibility of hand gesture recognition using EMG signals recorded at the wrist. A direct comparison of signal and information quality is conducted between concurrently recorded wrist and forearm signals. Both signals were collected simultaneously from 21 subjects while they performed a selection of 17 different single-finger, multi-finger and wrist gestures.
• Wrist EMG signals yielded consistently higher (p < 0.05) signal quality metrics than forearm signals for gestures that involved fine finger movements, while maintaining comparable quality for wrist gestures.
• Classifiers trained and tested using wrist EMG signals achieved average accuracy levels of 92.1% for single-finger gestures, 91.2% for multi-finger gestures and 94.7% for the conventional wrist gestures.
I. Introduction
• Applications of sEMG gesture recognition for the IoT.
• Pattern recognition (PR) is widely performed using the forearm due to its use in prosthetics; the most common commercial use is indeed in prosthetics.
• Early attempts at commercializing PR were the Myo and gForce armbands, both aimed at amputees. The authors argue that there is more we can do with PR than prosthetics, and that forearm-based PR has remained confined to the lab because customers are not used to wearing bands around their forearm. That is why this work focuses on reading the data from the wrist rather than the forearm.
• Other technologies have also been used for PR, such as pressure sensing, ultrasound and data gloves, but they have stayed in the lab without commercial products, for reasons the paper discusses.
• IMUs have been effective when combined with EMG to detect hand positions and air gestures, but IMUs fail to detect salient hand gestures that do not involve movement of the wrist joint.
• Another reason for targeting the wrist instead of the forearm is that a device should be comfortable to wear for a long time and socially accepted.
• Few studies have used wrist EMG. Some combined it with forearm EMG, and some with the intrinsic muscles of the hand (very high accuracy).
• Another study compared EMG at the wrist and at the forearm and did not find a difference in strength or SNR.
I. Introduction
• Some studies focused only on the wrist, but within a very limited scope. Consequently, for the first time, this study encompasses a comprehensive and systematic investigation of the feasibility of using wrist EMG signals for hand gesture recognition with a broad set of finger and hand gestures and many subjects.
• The first objective of the paper is to demonstrate the value of wrist EMG pattern recognition.
• The second objective of this work is to explore how the body of knowledge from the forearm EMG literature, such as feature engineering and dimensionality reduction, can be applied to wrist EMG signals.
• Finally, the paper concludes with a set of recommendations for future research studies and the commercialization of wrist-based EMG wearables.
II. Methodology
• A) Data Collection: EMG data were collected from 21 able-bodied participants (25.24 ± 7.56 years, 14 males, 7 females, 19 right-handed). Each participant wore 8 electrodes, 4 on the wrist and 4 on the forearm; the sampling rate was set to 1 kHz. The electrode positions are discussed in more detail.
• Participants were asked to perform 18 gestures. These gestures included common contractions consisting of 5 single-finger extensions, 6 multi-finger gestures and 6 wrist gestures.
• EMG data were collected from each participant while performing 4 repetitions of each gesture for 2 seconds, with breaks of 2 seconds between gestures. Hence, a balanced dataset of 72 repetitions per participant was collected.
• B) Evaluating the signal quality: the signals from the forearm are compared with the signals from the wrist. Four signal quality indices were evaluated to quantify the amount of useful information contained in the signals and to evaluate signal noise: 1) Forearm-to-Wrist Signal Strength Ratio (FWR); 2) Signal-to-Noise Ratio (SNR); 3) Signal-to-Motion Artifact Ratio (SMR); 4) Power Spectrum Deformation. Each is discussed in detail.
• C) Feature extraction and engineering: feature engineering techniques were applied to assess the performance of individual state-of-the-art EMG features and a feature set previously identified in the forearm literature.
• EMG signals were segmented into 150 ms sliding windows with 50% overlap. A total of 23 time-domain and frequency-domain features were extracted from each channel, as listed in Table I.
II. Methodology
• This resulted in a feature vector of length 280 per gesture repetition, considering all 8 channels and given that the HIST and AR features produce 10 and 4 values, respectively. This feature set has been shown to provide good hand-gesture recognition rates using forearm EMG signals.
• Class separation and feature repeatability were evaluated in the multi-dimensional feature space using two statistical indices: the Davies-Bouldin index (DBI) and the repeatability index (RI).
• D) Dimensionality Reduction:
• 1) Feature selection: the sequential floating feature selection (SFFS) algorithm was applied to identify the most important features for gesture recognition from the wrist and forearm EMG signals.
• 2) Feature projection: to visualize differences in the underlying information content, principal component analysis (PCA) was applied to all wrist and forearm EMG features listed in Table I.
• E) Classification Algorithm: LDA and SVM (with a linear kernel) classifiers were employed in this study, using leave-one-repetition-out cross-validation to estimate classification accuracies (a sketch of this protocol follows this slide).
• F) Statistical Analysis: t-tests were performed.
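The evaluation protocol (LDA and linear-kernel SVM with leave-one-repetition-out cross-validation) can be sketched with scikit-learn. This is my own illustration with placeholder random data: X holds one 280-dimensional feature vector per repetition, y the gesture labels, and rep_id records which of the 4 repetitions each row came from (the names are assumptions, not from the paper).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Placeholder data: 18 gestures x 4 repetitions = 72 rows of 280 features.
X = np.random.randn(72, 280)
y = np.repeat(np.arange(18), 4)
rep_id = np.tile(np.arange(4), 18)

logo = LeaveOneGroupOut()                 # leave-one-repetition-out CV
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="linear"))]:
    acc = cross_val_score(clf, X, y, groups=rep_id, cv=logo)
    print(name, acc.mean())
```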
III. Results
(These slides contain the results figures and tables from the paper; no text content.)
IV. Discussion
• A) Wrist electrode configuration:
• Some of the forearm muscles that control extension/flexion of the medial four fingers, such as the extensor digiti minimi (EDM), extensor digitorum (ED) and flexor digitorum superficialis (FDS), extend down to the wrist. Also, the extensor carpi ulnaris (ECU) muscle, which aids in hand extension and abduction, extends from the proximal forearm down to the wrist [36]. Thus, the electrical activity of all these muscles can still be recorded at the wrist level. Furthermore, there is a unique group of muscles that are only present at the wrist, near the distal end of the forearm, proximal to the ulnar styloid process, and that control fine finger and thumb movements.
• Hence, the electrodes placed more distally, near the wrist joint, are likely to have measured EMG activity relevant to fine finger movements.
• B) Feasibility of Wrist-Based EMG Gesture Recognition:
• Results showed that wrist EMG signals had higher signal quality than the forearm signals for multiple gestures involving fine finger movements. Indeed, gestures that involved movement of the thumb and index fingers had significantly higher wrist signal strength (FWR < 1) and SNR values when compared to the forearm signals.
• In the current study, the results showed that wrist EMG had higher amplitudes and SNR values than forearm signals for gestures that involved finger motions.
• READ THE REST....
IV. Discussion
• Future studies should thus explore the effect of dynamic and confounding factors on the performance of wrist-based EMG pattern recognition systems. These factors, as outlined in the forearm literature, may include limb position, electrode shift, and the effect of contraction intensity on pattern recognition performance. Another point to consider in future studies is the difference between the continuous classification approach in prosthetics and the discrete gesture approach in HCI (e.g. snapping fingers). Other directions for researchers to pursue are the performance of wrist-worn EMG systems over time and how to maintain performance over many days without the need to retrain the system.
V. Conclusion
• Wrist-based EMG classifiers achieved average accuracy levels of more than 91.2% for fine finger gestures and 94.7% for the conventional wrist gestures. This study highlights the great potential of wrist EMG signals and paves the way for reliable wrist-based EMG wearables.
PAPER END
PAPER OVERVIEW
4. Master's Thesis: Myoelectric Human-Computer Interaction Using LSTM-CNN Neural Network for Dynamic Hand Gestures Recognition
IEEE Xplore, 2021, USA
PAPER OVERVIEW - STATISTICS -
Year of publication: 2021
Paper citations: 0
Full-text views: 62
Paper: 54
ABSTRACT
• There are two goals for this research:
• 1) Decrease the effect of arm position when recognizing gestures. To tackle this issue, a CNN-LSTM neural network is introduced. Compared to Dr. Shin's work, the new model is able to classify more gestures in more positions.
• 2) Apply the new model to a human-computer interaction system: a 7-DoF Kinova robot arm.
Contents
1. INTRODUCTION AND LITERATURE REVIEW
1.1 Literature Review
1.1.1 Electromyographic Signal
1.1.2 Myoelectric Control
1.2 Observation
1.3 Thesis Objective
2. PROPOSED METHOD FOR GESTURE RECOGNITION
2.1 Introduction
2.2 Proposed Method
2.2.1 EMG Gestures Dataset
2.2.2 CNN-LSTM Network Structure
2.2.3 Training Results
2.3 Real-Time Gesture Recognition
2.3.1 Real-Time Recognition System
2.3.2 Data Collection
2.4 Results
2.4.1 Real-Time Classification Accuracy
2.4.2 Results Analysis
2.4.3 Discussion
3. REAL-TIME HCI SYSTEM FOR A 7-DOF ROBOT ARM CONTROL
4. SUMMARY AND CONCLUSIONS
Introduction
• 1.1 Literature review: skipped, not important.
• 1.1.1 Electromyographic Signal: pattern recognition for EMG classification usually consists of data pre-processing, data segmentation, feature extraction, dimensionality reduction and classification stages. Each stage is then briefly explained.
• The thesis repeatedly refers to Dr. Shin's work and includes a diagram of the LSTM Dr. Shin used.
Introduction
• 1.1.2 Myoelectric Control: not important.
• 1.2 Observation: gestures include dynamic and static motions, and their recognition is a time-series problem; therefore, a recurrent neural network (RNN) is appropriate.
• 1.3 Thesis Objective: this project aims to design a system that enables people to use gestures, via a Myo armband, to manipulate a robot arm. The system has two main parts: 1) a recognition part, using neural networks to recognize dynamic gestures; 2) a control part, implementing the corresponding functions on a robot arm for different gestures.
This system should also ensure:
• Position independence: gestures can be recognized while limb positions change.
• Prompt response: the time to complete a task using myoelectric control is not much longer than with other control methods.
• Two devices are used in this research: the Myo armband and a 7-DoF Kinova robot arm.
2.1 Introduction
• The four challenges being faced: the limb position factor, the contraction intensity factor, the electrode shift factor and the within/between-day factor.
• Mentions a paper that explained in detail how limb position exerts its influence: the muscular activity that maintains limb positions against gravitational force depends on the position of the limb [16].
• Dr. Shin's work shows that dynamic gestures are in principle more reliable indicators of intent.
2.2 Proposed Method
• 2.2.1 EMG Gestures Dataset: there are five dynamic gestures with five arm positions in the dataset (dynamic = includes motion, such as waving).
• The training data come from seven human subjects. Every gesture at each position is repeated 10 times, so after the seven experiments there are 7 x 5 x 5 x 10 = 1750 samples. Each sample is an 8 x 680 matrix, where 8 is the number of channels and 680 the number of time steps (matrices are zero-padded so that every matrix has the same length; see the padding sketch after this slide). The original dataset is shown below.
• Because the dataset is small, augmentation is applied, as is done with images. The dataset after augmentation is also shown.
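A hedged sketch of the zero-padding step described above: each recorded sample (8 channels x a variable number of time steps) is padded to the fixed 8 x 680 shape before being fed to the CNN-LSTM. The function name and the trailing-pad choice are my assumptions; the thesis only states that matrices are padded with zeros to equal length.

```python
import numpy as np

def pad_to_length(sample, target_len=680):
    """Zero-pad an (8, T) EMG sample to (8, target_len)."""
    channels, t = sample.shape
    padded = np.zeros((channels, target_len), dtype=sample.dtype)
    padded[:, :min(t, target_len)] = sample[:, :target_len]
    return padded

# e.g. a gesture recorded for 540 time steps:
print(pad_to_length(np.random.randn(8, 540)).shape)  # (8, 680)
```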
2.2 Proposed Method
• 2.2.2 CNN-LSTM network structure: the CNN and LSTM are explained in detail (an illustrative skeleton follows this slide).
• 2.2.3 Training Results: the dataset is split into a training set and a validation set of 80% and 20%, respectively. For the original dataset this gives 1750 x 0.8 = 1400 samples for training and 350 samples for validation; for the augmented dataset, 10500 x 0.8 = 8400 samples for training and 2100 samples for validation.
• The validation accuracy on the original dataset is approximately 90%, which is lower than that of the model trained on the augmented dataset, whose training accuracy is 95.8% and validation accuracy 92.7%.
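The thesis summary above does not give layer-by-layer details, so the following PyTorch skeleton is only an illustrative guess at a CNN-LSTM of the kind described: 1-D convolutions over the 8-channel, 680-step input, followed by an LSTM and a 5-class output. The layer sizes are assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM for (batch, 8 channels, 680 time steps) EMG."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(8, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),                       # 680 -> 170 time steps
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),                       # 170 -> 42 time steps
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                          # x: (batch, 8, 680)
        feats = self.conv(x)                       # (batch, 64, 42)
        feats = feats.permute(0, 2, 1)             # (batch, 42, 64)
        _, (h_n, _) = self.lstm(feats)
        return self.fc(h_n[-1])                    # (batch, n_classes)

print(CNNLSTM()(torch.randn(2, 8, 680)).shape)     # torch.Size([2, 5])
```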
2.3 Real-Time Gesture Recognition
• 2.3.1 Real-Time Recognition System: a diagram shows the steps of reading a gesture.
• 2.3.2 Data Collection: there are 3000 samples for testing.
• 2.4.1 Real-Time Classification Accuracy: 84.2% overall.
• Several confusion matrices are then presented.
• 2.4.3 Discussion:
• From the previous analysis, there are three observations:
1. The model is relatively position-independent. The effect of arm positions still exists but is small.
2. Different gestures impact recognition accuracy. In this model, the influence of the gesture on classification accuracy may be bigger than that of the position; gestures that share fewer commonalities get better results.
3. The model is affected by subject dependency, as the best and worst results clearly show. Because different human subjects perform gestures with different habits, more data from various people may help.
2.5 Conclusion:
The training and validation accuracies are both over 90%.
The overall real-time accuracy is 84.2%.
4. SUMMARY AND CONCLUSIONS
• The CNN generates the features itself, instead of time- or frequency-domain features being calculated manually before input.
• Compared to much of the research, which focuses on gesture recognition at a single limb position and performs poorly when the limb position changes, the model in this research is relatively position-invariant.
• Compared to Dr. Shin's work [3], which involves 3 gestures and 4 limb positions in real-time recognition, this research includes 5 gestures and 5 limb positions. The model is more robust to gestures and limb positions in real-time classification.
• Future Work:
1. Removing the trigger (fist) in the real-time system.
2. Speeding up processing time, for example by using a better GPU.
3. Improving classification accuracy. Other advanced methods could be tried, such as transfer learning. More data from various types of people may also help, since the number of gestures and arm positions is not small and there are considerable differences among human beings.
4. Increasing the number of recognizable gestures.
5. Reducing the effect of arm position on gesture recognition, i.e. gestures performed at a random arm position could be classified successfully.
6. Decreasing the effect of human subject dependency.
7. Applying the proposed myoelectric control system to other applications, such as drone control.
8. Adding feedback to the myoelectric control system.
THANK YOU
TO THE NEXT STEP
  • 9. I. INTRODUCTION
• Covers the importance of prosthetics, what an EMG signal is, and how such signals can be recorded and used with AI to control bionics.
• The literature review showed that earlier papers extracted hand-crafted features from the signals to predict gestures; deep learning was introduced recently, shifting the paradigm from feature engineering to feature learning. One of the most important parameters for successful prediction with AI is the amount of data. The traditional approach was to collect a very large amount of data from a single user so the deep learning model could predict that user's gestures, but this paper proposes aggregating recordings from multiple users to generalize the prediction. Consequently, deep learning offers a particularly attractive context for developing a Transfer Learning (TL) algorithm that leverages inter-user data by pre-training a model on multiple subjects before training it on a new participant.
• As such, the main contribution of this work is a new TL scheme employing a convolutional network (ConvNet) to leverage inter-user data within the context of sEMG-based gesture recognition.
• An earlier paper [7] followed this approach. This paper continues that work, improving the TL algorithm to reduce its computational load and improve its performance. Additionally, three new ConvNet architectures, employing three different input modalities and specifically designed for robust and efficient classification of sEMG signals, are presented. The raw signal, short-time Fourier transform-based spectrograms and the Continuous Wavelet Transform (CWT) are considered for characterizing the sEMG signals fed to these ConvNets. Another major contribution of this article is the publication of a new sEMG-based gesture classification dataset comprising 36 able-bodied participants.
  • 10. I. INTRODUCTION
• This dataset and the implementation of the ConvNets, along with their TL-augmented versions, are made readily available. Finally, the paper further expands the aforementioned conference paper by proposing a use-case experiment on the effect of real-time feedback on the online performance of a classifier, without recalibration, over a period of fourteen days.
• The paper is organized as follows:
• Overview of related work on gesture recognition using deep learning and transfer learning.
• Presentation of the new datasets being used, with the data acquisition and preprocessing, as well as the NinaPro DB5 dataset.
• Presentation of the different state-of-the-art feature sets employed in this work.
• The networks' architectures.
• Presentation of the TL model used.
• Comparison with the state of the art in gesture recognition.
• A real-time use case.
• Results and discussion.
  • 12. II. RELATED WORK
• EMG signals vary between subjects even with precise placement of the electrodes. That is why a classifier trained on one user performs only slightly better than random when someone else uses it, nowhere near acceptable precision. Therefore, several sophisticated techniques have been employed to exploit inter-user information. For example, research has been done on common features between the original subjects and a new user, and on pre-trained models that remove the need to work with data from multiple subjects. These non-deep-learning approaches showed considerably better results than their non-augmented versions.
• The short-time Fourier transform (STFT) was not used much in past decades for classifying EMG data. One possible reason is the large number of features it produces, which is computationally expensive. Moreover, STFT features have also been shown to be less accurate than wavelet transforms on their own for classifying EMG signals. Recently, however, STFT features in the form of spectrograms were applied as the input feature space for sEMG classification by leveraging ConvNets.
• Continuous wavelet transform (CWT) features have been used for electroencephalography and EMG signal analysis, but mainly for the lower limbs. Wavelet-based features have also been used in the past for sEMG-based hand gesture classification; however, those works relied on the discrete wavelet transform and the wavelet packet transform rather than the continuous wavelet transform (the paper gives some reasons for those choices). This paper introduces the CWT for sEMG gesture recognition for the first time.
• Recently, ConvNets have started to be used for gesture recognition using single arrays and matrices of electrodes. To the best of the authors' knowledge, this paper, which is an extension of [7], is the first time inter-user data is leveraged through TL for training deep learning algorithms on sEMG data.
  • 14. III. sEMG Datasets
• A) Myo Dataset:
• One of the main contributions of this paper is to provide a new publicly available sEMG-based hand gesture recognition dataset, referred to as the Myo Dataset. The dataset is formed from two sub-datasets, one for training and one for evaluation. The first (pre-training) subset, composed of 19 able-bodied participants, is used to build, validate and optimize the classifiers, while the second (evaluation) subset, composed of 17 able-bodied participants, is used for the final tests. This is the largest myoelectric dataset available.
• The data acquisition protocol was approved by the Comités d'Éthique de la Recherche avec des êtres humains de l'Université Laval (approval number: 2017-026/21-02-2016) and informed consent was obtained from all participants.
• 1) Recording hardware:
• The dataset was recorded using the Myo armband: 8 channels, dry electrodes, low sampling rate (200 Hz), a low-cost consumer-grade armband.
• The armband is simply slipped onto the forearm, which is far easier than gel-based electrodes, which require shaving the skin to obtain optimal contact between the subject's skin and the electrodes.
• The armband still has limitations: dry electrodes are less accurate and less robust to motion artifacts than gel-based ones.
• The recommended EMG bandwidth is in the range of 5-500 Hz, which requires a sampling frequency of at least 1000 Hz; the Myo armband is limited to 200 Hz. This loss of information impacts the ability of various classifiers. As such, robust and adequate classification techniques are needed to process the collected signals accurately.
  • 15. III. sEMG Datasets
• 2) Time Window Length:
• For real-time control in a closed loop, latency is an important factor to consider. One paper recommended a maximum delay of 300 ms, while others recommended times between 100 and 250 ms. Since classifier performance matters more than raw speed, a window size of 260 ms was selected to achieve a reasonable number of samples per prediction given the armband's low sampling frequency.
• 3) Labeled Data Acquisition Protocol:
• The seven hand gestures considered in the work are shown here. The labeled data were recorded by having the user hold each gesture for five seconds, and recording only started once the user was familiar with the gestures. Five seconds were given to the user between gestures and this time was not recorded. Recording the seven gestures for five seconds each is called a cycle (35 s), and four cycles form a round. For the pre-training dataset, a single round is available per subject (140 s per participant). For the evaluation dataset, three rounds are available, with the first round used for training (i.e., 140 s per participant) and the last two for testing (i.e., 240 s per participant).
• During the recordings, the subjects were asked to stand and hold their forearm parallel to the floor, and the armband was tightened and worn in the same orientation shown to avoid bias. The dataset consists of the raw EMG signals.
• Signal processing must be applied to efficiently train a classifier on the data recorded by the Myo armband. The data is first segmented with sliding windows of 52 samples (260 ms) with an overlap of 235 ms (i.e., 7 x 190 windows for one cycle of 5 s of data). Employing windows of 260 ms leaves 40 ms for pre-processing and classification while still staying within the 300 ms target.
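As a rough illustration of this windowing step, here is a minimal Python sketch; the function name sliding_windows and the use of NumPy are illustrative, and only the 52-sample window and the 5-sample stride (the 235 ms overlap at 200 Hz) come from the description above.

```python
import numpy as np

def sliding_windows(emg, window=52, stride=5):
    """Segment a (n_samples, n_channels) EMG recording into overlapping windows.

    window=52 samples is 260 ms at the Myo's 200 Hz sampling rate;
    stride=5 samples (25 ms) reproduces the 235 ms overlap described above.
    """
    n_samples = emg.shape[0]
    starts = range(0, n_samples - window + 1, stride)
    return np.stack([emg[s:s + window] for s in starts])

# Example: one 5 s cycle of 8-channel Myo data (1000 samples at 200 Hz).
cycle = np.random.randn(1000, 8)
windows = sliding_windows(cycle)
print(windows.shape)  # (190, 52, 8) -> 190 windows per gesture per cycle
```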
  • 17. IV. Classic EMG Classification
• Before deep learning was used for classification, feature engineering was the norm: trends in the EMG signal for each gesture were identified manually, and over the years many efficient time-domain and frequency-domain features were proposed.
• The paper also tests four feature sets from the literature (features for manually characterizing hand gestures) on five different classifiers (SVM, ANN, RF, K-NN and LDA) and then compares them with the proposed TL deep learning approach. The hyperparameters for each classifier were selected using three-fold cross-validation alongside random search, testing 50 different combinations of hyperparameters for each participant's dataset and each classifier. The hyperparameters considered for each classifier are presented in Appendix D.
• The authors also applied dimensionality reduction and showed that, in most cases, it reduced the computational cost and increased performance.
• The four feature sets selected for the comparison are (a sketch of the first, classic time-domain set follows this list):
• Time Domain Features (TD)
• Enhanced TD
• NinaPro Features
• SampleEn pipeline
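As a rough illustration of the classic TD set, here is a minimal sketch of Hudgins-style time-domain features (mean absolute value, zero crossings, slope-sign changes, waveform length); the threshold handling and any additional features used in the paper may differ, and the helper name is an assumption.

```python
import numpy as np

def td_features(channel, threshold=0.0):
    """Classic time-domain (TD) feature set for one EMG channel:
    mean absolute value, zero crossings, slope-sign changes, waveform length."""
    x = np.asarray(channel, dtype=float)
    diff = np.diff(x)
    mav = np.mean(np.abs(x))
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(diff) > threshold))
    ssc = np.sum((diff[:-1] * diff[1:] < 0) & (np.abs(np.diff(diff)) > threshold))
    wl = np.sum(np.abs(diff))
    return np.array([mav, zc, ssc, wl])

# One feature vector per 8-channel window: 8 channels x 4 features = 32 values.
window = np.random.randn(52, 8)
features = np.concatenate([td_features(window[:, ch]) for ch in range(8)])
print(features.shape)  # (32,)
```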
  • 20. V. Deep Learning Classifiers Overview
• As mentioned before, due to the limited amount of data available from a single individual, the following are used to address overfitting: Monte Carlo dropout, batch normalization and early stopping. (A minimal sketch of Monte Carlo dropout at inference follows.)
• A) Batch normalization: each batch of examples is normalized separately using its own mean and variance.
• B) Proposed ConvNet architecture:
• A slow-fusion ConvNet architecture is used in this work because of the similarities between video data and the proposed characterization of sEMG signals: both representations have analogous structures (i.e., Time x Spatial x Spatial for videos) and can describe non-stationary information. Three architectures are used.
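A minimal PyTorch sketch of Monte Carlo dropout at inference is given below; the helper name mc_dropout_predict and the number of stochastic passes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_passes: int = 20) -> torch.Tensor:
    """Monte Carlo dropout: keep dropout layers active at inference and average
    the softmax outputs of several stochastic forward passes."""
    model.eval()
    for m in model.modules():              # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_passes)])
    return probs.mean(dim=0)               # averaged class probabilities
```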
  • 21. V. Deep Learning Classifiers Overview
• 1) ConvNet for Spectrograms: the spectrograms fed to the ConvNet were calculated using Hann windows of length 28 with an overlap of 20, yielding a 4x15 matrix per channel. The first frequency band was removed in an effort to reduce baseline drift and motion artifacts. As the armband features eight channels, eight such spectrograms were calculated, yielding a final matrix of 4x8x14 (Time x Channel x Frequency).
• The ConvNet was implemented in the Python framework Theano.
• Adam optimizer, alpha = 0.00681, MC dropout = 0.5, batch size = 128, early stopping using 10% of the data.
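The spectrogram computation can be sketched as follows with SciPy; the choice of scipy.signal.spectrogram and the magnitude mode are assumptions, but the window length (28), overlap (20), dropped first band and resulting 4 x 8 x 14 shape follow the slide.

```python
import numpy as np
from scipy.signal import spectrogram

def emg_spectrograms(window, fs=200):
    """Per-channel spectrograms for one 52-sample Myo window (260 ms at 200 Hz).

    Hann windows of length 28 with an overlap of 20 give 15 frequency bins x 4
    time bins; dropping the first (lowest) band leaves 14, so stacking the 8
    channels yields the 4 x 8 x 14 (time x channel x frequency) input."""
    specs = []
    for ch in range(window.shape[1]):
        f, t, sxx = spectrogram(window[:, ch], fs=fs, window='hann',
                                nperseg=28, noverlap=20, mode='magnitude')
        specs.append(sxx[1:, :].T)         # drop first frequency band -> (4, 14)
    return np.stack(specs, axis=1)         # (time, channel, frequency) = (4, 8, 14)

example = emg_spectrograms(np.random.randn(52, 8))
print(example.shape)  # (4, 8, 14)
```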
  • 22. V. Deep Learning Classifiers Overview
• 2) ConvNet for Continuous Wavelet Transforms:
• Built in a similar fashion to the spectrogram ConvNet. Both the Morlet and the Mexican hat wavelets were considered for this work, and the Mexican hat performed better.
• The CWTs were calculated with 32 scales, yielding a 32x52 matrix. Downsampling is applied first. As with the spectrograms, the last row of the calculated CWT was removed to reduce motion artifacts. Additionally, the last column was removed to provide an even number of time columns from which to perform the slow-fusion process. The final matrix shape is thus 12x8x7 (i.e., Time x Channel x Scale).
• All hyperparameters stayed the same except for alpha = 0.08799.
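A minimal sketch of the CWT step, assuming the PyWavelets library and its 'mexh' (Mexican hat) wavelet; the paper's exact downsampling to the 12 x 8 x 7 input is not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets; assumed here for the Mexican-hat CWT

def emg_cwt(channel):
    """Continuous Wavelet Transform of one 52-sample EMG channel with the
    Mexican-hat ('mexh') wavelet over 32 scales, giving a 32 x 52 map.
    The paper then downsamples and trims the last row/column to reach the
    final 12 x 8 x 7 input; that step is omitted here."""
    scales = np.arange(1, 33)
    coeffs, _ = pywt.cwt(channel, scales, 'mexh')
    return coeffs                           # (32 scales, 52 time samples)

per_channel = [emg_cwt(np.random.randn(52)) for _ in range(8)]
print(per_channel[0].shape)  # (32, 52)
```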
  • 23. V. Deep Learning Classifiers Overview
• 3) ConvNet for raw EMG:
• This network helps assess whether employing time-frequency features leads to sufficient gains in accuracy to justify the increase in computational cost.
• The architecture was taken from another paper performing the same classification, implemented in PyTorch v0.4.1, with a change in learning rate obtained from cross-validation.
• Finally, some enhancements were made to the raw-EMG ConvNet, as shown on the slide.
  • 25. VI. Transfer Learning
• TL is presented as the solution when only little data is available. An automatic alignment step is also performed to address sensor misalignment from one subject to another.
• A) Progressive Neural Networks: fine-tuning is the most common TL technique; it trains a model on a source domain and, when a new task arrives, uses the trained weights as the initial weights for the new task. However, it suffers from several drawbacks.
• PNNs are used to address these problems: the model is pre-trained on the source domain and its weights are frozen. When a new task appears, a new network with random initialization is created and connected in a layer-wise fashion to the original network.
• B) Adaptive Batch Normalization: in opposition to the PNN architecture, which uses different networks for the source and the target, AdaBatch employs the same network for both tasks. The TL occurs by freezing all the network's weights (learned during pre-training) when training on the target, except for the parameters associated with BN. The hypothesis behind this technique is that the label-related knowledge is stored in the network's weights, while domain-related knowledge is captured by the BN statistics. (A minimal sketch of this freezing scheme follows.)
• C) Proposed Transfer Learning Architecture (continued on the next slide).
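A minimal PyTorch sketch of the AdaBatch idea, freezing everything except the batch-normalization parameters; the helper name and the choice to keep updating the running statistics are assumptions.

```python
import torch.nn as nn

def freeze_all_but_batchnorm(model: nn.Module) -> None:
    """AdaBatch-style transfer: freeze every pre-trained weight except the
    batch-normalization affine parameters, and let BN statistics keep adapting."""
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for module in model.modules():
        is_bn = isinstance(module, bn_types)
        for param in module.parameters(recurse=False):
            param.requires_grad = is_bn     # only BN weight/bias stay trainable
        if is_bn:
            module.track_running_stats = True
```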
  • 26. VI. Transfer Learning
• C) Proposed Transfer Learning Architecture: the main tenet behind TL is that similar tasks can be completed in similar ways. The difficulty in this paper's context is to learn a mapping between the source and target tasks so as to leverage the information learned during pre-training.
• The TL scheme works as follows: the ConvNet from Section V is trained on the pre-training data (the source network), then its weights are frozen except for the BN parameters. A second ConvNet with a nearly identical architecture (the secondary network) is created with a different initialization. The two networks are connected as shown in the figure on the slide, and the whole system is then trained together on the new user's training data.
• The whole system is referred to as the target network. (An illustrative sketch of this two-network scheme follows.)
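For illustration only, here is a toy PyTorch sketch of the two-network idea using small fully connected layers as stand-ins for the paper's ConvNets; all layer sizes, the concatenation-based merge and the class count are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TargetWithSource(nn.Module):
    """Sketch of the two-network transfer scheme: a frozen, pre-trained 'source'
    network feeds its hidden activations layer-wise into a freshly initialized
    'secondary' network; only the secondary network (plus the source BN
    parameters, omitted here for brevity) is trained on the new user."""

    def __init__(self, n_in=32, n_hidden=64, n_classes=7):
        super().__init__()
        self.src1 = nn.Linear(n_in, n_hidden)
        self.src2 = nn.Linear(n_hidden, n_hidden)
        for p in [*self.src1.parameters(), *self.src2.parameters()]:
            p.requires_grad = False                     # source network stays frozen
        self.tgt1 = nn.Linear(n_in, n_hidden)
        self.tgt2 = nn.Linear(2 * n_hidden, n_hidden)   # takes source + target features
        self.head = nn.Linear(2 * n_hidden, n_classes)

    def forward(self, x):
        s1 = torch.relu(self.src1(x))
        s2 = torch.relu(self.src2(s1))
        t1 = torch.relu(self.tgt1(x))
        t2 = torch.relu(self.tgt2(torch.cat([t1, s1], dim=-1)))
        return self.head(torch.cat([t2, s2], dim=-1))

logits = TargetWithSource()(torch.randn(16, 32))  # (16, 7)
```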
  • 34. VIII. Case Study
• This last experimental section proposes a use-case study of the real-time performance of the classifier over a period of 14 days with eight able-bodied participants. The main goal of this use-case experiment is to evaluate whether users can self-adapt and improve the way they perform gestures based on visual feedback from complex classifiers.
• The eight participants were randomly separated into two equal groups: with and without feedback.
• The details of the test (period, duration, accuracy, etc.) are then described.
• Participants reported muscle fatigue, which is why the test duration was limited.
  • 36. IX. Discussion
• The TL-augmented ConvNets significantly outperformed their non-augmented versions regardless of the number of training cycles.
• Increasing the number of cycles increased performance.
• Overall, the proposed TL-augmented ConvNets were competitive with the current state of the art.
• Furthermore, the TL method outperformed the non-augmented ConvNets on the out-of-sample experiment. This suggests that the proposed TL algorithm enables the network to learn features that generalize not only across participants but also to never-seen-before gestures. As such, the weights learned from the pre-training dataset can easily be re-used for other work that employs the Myo armband with different gestures.
• This paper proposed an architecture of two nearly identical networks, each performing a different task. Further differentiation of the two networks might therefore lead to increased performance, so a future work could be to change the architecture of the second network.
• The real-time experiment accuracy was lower than the Evaluation Dataset accuracy (95.42% vs 98.31%, respectively). This is likely due to the reaction delay of the participants but, more importantly, to the transitions between gestures. These transitions are not part of the training dataset because they are too time-consuming to record given the number of possible transitions. Consequently, the classifier's predictive power on transition data is expected to be poor in these circumstances. Being able to accurately detect such transitions in an unsupervised way might therefore have a greater impact on the system's responsiveness than simply reducing the window size. This and the aforementioned point will be investigated in future works.
  • 37. IX. Discussion
• The main limitation of this study is the absence of tests with amputees. Additionally, the issue of electrode shift has not been explicitly studied, and the variability introduced by different limb positions was not considered when recording the dataset.
• A limitation of the proposed TL scheme is its difficulty adapting when the new user cannot wear the same number of electrodes as the group used for pre-training, because changing the number of channels changes the representation of the phenomenon (i.e., muscle contraction) fed to the algorithm. The most straightforward way of addressing this would be to numerically remove the relevant channels from the pre-training dataset and then re-run the proposed TL algorithm on an architecture adapted to the new input representation. Another solution is to treat the EMG channels the same way as color channels in images; this type of architecture, however, seems to perform worse than the ones presented in this paper.
  • 39. X. Conclusion
• This paper presented three novel ConvNets that are competitive with current state-of-the-art classifiers.
• On the newly proposed evaluation dataset, the TL-augmented ConvNet achieves an average accuracy of 98.31% over 17 participants.
• On the NinaPro DB5 dataset (18 hand/wrist gestures), the proposed classifier achieved an average accuracy of 68.98% over 10 participants using a single Myo armband.
• This dataset showed that the proposed TL algorithm learns sufficiently general features to significantly enhance the performance of ConvNets on out-of-sample gestures.
• Future works will focus on adapting and testing the proposed TL algorithm on upper-extremity amputees. This will provide additional challenges due to the greater muscle variability across amputees and the decrease in classification accuracy compared to able-bodied participants [35].
  • 41. 2. Intelligent Human-Computer Interaction Based on Surface EMG Gesture Recognition (IEEE Xplore, 2019, China)
  • 43. Contents: Abstract; I. Introduction; II. Related Work; III. Method; IV. Result Analysis of Gesture Recognition; V. Conclusion
  • 44. Abstract
• Urban intelligence is an emerging concept that guides a series of infrastructure developments in modern smart cities. Human-computer interaction (HCI) is the interface between residents and smart cities and plays a key role in bridging the gap in applying information technologies in modern cities.
• Hand gestures have been widely acknowledged as a promising HCI method.
• State-of-the-art signal processing technologies are not robust in feature extraction and pattern recognition with sEMG signals.
• In this paper, linear discriminant analysis (LDA) and an extreme learning machine (ELM) are implemented in a hand gesture recognition system.
• The characteristic map slope (CMS) is extracted using a feature re-extraction method, because CMS can strengthen the relationship of features across the time domain and enhance the feasibility of cross-time identification.
• This study focuses on addressing the time differences in sEMG pattern recognition.
  • 46. I. Introduction
• HCI is important and can be used for IoT, IT, cloud computing, etc.
• Discusses EMG signals and how they can be used for gesture recognition.
• At present, research on sEMG signals mainly focuses on their spatial non-stationarity, while the time non-stationarity of sEMG has always been a big challenge.
• In this paper, LDA is used to reduce the dimensionality of the high-dimensional signals and eliminate redundant information in the sEMG.
• A feature re-extraction method is adopted to extract the characteristic slope value, and the ELM is optimized with a genetic algorithm to establish a gesture recognition system across time; the signal is studied in the time dimension.
• Paper overview:
• Section II: reviews the basic methods of gesture recognition, including feature extraction, feature reduction and pattern recognition based on neural networks.
• Section III: elaborates the whole experimental process, including the sEMG signal collection plan, the proposed feature extraction method for reducing time non-stationarity, the GA-ELM optimized recognition network, and the algorithm for each step.
• Section IV: results and conclusion.
  • 48. II. Related Work
• Reviews how EMG signals are generated ("interesting").
• Pattern recognition consists of three stages: signal detection, signal representation and signal classification. The accuracy is usually between 50% and 70%.
• There has been extensive research on feature extraction, feature dimensionality reduction and pattern recognition, such as time-domain processing methods (RMS, zero crossings, waveform length, ...); frequency-domain processing methods are discussed next.
• These two approaches, however, ignore the chaotic and unstable nature of EMG. During exercise the EMG is non-linear, so non-linear dynamic methods can construct a multi-dimensional dynamic model from a one-dimensional time series to extract more hidden information. The main non-linear characteristics include correlation dimension, entropy, complexity and the Lyapunov exponent.
• For several reasons mentioned in the paper, feature dimension reduction is therefore necessary when performing pattern recognition.
• Two ways of reducing feature dimensionality are explained: feature selection and feature transformation.
• At present, the widely used pattern recognition methods for sEMG mainly include support vector machines (SVM), radial basis function (RBF) algorithms, artificial neural networks (ANN), hidden Markov models (HMM) and linear discriminant classifiers. ANNs perform best, but this work uses the simpler extreme learning machine, optimized with a genetic algorithm.
  • 50. III. Method
• A) Acquisition: the nine selected gestures involve moving the fingers and the wrist. The data size is not specified; it is only mentioned that recording took place over three days.
• B) Dimension reduction and feature fusion:
• 1) Preliminary feature extraction: window functions are used to segment the continuous signals into appropriately sized pieces (pipeline: acquisition, preprocessing, classifier). Three features are mainly captured, two in the time domain, RMS and waveform length (WL), and one in the frequency domain, the median amplitude spectrum (MAS). These three parameters are the input to the identification network.
• 2) Feature fusion and dimension reduction: the directly extracted features (RMS, WL and MAS) live in high-dimensional feature spaces, which is not suitable for classification. To improve the accuracy of gesture recognition and the generalization of the classifier, reducing the dimension of the feature space is very important (not entirely clear, but it appears to act as a kind of normalization). The output is the matrix X3.
• 3) Feature re-extraction: here the features used as input to the ANN are obtained. The time-domain signal alone is not enough (the paper explains why), so the matrix X3 is mapped to a new eigenvector matrix X4 (the feature matrix). The purpose of the feature matrix is to map the values of each gesture to a region separate from the other gestures (an equation is used for the mapping).
• C) Classifier design and parameter optimization: the genetic algorithm in GA-ELM is designed to optimize the initial weights and thresholds of the network.
• Extreme learning machine and genetic algorithm: the ELM is a single-hidden-layer feedforward neural network. (A minimal ELM sketch follows.)
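A minimal sketch of a plain extreme learning machine in Python (random fixed hidden layer, closed-form output weights); the GA optimization of the random weights used in the paper is omitted, and the class name, hidden size and tanh activation are assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: a single hidden layer with random,
    fixed input weights; only the output weights are solved in closed form."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                       # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T              # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# e.g. ELM(n_hidden=200).fit(X_train, y_train).predict(X_test)
```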
  • 52. IV. Results
• To illustrate the advantages of the method, the data before and after dimensionality reduction are used as input to the ELM classifier and the results are compared.
• Analysis of the two experimental results before GA optimization:
• The average recognition accuracy of the ELM network after LDA dimension reduction is 67.18%, and that of the ELM network after feature extraction is 75.74%.
• Experimental results after GA optimization:
• The results show that the accuracy is 79.3%.
  • 54. V. Conclusion
• In this paper, the secondary feature extraction of static gestures based on sEMG is studied. Theoretical and experimental results have been obtained; however, there are still problems that need to be explored further. The extraction of the eigenvalue slope improves the recognition accuracy in this work, and defining new features or feature selection methods is a promising research direction for the future.
  • 56. 3. EMG-Based Gesture Recognition: Is It Time to Change Focus from the Forearm to the Wrist? (IEEE Xplore, 2022, Canada)
  • 58. Contents: Abstract; I. Introduction; II. Methodology; III. Results; IV. Discussion; V. Conclusion
  • 59. Abstract
• This study presents a comprehensive and systematic investigation of the feasibility of hand gesture recognition using EMG signals recorded at the wrist. A direct comparison of signal and information quality is conducted between concurrently recorded wrist and forearm signals. Both signals were collected simultaneously from 21 subjects while they performed a selection of 17 different single-finger gestures, multi-finger gestures and wrist gestures.
• Wrist EMG signals yielded consistently higher (p < 0.05) signal quality metrics than forearm signals for gestures that involved fine finger movements, while maintaining comparable quality for wrist gestures.
• Classifiers trained and tested using wrist EMG signals achieved average accuracy levels of 92.1% for single-finger gestures, 91.2% for multi-finger gestures and 94.7% for the conventional wrist gestures.
  • 61. I. Introduction
• Applications of sEMG gesture recognition for IoT.
• Pattern recognition (PR) is widely applied to forearm EMG because of its use in prosthetics, which is also its main commercial use.
• Early attempts at commercializing PR were the Myo and gForce armbands, both aimed at amputees. The paper argues there is much more that can be done with PR beyond amputees, and that forearm-based PR has largely stayed in the lab because consumers are not used to wearing bands around their forearm. That is why this work focuses on reading the data from the wrist instead of the forearm.
• Other technologies have also been used for PR, such as pressure sensing, ultrasound and data gloves, but they have remained in the lab with no commercial products; the paper mentions some reasons why.
• IMUs have been effective when combined with EMG to detect hand positions and air gestures, but IMUs fail to detect salient hand gestures that do not involve movement of the wrist joint.
• Another reason for preferring the wrist over the forearm is that a device worn there should be comfortable for long periods and socially accepted.
• Few studies have used wrist EMG. Some used it combined with forearm EMG, and some with the muscles inside the hand (very high accuracy).
• Another study compared EMG at the wrist and at the forearm and did not find a difference in strength or SNR.
  • 62. I. Introduction
• Some studies focused only on the wrist but in a very limited scope. Consequently, for the first time, this study presents a comprehensive and systematic investigation of the feasibility of using wrist EMG signals for hand gesture recognition with a broad set of finger and hand gestures, using many subjects.
• The first objective of the paper is to show the importance of wrist EMG pattern recognition.
• The second objective is to explore how the body of knowledge from the forearm EMG literature, such as feature engineering and dimensionality reduction, can be applied to wrist EMG signals.
• Finally, the paper concludes with a set of recommendations for future research and for the commercialization of wrist-based EMG wearables.
  • 64. II. Methodology
• A) Data Collection: EMG data were collected from 21 able-bodied participants (25.24 ± 7.56 years, 14 males, 7 females, 19 right-handed). Each participant wore 8 electrodes, 4 on the wrist and 4 on the forearm, with the sampling rate set to 1 kHz; the electrode positions are described in more detail in the paper.
• Participants were asked to perform 18 gestures, including common contractions consistent with 5 single-finger extensions, 6 multi-finger gestures and 6 wrist gestures.
• EMG data were collected from each participant while performing 4 repetitions of each gesture for 2 seconds, with breaks of 2 seconds between gestures. Hence, a balanced dataset of 72 repetitions per participant was collected.
• B) Evaluating signal quality: the signals from the forearm are compared with the signals from the wrist. Four signal quality indices were evaluated to quantify the amount of useful information contained in the signals and to evaluate signal noise, each discussed in detail (a hedged sketch of the first two follows): 1) Forearm-to-Wrist Signal Strength Ratio (FWR); 2) Signal-to-Noise Ratio (SNR); 3) Signal-to-Motion Artifact Ratio (SMR); 4) Power Spectrum Deformation.
• C) Feature extraction and engineering: feature engineering techniques were applied to assess the performance of individual state-of-the-art EMG features and a feature set previously identified in the forearm literature.
• EMG signals were segmented into 150 ms sliding windows with 50% overlap. A total of 23 time-domain and frequency-domain features were extracted from each channel, as listed in Table I.
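For the first two indices, a short Python sketch of plausible definitions is given below; the exact formulations used in the paper (for example, how the noise or rest segment is chosen) may differ.

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def forearm_to_wrist_ratio(forearm, wrist):
    """FWR as a simple RMS ratio; FWR < 1 means the wrist signal is stronger.
    (Assumed definition; the paper's exact formulation may differ.)"""
    return rms(forearm) / rms(wrist)

def snr_db(active, rest):
    """SNR in dB, estimated from an active-contraction segment and a rest
    (baseline) segment of the same channel."""
    return 10.0 * np.log10(np.mean(np.square(active)) / np.mean(np.square(rest)))
```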
  • 65. II. Methodology
• This resulted in a feature vector of length 280 per gesture repetition, considering all 8 channels and given that the HIST and AR features produce 10 and 4 values each, respectively. This feature set has been shown to provide good hand gesture recognition rates using forearm EMG signals.
• Class separation and feature repeatability were evaluated in the multi-dimensional feature space using two statistical indices: the Davies-Bouldin index (DBI) and the repeatability index (RI).
• D) Dimensionality Reduction:
• 1) Feature selection: the sequential floating feature selection (SFFS) algorithm was applied to identify the most important features for gesture recognition from the wrist and forearm EMG signals.
• 2) Feature projection: to visualize differences in the underlying information content, principal component analysis (PCA) was applied to all wrist and forearm EMG features listed in Table I.
• E) Classification Algorithm: LDA and SVM (with a linear kernel) classifiers were employed in this study, using the leave-one-repetition-out cross-validation technique to estimate classification accuracies. (A small sketch of such a pipeline follows.)
• F) Statistical Analysis: a t-test was performed.
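A small scikit-learn sketch of a PCA-plus-LDA pipeline evaluated with leave-one-repetition-out cross-validation; the array shapes, the number of principal components and the random data are purely illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# X: (n_windows, 280) feature vectors, y: gesture labels,
# groups: repetition index per window -> leave-one-repetition-out CV.
X, y = np.random.randn(288, 280), np.repeat(np.arange(18), 16)
groups = np.tile(np.repeat(np.arange(4), 4), 18)

clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())   # chance-level on random data; illustrative only
```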
  • 71. IV. Discussion
• A) Wrist electrode configuration:
• Some of the forearm muscles that control extension/flexion of the medial four fingers, such as the extensor digiti minimi (EDM), extensor digitorum (ED) and flexor digitorum superficialis (FDS), extend down to the wrist. Also, the extensor carpi ulnaris (ECU) muscle, which aids in hand extension and abduction, extends from the proximal forearm down to the wrist [36]. Thus, the electrical activity of all these muscles can still be recorded at the wrist level. Furthermore, there is a unique group of muscles only present at the wrist, near the distal end of the forearm, proximal to the ulnar styloid process, that controls fine finger and thumb movements.
• Hence, electrodes placed more distally, near the wrist joint, are likely to have measured EMG activity relevant to fine finger movements.
• B) Feasibility of Wrist-Based EMG Gesture Recognition:
• Results showed that wrist EMG signals had higher signal quality than forearm signals for multiple gestures involving fine finger movements. Indeed, gestures that involved movement of the thumb and index fingers had significantly higher wrist signal strength (FWR < 1) and SNR values when compared to forearm signals.
• In the current study, the results showed that wrist EMG had higher amplitudes and SNR values than forearm signals for gestures involving finger motions.
• (See the paper for the rest of the discussion.)
  • 72. IV. Discussion
• Future studies should explore the effect of dynamic and confounding factors on the performance of wrist-based EMG pattern recognition systems. These factors, as outlined in the forearm literature, may include limb position, electrode shift and the effect of contraction intensity on pattern recognition performance. Another point to consider in future studies is the difference between the continuous classification approach used in prosthetics and the discrete gesture approach used in HCI (e.g., snapping fingers). Other directions for researchers to pursue are the performance of wrist-worn EMG systems over time and how to maintain performance over many days without the need to retrain the system.
  • 73. 1.Contents A b s t r a c t I . I n t r o d u c t i o n I I . M e t h o d o l o g y I I I . R e s u l t s I V . D i s c u s s i o n V . C o n c l u s i o n
  • 74. V. Conclusion • Wrist-based EMG classifiers achieved average accuracies of more than 91.2% for fine finger gestures and 94.7% for the conventional wrist gestures. This study highlights the strong potential of wrist EMG signals and paves the way for reliable wrist-based EMG wearables.
  • 76. 4. MASTER'S THESIS: MYOELECTRIC HUMAN COMPUTER INTERACTION USING LSTM-CNN NEURAL NETWORK FOR DYNAMIC HAND GESTURES RECOGNITION • IEEE Xplore • 2021 • USA
  • 78. ABSTRACT • There are two goals for this research: • 1- Decrease the effect of arm position when recognizing gestures. To tackle this issue, a CNN-LSTM neural network is introduced. Compared to Dr. Shin’s work, the new model is able to classify more gestures at more positions. • 2- Apply the new model to a human-computer interaction system that controls a 7-DoF Kinova robot arm.
  • 79. 1.Contents 1 . I N T R O D U C T I O N A N D L I T E R A T U R E R E V I E W 1.1 Literature Review 1.1.1 ElectroMyoGraphic Signal 1.1.2 Myoelectric Control 1.2 Observation 1.3 Thesis Objective 2 . P R O P O S E D M E T H O D F O R G E S T U R E R E C O G N I T I O N 2.1 Introduction 2.2 Proposed Method 2.2.1 EMG Gestures Dataset 2.2.2 CNN-LSTM Network Structure 2.2.3 Training Results 2.3 Real Time Gestures Recognition 2.3.1 Real Time Recognition System 2.3.2 Data Collection 2.4 Results 2.4.1 Real-time Classification Accuracy 2.4.2 Results Analysis 2.4.3 Discussion 3 . R E A L - T I M E H C I S Y S T E M F O R A 7 - D O F R O B O T A R M C O N T R O L 4 . S U M M A R Y A N D C O N C L U S I O N S
  • 80. 1.Contents 1 . I N T R O D U C T I O N A N D L I T E R A T U R E R E V I E W 1.1 Literature Review 1.1.1 ElectroMyoGraphic Signal 1.1.2 Myoelectric Control 1.2 Observation 1.3 Thesis Objective 2 . P R O P O S E D M E T H O D F O R G E S T U R E R E C O G N I T I O N 2.1 Introduction 2.2 Proposed Method 2.2.1 EMG Gestures Dataset 2.2.2 CNN-LSTM Network Structure 2.2.3 Training Results 2.3 Real Time Gestures Recognition 2.3.1 Real Time Recognition System 2.3.2 Data Collection 2.4 Results 2.4.1 Real-time Classification Accuracy 2.4.2 Results Analysis 2.4.3 Discussion 3 . R E A L - T I M E H C I S Y S T E M F O R A 7 - D O F R O B O T A R M C O N T R O L 4 . S U M M A R Y A N D C O N C L U S I O N S
  • 81. Introduction • 1. Literature review: skipped; not essential here. • 2. ElectroMyoGraphic Signal: pattern recognition for EMG classification usually consists of data pre-processing, data segmentation, feature extraction, dimensionality reduction, and classification stages. • The author then briefly explains each of these stages. • Dr. Shin’s work is referenced repeatedly, including a diagram of the LSTM that Dr. Shin used.
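As a hedged illustration of the segmentation stage listed above (not code from the thesis), here is a sliding-window splitter for 8-channel EMG. The 200 Hz rate matches the Myo armband's nominal EMG rate, while the 200 ms window and 50 ms increment are common defaults in the literature, not values taken from this work.

```python
# Minimal sketch of the segmentation stage: slice a continuous recording into
# overlapping analysis windows that the later stages operate on.
import numpy as np

def segment(emg, fs=200, win_ms=200, step_ms=50):
    """emg: (8, n_samples) array -> (n_windows, 8, win_len) stack of windows."""
    win = int(fs * win_ms / 1000)    # 40 samples at 200 Hz
    step = int(fs * step_ms / 1000)  # 10-sample increment between windows
    starts = range(0, emg.shape[1] - win + 1, step)
    return np.stack([emg[:, s:s + win] for s in starts])

# Example: a 5-second recording yields (1000 - 40) / 10 + 1 = 97 windows.
windows = segment(np.random.randn(8, 1000))
```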
  • 82. Introduction • 1.1.2 Myoelectric Control: not essential here. • 1.2 Observation: gestures include dynamic and static motions, and their recognition is a time-series problem; therefore, a recurrent neural network (RNN) is a suitable choice. • 1.3 Thesis Objective: this project aims to design a system that enables people to use gestures, captured with a Myo armband, to manipulate a robot arm. The system has two main parts: 1) a recognition part, using neural networks to recognize dynamic gestures; 2) a control part, implementing the corresponding functions on a robot arm for each gesture. The system should also ensure: • Position independence: gestures can be recognized while the limb position changes. • Prompt response: the time to complete a task using myoelectric control is not much longer than with other control methods. • Two devices are used in this research: the first is the Myo armband; the second is a 7-DoF Kinova robot arm.
  • 83. 1.Contents 1 . I N T R O D U C T I O N A N D L I T E R A T U R E R E V I E W 1.1 Literature Review 1.1.1 ElectroMyoGraphic Signal 1.1.2 Myoelectric Control 1.2 Observation 1.3 Thesis Objective 2 . P R O P O S E D M E T H O D F O R G E S T U R E R E C O G N I T I O N 2.1 Introduction 2.2 Proposed Method 2.2.1 EMG Gestures Dataset 2.2.2 CNN-LSTM Network Structure 2.2.3 Training Results 2.3 Real Time Gestures Recognition 2.3.1 Real Time Recognition System 2.3.2 Data Collection 2.4 Results 2.4.1 Real-time Classification Accuracy 2.4.2 Results Analysis 2.4.3 Discussion 3 . R E A L - T I M E H C I S Y S T E M F O R A 7 - D O F R O B O T A R M C O N T R O L 4 . S U M M A R Y A N D C O N C L U S I O N S
  • 84. 2.1 Introduction • The four challenges being faced: limb position, contraction intensity, electrode shift, and within/between-day variation. • A cited paper [16] explains in detail how limb position matters: the muscular activity that maintains limb position against gravitational force depends on the position of the limb. • Dr. Shin’s work shows that dynamic gestures are, in principle, more reliable indicators of intent.
  • 85. 2.2 Proposed Method • 2.2.1 EMG Gestures Dataset: there are five dynamic gestures at five arm positions in the dataset (dynamic = includes motion, like waving). • The training data come from seven human subjects. Every gesture at each position was repeated 10 times, giving 7*5*5*10 = 1750 samples. Each sample is an 8*680 matrix, where 8 is the number of channels and 680 is the number of time steps (matrices are zero-padded so that every matrix has the same length). The original dataset is shown below. • Because the dataset is small, data augmentation (as is done with images) is applied, bringing the dataset to 10,500 samples after augmentation.
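A rough sketch of the dataset preparation described above. Zero-padding to 8×680 is stated in the thesis; the Gaussian-noise augmentation (five noisy copies per sample, which would turn 1,750 samples into 10,500) is only an assumption about what the augmentation might look like, not the thesis's actual scheme.

```python
# Hedged sketch: fixed-length padding plus a simple noise-based augmentation.
import numpy as np

def pad_to_length(sample, length=680):
    """sample: (8, t) EMG recording -> (8, 680) matrix, zero-padded (or truncated)."""
    padded = np.zeros((sample.shape[0], length))
    t = min(sample.shape[1], length)
    padded[:, :t] = sample[:, :t]
    return padded

def augment(sample, copies=5, noise_std=0.05, rng=np.random.default_rng(0)):
    """Return the original plus `copies` noisy versions (x6 per sample overall)."""
    return [sample] + [sample + rng.normal(0.0, noise_std, sample.shape)
                       for _ in range(copies)]
```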
  • 86. 2.2 Proposed Method • 2.2.2 CNN-LSTM Network Structure: the CNN and LSTM are explained in detail. • 2.2.3 Training Results: the dataset is split into a training set and a validation set, 80% and 20% respectively. For the original dataset this gives 1750*0.8 = 1400 training samples and 350 validation samples; for the augmented dataset, 10500*0.8 = 8400 training samples and 2100 validation samples. • The validation accuracy on the original dataset is approximately 90%, lower than that of the model trained on the augmented dataset, which reaches a training accuracy of 95.8% and a validation accuracy of 92.7%.
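For orientation, here is an illustrative CNN-LSTM in PyTorch sized for the (8, 680) inputs above. The layer counts, kernel sizes, and hidden dimensions are placeholders chosen for the sketch; the thesis defines its own exact architecture, which this slide only describes qualitatively.

```python
# Illustrative CNN-LSTM: convolutions extract local features, the LSTM models
# their temporal evolution, and a linear head scores the five dynamic gestures.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=8, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                # x: (batch, 8, 680)
        feats = self.cnn(x)              # (batch, 64, 170)
        feats = feats.transpose(1, 2)    # (batch, 170, 64), time-major for the LSTM
        _, (h, _) = self.lstm(feats)
        return self.fc(h[-1])            # logits over the gesture classes

model = CNNLSTM()
logits = model(torch.randn(4, 8, 680))   # e.g. a batch of 4 padded recordings
```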
  • 87. 2.3 Real Time Gestures Recognition • 2.3.1 Real Time Recognition System: a block diagram shows the steps of reading a gesture. • 2.3.2 Data Collection: there are 3,000 samples for testing. • 2.4.1 Real-time Classification Accuracy: 84.2% overall. • Several confusion matrices are then presented. • 2.4.3 Discussion: from the preceding analysis, there are three observations: 1. The model is relatively position independent; the effect of arm position still exists but is small. 2. Different gestures impact recognition accuracy; in this model, the influence of the gesture may be larger than that of the position, and gestures that share fewer common features get better results. 3. The model is affected by subject dependency, as the best and worst results clearly show; because different human subjects perform gestures differently, more data from various people may help. • 2.5 Conclusion: training and validation accuracy are both over 90%, and the overall real-time accuracy is 84.2%.
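The reported numbers can be read directly off a confusion matrix; below is a small helper using the standard definitions (not code from the thesis): overall accuracy is the trace divided by the total count, and per-gesture recall highlights which gestures pull the 84.2% average down.

```python
import numpy as np

def overall_accuracy(cm):
    """Correct predictions (diagonal) over all predictions."""
    return np.trace(cm) / cm.sum()

def per_class_recall(cm):
    """Row-normalised diagonal: recall for each gesture class."""
    return np.diag(cm) / cm.sum(axis=1)
```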
  • 88. 1.Contents 1 . I N T R O D U C T I O N A N D L I T E R A T U R E R E V I E W 1.1 Literature Review 1.1.1 ElectroMyoGraphic Signal 1.1.2 Myoelectric Control 1.2 Observation 1.3 Thesis Objective 2 . P R O P O S E D M E T H O D F O R G E S T U R E R E C O G N I T I O N 2.1 Introduction 2.2 Proposed Method 2.2.1 EMG Gestures Dataset 2.2.2 CNN-LSTM Network Structure 2.2.3 Training Results 2.3 Real Time Gestures Recognition 2.3.1 Real Time Recognition System 2.3.2 Data Collection 2.4 Results 2.4.1 Real-time Classification Accuracy 2.4.2 Results Analysis 2.4.3 Discussion 3 . R E A L - T I M E H C I S Y S T E M F O R A 7 - D O F R O B O T A R M C O N T R O L 4 . S U M M A R Y A N D C O N C L U S I O N S
  • 89. 4. SUMMARY AND CONCLUSIONS • A CNN is used to generate the features, instead of manually calculating time- or frequency-domain features before input. • Compared to many studies that focus on gesture recognition at a single limb position and degrade when the limb position changes, the model in this research is relatively position-invariant. • Compared to Dr. Shin’s work [3], which involves 3 gestures and 4 limb positions in real-time recognition, this research includes 5 gestures and 5 limb positions; the model is more robust to gestures and limb positions in real-time classification. • Future work: 1. Removing the trigger (fist) from the real-time system. 2. Speeding up processing time, e.g. by using a better GPU. 3. Improving classification accuracy; other advanced methods such as transfer learning could be tried, and more data from various types of people may help, since the number of gestures and arm positions is not small and there are considerable differences among people. 4. Increasing the number of recognizable gestures. 5. Reducing the effect of arm position on gesture recognition, so that gestures performed at a random arm position can be classified successfully. 6. Decreasing the effect of human-subject dependency. 7. Applying the proposed myoelectric control system to other applications, such as drone control. 8. Adding feedback to the myoelectric control system.
  • 90. THANK YOU • TO THE NEXT STEP
