1. Hierarchical Self Attention Based Autoencoder for
Open-Set Human Activity Recognition
M Tanjid Hasan Tonmoy*, Saif Mahmud*, A K M Mahbubur Rahman,
M Ashraful Amin, Amin Ahsan Ali
Artificial Intelligence and Cybernetics Lab
Independent University, Bangladesh
25th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD-2021)
May 11-14, 2021
2. Contents
1 Problem Definition
Human Activity Recognition (HAR)
Open-Set Recognition
Objectives
2 Proposed Method
Overview
Hierarchical Self Attention Encoder
Autoencoder for Open Set Recognition
3 Experiment
Datasets
Results and Discussion
4 Conclusion and Future work
Concluding Remarks
3. Problem Definition Human Activity Recognition (HAR)
Human Activity Recognition (HAR)
Automated classification of the activities of specific subjects wearing
heterogeneous sensors placed at different body locations.
Areas of Application
Physiotherapy and Rehabilitation
Healthcare, Fitness Monitoring
Assistive Technology
HAR can also be accomplished using skeleton coordinate data or RGB
video data instead of wearable sensor readings.
4. Problem Definition Open-Set Recognition
Open-Set Recognition
Two sub-tasks:
Closed-set classification: prediction from a defined set of classes
Open-set identification: labeling a sample as known or unknown
Reasons for encountering unknown classes in a HAR system:
body-worn sensor malfunction
physical disability of the subject performing the activities
performing rehabilitation or workout exercises incorrectly
5. Problem Definition Objectives
Objectives
Hierarchical fusion of spatial information and temporal dynamics
Detection of dominant body parts
Detection of important time window within activity session
Modelling latent variable from learned representation of activity
Detection of unknown activity in open-set recognition
Interpretable attention maps for sensor placement and temporal
information
Describing activity labels based on spatio-temporal dependency
6. Proposed Method Overview
Proposed Method
Task Definition
Sensor data: S = {S1, S2, S3, ...} is the set of sensors placed at
different body locations, with n time-steps of multi-axis data
Task: detection of activity labels given the multidimensional
time-series of sensor signals X of a particular duration
Window & Session
Sensor signals are represented hierarchically: an activity session is
composed of windows representing shorter segments within the sequence
A window is composed of a fixed number of data points representing
the sensor signal at the corresponding time-stamps
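A minimal sketch (not the authors' code) of this hierarchical segmentation, assuming non-overlapping fixed-length windows; the array shapes and the segment_into_session helper are illustrative assumptions:

```python
import numpy as np

def segment_into_session(signal, window_len):
    """Split a (time_steps, channels) signal into non-overlapping windows.

    Returns an array of shape (n_windows, window_len, channels), i.e. a
    session composed of shorter fixed-length windows.
    """
    n_windows = signal.shape[0] // window_len
    usable = signal[: n_windows * window_len]
    return usable.reshape(n_windows, window_len, signal.shape[1])

# Example: 1000 time-steps of 3-axis accelerometer data, 100-sample windows
raw = np.random.randn(1000, 3)
session = segment_into_session(raw, window_len=100)
print(session.shape)  # (10, 100, 3)
```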
7. Proposed Method Hierarchical Self Attention Encoder
Hierarchical Self Attention Encoder
Two types of hierarchy: temporal
and body location-based;
Implemented with the following -
Hierarchical Window Encoder
(HWE)
Session Encoder (SE)
Self attention is the core element of both components and is used in
two ways:
Modular Self Attention
Aggregator Self Attention
8. Proposed Method Hierarchical Self Attention Encoder
Modular Self Attention
N identical blocks of multi-headed self attention and position-wise
feed forward layers.
f_{sa}(Q, K, V) = \mathrm{softmax}\left( \frac{Q K^T}{\sqrt{d_k}} \right) V    (1)

where Q = X W_Q, K = X W_K, V = X W_V; W_Q, W_K, W_V are weight matrices;
X is the input; and Q \in \mathbb{R}^{t \times d_k}, K \in \mathbb{R}^{t \times d_k},
V \in \mathbb{R}^{t \times d_v}.
Multi-head self attention: a different W_Q, W_K, W_V for each attention
head, combined as in (2).

f_{mhsa}(X) = \mathrm{concat}(h^{(1)}, \ldots, h^{(n)}) \cdot W_o    (2)

where h^{(j)} = f_{sa}(X W_Q^{(j)}, X W_K^{(j)}, X W_V^{(j)}).
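A minimal numpy sketch of Eqs. (1)-(2), not the authors' implementation; the toy dimensions, the number of heads, and the helper names are illustrative assumptions:

```python
import numpy as np

def self_attention(Q, K, V):
    """Scaled dot-product attention, Eq. (1): softmax(QK^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (t, t)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

def multi_head_self_attention(X, heads, W_o):
    """Eq. (2): concatenate per-head attention outputs and project with W_o.

    `heads` is a list of (W_Q, W_K, W_V) weight triples, one per head.
    """
    outputs = [self_attention(X @ W_Q, X @ W_K, X @ W_V)
               for (W_Q, W_K, W_V) in heads]
    return np.concatenate(outputs, axis=-1) @ W_o

# Toy example: t = 5 time-steps, input dim 8, two heads of dim 4
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
heads = [tuple(rng.standard_normal((8, 4)) for _ in range(3)) for _ in range(2)]
W_o = rng.standard_normal((8, 8))
print(multi_head_self_attention(X, heads, W_o).shape)  # (5, 8)
```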
9. Proposed Method Hierarchical Self Attention Encoder
Aggregator Self Attention
Obtain an aggregate representation from all the time-steps in the
sequence
Difference between Modular and Aggregator blocks:
K_a in (3) is initialized randomly and learned, unlike the query and value,
which are computed in the same way as in the modular blocks.
Single headed attention is used to simplify the interpretation of the
attention scores
f_{agr}(Q_a, V_a) = \mathrm{softmax}\left( \frac{Q_a K_a^T}{\sqrt{d_{k_a}}} \right) V_a    (3)

where Q_a = X W_{aQ}, V_a = X W_{aV}; W_{aQ}, W_{aV} are weight matrices; X is
the input to the layer; and Q_a \in \mathbb{R}^{t \times d_{k_a}},
V_a \in \mathbb{R}^{t \times d_{v_a}}, K_a \in \mathbb{R}^{1 \times d_{k_a}}.
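A hedged numpy sketch of Eq. (3) as read here: the learned key K_a scores every time-step, and the softmax, taken over the time axis (my interpretation), weights the values into a single aggregate vector; all names and shapes are illustrative:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregator_attention(X, W_aQ, W_aV, K_a):
    """Eq. (3) sketch: the learned key K_a (1, d_ka) scores every time-step,
    and the softmax-weighted sum of the values gives one aggregate vector.
    """
    Q_a = X @ W_aQ                                   # (t, d_ka)
    V_a = X @ W_aV                                   # (t, d_va)
    scores = Q_a @ K_a.T / np.sqrt(K_a.shape[-1])    # (t, 1), one score per step
    weights = softmax(scores, axis=0)                # normalise over time-steps
    return weights.T @ V_a                           # (1, d_va) aggregate

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 8))          # 10 time-steps, feature dim 8
W_aQ, W_aV = rng.standard_normal((8, 4)), rng.standard_normal((8, 4))
K_a = rng.standard_normal((1, 4))         # randomly initialised, learned in training
print(aggregator_attention(X, W_aQ, W_aV, K_a).shape)  # (1, 4)
```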
10. Proposed Method Hierarchical Self Attention Encoder
Hierarchical Window Encoder
Combines sensor signals from
different body locations within a
window
HWE: m (number of sensor
placements) modular self
attention blocks and an
aggregator block
Utilize Modular and Aggregator
self attention as in (4) and (5)
Z_{window} = \mathrm{concat}(f_{mhsa}(X_w^{(1)}), \ldots, f_{mhsa}(X_w^{(m)}))    (4)

\tilde{Z}_{window} = f_{agr}(Z_{window})    (5)
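An illustrative sketch of Eqs. (4)-(5), assuming a simplified single-head stand-in for f_mhsa per sensor placement; the shapes and helper names are assumptions, not the exact architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modular_block(X, W_q, W_k, W_v):
    """Simplified single-head stand-in for f_mhsa within one sensor placement."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    return softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1) @ V   # (t, d)

def window_encoder(X_locations, weights, K_a):
    """Eqs. (4)-(5): per-location modular attention, concatenation, aggregation."""
    Z_window = np.concatenate(
        [modular_block(X, *w) for X, w in zip(X_locations, weights)], axis=-1)
    scores = softmax(Z_window @ K_a.T, axis=0)        # attention over time-steps
    return (scores.T @ Z_window).ravel()              # aggregate window vector

rng = np.random.default_rng(0)
m, t, d = 3, 20, 6                                    # 3 placements, 20 steps, 6 channels
X_locs = [rng.standard_normal((t, d)) for _ in range(m)]
weights = [tuple(rng.standard_normal((d, d)) for _ in range(3)) for _ in range(m)]
K_a = rng.standard_normal((1, m * d))
print(window_encoder(X_locs, weights, K_a).shape)     # (18,) = m * d
```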
11. Proposed Method Hierarchical Self Attention Encoder
Session Encoder & Session Guided Classification
SE contains n (number of windows within the session) identical
HWEs having shared parameters
Similar to the HWE, modular & aggregator Self Attention is used to
obtain the session representation
Window and Session Classification:
Session: The output from SE is passed to dense and softmax layers to
obtain the class label.
Window: The output from the SE is concatenated with each window
representation and passed to dense and softmax layers.
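A minimal sketch of the session-guided classification described above: an aggregate session vector is computed from the window vectors, classified directly (session head), and concatenated with each window vector before its own dense + softmax head (window head); the dimensions, the attention-style aggregation over windows, and the single dense layer per head are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_windows, d, n_classes = 8, 18, 12
Z_windows = rng.standard_normal((n_windows, d))   # per-window vectors from the HWEs

# Session representation: aggregator-style attention over the window vectors
K_s = rng.standard_normal((1, d))
w = softmax(Z_windows @ K_s.T, axis=0)            # one weight per window
z_session = (w.T @ Z_windows).ravel()             # (d,)

# Session-level head: dense + softmax on the session vector
W_sess = rng.standard_normal((d, n_classes))
session_probs = softmax(z_session @ W_sess)

# Window-level head: concatenate the session vector with each window vector
W_win = rng.standard_normal((2 * d, n_classes))
window_probs = softmax(np.hstack([Z_windows, np.tile(z_session, (n_windows, 1))]) @ W_win)

print(session_probs.shape, window_probs.shape)    # (12,) (8, 12)
```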
12. Proposed Method Autoencoder for Open Set Recognition
Formulation of Variational Autoencoder
Encoder: Models posterior distribution qφ(z|x) where φ indicates
encoder network parameters
Decoder: Multi-layer feed forward network
Reconstructs the representation obtained from self-attention encoder
Approximates distribution pθ(x|z) where θ is the learned decoder
parameters
Figure: Hierarchical Self-Attention Autoencoder
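A minimal sketch of the encoder/decoder roles described above, using the standard Gaussian reparameterization z = mu + sigma * eps; the layer sizes, the single-layer decoder, and the weight scaling are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

rng = np.random.default_rng(0)
d_rep, d_latent = 64, 16
x = rng.standard_normal(d_rep)            # representation from the self-attention encoder

# Encoder head q_phi(z|x): two linear maps producing mean and log-variance
# (small random weights, just so the illustration stays numerically tame)
W_mu = 0.1 * rng.standard_normal((d_rep, d_latent))
W_logvar = 0.1 * rng.standard_normal((d_rep, d_latent))
mu, logvar = x @ W_mu, x @ W_logvar

# Reparameterization trick: sample z while keeping gradients w.r.t. mu, sigma
z = mu + np.exp(0.5 * logvar) * rng.standard_normal(d_latent)

# Decoder p_theta(x|z): feed-forward layer reconstructing the representation
W_dec = 0.1 * rng.standard_normal((d_latent, d_rep))
x_hat = np.tanh(z @ W_dec)

recon_loss = np.mean((x - x_hat) ** 2)                        # reconstruction term
kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))       # KL(q || N(0, I)) term
print(recon_loss, kl)
```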
13. Proposed Method Autoencoder for Open Set Recognition
Autoencoder for Open Set Recognition
Objective: Approximating intractable true posterior pθ(z|x) with
qφ(z|x)
Loss function of the autoencoder: Evidence Lower Bound (ELBO) on
the marginal likelihood of the learned representation
Assumption: Known activity classes will demonstrate lower
reconstruction loss in contrast to novel ones
Reconstruction loss threshold: tuned as a hyperparameter from the
range given below

\mu(L_{known}) - \alpha \cdot \sigma(L_{known})    (6)

where L_{known} is the reconstruction loss of the autoencoder on training data
containing known activity classes and \alpha \in [0.0, 0.50].
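A sketch of the decision rule implied by Eq. (6), as read here: the threshold is mu - alpha * sigma of the reconstruction losses on known-class training data, and a test sample whose reconstruction loss exceeds it is flagged as unknown; the loss values and alpha are made up for illustration:

```python
import numpy as np

def open_set_decision(train_losses, test_losses, alpha=0.25):
    """Eq. (6) decision rule: threshold = mean - alpha * std of the
    reconstruction losses on known-class training data; a test sample whose
    reconstruction loss exceeds the threshold is flagged as unknown."""
    threshold = train_losses.mean() - alpha * train_losses.std()
    return test_losses > threshold          # True -> unknown activity

rng = np.random.default_rng(0)
known_losses = rng.normal(0.10, 0.02, size=1000)   # illustrative training losses
test_losses = np.array([0.08, 0.09, 0.35])         # last sample reconstructs poorly
print(open_set_decision(known_losses, test_losses, alpha=0.25))
```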
14. Experiment Datasets
Datasets
Table: Summary of the datasets used in the experiments. For Skoda, we use a 10%
split for test and validation since it contains data from a single subject. In the
Sensors Used column, A = Accelerometer, G = Gyroscope, M = Magnetometer.
Dataset     | Sampling Rate | No. of Activities | No. of Subjects | Validation Subject ID | Test Subject ID | Sensors Used
PAMAP2      | 100 Hz        | 12                | 9               | 105                   | 106             | A, G
Opportunity | 30 Hz         | 18                | 4               | 1 (run 2)             | 2, 3 (run 4, 5) | A, G, M
USC-HAD     | 100 Hz        | 12                | 14              | 11, 12                | 13, 14          | A, G
Daphnet     | 64 Hz         | 2                 | 10              | 9                     | 2               | A
Skoda       | 98 Hz         | 11                | 1               | 1 (10%)               | 1 (10%)         | A
15. Experiment Results and Discussion
Window-wise Results on Benchmark Test Set
Table: Performance comparison of the proposed method with baselines in terms
of window-wise results on benchmark test set
Methods                     | PAMAP2 | Opportunity | USC-HAD | Daphnet | Skoda
LSTM (2014)                 | 0.75   | 0.63        | 0.38    | 0.68    | 0.89
CNN (2015)                  | 0.82   | 0.59        | 0.41    | 0.59    | 0.85
b-LSTM (2016)               | 0.84   | 0.68        | 0.39    | 0.74    | 0.91
DeepConvLSTM (2016)         | 0.75   | 0.67        | 0.38    | 0.84    | 0.91
Conv AE (2017)              | 0.80   | 0.72        | 0.46    | 0.73    | 0.79
DeepConvLSTM + Attn (2018)  | 0.88   | 0.71        | 0.51    | 0.76    | 0.91
SADeepSense (2019)          | 0.66   | 0.66        | 0.49    | 0.80    | 0.90
AttnSense (2019)            | 0.89   | 0.66        | 0.49    | 0.80    | 0.93
Transformer Encoder (2020)  | 0.96   | 0.67        | 0.55    | 0.82    | 0.93
Proposed HSA Autoencoder    | 0.99   | 0.68        | 0.55    | 0.85    | 0.95
16. Experiment Results and Discussion
Leave One Subject Out (LOSO) Performance Evaluation
Figure: Average macro F1 score where the data of one subject is held out for
evaluation. The Skoda dataset contains a single subject, so no LOSO result is
reported for it.
17. Experiment Results and Discussion
Performance Evaluation of Open Set Activity Recognition
Figure: Performance evaluation (accuracy and macro F1 score) of open-set
activity detection
18. Experiment Results and Discussion
Attention Maps for Interpretability
Activity: Cleanup
Top 2 rows: Locomotion
and mid-level gestures
Bottom x-axis:
Temporal attention
scores
y-axis in the middle:
sensor locations, where L/R
= Left/Right, L/U =
Lower/Upper and A = Arm
(e.g., LLA = Left Lower Arm)
Darker color indicates
higher attention score
19. Conclusion and Future work Concluding Remarks
Concluding Remarks
The proposed hierarchical self-attention model demonstrates interpretable
activity recognition as well as robust feature representation; the directions of
future work listed below would further add value to the HAR framework:
Rehabilitation or workout exercise evaluation
Interpretable latent space representation for open-set recognition
Generation of activity description in natural language
Adaptive model according to subject specific variability
20. Conclusion and Future work Concluding Remarks
Thank You