1. Deep Learning based Human Gait Analysis for
Forensic Applications
Group 27
Project Supervisor: Dr. H. Abeykoon
Group Members:
170128N – Dharmapriya E.U.
170356K – Madhawa S.J.
170449A – Pigera A.A.G.S.
2. Background
[Figure: feature vectors (training set) from the suspects' videos and feature vectors (test set) from the evidence video are fed to a deep learning model, which identifies the criminal among the suspects.]
SCOPE
To develop a person identification system using the skeleton-model based method for forensic applications with a supervised learning approach.
Problem type – Classification
Model input – Feature vectors including gait parameters
Model output – Criminal among suspects (e.g. 10 persons)
3. Literature Review
Data extraction – Kinect sensor
Feature identification – LIBSVM
Specifications:
• Uses static and dynamic parameters.
• Uses Euclidean distance / trigonometric functions.
Limitations:
• Front view only.
• Indoor environment only.
• Same specified walking distance.
4. Parameter Calculations
• Angle between two body parts (e.g. knee angle)
• Lengths of fixed parameters (e.g. length of upper body)
• Angle of body parts with respect to a plane (e.g. hip angle)
These make up the feature vector.

Data extraction – Kinect sensor
Feature identification – LSTM autoencoder
Specification: detects the abnormal or disguising target.
5. Limitations
• One stationary position of the Kinect sensor.
• Limited to eleven parameters.
Figure: Structure of the recognition system
6. Data Extraction – Kinect
Limitations:
• Fixed walking line.
• Stationary Kinect position.
• Y coordinate only.
• Side view only.
• Done in an indoor environment.
• No parameter calculations.
7. Normalization of Points

P = (Pᵢ − P_neck) / H_nh

P – normalized body point
Pᵢ – body point
P_neck – neck point
H_nh – height between the neck & hip

Feature Identification
• Temporal features – LSTM
• Spatial features – CNN

Limitations
• Only the hip, knee, and ankle points are considered for analysis.
• Uses the CASIA-B image data sets.
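The normalization step above can be sketched in Python; `normalize_point` is a hypothetical helper name, assuming key points are given as (x, y) tuples:

```python
import math

def normalize_point(p, neck, hip):
    # Scale factor: height between the neck and hip key points
    h_nh = math.dist(neck, hip)
    # Shift the point relative to the neck, then divide by the scale
    return tuple((pi - ni) / h_nh for pi, ni in zip(p, neck))
```

For example, with the neck at the origin and the hip two units below it, the point (1, 1) normalizes to (0.5, 0.5).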
8. Objectives
• To design an effective deep learning based human recognition system by analyzing gait patterns and body features.
• To select the most reliable gait parameters from the data extracted from surveillance videos.

Selected Approach
Training stage: recording walking videos of suspects → pose estimation → gait parameter analysis & data preparation → data preprocessing & feature vector extraction → neural network model.
Testing stage: evidence video → pose estimation, gait analysis & feature vector extraction → identifying the person.
10. Camera Calibration
• Method 1: Dividing the parameter lengths by the upper body length at each frame.
• Method 2: Converting the parameter lengths into meters.
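Method 1 can be sketched as a per-frame ratio; `calibrate_by_upper_body` is a hypothetical helper name:

```python
def calibrate_by_upper_body(lengths, upper_body_lengths):
    """Method 1 sketch: divide each per-frame parameter length by the
    upper-body length measured in the same frame, so the values become
    independent of the camera distance."""
    return [l / u for l, u in zip(lengths, upper_body_lengths)]
```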
11. Gait Parameters
• Time parameters: step time, stride time, stance time, swing time.
• Dynamic parameters: hip angle, thigh angle, knee angle, ankle angle, angle between body and upper arm, step distance from neck, foot lifting distance, hip to ankle distance.
• Fixed parameters: upper body length, lower body length, thigh, shank, hip, shoulder, foot length, length of arm.
12. Time Parameters

Notation: T(a, b, c)[i]
T – time parameter
a – camera view (Side (S) / Front (F))
b – name of the parameter
c – side of the parameter (Right (R) / Left (L) / Not applicable (NA))
i – instance number

Side view:
No.  Parameter name       Camera (a)  Name (b)  Side (c)  Feature No.
1    Step time            S           STP       R         1
2    Step time            S           STP       L         2
3    Stride time          S           STR       R         3
4    Stride time          S           STR       L         4
5    Stride frequency     S           SF        R         5
6    Stride frequency     S           SF        L         6
7    Swing time           S           SW        R         7
8    Swing time           S           SW        L         8
9    Foot flat time       S           FF        NA        9
10   Double support time  S           DS        NA        10
11   Stance time ratio    S           SNR       NA        11
12   Swing time ratio     S           SWR       NA        12

Front view:
No.  Parameter name       Camera (a)  Name (b)  Side (c)  Feature No.
13   Step time            F           STP       R         41
14   Step time            F           STP       L         42
15   Stride time          F           STR       R         43
16   Stride time          F           STR       L         44
17   Stride frequency     F           SF        R         45
18   Stride frequency     F           SF        L         46
13. Time Parameters
[Figure: one gait cycle (1 gait = 1 stride). The stance time runs heel strike → foot flat → midstance → toe off; the swing time runs acceleration → midswing → deceleration → heel strike. The hip to foot index distance and hip to heel distance mark the phase boundaries, and the full cycle is the stride time.]
14. Time Parameters
When a heel strike occurs, the foot index rises; when a toe off occurs, the heel rises.
[Plot: heel-to-hip and foot-index-to-hip Y differences for the left leg (distance ratio vs. time (s)), with the toe offs (Toₙ, Toₙ₊₁) and heel strikes (Hsₙ, Hsₙ₊₁) marked.]
Stride time = Hsₙ₊₁ − Hsₙ
15. Time Parameters
RHsₙ – right heel strike (nth gait)
LHsₙ – left heel strike (nth gait)
RToₙ – right toe off (nth gait)
LToₙ – left toe off (nth gait)
16. Dynamic Parameters

Notation: D(a, b, c)[i]
D – dynamic parameter
a – camera view (Side (S) / Front (F))
b – name of the parameter
c – side of the parameter (Right (R) / Left (L) / Not applicable (NA))
i – instance of data

Side view:
No.  Parameter name            Camera (a)  Name (b)  Side (c)  Feature No.
1    Hip angle                 S           HA        R         13
2    Hip angle                 S           HA        L         14
3    Thigh angle               S           TA        NA        15
4    Knee angle                S           KA        R         16
5    Knee angle                S           KA        L         17
6    Angle between body & arm  S           AA        R         18
7    Angle between body & arm  S           AA        L         19
8    Foot lifting distance     S           FL        R         20
9    Foot lifting distance     S           FL        L         21
10   Step distance wrt neck    S           SNK       R         22
11   Step distance wrt neck    S           SNK       L         23
12   Hip to ankle distance     S           HAD       R         24
13   Hip to ankle distance     S           HAD       L         25
14   Stride length             S           SL        NA        26
15   Stride velocity           S           SV        NA        27

Front view:
No.  Parameter name            Camera (a)  Name (b)  Side (c)  Feature No.
16   Pelvic obliquity          F           PO        NA        47
17   Step width                F           SW        NA        48
18   Foot lifting distance     F           FL        R         49
19   Foot lifting distance     F           FL        L         50
20   Hip to ankle distance     F           HAD       R         51
21   Hip to ankle distance     F           HAD       L         52
22   Stride length             F           SL        NA        53
23   Stride velocity           F           SV        NA        54
17. Dynamic Parameters
[Figure: skeleton diagram marking the hip, thigh, knee, and ankle angles, the angle between body and upper arm, the step distance from neck, the foot lifting distance, and the hip to ankle distance.]

Formula                                          Parameters
tan⁻¹((x₂ − x₁) / (y₁ − y₂))                     Hip angle; angle between body and upper arm
|Hip angle (right) − Hip angle (left)|           Thigh angle
a = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²)
b = √((x₂ − x₃)² + (y₂ − y₃)² + (z₂ − z₃)²)
c = √((x₁ − x₃)² + (y₁ − y₃)² + (z₁ − z₃)²)
cos⁻¹((a² + b² − c²) / 2ab)                      Knee angle; ankle angle
y_ankle1 − y_ankle2                              Foot lifting distance
x_foot index − x_neck                            Step distance from neck
y_hip − y_ankle                                  Hip to ankle distance
|x_ankle left − x_ankle right|                   Step width
Stride length / stride time                      Stride velocity

Here (x₁, y₁), (x₂, y₂), (x₃, y₃) are the coordinates of the three joints defining an angle.
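The law-of-cosines formula for the knee and ankle angles can be sketched as follows; `joint_angle_deg` is a hypothetical helper, assuming the three joint coordinates are 2D or 3D tuples:

```python
import math

def joint_angle_deg(p1, p2, p3):
    # Law-of-cosines angle at joint p2 (e.g. the knee angle at the knee,
    # between the hip p1 and the ankle p3)
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
```

For a right-angle configuration such as hip (0, 1), knee (0, 0), ankle (1, 0), this returns 90 degrees.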
18. Dynamic Parameters – Side view
[Plots, angle (deg) vs. time (s): hip angle – right leg (average of the maxima), thigh angle (maximum value), and ankle angle – right leg. The maximum hip angles are taken with respect to the walking direction.]
19. Dynamic Parameters – Side view
[Plots vs. time (s): knee angle – right leg (data sequence; the minimum knee angles), foot lifting distance – right leg (average of the maxima; the maximum lifting distance), and step distance from neck – right leg (average of the maxima).]
20. Dynamic Parameters – Side view
[Plot: hip to ankle distance – right leg (average of the minima), distance ratio vs. time (s), with the stride time marked.]

Stride length = X₂ − X₁
Stride velocity = Stride length / Stride time
21. Dynamic Parameters – Front view
[Plots vs. time (s): pelvic obliquity angle (maximum), step width (maximum), foot lifting distance – right leg (average of the maxima), and hip to ankle distance – right leg (average of the minima).]
22. Fixed Parameters

Notation: F(a, b, c)[i]
F – fixed parameter
a – camera view (Side (S) / Front (F))
b – name of the parameter
c – side of the parameter (Right (R) / Left (L) / Not applicable (NA))
i – instance of data

Side view:
No.  Parameter name        Camera (a)  Name (b)  Side (c)  Feature No.
1    Lower body length     S           LB        NA        28
2    Upper arm length      S           UA        R         29
3    Upper arm length      S           UA        L         30
4    Lower arm length      S           LA        R         31
5    Lower arm length      S           LA        L         32
6    Length of arm         S           A         R         33
7    Length of arm         S           A         L         34
8    Length of thigh       S           TH        R         35
9    Length of thigh       S           TH        L         36
10   Length of shank       S           SH        R         37
11   Length of shank       S           SH        L         38
12   Foot length           S           FL        R         39
13   Foot length           S           FL        L         40

Front view:
No.  Parameter name        Camera (a)  Name (b)  Side (c)  Feature No.
14   Length of lower body  F           LB        NA        55
15   Length of shoulder    F           LS        NA        56
16   Hip size              F           HS        NA        57
17   Length of upper arm   F           UA        R         58
18   Length of upper arm   F           UA        L         59
19   Length of lower arm   F           LA        R         60
20   Length of lower arm   F           LA        L         61
21   Length of arm         F           A         R         62
22   Length of arm         F           A         L         63
23   Length of thigh       F           TH        R         64
24   Length of thigh       F           TH        L         65
25   Length of shank       F           SH        R         66
26   Length of shank       F           SH        L         67
27   Face width            F           FW        NA        68
28   Mouth width           F           MW        NA        69
29   Eye size              F           ES        NA        70
30   Eye to eye distance   F           EE        NA        71
24. Fixed Parameters – Front view
[Figures: hip length, length of lower body, length of upper arm (right hand), and length of lower arm (right hand).]
25. Fixed Parameters – Front view
[Figures: length of arm, shoulder to palm (right and left hands), length of thigh (right), and length of shank (right).]
26. Fixed Parameters – Front view
[Figures: face width (ear to ear), mouth width, distance between the eye midpoints, and eye size.]
27. Fixed Parameters – Side view
[Figures: length of lower body, length of lower arm (right hand), length of upper arm, and length of thigh (right).]
28. Fixed Parameters – Side view
[Figures: length of arm, shoulder to palm (right hand), length of shank (right), and foot length (right leg).]
29. Feature Vector

Notation: P(a, b, c)[i]
P – parameter type: Time (T) / Dynamic (D) / Fixed (F)
a – camera view: Side (S) / Front (F)
b – name of the parameter
c – side of the parameter: Right (R) / Left (L) / Not applicable (NA)
i – instance number

[Figure: for each instance i = 1, 2, …, n, the feature vector concatenates all 71 parameter values, e.g. T(S, STP, R)[i], T(S, STP, L)[i], D(S, HA, R)[i], D(S, HA, L)[i], F(S, LB, NA)[i], F(S, UA, R)[i], …, T(F, STP, R)[i], T(F, STP, L)[i], D(F, PO, NA)[i], D(F, SW, NA)[i], F(F, LB, NA)[i], F(F, LS, NA)[i], ….]

Parameters per view (71 in total):
Side view – 12 time, 15 dynamic, 13 fixed
Front view – 6 time, 8 dynamic, 17 fixed
30. Data Preparation
1. Taking key point coordinates from the skeleton.
2. Detecting the walking direction.
3. Calculating parameter values.
4. Normalizing and plotting.
5. Cleaning the values (distinct value removal, cut-off by upper limit, quantile).
6. Appending the values to arrays.
31. Detecting Walking Directions

X_body center = (X_left shoulder + X_right shoulder + X_left hip + X_right hip) / 4

[Plots: variation of the x coordinate of the body's center point and its rate of change over time, marking the turning points of the walking subject.]
32. Separating Plots by Direction
[Plots: hip to ankle distance (distance ratio vs. time (s)), split into walking right-to-left and walking left-to-right segments; one instance value is taken for the feature vector from each segment.]
33. Data Pre-processing

Upper limit = mean of the time-gap values + 0.5 × standard deviation of the time-gap values

[Plots: hip to ankle distance (left to right), and the distribution of peak-gap values before and after cutting off outliers and removing wrong detections.]
34. More Plots
[Plots: foot lifting distance (left leg) and length of thigh (right leg), each split into walking left-to-right and right-to-left; average fixed values are taken for the two instances.]
35. Future Work
• Developing a reliability matrix for the calculated parameter data.
• Preparing feature vectors for a selected number of persons.
• Training the neural network and evaluating the model.
• Increasing the precision of the model.
• Finishing the documentation.
36. Methodology
1. Problem identification & literature review
2. Conceptual design
3. Pose estimation
4. Analyzing time, dynamic, and fixed parameters
5. Calculating the reliability of each parameter's data
6. Preparing feature vectors
7. Creating the deep learning model
8. Evaluation of the model
9. Developing the GUI
10. Documentation and finalizing
37. Work Plan
[Gantt chart, Aug–May: problem identification, literature review, conceptual design, pose estimation, analyzing gait parameters & feature vectors, calculating the reliability of each parameter, designing the deep learning model, improving the accuracy & precision of the model, and developing the GUI.]
38. References
[1] Y. Ye and P. Li, "Disguising Gait Detection and Recognition Based on Deep Learning," IEEE, 2019. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/8997862. [Accessed: 03-Dec-2021].
[2] G. Bhargavas, K. Harshavardhan, G. Mohan, A. Nikhil and C. Prathap, "Human identification using gait recognition," IEEE, 2017. [Online]. Available: https://ieeexplore.ieee.org/document/8286638. [Accessed: 03-Dec-2021].
[3] B. Kwon and S. Lee, "Human Skeleton Data Augmentation for Person Identification over Deep Neural Network," MDPI, 2020.
[4] S. Pulagam, "A Simplified Approach Using PyCaret for Anomaly Detection," Medium, 2020. [Online]. Available: https://towardsdatascience.com/a-simplified-approach-using-pycaret-for-anomaly-detection-7d33aca3f066. [Accessed: 06-Dec-2021].
[5] A. Alharbi, F. Alharbi and E. Kamioka, "Skeleton based gait recognition for long and baggy clothes," MATEC Web Conf., vol. 277, 03005, 2019. DOI: 10.1051/matecconf/201927703005.
[6] R. Liao, C. Cao, E. B. Garcia, S. Yu and Y. Huang, "Pose-based temporal-spatial network (PTSN) for gait recognition with carrying and clothing variations," in Proc. Chin. Conf. Biometric Recognit. (CCBR), 2017, pp. 474–483.
Our purpose is to develop a model that recognizes a person by analyzing gait parameters and body features. In forensic applications, investigators may fail to find fingerprints or other physical evidence of the criminal. The criminal may still appear in CCTV footage, but face recognition is not always possible there. In such situations, gait analysis can be applied to identify the criminal. Given a group of suspects for a particular case, we can record their walking videos, prepare feature vectors by analyzing their gaits, and train a classification model; we can then test the model with the evidence video. Among the many methods of gait analysis, our scope is the skeleton-model based method with a supervised learning approach. Many research papers have already analyzed gait parameters for person identification.
In the first paper [2], the authors developed a human identification system using gait recognition. They used a Kinect sensor to record depth information and extract joint coordinates, and from the frame sets of each video sequence they selected one value per parameter, such as the maximum or the mean. The resulting database was converted to LIBSVM format for classification; LIBSVM is an open-source SVM library. The SVM model represents feature values as points in a coordinate space with a clear gap between different persons. With this algorithm they achieved 93% accuracy, but they used only the front view in an indoor environment. They obtained skeleton coordinates with Kinect sensors; in our application we cannot use Kinect sensors because we have to test with recorded videos. They also did not consider disguised gaits.
The specialty of another paper [1] is a model that detects fake walking patterns using an LSTM-based autoencoder. For the parameter calculations they used these formulas, and we use the same equations in our calculations. They also used a Kinect sensor for data extraction.
The autoencoder has four LSTM layers, of which the first two form the encoder. It encodes the feature values of the frame sets into a single vector, and disguised patterns are detected by comparison with a threshold value. For disguised gaits, only the static features were given to the decoder. The decoded gait sequence was then input to a neural network with a BiLSTM branch and a temporal convolutional branch. However, they considered only one Kinect sensor angle and a limited number of parameters.
Another study [5] identifies people who are wearing long, baggy clothes such as the thobe and abaya. Only the side view is considered, so only the Y-coordinate data is used as the input feature, because the distance between the Kinect sensor and the walking line was fixed; no parameters are calculated. They found that the upper body joints give more information than the lower body joints when recognizing gait under long and baggy clothes, so we also take feature values from the upper body.
The papers explained previously performed pose estimation with a Kinect sensor, but this paper [6] used a pre-trained multi-person 2D pose estimation model on the video frame sets. They normalized the points because the gap between the camera and the person varies with time. They used an LSTM to extract temporal features and a convolutional neural network (CNN) to extract spatial features from static gait pose frames. They showed that their skeleton-based pose estimation method handles carrying and clothing variations better than other gait analysis methods such as GEI. However, they considered only seven joint coordinates from the pose estimation and used prepared data sets directly.
Most researchers have used Kinect sensors to extract skeleton data because it is easy, but our test video is a surveillance recording, so we cannot use Kinect sensors. We consider more than one camera view. We concentrate on analyzing the parameters, preprocessing the data according to its reliability, and only then feeding it to the model. We analyze every parameter we can obtain from the skeleton coordinates, including the time parameters that the other research papers have not used, and filter out the most reliable ones.
The approach is: first we record the videos of the selected number of persons, and then …
Following our approach, we first recorded our walking videos and performed pose detection using the MediaPipe library. We record videos from three views, but the top view did not seem to give good skeletal information, so we rejected it. To normalize the lengths, we selected two methods.
We analyze three types of parameters. Time parameters are acquired from the gait cycle: for example, the stance time is the period during which the leg is on the floor, and the swing time is the period during which the leg is in the air. Dynamic parameters vary with time, such as the angle and distance variations. Fixed parameters are the lengths of the limbs. There are 18 time parameters in total, and each parameter is addressed by a unique notation.
Within the gait cycle, all the time parameters can be calculated by considering the heel strike and toe off stages. The heel strike, the first stage, happens when the foot starts to touch the floor; at that moment the vertical distance between the hip and the foot index becomes minimal. The toe off is the moment when the foot leaves the floor; at that moment the distance between the hip and the heel becomes minimal, as shown in the fourth stage. By analyzing these two y-coordinate variations, we can detect the heel strike and toe off points.
As shown in the blue graph, the toe off happens when the hip-to-heel length becomes minimal, and the heel strike happens when the hip-to-foot-index length becomes minimal. All the time parameters can be calculated from this graph. For example, the stride time is the gap between two adjacent heel strikes of the same leg.
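The event detection described above can be sketched as follows, assuming idealized noise-free distance signals; `local_minima` and `stride_times` are hypothetical helpers:

```python
def local_minima(values):
    # Indices of simple local minima in a 1-D signal
    return [i for i in range(1, len(values) - 1)
            if values[i] < values[i - 1] and values[i] < values[i + 1]]

def stride_times(times, hip_to_foot_index):
    """Stride time = gap between adjacent heel strikes of the same leg,
    where a heel strike is taken as a minimum of the hip-to-foot-index
    distance (a sketch; real signals would need smoothing first)."""
    hs = local_minima(hip_to_foot_index)
    return [times[j] - times[i] for i, j in zip(hs, hs[1:])]
```

The same `local_minima` helper applied to the hip-to-heel distance would give the toe off points.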
These are some more examples of time parameters with their equations. The step time is about half of the stride time. The double support time is the period during which both feet are on the floor. These are very sensitive features.
These are the equations we used to calculate the dynamic parameters: inverse trigonometric functions for the angles, and x or y coordinate differences for the length variations. We then plot the feature values of each frame.
We calculate the stride time by taking the time gap between two valleys. By taking the x coordinate of each valley point from the key array, we can calculate the stride length, and from that the stride velocity of each cycle.
This is an example of a shoulder length calculation. We apply the Euclidean distance equation to the coordinates of every frame, which gives a variation like this for one instance covering three gait cycles. A quantile function is used to take an averaged fixed value for an instance. The maximum lengths are obtained when the limbs are unfolded, so an average value is calculated within the maximum range of values.
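The averaging step can be sketched as follows. The deck uses a quantile function; this hypothetical `average_fixed_value` helper approximates it by averaging only the top fraction of the per-frame lengths (the 10% default cut-off is an assumption, not the project's value):

```python
def average_fixed_value(lengths, keep=0.1):
    # Keep only the largest `keep` fraction of the per-frame lengths
    # (the frames where the limb is unfolded) and average them
    k = max(1, int(len(lengths) * keep))
    top = sorted(lengths, reverse=True)[:k]
    return sum(top) / len(top)
```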
This is the feature vector model. The notation of a parameter carries five pieces of information, and one instance includes 71 parameters. To sectionalize the instances from a video, we decided to automate the system so that it generates the feature values for every instance automatically.
This is the work we have done so far. We have created a system that takes the key point coordinates of every frame from the skeleton and saves them in a key array together with the time. It then detects the walking direction (left-to-right or right-to-left) to sectionalize the instances from the video.
We sectionalize the training video into left-to-right and right-to-left walking parts. To identify the turning points, we plot the x coordinate variation of the center of the body, since it is a stable point; the valleys and peaks give the turning points of the person.
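The body-center computation and turning-point detection can be sketched as follows, assuming an already-extracted sequence of per-frame x coordinates; both helpers are hypothetical, and real data would need smoothing first:

```python
def body_center_x(ls, rs, lh, rh):
    # x coordinate of the body's center: mean of the two shoulders
    # and the two hips, as in the slide formula
    return (ls + rs + lh + rh) / 4.0

def turning_points(xs):
    # Indices where the walking direction flips, taken as sign changes
    # in the frame-to-frame change of the body-center x coordinate
    turns = []
    for i in range(1, len(xs) - 1):
        d_prev = xs[i] - xs[i - 1]
        d_next = xs[i + 1] - xs[i]
        if d_prev * d_next < 0:  # a peak or a valley
            turns.append(i)
    return turns
```

The segments between consecutive turning points are the left-to-right and right-to-left slots.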
This is an example of a separated parameter. One feature value is taken for one instance from each left-to-right or right-to-left slot, and one slot contains three or four gait cycles. By detecting peaks or valleys, an average value or a sequence is taken for one instance. This variation is used to calculate the stride time, so the average time is calculated from the time difference between two adjacent valleys. We therefore have to remove large gaps between valleys: we calculate the standard deviation of the time gaps, set an upper limit above the mean, and remove the outliers that lie beyond that limit, neglecting the large time gaps.
Sometimes there were wrong point detections. To remove those points, we calculate the mean value of the peak gaps and remove the points whose values are less than 70% of the mean. Wrong points like these can then be neglected.
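The two cleaning rules (the mean + 0.5 × standard deviation upper limit, and the 70%-of-mean lower cut) can be sketched together; `clean_peak_gaps` is a hypothetical helper operating on a list of peak-gap values:

```python
import statistics

def clean_peak_gaps(gaps):
    # Rule 1: cut off outliers above mean + 0.5 * standard deviation
    mean = statistics.mean(gaps)
    upper = mean + 0.5 * statistics.stdev(gaps)
    kept = [g for g in gaps if g <= upper]
    # Rule 2: drop wrong detections below 70% of the remaining mean
    mean2 = statistics.mean(kept)
    return [g for g in kept if g >= 0.7 * mean2]
```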
These are some more examples of dividing the parameter plots according to the direction: one dynamic parameter and one static parameter.
We are now studying how to develop the reliability matrix.
These are some of the references from which we obtained information.