This slide deck introduces the research topics of the HCI Lab at Gachon University, Korea (Professor Ahyoung Choi).
For more information, please visit our research web site:
https://sites.google.com/view/hcilab/home
Wearables and Sensors Research for Health Monitoring
1. Ahyoung Choi, Ph.D.
Associate Professor
Department of AI and Software
Gachon Univ., South Korea
Health sensing by wearables
(Wearable sensor-based healthcare monitoring technology)
Research topics of HCI Lab.
2.
We are looking for positive and passionate people to join our graduate program or take up a postdoctoral position.
If you are interested in physiological signal analysis, machine learning, deep learning, and healthcare/wellness applications, please feel free to contact me by email (aychoi@gachon.ac.kr).
3. Body fat monitoring
Smartphone-based Bioelectrical Impedance Analysis Devices
– Measures impedance at multiple frequencies (5–200 kHz) with four contact electrodes
– Evaluated the BIA device against standard body composition analysis systems
– Applies multiple regression with age, height, weight, gender, and race
A. Choi et al., "Smartphone-based Bioelectrical Impedance Analysis Devices for Daily Obesity Management," Sensors, 15(9), 22151-22166, 2015.
Y. Bhagat, I. Kim, Y. Kim, A. Choi, S. Jo, J. Cho, "Mind your composition," IEEE PULSE, 6(5), 20-25, September 2015.
S. Heymsfield et al., "Evaluation of Novel Hand-held Wireless Bioelectrical Impedance Analysis (BIA)," FASEB Journal, pp. 747-747, 2015.
Cylindrical volume conductor model: with constant geometry and composition, impedance is directly proportional to the product of the specific resistivity and the length of the conductor, and inversely proportional to its cross-sectional area.
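The cylinder model can be sketched in a few lines. This is a minimal illustration of the relation Z = ρ·L/A and its rearrangement for segment volume; the resistivity, length, and area values below are illustrative placeholders, not the calibrated constants used in the papers above.

```python
# Sketch of the cylindrical volume conductor model (illustrative values only).

def cylinder_impedance(rho, length, area):
    """Impedance of a homogeneous cylinder: Z = rho * L / A."""
    return rho * length / area

def segment_volume(rho, length, impedance):
    """Rearranged for volume: V = A * L = rho * L^2 / Z."""
    return rho * length**2 / impedance

Z = cylinder_impedance(rho=100.0, length=170.0, area=50.0)  # -> 340.0 ohm
V = segment_volume(rho=100.0, length=170.0, impedance=Z)    # -> 8500.0 cm^3
```

In practice the regression step maps such impedance-derived quantities, together with age, height, weight, gender, and race, onto body composition estimates.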
4. Intake counting
[Figure: intake timeline from eating start to eating end, showing intake time detection and intake term detection]
Real-time automatic food intake behavior recognition
– Eating frequency: how many times the user eats during a meal
– Eating term detection: the time difference between spoon up and spoon down
– Eating time: total food intake time
SVM- and yaw-based method with 90.9% accuracy
– 490 samples were recognized correctly and 49 were detected incorrectly
– 90.9% sensitivity (true positive rate)
– 100% PPV (positive predictive value)
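The reported metrics follow directly from the counts above: the 490 correct detections are true positives, the 49 misses are false negatives, and a 100% PPV implies no false positives. A quick check:

```python
# Recomputing sensitivity and PPV from the reported counts.

def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

print(round(sensitivity(490, 49) * 100, 1))  # 90.9
print(ppv(490, 0) * 100)                     # 100.0
```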
J. Jo and A. Choi, "Automatic Food Intake Frequency Detection Method," Journal of Industrial Information Technology and Application, Vol. 1, No. 1, pp. 4-8, 2017.
5. Eating habit monitoring
Deep learning based food intake pattern recognition
– Recognizes various kinds of food intake behaviors in Asian dining culture, such as eating with a spoon, picking up food with chopsticks, and drinking water
– Uses a 3-axis accelerometer; the periodic food intake data are preprocessed and classified with a CNN model
– Average accuracy of 87.98% (57.89% for spoon, 93.81% for chopsticks, and 96.49% for cups)
J. Cho and A. Choi, "Asian-style Food Intake Pattern Estimation Based on Convolutional Neural Network," ICCE 2018, Poster, Jan. 12-14, 2018.
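The preprocessing step that feeds the CNN can be sketched as fixed-length windowing of the 3-axis accelerometer stream with per-axis normalization. The window length and stride here are assumptions for illustration, not the settings used in the paper.

```python
# Sketch: segment a 3-axis accelerometer stream into normalized windows
# suitable as CNN input. Window/stride values are illustrative assumptions.
import numpy as np

def window_accelerometer(acc, win=128, stride=64):
    """acc: (n_samples, 3) array -> (n_windows, win, 3) of z-scored windows."""
    windows = []
    for start in range(0, len(acc) - win + 1, stride):
        w = acc[start:start + win]
        w = (w - w.mean(axis=0)) / (w.std(axis=0) + 1e-8)  # per-axis z-score
        windows.append(w)
    return np.stack(windows)

acc = np.random.randn(1000, 3)   # fake 3-axis accelerometer stream
X = window_accelerometer(acc)
print(X.shape)                   # (14, 128, 3)
```

Each window would then be classified by the CNN into one of the intake behaviors (spoon, chopsticks, cup).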
6. Blood pressure estimation
Applies a deep learning method using ECG and PPG signals
– Multiple time-series ECG and PPG signals are used as input to estimate BP
– Applies a CNN based on a distance and similarity measurement method
'PhysioNet' open database: MIMIC database (recordings of 49 ICU patients)
[Figure: ABP, ECG, and PPG waveforms from the MIMIC database, and the processing pipeline: ECG and PPG input (1D, 2-channel, one period per sample) → signal processing (data segmentation, outlier removal, noise filtering) → train/test datasets → models (1D CNN on 1–10 beats, RNN) → systolic and diastolic BP estimation]
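The 1D, 2-channel input can be sketched as one cardiac period of ECG and PPG, resampled to a common length and stacked channel-wise. The fixed period length of 1000 samples is an assumption for illustration.

```python
# Sketch: build a 2-channel 1D CNN input from one-period ECG and PPG signals.
import numpy as np

def make_input(ecg_period, ppg_period, length=1000):
    """Resample each one-period signal to `length` and stack -> (2, length)."""
    def resample(sig):
        x_old = np.linspace(0.0, 1.0, len(sig))
        x_new = np.linspace(0.0, 1.0, length)
        return np.interp(x_new, x_old, sig)
    return np.stack([resample(ecg_period), resample(ppg_period)])

ecg = np.sin(np.linspace(0, 2 * np.pi, 737))   # fake one-period ECG
ppg = np.cos(np.linspace(0, 2 * np.pi, 912))   # fake one-period PPG
x = make_input(ecg, ppg)
print(x.shape)  # (2, 1000)
```

Resampling to a fixed length lets beats of different durations share one CNN input shape.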
7. Blood pressure estimation
[Figure: target vs. estimated DBP and SBP (mmHg) over 100 test samples]
1D CNN model + one-period data + intra-subject analysis
– 60% (1,122,961 samples) for training and 40% (401,187 × 2) for the validation and test datasets
– Mean absolute error of 4.30 mmHg for SBP and 2.06 mmHg for DBP
Findings: SBP predictions tend to be less accurate than DBP predictions
8. Continuous blood pressure estimation
Continuous blood pressure (BP) waveform
– Uses an electrocardiogram (ECG) and a photoplethysmogram (PPG)
– Estimates systolic BP (SBP) and diastolic BP (DBP) with a three-layer convolutional neural network
– Synthesizes the continuous BP waveform based on the equation below
P(t) = DBP + 0.5·PP + 0.36·PP·[sin(ωt) + 0.5 sin(2ωt) + 0.25 sin(3ωt)], where PP = SBP − DBP is the pulse pressure
– 230 records from the MIMIC database
– Result: 7.25 mmHg error for SBP and 5.7 mmHg error for DBP
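The waveform synthesis equation above can be evaluated directly once SBP and DBP are estimated. In this sketch the heart rate, sampling rate, and duration are illustrative assumptions; ω is taken as 2π times the heart rate.

```python
# Synthesize a continuous BP waveform from estimated SBP/DBP using the
# equation above. Heart rate, sampling rate, and duration are assumptions.
import numpy as np

def synthesize_bp(sbp, dbp, heart_rate_hz=1.25, fs=125, seconds=4):
    pp = sbp - dbp                        # pulse pressure
    t = np.arange(0, seconds, 1.0 / fs)
    w = 2 * np.pi * heart_rate_hz         # angular frequency of the beat
    return dbp + 0.5 * pp + 0.36 * pp * (
        np.sin(w * t) + 0.5 * np.sin(2 * w * t) + 0.25 * np.sin(3 * w * t)
    )

bp = synthesize_bp(sbp=120.0, dbp=80.0)
print(bp.shape)  # (500,)
```

The harmonic series gives the waveform its asymmetric, pulse-like shape while the DBP + 0.5·PP offset anchors it to the estimated pressure levels.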
[Figure: arterial blood pressure (target) vs. continuous blood pressure (estimated) in mmHg over time, and the pipeline: ECG and PPG signals → signal processing → convolutional neural network → SBP/DBP → time-domain blood pressure waveform]
Y. Kim, J. Kang, J. Cho and A. Choi, "Continuous blood pressure signal modeling based on deep learning method and a physiological mathematical model," IEEE EMBC (Engineering in Medicine and Biology Society), Berlin, Germany, July 23-27, 2019.
9. EEG-based emotion analysis
EEG-based emotion classification with an LSTM network using an attention mechanism
– Input signals: 32 channels of EEG from the DEAP database
– Target output: 2-level and 3-level classification of valence and arousal
– Result
90.1±2.1% for valence and 87.9±2.4% for arousal (2-level classification)
82.3±0.4% for valence and 81.7±1.1% for arousal (3-level classification)
Youmin Kim and Ahyoung Choi, "EEG-Based Emotion Classification Using Long Short-Term Memory Network with Attention Mechanism," Sensors, 20(23), 6727, Nov. 2020.
[Architecture: raw EEG signal → input layer → Bi-LSTM layer 1 → Bi-LSTM layer 2 → dropout → attention layer → dense layer 1 (ReLU) → dense layer 2 → output (sigmoid for 2-level, softmax for 3-level)]
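The attention step in this architecture can be sketched in numpy: score each Bi-LSTM time step, softmax the scores over time, and return the weighted sum of hidden states as a context vector. The hidden size, sequence length, and scoring weights here are random placeholders, not the trained parameters.

```python
# Sketch of a simple attention layer over Bi-LSTM outputs (placeholder sizes).
import numpy as np

def attention(hidden_states, w):
    """hidden_states: (T, H); w: (H,) scoring vector -> (context (H,), alpha (T,))."""
    scores = hidden_states @ w                 # one score per time step
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                # softmax over time
    return alpha @ hidden_states, alpha        # weighted sum of states

rng = np.random.default_rng(0)
H = rng.standard_normal((60, 128))   # 60 time steps of Bi-LSTM output
w = rng.standard_normal(128)
context, alpha = attention(H, w)
print(context.shape, round(alpha.sum(), 6))  # (128,) 1.0
```

The context vector then feeds the dense layers, letting the classifier weight informative time segments of the EEG sequence more heavily.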
10. Human activity recognition
Hybrid deep learning fusion network
Input data
– Acc signals (18 channels)
– Framed skeleton image data
– Coordinate vector data
Target output: 9 actions
Results (accuracy)
– Sensor & image: 94–95%
– Sensor only: 85–86%
– Image only: 65–70%
– With distorted data: 91–93%
Junhyuk Kang, Jieun Shin, Jaewon Shin, Daeho Lee and Ahyoung Choi, "Robust Human Activity Recognition by Integrating Image and Accelerometer Sensor Data Using Deep Fusion Network," Sensors, 22(1), 174, Dec. 2022.
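The fusion idea can be sketched at the feature level: each modality (accelerometer, skeleton image, coordinate vectors) produces a feature vector, and the vectors are concatenated before a shared classifier head. The feature dimensions below are illustrative assumptions, not the network's actual sizes.

```python
# Sketch of feature-level fusion across three modalities (illustrative sizes).
import numpy as np

def fuse(acc_feat, img_feat, coord_feat):
    """Concatenate per-modality features into one fused vector."""
    return np.concatenate([acc_feat, img_feat, coord_feat])

acc_feat = np.zeros(64)      # e.g. from a 1D CNN over 18-channel acc data
img_feat = np.zeros(128)     # e.g. from a 2D CNN over skeleton frames
coord_feat = np.zeros(32)    # e.g. from an MLP over coordinate vectors
fused = fuse(acc_feat, img_feat, coord_feat)
print(fused.shape)  # (224,)
```

Keeping per-modality branches separate until this point is what lets the network stay robust when one modality (e.g. the image) is distorted.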