The Multimodal Learning Analytics Pipeline
Daniele Di Mitri1, Jan Schneider2,
Marcus Specht3, Hendrik Drachsler2
1 Open University of The Netherlands, The Netherlands
2 German Institute for International Educational Research, Germany
3 Delft University of Technology, The Netherlands
Introduction
• We introduce the Multimodal Learning Analytics Pipeline (MMLA Pipeline)
• a generic approach for collecting and exploiting multimodal data to support learning activities across physical and digital spaces
• built on Internet of Things devices, wearable sensors, signal processing, and machine learning
Relevant Theories of Learning
1. Psychomotor learning
• coordination between body and mind
2. Multimodal learning
• modes: text, image, speech, haptic
• modalities: speaking, gesturing, moving, facial expressions, physiological signals
3. Embodied communication
• people communicate using the whole body
Multimodal Learning Analytics (MMLA)
Learning Analytics approach: the measurement, collection, analysis, and reporting of data about learners
+ data from multiple modalities
= a more accurate representation of the learning process!
Enabling technological advances
• Problem: multimodal data is complex
• multi-dimensional, with heterogeneous formats and supports
• noisy and messy
• difficult to store, synchronize, annotate, and exploit
• Solution: the MMLA Pipeline
• supports researchers in setting up experiments much more quickly
• favours standard tools over tailor-made solutions
• reduces data-manipulation overhead
• lets researchers focus on analysis
Graphical overview
[Pipeline diagram: 1. Data collection gathers physiological sensor data, motoric sensor data, 3rd-party sensors or APIs, and a task model. 2. Data storing records the collected streams. 3. Data annotation adds expert reports. 4. Data processing covers model fitting, evaluation, and prediction models, feeding a processed data store. 5. Data exploitation yields (A) corrective feedback through intelligent tutors, (B) predictions, (C) patterns, and (D) historical reports on dashboards. The research side runs the full cycle; in production, the outputs drive corrections, awareness, orchestration, and adaptation.]
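To make the data flow concrete, here is a minimal Python sketch of the five stages. All names (SensorFrame, collect, store, annotate, process, exploit) are illustrative assumptions, not the actual MMLA Pipeline API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorFrame:
    modality: str                 # e.g. "physiological" or "motoric"
    timestamp: float              # seconds since session start
    values: dict                  # raw sensor readings
    label: Optional[str] = None   # filled in during annotation

def collect(sources):
    """1. Data collection: pull frames from sensors or 3rd-party APIs."""
    return [frame for source in sources for frame in source.read()]

def store(frames, session_store):
    """2. Data storing: append frames to the session recording."""
    session_store.extend(frames)

def annotate(frames, expert_labels):
    """3. Data annotation: attach expert labels to frames by timestamp."""
    for frame in frames:
        frame.label = expert_labels.get(frame.timestamp)

def process(frames, model):
    """4. Data processing: fit a prediction model on the labelled frames."""
    labelled = [f for f in frames if f.label is not None]
    model.fit([f.values for f in labelled], [f.label for f in labelled])
    return model

def exploit(model, new_frames):
    """5. Data exploitation: predictions that drive feedback and dashboards."""
    return [model.predict(f.values) for f in new_frames]

In a setup like the one the deck describes, collect and store would run live during a session, while annotate and process would happen retrospectively between sessions.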
Current prototypes
1. Multimodal Learning Hub (Schneider et al., 2018): data collection, data storing
2. Visual Inspection Tool (Di Mitri et al., 2019): data annotation
MLT data format
• example serialisation of Myo data in JSON (MLT = Meaningful Learning Task), sketched below
• example annotation.json
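The exact MLT schema is defined by the Multimodal Learning Hub and is not reproduced on the slide, so the Python sketch below only illustrates the idea; all field names are assumptions. A Myo armband frame carries 8 EMG channels plus IMU readings, and an annotation record attaches a label to a time interval.

import json

# Hypothetical Myo frame: field names are illustrative, not the real MLT schema.
myo_frame = {
    "applicationName": "Myo",
    "timestamp": 12.48,                          # seconds since recording start
    "emg": [12, -3, 40, 8, -15, 22, 5, -9],      # 8 surface-EMG channels
    "accelerometer": {"x": 0.01, "y": -0.98, "z": 0.12},
    "gyroscope": {"x": 1.2, "y": -0.4, "z": 0.0},
    "orientation": {"w": 0.71, "x": 0.0, "y": 0.71, "z": 0.0},  # quaternion
}

# Hypothetical annotation.json entry: a label over a time interval.
annotation = {
    "sessionId": "session-001",
    "start": 10.0,                               # interval start, seconds
    "end": 15.0,                                 # interval end, seconds
    "label": "correct_posture",
}

print(json.dumps({"frame": myo_frame, "annotation": annotation}, indent=2))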
Exploitation strategies
A. Corrective feedback: hardcoded rules, e.g. "if sensor value is x then y" (non-adaptive); see the sketch after this list
B. Classification/prediction: estimation of the learning labels (adaptive)
C. Pattern identification: mining of recurrent sensor values
D. Historical reports: visualizations and analytics dashboards
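Strategy A can be as simple as a lookup over fixed thresholds. The Python sketch below is a minimal illustration; the sensor names and thresholds are made up, not taken from any of the tutors.

def corrective_feedback(frame):
    """Map raw sensor values to feedback messages via fixed, non-adaptive rules."""
    feedback = []
    # Hypothetical CPR rule: compressions should be at least 5 cm deep.
    if frame.get("compression_depth_cm", 5.0) < 5.0:
        feedback.append("Press deeper: aim for at least 5 cm.")
    # Hypothetical presentation rule: speak above a minimum volume.
    if frame.get("speech_volume_db", 60.0) < 40.0:
        feedback.append("Speak louder.")
    return feedback

print(corrective_feedback({"compression_depth_cm": 4.2}))
# -> ['Press deeper: aim for at least 5 cm.']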
Real-world applications
1. Calligraphy Trainer
2. Presentation Trainer
3. CPR Tutor
Evidence of potential impact
• All cases are Multimodal Tutors
• intelligent tutoring systems that use multimodal data
• The MMLA Pipeline supports
• the collection, analysis, annotation, and exploitation of multimodal data
• Current prototype
• optimized for individual learning
• recordings of ~10 minutes, retrospective feedback
• Future prototypes
• real-time multimodal feedback
• collaborative learning
• longer recorded sessions
Summary
• The MMLA Pipeline is a generic and useful approach for researchers
• flexible and extensible
• can be used with a range of sensor applications
• the different components are open source
• available for demo!
Useful references
• Schneider, J., Di Mitri, D., Limbu, B., & Drachsler, H. (2018). Multimodal Learning Hub: A tool for capturing customizable multimodal learning experiences. In Lifelong Technology-Enhanced Learning (EC-TEL 2018), LNCS 11082 (pp. 45–58). Springer. https://doi.org/10.1007/978-3-319-98572-5_4
• Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). From signals to knowledge: A conceptual model for multimodal learning analytics. Journal of Computer Assisted Learning, 34, 338–349. https://doi.org/10.1111/jcal.12288
• Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2019). Read Between the Lines: An annotation tool for multimodal data for learning. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (LAK19) (pp. 51–60). New York, NY, USA: ACM. https://doi.org/10.1145/3303772.3303776
• Di Mitri, D. (2018). Multimodal Tutor for CPR. In C. Penstein Rosé et al. (Eds.), Artificial Intelligence in Education (AIED 2018), LNCS 10948. Springer, Cham. https://doi.org/10.1007/978-3-319-93846-2_96
• Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). The Big Five: Addressing recurrent multimodal learning data challenges. In R. Martinez-Maldonado et al. (Eds.), Proceedings of the Second Multimodal Learning Analytics Across (Physical and Digital) Spaces (CrossMMLA), CEUR Vol. 2163. http://ceur-ws.org/Vol-2163/#paper6
Daniele DI MITRI
daniele.dimitri@ou.nl
www.dimstudio.org
