
The Multimodal Learning Analytics Pipeline



Presented on 25 May 2019 at the Artificial Intelligence and Adaptive Education conference (AIAED'19), Beijing, China.

Abstract: We introduce the Multimodal Learning Analytics Pipeline (MMLA Pipeline), a generic approach for collecting and exploiting multimodal data to support learning activities across physical and digital spaces. The MMLA Pipeline helps researchers set up their multimodal experiments, reducing the setup and configuration time required to collect meaningful datasets. Using the MMLA Pipeline, researchers can choose a set of custom sensors to track different modalities, including behavioural cues or affective states. They can thus quickly obtain multimodal sessions consisting of synchronised sensor data and video recordings, analyse and annotate the recorded sessions, and train machine learning algorithms to classify or predict the patterns under investigation.



  1. The Multimodal Learning Analytics Pipeline
     Daniele Di Mitri (1), Jan Schneider (2), Marcus Specht (3), Hendrik Drachsler (2)
     (1) Open University of The Netherlands, The Netherlands; (2) German Institute for International Educational Research, Germany; (3) Delft University of Technology, The Netherlands
  2. Introduction
     • We introduce the Multimodal Learning Analytics Pipeline (MMLA Pipeline)
     • A generic approach for collecting and exploiting multimodal data to support learning activities across physical and digital spaces
     • Uses Internet of Things devices, wearable sensors, signal processing, and machine learning
  3. Relevant Theories of Learning
     1. Psychomotor learning: coordination between body and mind
     2. Multimodal learning: modes (text, image, speech, haptic) and modalities (speaking, gesturing, moving, facial expressions, physiological signals)
     3. Embodied communication: people communicate using the whole body
  4. Multimodal Learning Analytics (MMLA)
     • Learning Analytics approach: measurement, collection, analysis and reporting of data about learners
     • + data from multiple modalities = a more accurate representation of the learning process!
  5. Enabling technological advances
     • Problem: multimodal data is complex
       – multi-dimensional, with different formats and supports
       – noisy and messy
       – difficult to store, synchronise, annotate, and exploit
     • Solution: the MMLA Pipeline
       – supports researchers in setting up experiments much more quickly
       – standard tools over tailor-made solutions
       – reduces data-manipulation overhead
       – lets researchers focus on analysis
  6. Graphical overview
     [Figure: the pipeline's five steps and their outputs. 1. Data collection (third-party sensors or APIs, physiological and motoric sensor data, task model); 2. Data storing; 3. Data annotation (expert reports); 4. Data processing (model fitting, processed data store, prediction models); 5. Data exploitation: (A) corrective feedback via intelligent tutors, (B) predictions, (C) patterns, (D) historical reports and dashboards. The research side feeds the production side, supporting corrections, awareness, orchestration and adaptation.]
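The five stages in the figure can be sketched as a chain of functions. This is a minimal illustration only; all names and payloads are hypothetical and do not reflect the authors' implementation.

```python
# Hypothetical sketch of the five MMLA Pipeline stages chained together.

def collect(sensors):
    """1. Data collection: read one frame from each registered sensor."""
    return {name: read() for name, read in sensors.items()}

def store(session, frame):
    """2. Data storing: append the synchronised frame to the session."""
    session.append(frame)
    return session

def annotate(session, labels):
    """3. Data annotation: attach expert labels to recorded frames."""
    return [dict(frame, label=labels.get(i)) for i, frame in enumerate(session)]

def process(annotated):
    """4. Data processing: fit or apply a model (stubbed here)."""
    return {"n_frames": len(annotated)}

def exploit(model_output):
    """5. Data exploitation: feedback, predictions, patterns or reports."""
    return f"report: {model_output['n_frames']} frames processed"

# Example run with two fake sensor readers.
sensors = {"myo_emg": lambda: 0.12, "kinect_posture": lambda: "upright"}
session = store([], collect(sensors))
annotated = annotate(session, {0: "correct"})
print(exploit(process(annotated)))
```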
  7. Current prototypes
     1. Multimodal Learning Hub (Schneider et al., 2018): data collection, data storing
     2. Visual Inspection Tool (Di Mitri et al., 2019): data annotation
  8. MLT data format
     • Example of serialisation of Myo data in JSON
     • MLT = Meaningful Learning Experience
     • Example annotation.json
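The slide's JSON example is not reproduced in the scrape. As an illustration, a Myo armband frame could be serialised along these lines; the field names and values here are assumptions for the sketch, not the published MLT schema.

```python
import json

# Hypothetical MLT-style recording fragment for a Myo armband.
# Field names (applicationName, frameStamp, frameAttributes, ...) are
# illustrative assumptions, not the authors' actual format.
myo_recording = {
    "applicationName": "Myo",
    "recordingStart": "2019-05-25T10:00:00Z",
    "frames": [
        {
            "frameStamp": "00:00:00.050",  # offset from recordingStart
            "frameAttributes": {
                "EMG_1": 0.12,            # one of the Myo's 8 EMG channels
                "GyroscopeX": -1.4,
                "AccelerometerX": 0.02,
                "OrientationW": 0.97,
            },
        }
    ],
}

serialised = json.dumps(myo_recording, indent=2)
print(serialised)
```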
  9. Exploitation strategies
     A. Corrective feedback: hard-coded rules, e.g. "if sensor value is x then y" (non-adaptive)
     B. Classification/prediction: estimation of the learning labels (adaptive)
     C. Pattern identification: mining of recurrent sensor values
     D. Historical reports: visualisations and analytics dashboards
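Strategy A, the simplest of the four, amounts to a fixed threshold rule. A minimal sketch, with a hypothetical threshold and feedback message chosen only for illustration:

```python
# Strategy A (corrective feedback): a hard-coded, non-adaptive rule of the
# form "if sensor value is x then y". Threshold and message are hypothetical.

SHOULDER_TILT_THRESHOLD = 15.0  # degrees; assumed cut-off for illustration

def corrective_feedback(shoulder_tilt_degrees):
    """Return a feedback message if the sensor value crosses the threshold."""
    if shoulder_tilt_degrees > SHOULDER_TILT_THRESHOLD:
        return "Straighten your posture"
    return None  # no correction needed

print(corrective_feedback(20.0))  # rule fires
print(corrective_feedback(5.0))   # rule does not fire
```

Strategies B and C would replace this fixed rule with a trained classifier or a pattern-mining step over the recorded sessions, which is what makes them adaptive.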
  10. Real-world applications
     1. Calligraphy Trainer
     2. Presentation Trainer
     3. CPR Tutor
  11. Evidence of potential impact
     • All cases are Multimodal Tutors: intelligent tutoring systems that use multimodal data
     • The MMLA Pipeline supports the collection, analysis, annotation and exploitation of multimodal data
     • Current prototype: optimised for individual learning; recordings of ~10 minutes; retrospective feedback
     • Future prototypes: real-time multimodal feedback; collaborative learning; longer recorded sessions
  12. Summary
     • The MMLA Pipeline is a generic and useful approach for researchers
     • Flexible and extensible applications
     • Can be used with a range of sensor applications
     • The different components are Open Source
     • Available for demo!
  13. Useful references
     • Schneider, J., Di Mitri, D., Limbu, B., & Drachsler, H. (2018). Multimodal Learning Hub: A Tool for Capturing Customizable Multimodal Learning Experiences, 1, 45–58.
     • Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). From signals to knowledge: A conceptual model for multimodal learning analytics. Journal of Computer Assisted Learning, 34, 338–349.
     • Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2019). Read Between the Lines: An Annotation Tool for Multimodal Data for Learning. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (LAK19) (pp. 51–60). New York, NY, USA: ACM.
     • Di Mitri, D. (2018). Multimodal Tutor for CPR. In C. Penstein Rosé et al. (Eds.), Artificial Intelligence in Education (AIED 2018), Lecture Notes in Computer Science, vol. 10948. Springer, Cham.
     • Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). The Big Five: Addressing Recurrent Multimodal Learning Data Challenges. In R. Martinez-Maldonado et al. (Eds.), Proceedings of the Second Multimodal Learning Analytics Across (Physical and Digital) Spaces (CrossMMLA), Vol. 2163. CEUR Proceedings.
  14. Daniele Di Mitri