
Read Between The Lines: an Annotation Tool for Multimodal Data


This is the presentation of Read Between The Lines, the paper we published at the Learning Analytics & Knowledge Conference 2019 (#LAK19) in Tempe, Arizona.

The paper is available in open access in the ACM Digital Library: https://dl.acm.org/citation.cfm?id=3303776

Abstract:
This paper introduces the Visual Inspection Tool (VIT), which supports researchers in the annotation of multimodal data as well as in its processing and exploitation for learning purposes. While most of the existing Multimodal Learning Analytics (MMLA) solutions are tailor-made for specific learning tasks and sensors, the VIT addresses data annotation for different types of learning tasks that can be captured with a customisable set of sensors in a flexible way. The VIT supports MMLA researchers in 1) triangulating multimodal data with video recordings; 2) segmenting the multimodal data into time-intervals and adding annotations to those time-intervals; 3) downloading the annotated dataset and using it for multimodal data analysis. The VIT is a crucial component that was so far missing from the available tools for MMLA research. By filling this gap we also identified an integrated workflow that characterises current MMLA research. We call this workflow the Multimodal Learning Analytics Pipeline: a toolkit for the orchestration, use and application of various MMLA tools.



  1. Read Between The Lines: an Annotation Tool for Multimodal Data for Learning. Daniele DI MITRI^, Jan SCHNEIDER*, Roland KLEMKE^, Marcus SPECHT^, Hendrik DRACHSLER^*. ^ Open University of The Netherlands; * DIPF — German Institute for International Educational Research. LAK’19, March 6th 2019, Tempe, Arizona, U.S.A.
  2. Learning Analytics focuses on online learning
  3. How do people actually learn?
  4. Learning with mouse and keyboard. Most LA tools and studies use learner-to-computer events (the user clicked a page, watched a video, commented on a post), which make it easy to distinguish who did what. The typical setting is desktop/laptop-based learning, and LA technologies are shaped around it, e.g. the Experience API.
  5. Technologies without mouse and keyboard
  6. Multimodal Learning Analytics (MMLA): the LA approach (measurement, collection, analysis and reporting of data about learners) + data from multiple modalities = a more accurate representation of the learning process!
  7. Problem: MMLA is expensive! • Sensor data pose much bigger challenges, e.g. identifying “who does what” is not straightforward • Creating sensor architectures is a complex task • Tailor-made solutions are chosen over scalable ones: they cannot be re-used and they don’t scale, which limits the research power
  8. Theoretical framework: the MMLA Model. Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). From signals to knowledge: A conceptual model for multimodal learning analytics. Journal of Computer Assisted Learning, 1–12. https://doi.org/10.1111/jcal.12288
  9. Five Big Challenges for MMLA. Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). The Big Five: Addressing Recurrent Multimodal Learning Data Challenges. [Diagram: the MMLA feedback loop with a classification framework]
  10. Methodology: 1) Review of tools: reviewed 7 existing MMLA tools; 2) Functional requirements: derived 6 functional requirements (FRs); 3) Development: developed components to address the FRs; 4) Validation: validated the VIT with 3 ITSs.
  11. Step 1) Reviewing existing tools. For each tool: Collection / Storing / Annotation / Processing / Exploitation / Main purpose.
    1. Social Signal Interpretation (Wagner, 2013): multisource, synchronised streams / no custom format / using NovA / custom pipelines, various ML algorithms / n.a. / human activity recognition
    2. Lab Streaming Layer (Kothe, 2018): multisource, streaming, synchronised streams / custom data format (XDF) / n.a. / n.a. / n.a. / physiological data synchronisation
    3. Data Curation Framework (Amin, 2016): multisource, synchronised batches / n.a. / n.a. / anomaly detection / n.a. / pervasive healthcare monitoring
    4. ChronoViz (Fouse, 2011): n.a. / n.a. / text-based annotations / n.a. / n.a. / video coding of human interactions
    5. repoVizz (Mayor et al., 2013): n.a. / custom data format (repoVizz struct) / text-based annotations / n.a. / n.a. / visual analysis of multi-user orchestration
    6. GIFT (Sottilare, 2012): multisource, batches / stored in CSV format / n.a. / can be linked with external processing tools / corrective and personalised feedback / designing ITSs
    7. Multimodal Learning Hub (Schneider, 2018): multisource, synchronised batches / custom data format (MLT) / n.a. / n.a. / corrective feedback / intelligent learning feedback
  12. Multimodal Learning Hub. The LearningHub is a piece of software written in C# that collects and synchronises data from multiple sensor applications. • DATA COLLECTION: data from multiple sensor applications • DATA STORING: sensor data saved into an MLT session • DATA EXPLOITATION: it is possible to push simple feedback strings. (A hypothetical session-loading sketch follows the slide list.) Schneider, J., Di Mitri, D., Limbu, B., & Drachsler, H. (2018). Multimodal Learning Hub: A Tool for Capturing Customizable Multimodal Learning Experiences, 1, 45–58.
  13. 6 functional requirements (FRs): (FR1) the user can plot and visualise a multimodal recording file, featuring multiple synchronised data streams; (FR2) the user can view the video of the session synchronised with the multimodal data; (FR3) the user can add annotations to single time-intervals in attribute-value form; (FR4) the user can add custom annotations; (FR5) the user can download the annotations or attach them to the session file; (FR6) the tool should be compatible with cloud-based solutions for scalability and shared access. (A plotting sketch for FR1 follows the slide list.)
  14. The Visual Inspection Tool. COMPONENTS: a) loading the session file b) attribute listing c) loading annotation files d) editing intervals e) editing annotations f) plotting attributes g) showing video recordings
  15. Visual Inspection Tool
  16. Input and output of the VIT: the input is an MLT session; the output is an annotated MLT session.
  17. Transforming the annotated session (a segmentation sketch follows the slide list)
  18. Machine learning idea: a tensor (samples, bins, attributes). Each sample is a smaller time-series (t1, t2, …, tn) and samples have different lengths, so we resample all samples into an equal number of bins. This yields a tensor (samples, bins, attributes) that can be used with neural networks; see the sketch after the slide list.
  19. What to do next? Data exploitation: a) corrective non-adaptive feedback b) predictive adaptive feedback c) pattern identification d) historical reports e) diagnostic analysis of factors f) Learner-Expert comparison
  20. Validation of the VIT in 3 ITSs: 1. Calligraphy Trainer 2. Presentation Trainer 3. CPR Tutor
  21. Available on GitHub: https://github.com/dimstudio/visual-inspection-tool/
  22. Conclusions. We created the VISUAL INSPECTION TOOL for • visual inspection and annotation of learning experiences • exporting data for machine learning analysis. The LearningHub + VIT are useful tools, so scientists will not have to reinvent the wheel.
  23. Come to our demo! (Demo ID 1): Multimodal Tutor Builder Kit
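
A few illustrative code sketches, referenced from the slides above, follow.

Slide 12 mentions that the LearningHub stores sensor data into an MLT session. The slides do not detail the MLT file layout, so the loader below is purely hypothetical: it assumes, for illustration only, that a session is a zip archive containing one JSON file of timestamped frames per sensor application.

    import json
    import zipfile

    def load_mlt_session(path):
        """Hypothetical loader for an MLT session archive.

        Assumed layout (not confirmed by the slides): a zip file with one
        JSON file per sensor application, each holding a list of frames
        such as {"timestamp": 0.5, "values": {...}}.
        """
        streams = {}
        with zipfile.ZipFile(path) as archive:
            for name in archive.namelist():
                if name.endswith(".json"):
                    with archive.open(name) as stream_file:
                        streams[name[:-len(".json")]] = json.load(stream_file)
        return streams

    # Usage: streams = load_mlt_session("session.mlt")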
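Slide 13 lists FR1: plotting and visualising multiple synchronised data streams. A minimal matplotlib sketch of that idea, using simulated stand-in streams rather than real sensor data:

    import matplotlib.pyplot as plt
    import numpy as np

    # Toy stand-ins for two synchronised data streams; in the VIT these
    # would come from the multimodal recording file, not be simulated.
    t = np.linspace(0, 60, 600)  # 60 seconds sampled at 10 Hz
    streams = {
        "accelerometer": np.sin(0.5 * t) + 0.1 * np.random.randn(t.size),
        "heart rate (bpm)": 70 + 5 * np.sin(0.05 * t),
    }

    # One subplot per stream on a shared time axis, so the streams stay
    # visually synchronised (the essence of FR1).
    fig, axes = plt.subplots(len(streams), 1, sharex=True)
    for ax, (name, values) in zip(axes, streams.items()):
        ax.plot(t, values)
        ax.set_ylabel(name)
    axes[-1].set_xlabel("time (s)")
    plt.show()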
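Slides 16 and 17 show the VIT taking an MLT session as input and producing an annotated session that is then transformed for analysis. A minimal pandas sketch of the segmentation step (FR3), with toy data and invented field names (t, accel_x, annotation):

    import pandas as pd

    # Toy multimodal recording: one row per synchronised sample.
    data = pd.DataFrame({
        "t":       [0.0, 0.5, 1.0, 1.5, 2.0, 2.5],
        "accel_x": [0.1, 0.3, 0.2, 0.8, 0.9, 0.7],
    })

    # Annotated time-intervals in attribute-value form (FR3).
    intervals = [
        {"start": 0.0, "end": 1.0, "annotation": {"mistake": "no"}},
        {"start": 1.0, "end": 3.0, "annotation": {"mistake": "yes"}},
    ]

    # Segmentation: each annotated interval becomes one labelled sample.
    samples = [
        (data[(data["t"] >= iv["start"]) & (data["t"] < iv["end"])],
         iv["annotation"])
        for iv in intervals
    ]
    for frame, annotation in samples:
        print(annotation, "->", len(frame), "rows")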
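Slide 18 proposes resampling the variable-length samples into an equal number of bins so that they fit a (samples, bins, attributes) tensor. A small NumPy sketch of that binning step; averaging within each bin is one plausible resampling choice, not one prescribed by the slide:

    import numpy as np

    def to_tensor(samples, n_bins):
        """Resample variable-length (time, attributes) arrays into n_bins
        bins each (by averaging) and stack them into one tensor of shape
        (samples, bins, attributes)."""
        binned = []
        for sample in samples:
            chunks = np.array_split(sample, n_bins, axis=0)  # ~equal chunks
            binned.append(np.stack([chunk.mean(axis=0) for chunk in chunks]))
        return np.stack(binned)

    # Two annotated intervals of different lengths, 3 attributes each.
    samples = [np.random.randn(37, 3), np.random.randn(52, 3)]
    tensor = to_tensor(samples, n_bins=10)
    print(tensor.shape)  # (2, 10, 3) -- ready for a neural network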
