Although Massive Open Online Courses (MOOCs) have the potential to make quality education affordable and available to the masses, completion rates are extremely low due to the high level of autonomy and self-regulation skills that MOOCs require.
The aim of the present work is to investigate how self-regulated learning skills can be enhanced by encouraging metacognition and reflection in MOOC learners by means of social comparison. To this end, following an iterative process, we have developed the Learning Tracker, an interactive widget which allows learners to visualise their learning behaviour and compare it to that of previous graduates of the same MOOC. Each iteration was extensively evaluated in live TU Delft MOOCs running on the edX platform, engaging over 20,000 MOOC learners.
Our results show that learners who have access to the Learning Tracker are more likely to graduate from the MOOC. Moreover, we have observed that the widget has a positive impact on learners’ engagement and reduces procrastination. Based on our results, we argue that the mere fact of receiving feedback on a limited number of learning habits can trigger self-reflection in learners and lead to improved learner performance.
Slide 4
What is a MOOC?
Massive Open Online Course
Best Courses. Top Institutions.
Learn anytime, anywhere.
• 35 million learners
• 500 universities
• 4 200 MOOCs
Slide 5
Dropout as a main challenge
• Low completion rates <15 % (Jordan, 2016)
• Underdeveloped learning skills and study habits
– High autonomy
– Role of the teacher
– Low metacognitive awareness
Slide 6
Self-Regulated Learning
• Definition: capability of the learner “to adjust her
actions and goals to achieve desired results in light
of changing environmental conditions”
(Zimmerman, 1990)
• Major success factor in online learning
environments, including MOOCs
• Lack of learner support in current MOOC platforms
Slide 8
Aim
Investigate how self-regulated
learning skills can be enhanced in
MOOC learners
Encouraging metacognition
and self-reflection on learning
behaviour
Providing feedback through
social comparison with successful
learners on a learner dashboard
Slide 10
Development
Design-based research methodology
• Incremental
• Evaluation on edX MOOCs offered by TU Delft
Two components
• Data
• Visualisation
First iteration – Evaluation: January – March 2016
Second iteration – Evaluation: April – June 2016
Slide 13
Preliminary evaluation of the first iteration
• Metric configuration
• Additional information set
– Average graduate in the following week
– Reflection and planning support
Adjustments in the second iteration
Slide 15
Preliminary evaluation of the first iteration
• Metric configuration
• Additional information set
– Average graduate at the end of current week
– Reflection and planning support
• Interactive elements
Adjustments in the second iteration
Slide 20
Experimental setup
Three TU Delft MOOCs
– Weekly publication of learning material
– Video lectures, weekly assignments, practice
quizzes
– Graduation: >60% final score
Replicated longitudinal study
Slide 21
Experimental setup
Method: randomized controlled trial
– Demographic analysis to ensure populations
are sufficiently randomized
WaterX SewageX InnovationX
Test group 5 460 4 038 1 184
Control group 5 483 4 099 1 168
Total enrolled 10 943 8 137 2 352
Slide 27
Learners’ behaviour
RQ2.1 Do learners become more engaged
with the MOOC when they can compare
their behaviour with that of successful
learners?
Slide 28
Learners’ engagement – course material
Learners are more engaged with the graded course material.
WaterX SewageX InnovationX
Graded quizzes .036 .114 .044
Practice non-graded quizzes .512 .071 -
Mann-Whitney test results (p-values) between the test
group and the control group.
– Significance level α = .050
– Significant differences are marked in bold.
Slide 29
More learners are engaged with graded course content.
Learners’ engagement – course material
Slide 31
Learners’ self-regulation
RQ2.2 Do learners show improvement of
their time-management skills when they
compare their behaviour to that of
successful learners?
Slide 32
Learners’ self-regulation - procrastination
WaterX SewageX InnovationX
Timeliness (recommended) .055 .113 .039
Timeliness (actual) .040 .145 .035
Mann-Whitney test results (p-values) between the test
group and the control group.
– Significance level α = .050
– Significant differences are marked in bold.
Learners procrastinate less.
Slide 34
Learners’ on-trackness
RQ2.3 Do learners change their behaviour
so it becomes similar to that of successful
learners when they compare themselves
to it?
Slide 35
Learners’ on-trackness
Similarity between a learner’s behaviour
and that of the average graduate
1. Compute on-trackness score weekly
2. Cluster learners based on the evolution
of the on-trackness score
Slide 36
Learners’ on-trackness – clusters
No conclusive evidence that the Learning Tracker influences
the distribution of learners into clusters.
Cluster labels: on-track; behind, but keep up; behind, initial activity; behind, no activity.
MOOC stands for Massive Open Online Course. The term massive refers to the large number of learners that can participate. MOOCs are open, meaning that learners have free access to the content once they enroll. The term online refers to the fact that a MOOC can only be accessed over the Internet.
MOOCs caught the public eye in 2012, when top universities like Stanford, MIT and Harvard launched today’s largest MOOC platforms. One such example is edX, a nonprofit MOOC provider founded by Harvard University and MIT. Their mission is to increase access to high-quality education for anyone, anywhere.
MOOCs are expected to revolutionize education by making high quality education accessible to the masses and thus reducing the gap between the most privileged and the most disadvantaged learners.
To date, more than 35 million learners have enrolled in at least one MOOC, and more than 500 universities are offering over 4,000 MOOCs.
Despite their potential, MOOCs face some challenges. A major one is a low completion rate, most of the time below 15%. Although the literature identifies several reasons for learners dropping out early, the one we focus on is underdeveloped learning skills and study habits. Very often, learners drop out because they are not equipped with the proper skills to learn with a MOOC.
MOOCs are a new learning environment that requires high autonomy in terms of motivation, defining learning paths and engaging with other MOOC participants. Learners have a lot of freedom in choosing when, where, what and how to learn.
Learners face fewer constraints than in traditional face-to-face education: there are no consequences for failing or dropping out, and the role of the instructor/teacher changes.
Many times learners are not aware that their learning skills are not adequate. Low metacognitive awareness means that learners are not inclined to think about and evaluate their own thinking process or the effectiveness of their strategies.
Yet, there are solutions. Learning psychologists have identified that what makes learners successful is a skill called self-regulated learning. They explain that this is a major factor influencing success or failure in large-scale online learning environments, including MOOCs.
However, current MOOC platforms fail to support learners in developing these skills which are indispensable for online learning.
Looking at edX, for example, learners have the Progress page on which they can view the grades they obtained for each assignment, but they do not receive any information on how they could improve their learning.
Thus the aim of this work is to address the lack of support in the learning process offered to learners and investigate how SRL skills can be enhanced in MOOC learners.
Relying on research done in the field of learning sciences, we hypothesize that SRL skills can be developed if learners reflect on and evaluate their learning behaviour.
In order to test this hypothesis, we developed a learner dashboard embedded on edX MOOC pages on which learners receive feedback on their learning behaviour. At the same time, learners can compare their behaviour to that of previously successful learners.
What we considered “successful learners” are learners that graduated from previous editions of the same MOOC.
Incremental development: a first iteration in January–March 2016 and a second iteration in April–June 2016.
How it works: based on the edX trace logs that record every action learners perform on the platform, we extract and compute a series of metrics that are displayed on a widget embedded in the course pages.
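The extraction step described here could be sketched as follows; the field names (`username`, `time`, `event_type`) follow the public edX tracking-log format, but treat the exact schema as an assumption and adapt it to your export:

```python
import json
from collections import defaultdict

def events_by_learner(log_lines):
    """Group raw trace-log events per learner.

    Each line is a JSON record; the field names below follow the edX
    tracking-log format (an assumption -- adapt them if your export differs).
    """
    events = defaultdict(list)
    for line in log_lines:
        rec = json.loads(line)
        user = rec.get("username")
        if user:  # skip anonymous events
            events[user].append((rec["time"], rec["event_type"]))
    return events
```

Behaviour metrics can then be computed per learner from these grouped events.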
Reasons for choosing a spider chart: it offers a concise visualisation of numerous metrics in a small space, and it is a medium that makes comparison easy.
Six metrics are displayed around the spider chart. Learners can visualise their own performance and that of the average graduate. The values of the metrics for the average graduate are computed once, before the beginning of the course. The values for the current learners are updated once a week, when new edX data becomes available.
The first iteration was then evaluated on a live TU Delft MOOC on edX. Based on preliminary results that showed that the LT had a positive effect on the graduation rates, several adjustments were made to the second iteration.
First off, the metrics were changed to focus more on self-regulating behaviour. For example, we included metrics that referred to the number of visits per week, the average time between two consecutive sessions or the average length of a session.
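A minimal sketch of how such session-based metrics could be derived from a learner's event timestamps; the 30-minute inactivity gap that closes a session is an assumed threshold, not a value taken from this work:

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # assumed inactivity threshold ending a session

def to_sessions(timestamps):
    """Group a learner's sorted event timestamps into (start, end) sessions."""
    sessions, start, last = [], timestamps[0], timestamps[0]
    for t in timestamps[1:]:
        if t - last > SESSION_GAP:
            sessions.append((start, last))
            start = t
        last = t
    sessions.append((start, last))
    return sessions

def session_metrics(sessions):
    """Average session length and average gap between sessions, in hours."""
    avg_len = sum((e - s).total_seconds() for s, e in sessions) / len(sessions) / 3600
    gaps = [(sessions[i + 1][0] - sessions[i][1]).total_seconds() / 3600
            for i in range(len(sessions) - 1)]
    avg_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return avg_len, avg_gap
```

The number of visits per week falls out of the same representation by counting sessions whose start falls in each course week.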
Regarding the technologies used: the behaviour metrics are computed by code written in Java 8.
The widget itself is a JavaScript script, and the spider chart is drawn with Highcharts, a JavaScript charting library.
What gives validity to our study is that we evaluated two iterations of the LT over the full duration of each MOOC. Such studies are rare in the literature.
Method: Learners are randomly assigned to a test or a control group. Test group has access to the LT, while the control group does not. This method allowed us to identify the effects of the Learning Tracker by comparing the behaviour of the test group with that of the control group.
The analysis took into account only data from active learners (those that spent more than 5 minutes on the platform).
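The assignment and filtering steps could look like the following sketch; the helper names and the fixed seed are hypothetical:

```python
import random

ACTIVE_THRESHOLD = 5 * 60  # active learner: more than 5 minutes on the platform

def assign_groups(learner_ids, seed=2016):
    """Randomly split learners into a test group (sees the Learning Tracker)
    and a control group (does not)."""
    ids = list(learner_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return set(ids[:half]), set(ids[half:])

def active_learners(seconds_on_platform):
    """Keep only learners who spent more than 5 minutes on the platform."""
    return {u for u, s in seconds_on_platform.items() if s > ACTIVE_THRESHOLD}
```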
Here we plotted the percentage of learners that graduated in each group in all three MOOCs. As we can see, the graduation percentage is higher for the test group in all three cases.
No significant differences between the final grades.
We also looked at the final grades the learners obtained. Statistical tests show no significant differences between the test group and the control group.
This plot shows the distribution of learners in the test group and the control group according to the final grade. Two things to notice:
Firstly, there is a high density around 0 – common to every MOOC
Secondly, the curve representing the test group is above the one representing the control group – meaning that more learners passed the graduation threshold, although they did not pursue higher grades.
The same results also hold for the other two courses.
The next steps of the analysis looked into what could be the reasons for this change in graduation rate. Did the learners’ behaviour change? And if so, in what ways?
Solving practice quizzes first to grasp concepts is not a strategy employed by the test learners. This might explain the higher percentage of learners in the 60–80% range of the final grade.
This graph shows the progression of both groups through the first MOOC with respect to the number of learners that attempted at least one graded quiz question since the beginning of the course.
The widget was made available in week 2.
The difference becomes visible between week 2 and 3, a week after the widget was made available.
We investigated how learners use their time on the platform and their time-management skills.
We observed significant differences in one metric: the timeliness of submission, i.e. the average time between the moment learners submit assignments and the deadline.
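As an illustration, timeliness could be computed as the average number of hours a submission precedes its deadline; the function and argument names are hypothetical:

```python
from datetime import datetime

def timeliness(submissions, deadlines):
    """Average number of hours assignments were submitted before their deadline.

    `submissions` and `deadlines` map an assignment id to a datetime;
    a lower value means the learner works closer to the deadline.
    """
    hours = [(deadlines[a] - t).total_seconds() / 3600
             for a, t in submissions.items()]
    return sum(hours) / len(hours)
```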
To evaluate learners’ on-trackness, we measured the similarity between a learner’s behaviour and that of successful learners by calculating an on-trackness score.
Using the k-means clustering algorithm, we grouped learners into clusters with similar behaviours over time.
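A compact, dependency-free sketch of this clustering step over weekly score trajectories; a real analysis would use a library implementation, and the naive "first k points" initialisation is an assumption:

```python
def kmeans(points, k, iters=20):
    """Cluster equal-length vectors (weekly on-trackness trajectories) into k groups.

    Naive variant: initialise centroids with the first k points, then iterate
    assignment and re-centering for a fixed iteration budget.
    """
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return clusters
```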
The learners exhibit similar behaviour patterns in all three courses.
Out of the four patterns, two (Cluster 1 and Cluster 2) show a steady progress, while the other two exhibit a decrease in on-trackness over time (Cluster 3 and Cluster 4). The decrease in on-trackness score reflects drop-outs or very low activity.
However, when we looked at the distribution of the test and control learners into the four clusters we did not find any evidence that shows that the LT motivates learners to be on-track.
Learning behaviour can be broken down into several study habits that influence each other.
Learning behaviour is a “web” of study habits
Positive effect: (a) Increases the likelihood of graduation because it increases the engagement with graded course material. (b) Reduces procrastination.
on-trackness score = the similarity between one’s behaviour and that of successful learners
To evaluate learners’ on-trackness, we measured the similarity between a learner’s behaviour and that of successful learners by calculating an on-trackness score. The score is computed as a weighted arithmetic sum of the metric deviations. The metric deviations are the differences between a learner’s scaled value of a metric and the average graduate’s scaled value. The weights are inversely proportional to the amplitude of the metric deviation: the higher the difference, the lower the weight.
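One possible concrete reading of this definition, as a sketch; the exact scaling and weighting scheme are not given here, so treat the formula as an assumption. Metrics are assumed scaled to [0, 1], deviations are absolute differences, each weight shrinks as its deviation grows, and the score is 1 minus the weighted deviation, so identical behaviour yields 1.0:

```python
def on_trackness(learner, graduate):
    """Similarity between a learner's scaled metric vector and the average
    graduate's. Weights are inversely proportional to the deviation amplitude
    (hypothetical concrete form): the larger a deviation, the lower its weight.
    """
    deviations = [abs(l - g) for l, g in zip(learner, graduate)]
    raw = [1.0 / (1.0 + d) for d in deviations]  # higher deviation -> lower weight
    total = sum(raw)
    weights = [w / total for w in raw]
    return 1.0 - sum(w * d for w, d in zip(weights, deviations))
```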
The inspiration for this work was research done in the field of search engines: Bateman investigated the effects of reflection and social comparison on search behaviour, with positive results.
As a reference model, they quantified the behaviour of “expert searchers”. In our case, the model we consider worth comparing against is previous graduates of the same MOOC.