Presentation given by Rebecca Ferguson at Charles Sturt University, Wagga Wagga campus, on 16 March 2018. http://uimagine.edu.au/portfolio/guest-lecture-dr-rebecca-ferguson/
33. • What is the most surprising part of your
results? Was this surprise shared by the people
involved?
• Can you justify why you used one specific
methodology instead of an alternative?
• What is the value and potential impact of your
initiative at scale?
• What changes in teaching and learning
activities do you envision that could be
realistically derived from your work?
• Identify the target audience of your study.
36. Clearly defined purpose
• Everyone involved needs to know why
analytics are being introduced and what they
are intended to achieve.
• Whatever the chosen aim, it should clearly
align with the institution's priorities.
37. Sponsors and leaders
Sponsors: Learning analytics need
a champion at a high level within
the institution
Sponsors need a clear and realistic
view of what can be achieved
Project leaders develop deliverable
plans that are strategically aligned
Project leaders ensure analytics
make life easier, not more
complicated
Project leaders are keenly aware of
how project success will be
measured
Project leaders inspire others to act
as learning analytics champions
38. Strategic development
• Develop a system model of
how learning analytics will
be developed and deployed.
• Identify gaps — for example,
learning and teaching units
often have poor
relationships with IT
departments.
• Assessment strategy and
use of learning analytics
need to be well aligned.
39. Capacity building
• Data analysts require the skills to work confidently and effectively with data, as well as knowledge of pedagogy
• Staff members need to
understand learning
analytics outputs and their
limitations
• Data literacy skills should
form part of the student
curriculum and part of
continuing professional
development for all staff
LAK18 picture from @carlopfernando
40. Ethics
• Institutions need to be absolutely clear why data are
being collected and analysed, and who benefits from
the use of analytics.
• Student voice needs to be clearly heard in discussions
about the ethical use of data.
• Processes put into practice should be transparent.
• A consistent approach to the
ethical use of data should
extend across institutions,
enabling key principles to be
established and
implemented consistently.
41. In order not to fail…
• Work on learning analytics
should use data in ways that
end users understand, are
comfortable with, and find
valuable.
• Have a clear vision of what you want to achieve, a
vision that is closely aligned with institutional priorities.
• Revisit the vision frequently, so everyone is clear why
the project is developing in the way that it is.
• A senior leadership team is needed to steer the entire
process, taking into account different perspectives and
making changes across the institution where necessary.
Other fields are more advanced in their use of evidence. Learn from them.
Other, mainly quantitative, fields have hierarchies of evidence; learning analytics has not moved high up these hierarchies
Randomised controlled trials are not always appropriate
Sometimes you need to be confident that an approach will work
Even when you carry out a test, it can be misleading
For example, the Hawthorne Effect can suggest an intervention is working, when it is just the attention being paid to participants that is having the effect
This study of a dead salmon shows the danger of false positives
https://blogs.scientificamerican.com/scicurious-brain/ignobel-prize-in-neuroscience-the-dead-salmon-study/
And when you are talking about p values, you have to know what you mean
Beware of simply accepting the evidence that confirms your opinion
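The dead salmon study and the point about p values can be illustrated with a small simulation. The sketch below is my own illustration, not part of the original talk: it runs many significance tests on pure noise, and with a conventional threshold of p < 0.05 and no correction for multiple comparisons, several "findings" appear by chance alone.

```python
import math
import random
import statistics

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (Welch-style statistic,
    normal approximation - adequate for illustration)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(42)
n_tests, alpha = 100, 0.05   # e.g. 100 outcome measures, or 100 brain regions

false_positives = 0
for _ in range(n_tests):
    # Both groups are pure noise: there is no real effect anywhere.
    control = [random.gauss(0, 1) for _ in range(30)]
    treated = [random.gauss(0, 1) for _ in range(30)]
    if two_sample_p(control, treated) < alpha:
        false_positives += 1

print(f"'Significant' results with no real effect: {false_positives}/{n_tests}")
# A multiple-comparisons correction (e.g. Bonferroni: test at alpha / n_tests)
# removes almost all of these spurious findings.
```

Roughly five spurious "discoveries" per hundred uncorrected tests is what the threshold itself guarantees, which is exactly how a dead salmon ends up with statistically significant brain activity.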
Why do we have this problem?
Well, education is hard.
It’s not only hard to learn – it’s hard to understand learning
We can’t easily see and measure learning; we can only use proxies for it
Like self-report, or pre- and post-test
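For instance, pre- and post-test scores are often summarised as a normalised gain, the fraction of the available improvement a learner actually achieved. A minimal sketch with made-up scores (not from the talk):

```python
def normalised_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalised gain: improvement achieved as a fraction of the
    improvement that was still possible at pre-test."""
    if pre >= max_score:
        raise ValueError("Pre-test already at maximum; gain is undefined.")
    return (post - pre) / (max_score - pre)

# Two learners with the same raw improvement (+20 points) but different gains:
print(normalised_gain(pre=40, post=60))  # 0.33 of the possible improvement
print(normalised_gain(pre=70, post=90))  # 0.67 of the possible improvement
```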
And once people know the proxies you are using, they start to game them
Like the PISA test
The Programme for International Student Assessment
Every three years, it tests students in randomly sampled schools worldwide on reading, science and maths
They have done a lot of work on the methodology and have responded to critiques
We should be able to use this information to compare performance on these tests
But several things go wrong, and more goes wrong as this is increasingly taken as a measure of countries’ education systems.
Sometimes the results are invalid because there is not enough evidence.
Sometimes they are invalid because the importance of the tests causes countries to cheat
Sometimes they are invalid because they are taken as a proxy for a country’s educational system as a whole
So, on the LACE project, we set out to find the evidence that does exist about learning analytics
We set up an evidence hub – grouping published work in terms of these four statements
We asked partners from across Europe to contribute
We looked at LAK conferences and the Journal of Learning Analytics
We put a call out to the community
We prompted people at last year’s LAK to add their papers.
So it’s not all the evidence, but it is a lot of it (and you can add more, if you see a gap)
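As a sketch of what that classification looks like in practice (hypothetical structure and labels; the real Evidence Hub schema may differ), each entry records which proposition a paper speaks to, whether its evidence is positive, negative, or mixed, and what kind of study it reports:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    citation: str
    proposition: str   # which of the four statements the paper speaks to
    polarity: str      # "positive", "negative", or "mixed"
    study_type: str    # "exploratory", "think piece", "small scale", "rct", ...

def summarise(items: list[EvidenceItem]) -> None:
    """Tally evidence by proposition and polarity - the kind of overview
    an evidence hub is meant to give."""
    by_prop = Counter((i.proposition, i.polarity) for i in items)
    for (prop, polarity), count in sorted(by_prop.items()):
        print(f"{prop:<30} {polarity:<10} {count}")

# Toy entries for illustration only:
items = [
    EvidenceItem("Paper A", "improves learning outcomes", "positive", "small scale"),
    EvidenceItem("Paper B", "improves teaching/support", "positive", "exploratory"),
    EvidenceItem("Paper C", "improves learning outcomes", "mixed", "think piece"),
]
summarise(items)
```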
We found three main things:
There was no point classifying papers in terms of a hierarchy of evidence, because most of the work was exploratory or think pieces or small scale
There was relatively little evidence. Lots of papers have nothing to say in relation to our four propositions
What evidence there was turned out to be overwhelmingly positive, which seemed unlikely and prompted our Failathons
Lots of the papers don’t address the cycle, and no benefits are shown for learners.
We looked closely at a load of papers; the Course Signals paper was one of the best at this
There has been fairly wide agreement in the literature that the Course Signals work at Purdue University shows that learning analytics can support learning.
People who engaged with Course Signals were more likely to be retained by the university. They were more likely to get high grades.
Here was a (LAK12) paper that gave us real evidence.
But there have been criticisms of the paper – most notably, the chocolate box critique
And then we run into the problem that it is almost impossible to check the figures, because the data are not freely available, and the researchers either no longer have access to them or are not assigned time to work on them
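To see why this kind of correlational result needs checking, here is a small simulation with invented numbers, my own illustration rather than an analysis of the Purdue data. It assumes a scenario in which students who stay enrolled longer simply take more courses and therefore have more chances to encounter a Signals-enabled course; Signals use and retention then look strongly associated even though, in the simulation, Signals has no causal effect at all.

```python
import random

random.seed(1)

students = []
for _ in range(10_000):
    # Retention is decided first and does NOT depend on Signals at all.
    retained = random.random() < 0.6
    # Retained students simply take more courses...
    n_courses = random.randint(6, 12) if retained else random.randint(1, 5)
    # ...and each course independently has some chance of using Signals.
    used_signals = any(random.random() < 0.2 for _ in range(n_courses))
    students.append((used_signals, retained))

def retention_rate(group):
    return sum(r for _, r in group) / len(group)

signals_users = [s for s in students if s[0]]
non_users = [s for s in students if not s[0]]

print(f"Retention among Signals users: {retention_rate(signals_users):.2f}")
print(f"Retention among non-users:     {retention_rate(non_users):.2f}")
# The naive comparison shows a large retention "benefit" for Signals users,
# even though exposure here is a consequence of staying enrolled, not a cause.
```

In this toy data the naive comparison suggests a substantial retention advantage for Signals users, which is precisely why access to the underlying data matters when evaluating such claims.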
We have highlighted the problems with the Purdue paper because it is so significant in the field
But we all make mistakes.
Here is a chart produced by the two of us and published at LAK and in the JLA
It was checked by both of us and, presumably, by two sets of reviewers and by proofreaders and editors.
Can you spot the mistake?
Yes, two mistakes.
And we can tell you about them, and we can issue a correction to the JLA
But how do we correct the conference proceedings?
How do we, as a community, stop the mistakes being propagated?
Does this simply mean that learning analytics is a disaster zone? No.
What can we do about it? It’s not about individuals.
Punitive approaches are terrible; let’s not tear ourselves apart like the psychologists did
How can we improve our systems and structures to reduce mistakes, improve quality overall?
We are the superheroes who can save our field from the spectre of non-evidence