Technology is having a profound impact on the language learning industry. Much innovation is happening in the area of learning, but frustratingly little in the assessment space.
How can we open up new possibilities where assessment is at the centre of the learning process – so that it guides learning? How can technology facilitate non-summative assessments in a Learning-Oriented Assessment model?
This talk was delivered by Andrew Nye, Deputy Director of Digital & New Product Development at Cambridge Assessment English, during the OEB 2018 conference.
Which is the capital of Estonia?
A. London
B. Rome
C. Tallinn
D. Istanbul

Which is the capital of Estonia?
A. Helsinki
B. Riga
C. Tallinn
D. Sofia

If France and England united as one country, where would the capital be?
Assessment serves double duty:
Evaluation + Learning
Diagnostic testing: How does it help?

For learners and teachers:
• Saves time and effort
• Provides objective, standardized data/diagnosis

For clients or partners:
• Increases focus on formative and in-class assessment
• Customisable
• Embeddable
Ruben R. Puentedura, As We May Teach: Educational Technology, From Theory Into Practice (2009)
Can a videogame be a valid medium for a language assessment?
• MVP in six months
• A2-level summative test
• Test of reading & listening skills
• Spanish 11–14-year-olds
I’ve worked there for about 12 years now, prior to which I worked in English language teaching.
When I started my teaching career, just out of university, it may shock you to know that it was before the days of the internet, and before there was much in the way of technology or computer-assisted language learning.
The standard model of English language teaching then was known as the 3 Ps: Presentation, Practice and Production. The idea was that language was packaged by the teacher into bite-sized units that could realistically be dealt with in the space of a lesson. Some part of the language was ‘presented’ to students, then they practised it, and then they produced it, progressing from input to output.
However, this approach was not without its critics. The main criticism was that it assumed a linear, mechanistic way of learning languages: that you learn one thing at a time, step by step, that this knowledge accumulates, and that what you learned on Monday will be available for use on Tuesday. We now know that language learning is a much more organic process, with fits and starts, two steps forward and one step back.
Traditionally there are two assessment paradigms – summative and formative – and these are often treated as a binary choice. But the two paradigms are not mutually exclusive, and assessment can actually serve the double duty of evaluation and learning.
If you don’t know the answer to the first question, it’s fairly easy to work out by a process of elimination. The second one requires more mental processing, and the final one requires a bit more head-scratching.
In good assessment tasks, there’s more cognitive engagement and deeper processing. And with good tasks, good teachers are constantly gathering data about how, and how much, students are learning – so that essentially you are learning and being assessed at the same time.
Assessment and learning can work together to create a virtuous cycle that in Cambridge we call learning oriented assessment (or LOA) – where assessment takes place at the centre of the learning process. So that it guides learning.
The learning cycle that would typically happen in any face-to-face teaching or classroom context looks something like this:
This is how one task works, within a whole host of tasks, within an LOA syllabus.
But there are a number of potential problems with that. For example:
it assumes good task design, so that learning and assessment can go hand in hand
how do you cope with large classes of mixed ability? Teachers can’t possibly be constantly gathering data about their learners if they have a lot of them in a class, all doing different things.
Advances in digital technologies are one reason why learning and assessment can now be more integrated, and can cope with large numbers of students in a class.
I work in the Digital & New Products team at Cambridge Assessment English – we’re looking at where emerging technology trends, language learning and assessment all intersect, and how technology can improve language learning.
It’s a journey into the unknown for us – I’m going to tell you about a few of our recent experiments on the journey so far, and what it all adds up to in terms of the 3 Ps in the title.
Firstly, diagnostic tests. Learners consistently tell us that they want more detailed feedback on how they can improve and reach their goals more quickly. But good diagnostic assessments are surprisingly rare, so we’ve been developing our capability in this area for these reasons:
For learners and teachers, it saves time and provides an objective, standardized diagnosis.
For schools that we’re working with, our solution is customisable and embeddable into their platform.
Obviously learners get detailed analysis showing areas to work on
And teachers get information to feed back into their strategies for improving learning
Gathering data is all very well, but what we really want to achieve with technology is creating new, emotional and personalised experiences – not just gathering data. The SAMR model captures this idea well. There’s nothing wrong with substitution and augmentation, but the real transformative gold from technology use comes from redefinition.
So we’ve been investigating whether games-based assessment can provide that, to help build engaging and emotional connections in language learning.
One of the pain points that we constantly hear about assessment is the stress involved – assessments are just not enjoyable. So could we do something to reduce that stress?
A key question we wanted to answer was whether a videogame could be a valid medium for a language assessment.
So we made an MVP in six months: an A2-level (fairly low-level) summative test of receptive skills only.
We had several angles to consider in our research:
whether the items in the game performed within acceptable statistical parameters.
whether any aspects of the game play/game interface interfered with learners' progression through the game.
We also wanted to gather learner feedback: what did they like and dislike about the game, and how did this compare to how they feel whilst completing a paper-based test?
How might test anxiety compare in the two assessment conditions?
So what did we learn?
Well, we found out that some tasks were a bit more difficult than expected
Both male and female students found the format more engaging than a paper-based equivalent.
The teachers commented on how immersed their students were in the game and how they tried to work things out for themselves.
Overall, the students preferred the format, and it was found to encourage perseverance and problem-solving – obviously important for learning.
It doesn’t seem very well suited as a summative test, as it’s less obvious where the testing is taking place. But it’s much better suited as a learning tool.
So that’s how games-based assessment could work, and where we might be able to get to. But we know that a lot of teachers and schools aren’t ready for this yet – they don’t have the infrastructure, they’ve got connectivity issues, and so on. So we’re also working with something that is grounded in teachers’ current reality of teaching and learning.
We’re launching a games-based learning product, Ruby Rei, developed by Wibbu and linked to one of our Young Learners exams. Wibbu have a pre-existing game, and we’ve created additional teaching and learning content which can be used in class by teachers, with the game used by students at home.
And then the third example is Write and Improve, which is a good example of what we’re doing with our partners in the wider University of Cambridge.
As the name suggests, a student can write a response and submit their text; the system reviews it and returns feedback in a matter of seconds.
Then, using the feedback, students rewrite and resubmit again and again, gradually correcting and improving.
It does what a teacher would do, focusing on areas for improvement – not on everything.
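To make that loop concrete, here is a toy sketch of the write–review–rewrite cycle. It is purely illustrative: the error list, scoring formula and feedback format are invented for the example, and this is not how Write and Improve actually works internally. The point is the shape of the cycle – feedback on one area at a time, then resubmission.

```python
# Hypothetical write-review-rewrite loop: the reviewer flags known
# errors, gives feedback on only the most frequent one (an area for
# improvement, not everything), and the learner resubmits.

COMMON_ERRORS = {            # invented error patterns -> corrections
    "alot": "a lot",
    "recieve": "receive",
    "definately": "definitely",
}

def review(text):
    """Return (score, feedback), focusing on the most frequent error."""
    words = text.lower().split()
    found = {w: words.count(w) for w in COMMON_ERRORS if w in words}
    score = max(0, 100 - 20 * sum(found.values()))
    if not found:
        return score, None
    worst = max(found, key=found.get)
    return score, f"Check '{worst}' -> '{COMMON_ERRORS[worst]}'"

def improve(text, feedback):
    """Apply the suggested correction (stands in for the learner's rewrite)."""
    target = feedback.split("'")[1]
    return " ".join(COMMON_ERRORS[target] if w == target else w
                    for w in text.split())

text = "i recieve alot of emails and alot of calls"
score, feedback = review(text)
while feedback:                      # resubmit until the text is clean
    text = improve(text, feedback)
    score, feedback = review(text)
print(score, "->", text)             # 100 -> i receive a lot of emails and a lot of calls
```

Each pass fixes one class of error, so the score climbs gradually over several submissions – mirroring the "rewrite and resubmit again and again" behaviour described above.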
There’s a version for teachers. Each thick blue line here represents a series of responses by a student to a task. The width demonstrates the range of scores that the student achieved on that task. The longer the line the more they’ve progressed. And it shows how many attempts the student has had.
So those are a few examples of how we’re experimenting with various verticals but how do we put all this together into some kind of digital ecosystem? And make it all make sense as a whole?
As we develop more and more innovations in learning, we’re stuck in a very traditional assessment system so that the cognitive dissonance between learning and assessment is becoming more and more extreme.
And we can only fix that by looking at the whole system – there’s nothing more ridiculous than adding Edtech without changing the system itself
We need to do far more than simply substitute paper-based tests with computer-based tests at the bottom of the SAMR model; it needs to be about redefining how technology can facilitate non-summative assessments.
Imagine an ecosystem like Apple’s: the reason you stay in it is that life is simple for you inside it. It keeps adding value and delighting you the longer you stay.
What would a vision of a language learning ecosystem be? Well it needs to be something that makes teaching and learning more efficient, through intelligent use of data.
For a learner, everything would be interconnected, with feedback going to the teacher and feeding into their teaching strategy, so that teaching and learning are very tightly linked.
It doesn’t exist for us yet. In fact we haven’t even figured out how to represent it….
What do we need to build such an ecosystem for LOA? What can we currently do, and what can’t we do yet?
We have an organising framework, or map, for learning – tick: the Cambridge curriculum, setting out what needs to be taught.
We can measure the learner, so you can put them at the right place on the map – tick: a placement test.
We can provide recommendations for learning – a diagnostic test.
I mentioned Apple is simple – one of the reasons is that it knows who you are when you log on from different devices. Language learners often feel like they’re starting again all the time on their learning journey, so a unique identifier would be really helpful in their interactions with Cambridge. We’re working on a learner ID solution.
Teacher development is a crucial element – we do a lot of that, but not in a way that’s tightly integrated into learning.
A huge library of online content, and an algorithm to sort through it to find the optimal pathways for each learner at any particular time – not yet.
In the Apple analogy, we need to develop our music library and an algorithm to suggest the music you might like. We think it’s possible – a bit like something I always find quite creepy when it happens to me, though really it isn’t…
It’s a bit like when you meet a friend for a drink, he’ll recommend some book or film or product he thinks you’ll like, and then, within days – without searching for it online – you start seeing targeted web ads for it.
No wonder there’s a persistent rumour that Facebook uses the microphone in your phone to eavesdrop on your conversations. The truth is that apps and websites already vacuum up so much of our data that they’ve no need to eavesdrop – it just feels as if they must be.
There are two things about this: Big Tech knows us much better than we think, and each of us is far more normal than we realise. So, given a little information about us, both Facebook and our friends hit on the same recommendation, because our interests are far less special – and therefore more predictable – than we’d like to believe.
All relationships are based on data and data exchange, whether with a person or a piece of tech. The real question is whether, at Cambridge, we can come up with an ecosystem that genuinely improves learners’ lives and makes their language learning simple and efficient, so that they’re happy to give us their data.
In the same way that data can replicate the behaviour of your best friend but can’t replace them, we can use data to help learning, but it can’t replace the teacher.