Ok, here’s the cheat sheet. Generally you watch these things and about halfway into it, after a bunch of examples, the theme is revealed. It’s very mysterious and elegant. But I’m none of those things. I’m going directly for the main theme here. The future of educational technology is tutoring at scale. This idea, we will find out, is a bit reductionist – as all predictions about the future have to be. It’s also a little more subtle than it first appears. But I think this one idea helps cut through the clutter and hype and nonsense that permeates much of edtech. We’ll come back to the phrase in a minute. First, let’s talk about some of the unhelpful conceptions of edtech.
Here’s one idea. Education is Ben Stein dry. There’s not enough movey bits. It’s not fast enough and the soundtrack kind of sucks. What we need is a Hollywood tech enhanced remake of education. What we need is MORE SNAKES ON THE PLANE. We take the lecture and we replace it with a Hollywood treatment. We take Cartesian Philosophy and we *tweet* it. We do VH1’s “I Love the 1880’s”. Crazy, right?
I call this frosting the horse pill. In this model we have this fundamentally uninteresting education that we have to shove down students’ throats. It’s like one of these huge pills, right?
So we take some technology spice and we jazz it up. It’s frosting the horse pill. This isn’t wrong, by any means. If I’m going to get a horse pill, then fine, throw in some frosting. But this isn’t where the ed tech revolution is going to come from. There’s a variant of the frosting-the-horse-pill idea, pushed by people like Marc Prensky and others –
Digital Nativism is the idea that our students are very different from us – so different in fact that we must learn to communicate in their idiom. They speak a different language! You see this idea from people like Prensky, and Don Tapscott. And basically on the news any given day of the week. Here’s how Sue Bennett in a 2007 review of the literature described the technology view of digital natives: “The second assumption underpinning the claim for a generation of digital natives is that because of their immersion in technology young people ‘think and process information fundamentally differently from their predecessors’ (Prensky, 2001a, p. 1, emphasis in the original). Brown (2000), for example, contends ‘today’s kids are always “multiprocessing”—they do several things simultaneously—listen to music, talk on the cell phone, and use the computer, all at the same time’ (p. 13). It is also argued that digital natives are accustomed to learning at high speed, making random connections, processing visual and dynamic information and learning through game-based activities (Prensky, 2001a). It is suggested that because of these factors, young people prefer discovery-based learning that allows them to explore and to actively test their ideas and create knowledge (Brown, 2000).” Bennett goes on to dismantle these claims one by one. And while some of it gets technical, the basic idea is that compared to the other differences we already deal with, inter-generational differences are insignificant. What do I mean by this? Let’s start with the developmental differences. The students we teach *do* have brains that function differently. And the biggest difference is that their brains are 17-24 years old. Not only are the differences huge between their brains and our middle-aged brains, but the differences between the freshman class and the senior class in cognition and focus are probably more pronounced than any generational trend.
Second, there’s this idea about learning styles. Now learning styles research doesn’t say what most people think it does. People come to particular problems with learning styles they are comfortable with – based partly on what has worked for them before. But what we find is that successful learners adapt their learning style to the problem at hand. As I have demonstrated in my life over and over again, you can’t solve an IKEA problem ...
... with a discovery based methodology. And I have the leftover hex nuts to prove that. So what education helps students do, to a large extent, is build a toolbox of those approaches they can return to when they hit a problem. Education is partially about picking the right problem-solving style for the situation. So given the contextual nature of learning styles, talking about a generational shift in how people learn, at least, how they learn at the cognitive level, is reductive. I honestly could talk about this all day, but this is just the intro, so one more point. How many here were born between 1961 and 1981? OK, according to Howe you are Gen Xers. Now here’s what I know about Xers, by which I mean here is something I am going to make up on the spot. They have a black box approach to tech. They don’t care what happens inside their tech, when something breaks, a Baby Boomer fixes it (right?) but an Xer throws it out. Let’s do a test on my new hypothesis. All you Xers – give yourself one point mentally for each of these things you have done:
Tally them up. I want you to write your number secretly on the card in front of you. Now fold it. Now pass the cards up to the front and we’ll redistribute them randomly. [redistribution of cards] OK, now you should be holding a random card with someone else’s number on it. We did this to anonymize the process. No judgment here. When I call out a number, raise your hand if it is on your card. 0…1…2…3…4…5…6, more than 6? So here’s what’s interesting to me. Run this as a thought experiment with a bunch of Boomers answering those questions, or your parents answering those questions. Imagine the hands up. My guess is there might be more hands up in the Boomer generation. But here’s the crucial part – the differences WITHIN each generation are far greater than those between generations. Maybe the Xer average is 3.4 and the Boomer average is 4.2 – but some people here had fives and some had twos. So what is the difference between 3.4 and 4.2 going to tell you when your classroom has fives and twos and zeros and sixes? It gets worse, right? These behaviors end up being very heavily contextualized. So the person that changes hard drives isn’t necessarily more likely to fix the patio. And by the same token, the person that uses Facebook is not going to automatically *get* spreadsheets or wikis. Or even get how Facebook might be used academically. And that's not even touching on issues of class, race, and gender and how those affect the manifestation of these behaviors.
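To make the within-versus-between point concrete, here's a quick sketch. The 3.4 and 4.2 averages come from the example above; every individual score is made up for illustration:

```python
# Illustrative only: hypothetical "DIY score" samples for two generations.
# The point: the spread within each group dwarfs the gap between group means.
from statistics import mean, pstdev

xers    = [0, 2, 2, 3, 3, 4, 5, 5, 6, 4]   # made-up Gen X scores (mean 3.4)
boomers = [1, 2, 3, 4, 4, 5, 5, 6, 6, 6]   # made-up Boomer scores (mean 4.2)

gap = abs(mean(boomers) - mean(xers))          # between-group difference: 0.8
spread = (pstdev(xers) + pstdev(boomers)) / 2  # typical within-group spread: ~1.7

print(f"gap between means: {gap:.2f}")
print(f"average within-group spread: {spread:.2f}")
# The within-group spread is about twice the gap between means,
# so the group average tells you little about any individual in the room.
```

With numbers like these, knowing someone's generation predicts almost nothing about their score.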
So what are we left with? Once we remove the frosting-covered horse pills, the snakes on a plane, and the digital native stuff? When the tech is neither flair nor some hip new language we will speak to kids. What is the future of tech in education?
Here’s a chart from one of my favorite studies, a Benjamin Bloom study from 1984. Citing some evidence from others, he points out this fascinating fact. Say you have about 60 hours of instruction in the classroom, and you use a direct, conventional lecture method with some in-class practice, etc. But pretty much the type of instruction that would have been the norm in a 1980 classroom in America. Let’s say you take the 30 kids in that class and at the end of the year you rank them according to a final assessment. Oh, heck, let’s do it. Can I get 30 people to line up here? OK, now I’m going to take someone that would have ended up average in that last assessment. But we go back in time and we pull them out of that class. And instead of giving them 60 hours of classroom instruction we give them 60 hours of one-on-one tutoring. Right, replacement, not additional. OK, at the end of that semester, where does #15 rank when we test him? [Assuming the person chooses, correctly, the front of the line...] Actually that’s exactly right. #15 gets a mark equivalent to the top student in the class. He’s now quite possibly #1.
I really want that to sink in. From average student to salutatorian just through tutoring. We’re not doing anything to increase motivation, apart from teaching one on one. We are not making the class more interesting -- it’s the same material. Bloom called this the 2-sigma problem, named after the two standard deviations by which tutored students consistently outperformed their classroom-taught peers. And because Bloom, like me, was really committed to education as a democratic force, he found this an incredibly hopeful finding. In Bloom’s formulation the goal of educational practice was to find a set of practices that could effectively match that performance. So why not just tutor? Scale. That’s it. We can’t afford to hire a skilled tutor for every person, so we make do with what we got. And what we have in many situations is a 10 to 300 student classroom setting.
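For the stats-inclined, the average-to-top jump is just what two standard deviations means in percentile terms. A back-of-envelope check, assuming (as Bloom's framing roughly does) normally distributed scores:

```python
# Back-of-envelope: what "two sigma" means in class-rank terms,
# assuming an approximately normal distribution of scores.
from statistics import NormalDist

scores = NormalDist(mu=0, sigma=1)        # standardized score distribution
percentile = scores.cdf(2.0) * 100        # average student lifted by 2 sigma
print(f"{percentile:.1f}th percentile")   # prints "97.7th percentile"
```

The 98th percentile in a class of 30 is, give or take, the #1 spot in the line.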
So it's really worth thinking about what works in tutoring, and how we can bring some of its effects into classroom instruction. And here's my summary of the research. When compared to tutoring, traditional lecture-based instruction:

- Is not conversational
- Is not customized to student needs
- Does not provide immediate feedback

Let’s deal with these one by one.
First, traditional classroom instruction is not conversational. We know that one of the best ways to understand something is to talk about it a lot in one-on-one conversations. Tutors are able to say “Explain ‘x’ to me,” and as the student speaks they start to probe the student’s understanding of ‘x’. Have you ever done this thing, where you thought that you knew something, and started to speak about it?
When you have to explain something, as in a tutoring situation, you learn more deeply. As you watch someone listen to you, you listen to yourself. You start to understand where your comprehension is strong and where it is weak, and you move to correct it. We are really really wired to learn conversationally.
Secondly, class instruction is not customized to student needs. A tutor constantly asks probing questions to identify where the student needs the most help and addresses those areas. In traditional classroom instruction the lecture is set. Maybe you spend 80% of the time on elements the students already get, and 5% of the time on the piece you thought would be easy but seems to be an issue this year. There’s an interesting subpoint here about time on task. Time-on-task, as you saw in the Bloom chart, increases learning by providing more instruction per contact hour. But in studies after 1984 it was found that what was really important was not raw time, but what Aronson, Zimmerman, and Carlos called “Academic Learning Time”. As they put it: “The key to increasing student learning is to maximize the amount of academic learning time; that is, to utilize education time in ways in which students are actively engaged in learning at appropriate levels of difficulty.” In other words, a key feature of time on task is that your instruction is meeting the students where they are. It's customization. You start to see why tutoring is so powerful, right? Finally,
The feedback thing is huge. And particularly immediate feedback. There was a great randomized experiment a couple of years back where they took a bunch of students and randomly assigned the number of days in which the students would receive their grade on a project. So some students were told they would get feedback immediately, some that it would be three days, or five days, or ten days, and so on. And the result was that the closer the students expected the feedback, the better their performance. They put in more time on task, they thought harder about the flaws, they were more reflective. That’s just the expectation of feedback. The even more important fact? Learning is based largely on something Roger Schank calls expectation failure. We do something expecting one thing to happen and another thing happens instead. In the moment right after we fail, our brains are primed to process feedback. I mean, on a neurological level. And the more compressed that time scale is, the better we learn. Imagine learning not to touch the stove if we received the pain three weeks later. Crazy, right? Yet we do this to students all the time. Tutoring compresses feedback. Lecture decompresses it.
So one way of looking at this is that much of what different pedagogies have been about, from behaviorist to constructivist, is dealing with that two-sigma gap. And we’ve made progress without tech. When you have students swap papers and do peer review, you are addressing the need for immediate feedback. Well organized group work or partner work can bring in conversational approaches to knowledge creation that help deepen learning. Differentiated instruction attempts to address the wide differences in ability and skill that students in a single classroom can have. Just this morning in the OER session, you all did a think/pair/share activity -- which is huge when you look at impacting learning through conversation -- your peers tutor you, and as you tutor your peers, you deepen your own understanding. It's hitting that conversation piece, and in that case, the best technology is markers and sticky charts. That's fine. I'd rather have people concentrating on learning that does these three things without technology than having people putting more snakes on the plane with technology. But if we are going to use technology, let's focus it on these areas, and see if tech can intensify their impact. Because we actually know what works. We’ve been transforming classrooms with these approaches for twenty or thirty years. And the really neat thing is that technology fits these issues of conversation, customization, and feedback amazingly well.
So let’s talk about a simple case: the flipped classroom. I didn't know that Dick Jardine was going to cover this in an earlier presentation, but that's excellent. You're up to speed. The idea here is to figure out which part of your classroom instruction could be replaced with video or some other media-rich out-of-class activity. You either find those videos online, made by someone else, or make your own. And then you take the time that you reclaim in the classroom and let students do their “homework” in the classroom. As they hit problems, you assist, either individually, or if a lot of the students are having the same issue, then through a quick mini-review. So that’s the flip -- homework in class, in-class lecture out of class. People have called it the inverted classroom, homework in reverse, etc. And you can see how well something like this might fit video use. At Keene State, Judy works with people to do simple screencasts (basically you just film your screen and talk over it), and the reaction has been very good. Dick Jardine, as you know, does a flipped classroom with his math classes, and you can imagine the impact there. And when we look at this, what do we see? What happens to the student experience?
It becomes conversational, it becomes more customized, it provides more immediate feedback. Now let me repeat -- I’m certain a lot of you are doing that sort of instruction WITHOUT the video piece right now. Some of you might be thinking -- I don’t need video to do that, I already run a class that builds in a lot of walkaround tutoring and Just-in-Time teaching. You’re right. You don’t need the technology to do it -- what technology should do is help you do that better. I’m going to give another example here, one that’s fresh from some stuff we are doing at Keene State. And I'm sorry about the Keene State examples -- I know there are examples of this all over the system, these are just the ones I can best talk about.
How many people here know what clickers are? How many people here have taught with clickers? OK, how many have heard of Mazur’s method of Peer Instruction, or the UBC model of deliberative practice? I used to hate the idea of clickers. Crappy, bulky, inelegant. Single purpose. Students pressing buttons like Skinner-box rats, trying to get that meal pellet. I mean, it even seemed like there was a phase where they came in around the mid-aughts, 2006 or so. And then they left, and good riddance, right? But then I started to read the research on using clickers in the context of Peer Instruction and something called Deliberative Practice. And what’s going on there -- the results are amazing. First, let me review, for those of you that might not know, what these methodologies look like. In deliberative practice/peer instruction the sequence goes a little like this:

BEFORE CLASS
1. Students read material before class.
2. Students post responses to the reading before class to a course web site. If they don’t understand a piece of the reading, they let the instructor know.
3. Instructor picks topics to cover based on student feedback.

DURING CLASS
1. Instructor gives a mini-lecture on a topic the students had difficulty with.
2. Instructor asks a question, such as
3. Students vote individually via clickers. So far, so good -- and if you stop right here, this is what I thought clicker practice was like until fairly recently. I don’t know why I tuned it out, I just did. Here’s the first bit that’s interesting -- if 70-80% of the students get this, the instructor sums up the answer and moves on. In other words, the real-time response is used to customize the lesson.
Customization, right? Some people call this part a Just-in-Time Teaching approach. And here’s where it gets really interesting. If some of the students get it and others don’t, the instructor does not move on and give the answer. They say, OK, we have a disagreement here -- find someone that disagrees with your choice. You try to convince them you are right, and they try to convince you that they are right. And 90 seconds later you all vote again.
Do you see what’s going on here? We are having those conversations that are so effective. And then the last bit -- after the second vote, the instructor looks at what wrong answers were given and takes the time to explain why they are wrong and why the correct answer is right.
Instant feedback, right? This method was first implemented in the early 90s with 90s technology. It is OLD.
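In fact, the whole in-class branching logic fits in a few lines. This is my own simplification, not Mazur's exact protocol -- the 75% and 30% thresholds here are illustrative, and instructors tune them:

```python
# A sketch (my simplification, not Mazur's exact protocol) of the
# clicker decision logic: vote, then branch on the share of correct answers.
def peer_instruction_round(votes, correct, threshold=0.75):
    """votes: list of answer choices; returns the next instructional move."""
    share_correct = votes.count(correct) / len(votes)
    if share_correct >= threshold:
        return "summarize and move on"          # most students get it
    elif share_correct > 0.3:
        return "peer discussion, then revote"   # enough disagreement to be productive
    else:
        return "reteach with a mini-lecture"    # too few correct to seed discussion

# Four of six students picked D -- send them into peer discussion.
print(peer_instruction_round(["D", "D", "C", "D", "B", "D"], correct="D"))
```

The middle branch is the interesting one: the real-time tally is what tells the instructor whether conversation or re-teaching is the right next move.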
But we have to get over this idea that what’s important about tech is that we use the newest stuff. What’s important about tech is that we use it in ways supported by the last fifty years or so of educational theory. What’s important is that we address these concerns of conversationality, customization, and immediate feedback. And probably even more important, that the methodology is evidence-based. So we know this use of tech hits the three sweet spots -- which leads me to think it’s worth looking into. But what does the research say? Well, the research is fantastic. First of all, we have almost twenty years of research on this method at this point, and it’s got the typical problem of higher education educational research -- there are a lot of weak studies, good controls are rare, sample sizes are sometimes small. But still, there are very few negative findings and a whole bunch of positive findings. And in the dozen or so strong, well designed studies -- there, the findings are undeniably positive, the effect sizes are huge. The article I’m going to show you was published in the journal Science last month. Yep, it’s kind of a big deal. I can go over the methodology in Q & A if you want, but here’s the takeaway. They took a professor with 15 years of experience, one of the highest rated professors at the University of British Columbia. And he taught his traditional class. And he actually used clickers -- but for summative assessment. And in the twelfth week of the class they had one section of this class sub in a graduate student, with no previous teaching experience, and had her teach using the deliberative practice method with clickers. The background is a little more complicated than that, but trust me, the study design was not bad. So graduate student using deliberative practice is up against much beloved professor.
And to make this really clear, much beloved professor is USING CLICKERS, but he’s not using them to address our three points (conversation, feedback, customization). And he’s not using them in a recognized framework. So whose students do better at the end of the week? Anyone? Well yeah, it wouldn’t be a story otherwise, right? How much better? A lot better.
[Explain the graph. Notes: Graph is raw score on exam, effect size is Hake’s normalized gain, size is about 270 in control and experimental group, 75/53 attendance during experiment.] Does anyone see what this looks like? Remember our line of 30 people? Bloom’s two sigma jump? Actually the jump recorded here is about 2.5 standard deviations. Which is, frankly, insane. Part of it is explained by the jump in attendance on the experimental side. We’re still left with something, however, that is incredible if the result holds. What we have here is truly...
“Tutoring at scale.” These results are incredible enough that Keene State is running a couple of controlled experiments in the coming year to see if they can be duplicated in our smaller, liberal arts context. We have a cross-over design going in our Quant Lit courses, and a full-blown controlled study going in Microeconomics with two 60-seat sections taught by the same instructor. And maybe they will work out, and maybe they won’t. We’ll see. But the point here is this is the goal. This is the future. There are so many other things I wanted to talk about today that are great examples of this, but we don’t really have the time. Briefly, student-produced wikis are a great way for students to dialogue and get feedback. Matt Ragan has been working on a project at Keene where students collaboratively take notes on their biology class on a wiki -- this embraces some of the same things we are talking about here. Learning Analytics is an area which is going to be huge in the next few years. If you can't think of a question in the Q & A, just ask me about learning analytics -- incredibly neat stuff that tries to apply some of the things companies like Google and Netflix have done around customization to education. It's individuated education on steroids. I could go on, I really could. But I have to stop. I do want to take questions. In conclusion, I suppose the thing I want you to walk out of here with is this -- when you use technology, ask yourself: “How am I using this to match the performance of one-on-one tutoring? Am I putting more snakes on the plane, or am I looking for ways tech can leverage this triple threat of conversation, customization, and feedback?” If you do that -- if you look at each use you put tech towards and make sure it’s supporting those all-important goals -- you’re going to do pretty well, and you won’t need futurists like me up here cutting into your lunch.
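One housekeeping note for the stats-curious: the “Hake’s normalized gain” mentioned in the graph notes earlier is simply the gain achieved divided by the gain that was possible. A minimal sketch, with made-up percentage scores:

```python
# Hake's normalized gain: the fraction of possible improvement actually
# achieved, g = (post - pre) / (100 - pre). Scores are percentages; the
# numbers below are invented for illustration.
def normalized_gain(pre, post):
    return (post - pre) / (100 - pre)

# A group averaging 40% before and 74% after captured 57% of the
# improvement that was available to it.
print(round(normalized_gain(pre=40, post=74), 2))  # prints 0.57
```

Because it normalizes by headroom, the measure lets you compare sections that started at different pre-test levels.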
Give yourself one point for each of these below:
- Changed your own oil
- Fixed something electronic through soldering
- Wired a ceiling light/fan in your home
- Altered clothing that did not fit to fit
- Replaced a hard drive on your computer
- Built or repaired a porch or patio
- Rotated your own tires or replaced your own brakes
- Upgraded an operating system
- Replaced flooring in your house
- Repaired a kitchen appliance
"Well, the thing about DNA evidence is that that one in a million chance is, um... wait I know this -- It's called the prosecutor's fallacy. So say you have 100 people and one is guilty -- is that right? Hold on, I have to draw this out....."
What birth rate would you need among women of child-bearing age in a country to maintain zero population growth?
A) 0.9  B) 1.0  C) 2.0  D) 2.1  E) 4
Thanks to:
- Conference organizers, for the invite & a wonderful conference.
- Jon Mott, who turned me on to the Bloom work on tutoring.
- Everyone I work with, really, from whom I probably stole an idea or ten.
- Video feedback artists everywhere. Because it should be 1981 every day. Seriously.
- (No, seriously.)
- Mike Caulfield, Keene State, firstname.lastname@example.org