41. What data am I likely to get?

Learner      Context*     Action        Result
identifier   time         attempted     mastered
skills       duration     experienced   passed
experience   location     viewed        progressed
goals        application  selected      completed
enrollment   activity     commented
             platform     shared
             team         login
                          asked
                          answered

*using "context" as a general term, not an xAPI term
Adapted from: Verbert, K., Manouselis, N., Drachsler, H., & Duval, E. (2012). Dataset-Driven Research to
Support Learning and Knowledge Analytics. Educational Technology & Society, 15 (3), 133–148
We are going to talk about data in the initial design process, and about using data to assess the effectiveness of the learning intervention and inform future design improvements.
Qualitative data captures feelings and observations that are not quantifiable.
Quantitative data is what you would normally think of as data: numbers, something that is quantifiable.
So what exactly is qualitative data?
Qualitative data looks at how you feel about something. In what we are doing, it means watching someone use an interface, watching their reaction to a piece of content or a design. It is basically trying to see how people react to what is in front of them. This is used a lot in interface design, like I showed in the opening slide.
In most cases we start to look at qualitative data during interface design. Does it work? Is there too much travel? Is there confusion on the part of the end user?
Because this is the last thing you want to see is a frustrated user after you have deployed some content. Collecting some qualitative data early in the design process from stakeholders and end users can help avoid this.
So now that we know what qualitative data is, how do we collect it?
Another option we will look at is collecting feedback on the deployed material. We can seek out feedback through interviews, where you sit down with the person who has performed the task. User interviews can be conducted one on one or in groups.
Surveys can be sent out to the users who performed the test to gather how they felt about the design that was delivered to them. There are many ways to set this up, with ratings or short-answer questions. You can also use a survey as step one and then conduct an interview to gain more perspective from the user.
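The two-step idea above (ratings first, interviews second) can be sketched in a few lines. This is a minimal illustration, not a real survey tool's API: the field names ("rating", "comment") and the threshold for a follow-up interview are invented for the example.

```python
# Sketch: summarize survey ratings, then flag low-rating responses
# for a follow-up interview. Field names are hypothetical.
from statistics import mean

responses = [
    {"rating": 4, "comment": "Navigation was clear"},
    {"rating": 2, "comment": "Got lost after module 2"},
    {"rating": 5, "comment": ""},
]

# Quantitative summary of the ratings
avg_rating = mean(r["rating"] for r in responses)

# Qualitative follow-up: low ratings with a comment worth probing
follow_up = [r["comment"] for r in responses if r["rating"] <= 2 and r["comment"]]

print(f"Average rating: {avg_rating:.1f}")
print("Flag for interview:", follow_up)
```

The point of the sketch is the workflow: the number tells you something is off, the comments tell you where to dig in the interview.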
Be careful: many studies have found that when you directly interview someone or give them a survey, they are not always truthful. How many times have you been asked about an interface and said "it's OK"? Or if the person who designed it is the one talking to you, you might tell them it is fine when really there were portions that confused or aggravated you.
Actually observing users is the best way to get real information. Observing users after deployment for feedback can be (and usually is) too late. The best and most effective time to observe users is during the prototyping phase. Let's take a deep dive into prototyping and how it is effective for qualitative analysis.
A key to gathering this observation data is to start collecting it as early as possible. The best way to do that is to build prototypes.
Next we build a physical prototype. These are great for watching users as we are going to get a first look at how they interact with a design. These are great first pieces of data that will affect how the design goes forward.
Here we see a more advanced version of the paper prototype in action. Notice the media changing as the user "clicks" through the interface. It is a great bit of feedback to see how they interact with the screen. You can watch for excessive movement, confusion about where to go next, things like that. After tweaking and coming up with a few designs, you can move on to the next step.
Once we narrow down the designs using physical prototypes we can build some wire frame prototypes, we will get to an example of this in just a few minutes.
Digital prototypes are also good because you can expand the reach of a prototype. Using Skype or a GoTo session with a webcam, you can watch the person as they work through the content. Watching their face is important because it can tell you a lot about what they are thinking. After gathering data from this wider group of users…
The prototypes are incomplete, keep it simple as long as possible. Changes are much less expensive to make to prototypes than they are to released designs.
Prototype example: Palm pilot
How many people looked at a page? Where did they go after looking at a page? We can use dashboards to review these, and they represent facts. Let's look at how we can collect these data types.
Here we see a typical quantitative view of data. It is numbers, how many people looked at a page, how many people looked at a piece of content. Quantitative data is pure numbers.
However, numbers mean almost nothing. In a lot of cases the numbers will simply show us a quantity.
To really get meaning from numbers we need to plan out what we are going to collect.
We need to figure out what change means to us. If we were just looking to see how many people accessed something, numbers are enough. But when it comes to really seeing what behavior change our intervention has made, we need to decide how we will measure that change.
We need to set some context to the numbers. Numbers are numbers, there is no meaning. By adding context to them we give them meaning.
We can add context by using qualitative data. Why did so many people click on a certain element? Why was a certain path followed through a set of modules? So we are making a correlation between the design and the data that we are collecting.
We are looking at collecting more than just completions and test scores. But if we are collecting completions and test scores, what do they actually mean? If I score 90% on a quiz, how does that affect my job performance? What path did I take to get to the quiz, and does that affect my score? Why did I miss the last 10%? Was there a job role that scored better than others? Why?
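Questions like "did one job role score better than others?" are just a group-by over the records you already have. Here is a minimal sketch with invented records; the field names ("role", "path", "score") are illustrative, not from any particular LMS export.

```python
# Sketch: average quiz score by job role, from hypothetical records.
# Each record carries context (role, path taken) alongside the score.
from collections import defaultdict
from statistics import mean

records = [
    {"learner": "a1", "role": "sales",   "path": "short", "score": 90},
    {"learner": "b2", "role": "sales",   "path": "long",  "score": 72},
    {"learner": "c3", "role": "support", "path": "long",  "score": 95},
    {"learner": "d4", "role": "support", "path": "short", "score": 88},
]

by_role = defaultdict(list)
for r in records:
    by_role[r["role"]].append(r["score"])

for role, scores in sorted(by_role.items()):
    print(role, round(mean(scores), 1))
```

The same grouping works for any contextual field, path taken, location, platform, which is exactly why collecting that context alongside the score matters.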
We are looking for confirmation that the learner got what they needed to know. And we can ask the question
Did the intervention actually work?
First Draw the Curves….
A great way to get the answers you WANT, but not..
Collect and analyze for a purpose
Actions, decisions, improvement, demonstrate value
Who are your Customers?
For Learning
For Learning Data Analysis
Exec, Mgr, end user, YOU!
Once you establish customers, you need to find their NEEDS & GOALS
Varies by audience
fxn’l level of customer
fxn’l level of decision/anticipated action
Ask lots of Questions to get to root
Anticipate what might trip you up
Outside – data availability, resource limitations
factors that limit ability to measure performance
In your control – quality/structure/consistency of data, resource allocations
Knowing
What data goals are
What limitations/boundaries are
= narrow your focus, streamline project
TRANSITION – quant & qual (more next) with a view to action
How do we collect the data? There are really two main ways to collect the data from our content: Google Analytics or the Experience API. How many people have heard of the Experience API?
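An Experience API (xAPI) record is a "statement" built around actor / verb / object. Below is a minimal sketch of one; the learner email, activity ID, and score are placeholders, and in practice the JSON is POSTed to a Learning Record Store rather than printed.

```python
# Sketch of an xAPI statement: actor / verb / object (+ optional result).
# The email and activity URLs are placeholders, not a real LRS.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/safety-module-1",
        "definition": {"name": {"en-US": "Safety Module 1"}},
    },
    "result": {"score": {"scaled": 0.9}, "success": True},
}

# Statements are serialized as JSON and sent to a Learning Record Store (LRS).
print(json.dumps(statement, indent=2))
```

Notice how much richer this is than a completion flag: the verb, the activity, and the result all travel together, which is what lets you ask the "what does 90% actually mean?" questions later.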
Transition ( after….)
Time to look @ data in the real world
Qual = rich and nuanced, but also complex, time consuming and SUBJECTIVE
QUANT – Objective, but in small chunks
Combining chunks gives context which allows for sense-making
e.g. Google Amazon
(we’re playing scrabble or lego w/data here)
QUANT & QUAL together – in things like market and med research
How well suited is your data to meet your needs
Access (12 years salesforce, 12 years data//restrictions, privacy)
Cleansing & consistency
How long to keep
Interoperability – across systems, across time
Start SMALL!
Analysis/Analytics
Avoiding scorecards
Even with great tech tools, the most important CPU is the one in your skull
When taking in new info, our brain’s interpretations hinge on experience & training
Subconscious decision about what’s important and how it connects ~schema
The more ambiguous the data, the greater our certainty, and the more tenaciously we seek and focus on confirmatory evidence
=Hard to investigate & interpret data objectively
So, What does this mean?
You start analysis from a point of deficit, from the questions you ask to data interp
Your brain WILL “theorize ahead of its data” + will seek confirmatory evidence, and not seek other interpretations
There’s a way to beat our brain at its own game
Prob: Pet Theories + Confirmatory Evidence
Sol’n….
Confirming evidence for everything = confirming evidence for nothing
It’s not diagnostic (e.g. fever when you’re ill – doesn’t tell you what you have)
Main point ACH – ask better Q’s, seek broader data and Starting Point for Better Analysis of results.
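The core move in ACH (Analysis of Competing Hypotheses) is to rate each piece of evidence against every hypothesis and then score hypotheses by the evidence that is *inconsistent* with them, rather than hunting for confirmation. A toy sketch, with hypotheses and evidence invented for illustration:

```python
# Toy ACH matrix: "C" = consistent with the hypothesis, "I" = inconsistent.
# The hypothesis with the FEWEST inconsistencies survives scrutiny best.
# All hypotheses and evidence items here are invented examples.
evidence = {
    "low quiz scores in one region": {
        "H1: bad content": "C", "H2: connectivity issues": "C"},
    "scores fine on the mobile app": {
        "H1: bad content": "I", "H2: connectivity issues": "C"},
    "helpdesk tickets about timeouts": {
        "H1: bad content": "I", "H2: connectivity issues": "C"},
}

inconsistencies = {}
for ratings in evidence.values():
    for hypothesis, mark in ratings.items():
        inconsistencies.setdefault(hypothesis, 0)
        if mark == "I":
            inconsistencies[hypothesis] += 1

# Rank: fewest inconsistencies first
for h, n in sorted(inconsistencies.items(), key=lambda kv: kv[1]):
    print(h, "- inconsistent evidence:", n)
```

Note how the first evidence item is consistent with both hypotheses and therefore decides nothing, which is exactly the "confirming evidence for everything = confirming evidence for nothing" point above; only the inconsistent items discriminate.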
This is an iterative process: did the intervention work? If not, why not? What can we do to make it better? Adjustments can be made based on the results seen in the data being generated. This feedback loop is something we are missing in a lot of current cases.