You Want Me to Measure What?

Why limit ourselves to traditional quantitative metrics like visitor count, page weight, conversion, and revenue when there is so much valuable qualitative data available? We can turn qualitative data into quantitative data and use the same rigorous analysis techniques to help lead us to better designs, products, services, and experiences.

Transcript

  • 1. You Want Me to Measure What? Turning Qualitative Observations Into Quantitative Data
  • 2. Try not to rely on single sources of data. Why? • Minimize bias and error • Multifaceted data provide better insights into complex, multifaceted problems • We can develop better perspectives about users, needs, goals, expectations, issues, problems, etc. with more data sources Triangulation
  • 3. Common UX Questions
  • 4. These are great questions that help us craft well-designed experiences, but how do we measure them? Many of our most common questions are QUALITATIVE – things that are more often subjectively described than quantitatively, objectively measured. How do we measure THAT?
  • 5. • Sense of Place - location in time, space, and meaning, context • Enjoyment - pleasure, positive acceptance, fun • Fit - competence, aptitude, skill / ability, motivation • Adaptability - flexibility, adjustment, ease of change • Ownership - responsibility, locus of control, obligation • Confidence - trust, credibility, certainty, moral consistency • Intuitive - easy to learn, easy to understand, easy to remember How do we measure THESE?
  • 6. Soft Factors, Hard Data Qualitative observations can be turned into quantitative data.
  • 7. Logical Positivism – all meaningful research is either analytic, verifiable, or confirmable by observation and experiment; if it cannot be observed and measured directly, then it cannot be empirically studied and is therefore meaningless. Induction – start with observational data, identify patterns, and develop hypotheses that may be tested and developed into a theory. Deduction – start with a theory, generate a hypothesis, test it and gather data, and analyze the data to evaluate the theory. Learning from Social Sciences
  • 8. 1. Descriptive • How many people are registered to vote? 2. Correlational • What is the relationship between voting behavior and social attitudes? 3. Causal • People who vote more often are more likely to be socially ________ because…. Research may be:
  • 9. Descriptive research tells us nothing about why things are the way they are or if they are even related. NOTHING. Correlation is NOT causality. Just because there is an apparent relationship does not mean it is cause and effect. The “Third Variable Problem” must always be considered. Causal research requires strict experimental methods, careful control of variables, and must always consider “Plausible Rival Hypotheses.” Caution
  • 10. 1. Theoretical – develop, explore, and test theories 2. Empirical – based on observable, measurable data 3. Nomothetic – seeks rules or laws that pertain to the general case 4. Probabilistic – inferences and outcomes have probabilities 5. Causal – seeks an underlying cause and effect relationship Experimental Research Is:
  • 11. Not all research is experimental – much of our work is inductive – we do it simply to gather more data. Much of our UX and usability research is just gathering descriptive data so that we know WHEN people are having trouble and WHERE, but we often do not quantify the extent of the difficulty. Not quantifying the extent of a problem sometimes makes it difficult for us to objectively rank problems and develop a plan for tackling them. “This seems like a bad problem” is a subjective evaluation. Not All Research is Experimental
  • 12. Independent Variables – the factors we control and manipulate Dependent Variables – the factors we measure to see how the independent variables have affected them Control Groups – we need a group of participants who experience no manipulated variables to serve as a baseline for comparison Statistically and scientifically, we never prove anything. We merely fail to disprove. Why? Four Key Things
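
A minimal sketch (not from the deck) of how these pieces fit together in practice, assuming a hypothetical study where the independent variable is which of two designs a participant sees, the dependent variable is task completion time, and scipy is used for the comparison; the timing numbers are simulated placeholders purely to make the example runnable:

    import random
    from scipy import stats

    # Hypothetical participant pool.
    participants = [f"p{i:02d}" for i in range(40)]

    # Independent variable: random assignment to a control group
    # (current design) or a treatment group (new design).
    random.shuffle(participants)
    control, treatment = participants[:20], participants[20:]

    # Dependent variable: task completion time in seconds
    # (simulated here; real values would come from sessions).
    control_times = [random.gauss(75, 12) for _ in control]
    treatment_times = [random.gauss(68, 12) for _ in treatment]

    # Compare the groups. A small p-value does not prove the new design
    # is faster -- it only means we failed to disprove a difference.
    t_stat, p_value = stats.ttest_ind(control_times, treatment_times)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")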
  • 13. • Representative samples reflect the characteristics of the population • Random Samples • Simple Random Sampling • Stratified Random Sampling • Proximal Similarity Model – there is a similarity gradient • Longitudinal Studies • Cross-Sectional Studies Sampling
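
As a rough illustration (not part of the slides), simple and stratified random sampling can be sketched with the standard library alone; the population and its "new" vs. "returning" segments below are hypothetical:

    import random
    from collections import defaultdict

    # Hypothetical population: (user_id, segment) pairs.
    population = [(i, "new" if i % 3 == 0 else "returning") for i in range(300)]

    # Simple random sampling: every member has an equal chance of selection.
    simple_sample = random.sample(population, 30)

    # Stratified random sampling: sample within each segment so the sample
    # mirrors the population's segment proportions.
    strata = defaultdict(list)
    for person in population:
        strata[person[1]].append(person)

    stratified_sample = []
    for segment, members in strata.items():
        k = round(30 * len(members) / len(population))  # proportional allocation
        stratified_sample.extend(random.sample(members, k))

    print(len(simple_sample), len(stratified_sample))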
  • 14. Define Before Measure First, we need to know what we are trying to quantify.
  • 15. Something is defined in terms of the observable data that are used to measure it. Identifies one or more specific, observable events or conditions that any other researcher can independently measure. Operational Definition
  • 16. 1. We watch people. • Direct observation – contextual inquiry, ethnography, usability testing • Indirect observation – web analytics, evidence, behavioral records / traces 2. We ask people. • Questionnaires and surveys • Interviews Two Basic Methods
  • 17. How do we measure the behaviors we observe? 1. Frequency 2. Duration 3. Intensity Observation
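
One way to make frequency, duration, and intensity concrete, assuming a hypothetical usability-session log in which each coded observation has a behavior label, start/end times, and an observer-assigned severity:

    from collections import Counter

    # Hypothetical coded observations from one session:
    # (behavior, start_seconds, end_seconds, severity 1-5).
    observations = [
        ("hesitation", 12, 19, 2),
        ("wrong_menu", 40, 55, 3),
        ("hesitation", 90, 96, 1),
        ("error_message", 120, 121, 4),
    ]

    # Frequency: how often each behavior occurred.
    frequency = Counter(label for label, *_ in observations)

    # Duration: total seconds spent in each behavior.
    duration = {}
    for label, start, end, _ in observations:
        duration[label] = duration.get(label, 0) + (end - start)

    # Intensity: mean observer-rated severity per behavior.
    severity_sum = Counter()
    for label, _, _, severity in observations:
        severity_sum[label] += severity
    intensity = {label: severity_sum[label] / frequency[label] for label in frequency}

    print(frequency, duration, intensity)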
  • 18. How do we ask people about their experiences? 1. Open-ended vs. Closed-ended interview questions 2. Forced choice (structured inputs) vs. Free response survey questions 3. Avoid double-barreled questions 4. Avoid ambiguity and bias in questions Questions
  • 19. Converting Data How do we turn what we saw and heard into data?
  • 20. The construction of a measurement instrument that associates qualitative constructs with metric (measured) units. 1. Nominal – categorical data 2. Ordinal – rank order 3. Interval – equal intervals between units 4. Ratio – interval scale with an absolute zero Measurement Scales
  • 21. Scales may have one or more dimensions: 1. One Dimension (e.g., Happiness (Low to High) (X)) 2. Two Dimensions (e.g., Verbal and Written Communication Skills (XY)) 3. Three Dimensions (e.g., Satisfaction, Completion Rate, and Referral Probability (XYZ)) Scale data are numerically coded to facilitate statistical analysis. Measurement Dimensions
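
A small sketch of how qualitative responses might be numerically coded for analysis; the category and Likert mappings and the survey responses below are hypothetical:

    # Nominal: categories have no order; the codes are just labels.
    device_codes = {"phone": 1, "tablet": 2, "desktop": 3}

    # Ordinal: rank order matters, but equal intervals are not assumed.
    likert_codes = {
        "strongly disagree": 1,
        "disagree": 2,
        "neutral": 3,
        "agree": 4,
        "strongly agree": 5,
    }

    # Hypothetical raw survey responses.
    responses = [
        {"device": "phone", "felt_confident": "agree"},
        {"device": "desktop", "felt_confident": "strongly agree"},
        {"device": "phone", "felt_confident": "neutral"},
    ]

    # Coded rows ready for analysis; ratio-scale data such as task time in
    # seconds would be recorded directly, since it has a true zero.
    coded = [
        (device_codes[r["device"]], likert_codes[r["felt_confident"]])
        for r in responses
    ]
    print(coded)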
  • 22. What are some common qualitative factors you have been asked to measure or assess? What observable behaviors, conditions, or outcomes might be related to these factors? For example: Is the interface or device intuitive? Practice
  • 23. Definition • How can we operationally define intuitive? • What observable behaviors can we quantify in terms of number, duration, and intensity? • What questions can we ask to get additional data? Practice
  • 24. Scales • What are the measurement scales for each of these observable factors? • What are the categories, numerical values, and/or ranges for each scale? Dimensions • How many dimensions are necessary to describe the operational definition? Practice
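
One possible (entirely hypothetical) answer to the practice exercise, expressed as code: "intuitive" is operationally defined by three observable measures, each on its own scale and dimension, summarized separately rather than forced into a single score:

    # Hypothetical operational definition of "intuitive" for one task:
    #   - first_try_success: completed without help? (nominal)
    #   - time_to_first_action_s: seconds before the first meaningful action (ratio)
    #   - perceived_ease: 1-5 post-task rating (ordinal)
    participant_records = [
        {"first_try_success": True,  "time_to_first_action_s": 4.2,  "perceived_ease": 5},
        {"first_try_success": False, "time_to_first_action_s": 11.8, "perceived_ease": 2},
        {"first_try_success": True,  "time_to_first_action_s": 6.1,  "perceived_ease": 4},
    ]

    # One summary per dimension.
    n = len(participant_records)
    success_rate = sum(r["first_try_success"] for r in participant_records) / n
    mean_delay = sum(r["time_to_first_action_s"] for r in participant_records) / n
    median_ease = sorted(r["perceived_ease"] for r in participant_records)[n // 2]

    print(success_rate, mean_delay, median_ease)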
  • 25. Time to Analyze Now we need to find meaning in the data.
  • 26. Validity – are we really measuring what we claim to be measuring? Just because it looks and sounds good (face validity) doesn’t mean it IS good (content, criterion, and construct validity). For example, “aerodynamics” seems to be a valid knowledge area for airplane pilots, but “positive relationship with parents” does not (though it may be connected to emotional stability, which IS relevant for being a good pilot). Just because something appears relevant does not mean it is. Data Integrity
  • 27. Reliability – the ability of an instrument to produce the same or similar results with repeated administrations. • Inter-rater reliability • Test / Re-Test reliability • Parallel Forms • Internal Consistency Data Integrity
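
Two of these reliability checks sketched with numpy (the ratings are hypothetical; Cronbach's alpha follows the standard internal-consistency formula):

    import numpy as np

    # Test / re-test reliability: correlate scores from two administrations
    # of the same instrument.
    first_run = np.array([4, 3, 5, 2, 4, 5])
    second_run = np.array([4, 3, 4, 2, 5, 5])
    test_retest_r = np.corrcoef(first_run, second_run)[0, 1]

    # Internal consistency via Cronbach's alpha:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    items = np.array([
        [4, 5, 4],   # each row: one respondent's answers to k = 3 related items
        [2, 2, 3],
        [5, 4, 5],
        [3, 3, 3],
    ])
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    print(round(test_retest_r, 2), round(alpha, 2))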
  • 28. All valid instruments must be reliable, but not all reliable instruments are valid. Remember
  • 29. Descriptive Statistics – simple descriptions or summaries about the data derived from the sample. • Distribution • Measures of central tendency (mean, median, mode) • Dispersion (range, standard deviation, variance) • Correlation We cannot determine cause and effect from descriptive statistics. Statistics
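
The Python standard library covers these basics; the task times and ratings below are hypothetical:

    import statistics

    task_times = [42, 51, 38, 60, 45, 45, 90]   # hypothetical seconds per task
    satisfaction = [4, 3, 5, 2, 4, 4, 1]        # hypothetical 1-5 ratings

    print("mean:", statistics.mean(task_times))
    print("median:", statistics.median(task_times))
    print("mode:", statistics.mode(task_times))
    print("range:", max(task_times) - min(task_times))
    print("std dev:", round(statistics.stdev(task_times), 1))
    # statistics.correlation requires Python 3.10+
    print("correlation:", round(statistics.correlation(task_times, satisfaction), 2))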
  • 30. Inferential Statistics – seeking conclusions that extend beyond the data alone by making judgments about the probability of the observations. There are MANY statistical tests. If you’re interested, go take some classes. It’s worth it. Experimental methods and inferential statistics are necessary to establish causality. Statistics
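
As a taste of what those classes cover, here is a single inferential test on hypothetical paired observations, asking how likely a correlation this strong would be if the true population relationship were zero; scipy does the heavy lifting:

    from scipy import stats

    # Hypothetical paired observations: errors made and reported frustration (1-7).
    errors = [0, 1, 1, 2, 3, 3, 4, 5]
    frustration = [1, 2, 1, 3, 4, 5, 5, 6]

    r, p_value = stats.pearsonr(errors, frustration)
    print(f"r = {r:.2f}, p = {p_value:.3f}")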
  • 31. Ecological Fallacy – Occurs when you make conclusions about individuals based on analyses of group data. (We do not all behave and believe similarly.) “You’re a designer, therefore you are fashionable.” Exception Fallacy – Occurs when you make conclusions about a group based on exceptional individual cases. (This is the basis of much bigotry and prejudice.) “All designers wear black, because Steve Jobs did.” Fallacies
  • 32. Thanks! I’m happy to answer your questions.
