SIMS Quantitative Course Lecture 1

First lecture for the Quantitative Course I taught at SIMS, UC Berkeley, in 2001



1. The art and science of measuring people
   • Reliability
   • Validity
   • Operationalizing
2. Overview of design and analysis
   • Posing a question
   • Conceptualizing the question
   • Operationalizing the related concepts
   • Identifying independent, dependent, and controlled variables
   • Developing the hypothesis
3. Choosing the testing method
   • Which method is appropriate for the current situation (experiment, observation, survey, etc.)?
     – The choice of method is a trade-off between control and realism.
   • Experimental, quasi-experimental, and non-experimental methods
4. Collecting data
   • The art of finding and recruiting participants
   • A practical view of randomization: randomization and pseudo-randomization
   • Random selection and random assignment
   • Practical issues of sample size and statistical power
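To make the distinction between random selection and random assignment concrete, here is a minimal Python sketch; the population, sample size, and group labels are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Random SELECTION: draw a sample from a larger population.
population = [f"person_{i}" for i in range(100)]
sample = random.sample(population, 20)          # 20 distinct people

# Random ASSIGNMENT: split the selected sample into conditions.
shuffled = list(sample)
random.shuffle(shuffled)
treatment, control = shuffled[:10], shuffled[10:]
```

Random selection supports generalizing from the sample to the population; random assignment supports causal inference between conditions. A study can have either, both, or neither.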
5. Analyzing the data: basic statistics
   • Levels of measurement: nominal, ordinal, interval, and ratio
   • Mean, median, standard deviation
   • Testing mean differences
   • Significance levels and what they mean
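As a sketch of these basics, the following Python computes the descriptive statistics and an equal-variance two-sample t statistic by hand; the two groups of scores are made up for illustration:

```python
import math
import statistics

a = [4.1, 3.8, 5.0, 4.6, 4.3, 4.9]   # hypothetical scores, group A
b = [5.2, 5.8, 4.9, 6.1, 5.5, 5.9]   # hypothetical scores, group B

mean_a, mean_b = statistics.mean(a), statistics.mean(b)
median_a = statistics.median(a)
sd_a, sd_b = statistics.stdev(a), statistics.stdev(b)   # sample SD (n - 1)

# Pooled-variance t statistic for testing the mean difference:
n1, n2 = len(a), len(b)
pooled_var = ((n1 - 1) * sd_a**2 + (n2 - 1) * sd_b**2) / (n1 + n2 - 2)
t = (mean_a - mean_b) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
```

The resulting t is compared against the t distribution with n1 + n2 - 2 degrees of freedom; the significance level is the probability of a t at least this extreme if the null hypothesis of equal means were true.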
6. Analysis of experimental designs: single-factor experiments
   • Statistical hypothesis testing
   • Estimates of experimental error
   • Estimates of treatment effects
   • Evaluation of the null hypothesis
   • Various ANOVA models
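A minimal single-factor ANOVA can be computed directly from the definitions. This sketch, with invented scores for three treatment groups, shows how the F ratio arises as the treatment-effect estimate divided by the experimental-error estimate:

```python
import statistics

# Three treatment groups (hypothetical scores, equal group sizes).
groups = [
    [6, 8, 4, 5, 3, 4],
    [8, 12, 9, 11, 6, 8],
    [13, 9, 11, 8, 7, 12],
]

k = len(groups)                          # number of treatments
n = sum(len(g) for g in groups)          # total observations
grand_mean = sum(sum(g) for g in groups) / n

# Between-groups sum of squares: estimate of treatment effects.
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-groups sum of squares: estimate of experimental error.
ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)

ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)
f_ratio = ms_between / ms_within  # compare to the F(k-1, n-k) critical value
```

Under the null hypothesis both mean squares estimate the same error variance, so F near 1 is consistent with no treatment effect; a large F is evidence against the null.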
7. Multi-factor experiments
   • Advantages of the factorial design
   • Interaction effects
   • The power of within-subjects designs (reduction of variance)
   • The two-factor experiment
   • Higher-order factorial designs
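An interaction effect means the effect of one factor depends on the level of the other. With hypothetical cell means for a 2x2 factorial design, this can be seen by comparing simple effects:

```python
# Cell means for a hypothetical 2x2 factorial (Factor A crossed with Factor B).
cell = {
    ("A1", "B1"): 10.0, ("A1", "B2"): 12.0,
    ("A2", "B1"): 11.0, ("A2", "B2"): 19.0,
}

# Simple effect of B at each level of A:
effect_b_at_a1 = cell[("A1", "B2")] - cell[("A1", "B1")]  # +2.0
effect_b_at_a2 = cell[("A2", "B2")] - cell[("A2", "B1")]  # +8.0

# If the simple effects differ, A and B interact; their difference
# indexes the interaction in this 2x2 case.
interaction = effect_b_at_a2 - effect_b_at_a1
```

Because B's effect is 2 points under A1 but 8 points under A2, reporting only the main effect of B would be misleading here; this is exactly what a factorial design can detect and a set of separate single-factor experiments cannot.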
8. Analysis of non-experimental studies
   • Statistical methods for analyzing correlational data
   • Correlations, scatter plots, partial correlations
   • Multiple regression
   • Introduction to factor analysis, cluster analysis, and multidimensional scaling
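A Pearson correlation and a simple one-predictor least-squares regression can be computed from the same sums of squares; the data below are invented for illustration:

```python
import math

x = [12, 20, 6, 25, 15, 9, 18]   # e.g., page download time in seconds (hypothetical)
y = [3, 6, 2, 8, 5, 3, 6]        # e.g., number of abandoned visits (hypothetical)

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

r = sxy / math.sqrt(sxx * syy)   # Pearson correlation coefficient
slope = sxy / sxx                # least-squares fit of y = intercept + slope * x
intercept = my - slope * mx
```

A high r here would still not license a causal claim, which is the core limitation of correlational (non-experimental) data; multiple regression and partial correlation extend the same machinery to statistically control for additional variables.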
9. Surveys and questionnaires
   • The design of surveys and questionnaires
   • How to frame questions
   • Kinds of scales: Likert, semantic differential, etc.
   • Analyzing survey data: which items are useful; item response theory
   • Forming a scale to measure an attribute (e.g., satisfaction); reliability and validity of the scale
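One standard index of a scale's reliability is Cronbach's alpha (internal consistency). A minimal computation, using invented 1-5 Likert responses to a hypothetical four-item satisfaction scale:

```python
import statistics

# Hypothetical responses: rows = respondents, columns = 4 Likert items (1-5).
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(responses[0])                                  # number of items
item_vars = [statistics.variance(col) for col in zip(*responses)]
totals = [sum(row) for row in responses]               # each respondent's scale score
alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))
```

Alpha near 1 indicates the items vary together (consistent measurement of one attribute); note that high reliability still says nothing about validity, i.e., whether the scale measures satisfaction rather than something else.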
10. Measuring individual differences
   • How to test for individual differences among users
   • Kinds of individual-difference variables:
     – demographic: age, gender, etc.
     – situational: motivation, interest, fatigue
     – cognitive: memory, cognitive style, etc.
     – personality: internal/external locus of control
   • How to analyze existing data to identify individual differences, and how to design studies to test for them
11. Frenzied Shopping: Obstacles to purchase, and the perception of download times
   • A study on e-commerce conducted by Jared Spool
   • A critical analysis and illustration of alternative methods of examining this question
12. Frenzied shopping
   • Create a realistic scenario: put the person in a present situation and get them motivated
   • Counted obstacles to purchase
     – Advantages of the measure:
       • concrete: people agree about the measure
       • valid: a good measure of the actual e-commerce experience
     – Disadvantages of the measure:
       • not reliable, since the situation is not structured
       • data analysis problems
13. Results
   • Found more than 200 obstacles to purchase
   • The more users tested, the greater the number of problems found
   • What's wrong with each test discovering hundreds of problems?
     – The client has limited resources and needs to focus on solving the important (most common / most catastrophic) problems.
14. More results: perception of download times
   • How long will users wait for pages to download?
     – Should web developers spend their time making pages faster?
   • Method: users were asked to rate the perceived speed of pages after they had completed the task.

     Site         Avg. download time   Rated speed
     Amazon.com   30 sec               Fastest
     About.com    8 sec                Slowest
15. So what do download times relate to?
   • Perceived speed was correlated only with the success or failure of the shopping task.
     – Amazon.com was judged to be faster than About.com even though About.com actually downloaded much faster.
   • The result is a foregone conclusion given the task.
   • Problems with the method:
     – Memory issues: users were asked for ratings at the end of their experience with all the sites, introducing retrospective memory problems.
     – Better to ask someone while they are waiting for a page to download whether it is taking too long!
16. Timeline issues
   • The rated speed no longer reflects the browsing and searching part of the experience.
   • We cannot infer that download speeds are unimportant; we can only infer that the perception of download speed can be influenced by other aspects of the site.
17. Survey: Are people bothered by long download times?
   • Sample questions:
     – How often do you leave a site without waiting for the first page to download?
       • 0-5% of the time
       • 5-10% of the time
       • 10% of the time or more
     – In your opinion, how important are the following web-site characteristics? Rate their relative importance.
       • download speed
       • site content
       • site interactivity
   • Possibilities: task-based surveys
   • Low on control, high on realism; difficult to make causal inferences; sampling issues
18. Observation: Is users' site usage affected by site speed?
   • Method: choose a few sites that are similar in their general purpose (e.g., shopping sites). Ask subjects to browse or complete a task on all the sites.
   • Measurement: watch participants for signs of frustration or satisfaction with the speed of each site.
   • Low on control, high on realism; difficult to make causal inferences
19. Experiment: Relationship of perceived download times to actual download times
   • Method: create versions of a site with different download speeds. Ask users to complete tasks on the sites (within subjects or between subjects). Make some tasks interesting and some boring. Give users less time than they need to complete all the tasks.
   • Measurement: how many interesting and how many boring tasks did users choose to complete? Relate that to the download speed of the site.
   • High on control, low on realism; easier to make causal inferences
20. Server logs: Do people leave sites while waiting for slow pages to download?
   • Method: find similar sites with different speeds, and analyze the server logs for those sites.
   • Measurement: analyze the logs to find the percentage of people who do not wait for a page to download.
   • Low on control, high on realism; difficult to make causal inferences
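The log-analysis measurement could be sketched as follows; the log format, session IDs, and event names here are entirely hypothetical, since real server logs vary widely:

```python
# Hypothetical log lines: timestamp, session id, event, URL.
log_lines = [
    "12:00:01 s1 request /home",
    "12:00:09 s1 render_complete /home",
    "12:00:02 s2 request /home",
    # session s2 never finishes loading the page -> counted as abandoned
    "12:00:05 s3 request /home",
    "12:00:06 s3 render_complete /home",
]

requested, completed = set(), set()
for line in log_lines:
    _, session, event, _ = line.split()
    if event == "request":
        requested.add(session)
    elif event == "render_complete":
        completed.add(session)

# Fraction of sessions that requested a page but never saw it finish loading.
abandon_rate = len(requested - completed) / len(requested)
```

In practice the analysis would also need a timeout rule (a user may abandon after the page renders), which is one reason log data is high on realism but hard to use for causal inference.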
21. Perception of download speed and all the ways to study it...
22. The state of the art
   • What usability methods are currently prevalent and accepted in the field?
     – CUE-2
23. Comparative Usability Evaluation (CUE) 2 (Molich et al., 1999)
   • Purpose: too much emphasis on one-way mirrors and scan converters; little knowledge of REAL usability testing procedures. "Who checks the checker?"
   • Method: nine teams tested the usability of a web site
     – seven professional teams
     – two student teams
     – four European teams, five US teams
   • Test web site: www.hotmail.com
24. Problems found in the Comparative Usability Evaluation
25. Problem found by seven teams
   • During the registration process, Hotmail users are asked to provide a password hint question; the corresponding text box must be filled in.
   • Most users did not understand the meaning of the password hint question. Some entered their Hotmail password in the hint-question text box.
26. Characteristics of the tests
27. Problems by team
28. What factors predict the number of problems found, and the number of common (non-exclusive) problems?
29. Inferences from the CUE study
   • Much disagreement about methods of usability testing:
     – How to test?
     – Who should test?
     – What methods to use?
     – How many testers to have?
     – How many users to have?
