Data confusion (how to confuse yourself and others with data analysis)
 


Presentation Transcript

  • DATA CONFUSION How to confuse yourself and others with Data Analysis
  • AGENDA FOR TODAY’S TALK
    • Good Graphs – Bad Graphs
    • The Law of Averages
    • PTBD Analysis
    • Enumerative & Analytical Problems
    • PARC Analysis
    • Wrong Methods of Analysis
  • “There are three kinds of lies: Lies, damned lies and statistics” (attributed to Benjamin Disraeli by Mark Twain)
  • GOOD GRAPHS AND BAD GRAPHS
  • DATA RELEVANCE
    • Graphs are only as good as the data they display
    • No amount of creativity can produce good graphs from dubious data
  • DATA CONTENT
    • Don’t produce graphs from very small amounts of data
    • The human brain can grasp 1, 2 or 3 numbers without a graph
  • RULES FOR PRODUCING GOOD GRAPHS
    • KEEP IT SIMPLE AND STUPID
      • Jesse Ventura
    • Tell the truth – don’t distort the data
  • GOOD GRAPHS
    • Portray information without distortion
    • Contain no distracting elements
      • No false third dimensions, irrelevant decoration, or colour (chartjunk)
    • Use an appropriate scale
    • Label axes and tick marks properly, including measurement units
    • Have a descriptive title and/or caption and legend
    • Have a low ink-to-information ratio
  • [Two slides of example charts, labelled: bad graph vs. good graph vs. even better graph]
  • GRAPHS THAT CONFUSE
  • CHART JUNK
  • GRAPHS THAT TELL A STORY
  • HISTOGRAMS
    • No meaningless gaps
    • Reasonable Choice of bins
    • Easy to choose or adjust bins
    • Good aspect ratio
    • Meaningful labels on axes
    • Appropriate labels on bin tick marks
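The bin-choice rules above can be sketched in code. A minimal example (assuming NumPy; the data are invented for illustration) showing how too few bins can hide a bimodal shape that an automatic bin rule reveals:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented bimodal data: two well-separated peaks
data = np.concatenate([rng.normal(10, 1, 200), rng.normal(15, 1, 200)])

# With only 3 bins the two peaks merge into one lump...
coarse_counts, _ = np.histogram(data, bins=3)

# ...while an automatic rule (here NumPy's "auto") uses enough bins to show both
auto_counts, auto_edges = np.histogram(data, bins="auto")

print(f"coarse bins: {len(coarse_counts)}, auto bins: {len(auto_counts)}")
```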
  • TRENDING RANDOM VARIATION [chart of random variation annotated: “Upward trend”, “Downturn”, “Rebound”, “Setback”, “Turnaround”, “Downward trend”]
  • THE LAW OF AVERAGES “If I sit in a freezer and plunge my head into a pan of boiling chip fat... on average, I’m quite comfortable.”
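The joke is worth making concrete: an average of two extremes describes neither experience. A trivial sketch (stdlib only; the temperatures are invented for illustration):

```python
from statistics import mean

# Invented temperatures: boiling chip fat vs. the inside of a freezer (deg C)
temperatures = [180.0, -18.0]

# The average describes neither experience
print(min(temperatures), mean(temperatures), max(temperatures))
```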
  • SHEWHART’S RULES FOR PRESENTATION OF DATA
    • Rule One
      • Data should always be presented in a way that preserves the evidence in the data
    • Rule Two
      • When an average, standard deviation or histogram is used to summarize data, the user should not be misled into taking action they would not take if the data were presented in a time series
  • USING THE WRONG METHODS

    Descriptive Statistics: A, B, C, D
    Variable   N    Mean    StDev  CoefVar  Minimum  Maximum
    A          20   11.950  0.102  0.85     11.83    12.08
    B          20   11.950  0.100  0.84     11.85    12.25
    C          20   11.950  0.102  0.86     11.75    12.15
    D          20   11.950  0.100  0.84     11.81    12.14

    Process:   A      B      C      D
    1          11.85  11.85  11.75  12.14
    2          11.83  11.86  11.95  12.01
    3          11.87  11.87  11.80  11.88
    4          11.84  11.87  11.94  12.07
    5          11.85  11.88  11.95  11.95
    6          11.86  11.89  12.00  11.87
    7          11.85  11.89  12.05  12.06
    8          11.85  11.90  11.85  11.94
    9          11.84  11.92  11.94  11.84
    10         11.86  11.91  11.85  12.05
    11         12.05  11.93  12.05  11.93
    12         12.06  11.93  11.85  11.83
    13         12.03  11.95  12.05  12.04
    14         12.02  11.97  11.95  11.92
    15         12.03  11.96  11.95  11.82
    16         12.04  11.99  11.95  12.03
    17         12.06  12.00  11.85  11.91
    18         12.06  12.00  12.10  11.81
    19         12.04  12.16  12.00  12.01
    20         12.08  12.25  12.15  11.81
  • NO SIGNIFICANT DIFFERENCE HERE!
  • NO DIFFERENCE?!?
  • ALWAYS CARRY OUT PTBD ANALYSIS: PLOT THE B….. DOTS!
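A sketch of why PTBD matters (stdlib only; the values are invented): two series with identical summary statistics, one stable and one that has shifted upward. Only the time order tells them apart:

```python
from statistics import mean, stdev

# Same multiset of values, different time behaviour
stable = [11.9, 12.1, 11.9, 12.1, 11.9, 12.1, 11.9, 12.1]
shifted = sorted(stable)  # runs low for 4 points, then steps up

# Summary statistics are identical...
assert mean(stable) == mean(shifted)
assert abs(stdev(stable) - stdev(shifted)) < 1e-12

# ...but plotting the dots in time order reveals the shift
print("stable halves: ", mean(stable[:4]), mean(stable[4:]))
print("shifted halves:", mean(shifted[:4]), mean(shifted[4:]))
```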
  • TYPES OF STATISTICAL STUDIES
    • Descriptive
    • Enumerative
    • Analytic
  • DESCRIPTIVE STUDY
    • Count all fish in barrel
    • Count number of goldfish
    • Proportion of goldfish applies to the fish population in this barrel and no other barrels of fish
  • ENUMERATIVE STUDY
    • Take a sample of fish from the barrel, and count the number of goldfish in the sample
    • Point estimate of the proportion of goldfish in the barrel population
    • Many statistical procedures do this
    • Cannot make any inference about any other barrels of fish
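A sketch of the enumerative calculation (stdlib only; the sample numbers are invented): a point estimate and a rough 95% interval for the proportion of goldfish, valid for this barrel only:

```python
from math import sqrt

n, goldfish = 50, 8        # invented sample from one barrel
p_hat = goldfish / n       # point estimate of the proportion in this barrel

# Rough (Wald) 95% confidence interval; says nothing about any other barrel
se = sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"estimate {p_hat:.2f}, 95% CI ({low:.3f}, {high:.3f})")
```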
  • ANALYTICAL STUDY
    • Will we get the same proportion of goldfish in the future as we got in the past?
    • An analytical study allows prediction within limits
    Fish Packing Process over Time
  • ANALYTICAL STUDY
    • Proportion of goldfish is stable over time
    • Fish packing process is predictable within limits
    • We can expect, on average, 4 goldfish per barrel, but as many as 10 and as few as 0 in any single barrel
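The “0 to 10” range is consistent with standard control-chart limits for a count: mean ± 3·√mean (a c-chart, assuming the goldfish count behaves like a Poisson count; that assumption is mine, not the slide’s). A minimal sketch:

```python
from math import sqrt

mean_count = 4.0  # average goldfish per barrel

# c-chart limits: counts can't be negative, so the lower limit is floored at zero
ucl = mean_count + 3 * sqrt(mean_count)
lcl = max(0.0, mean_count - 3 * sqrt(mean_count))

print(f"expect between {lcl:.0f} and {ucl:.0f} goldfish per barrel")
```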
  • ENUMERATIVE vs ANALYTICAL METHODS
    • Enumerative methods
      • seek to provide numeric summaries, confidence intervals, etc.
      • use significance tests, ANOVA, descriptive stats, etc., assume single, stable population
    • Analytical methods
      • seek to understand the system under study
      • use primarily graphical tools such as run charts, control charts, histograms, box plots, etc
      • in the real world, most problems are analytical
  • “Analysis of variance, t-tests, confidence intervals, and other statistical techniques taught in books,….., are inappropriate because they provide no basis for prediction and because they bury the information contained in the order of production.” (W.E. Deming, Out of the Crisis.) Traditional statistical methods have their place, but they are widely abused in the real world; when that happens, statistics do more to cloud the issue than to enlighten.
  • PARC ANALYSIS
    • Practical Accumulated Records Compilation
    • Passive Analysis (by) Regression Correlations
    • Planning After Research Completed
    • Profound Analysis Relying (on) Computers
    • Note the inverse relationship (PARC spelled backwards):
      • Continuous Recording (of) Administrative Procedures
      • Constant Repetition (of) Anecdotal Perceptions
  • PLANNING A PROCESS IMPROVEMENT STUDY
    • Why collect the data?
    • What statistical methods for analysis?
    • What data will be collected?
    • How much data do we need?
    • How will the data be measured?
    • How good is the measurement system?
    • When and where will data be collected?
    • Who will collect the data?
    • Remember:
  • GARBAGE IN – GARBAGE OUT
  • WHAT’S SIGNIFICANT?

    Two-sample T for C1 vs C2
         N    Mean    StDev  SE Mean
    A    5    13.652  0.487  0.22
    B    5    14.369  0.646  0.29
    Difference = mu (C1) - mu (C2)
    Estimate for difference: -0.716615
    95% CI for difference: (-1.551531, 0.118301)
    T-Test of difference = 0 (vs not =): T-Value = -1.98  P-Value = 0.083  DF = 8
    Both use Pooled StDev = 0.5725

    Two-sample T for C3 vs C4
         N    Mean    StDev  SE Mean
    A    200  13.510  0.501  0.035
    B    200  13.667  0.492  0.035
    Difference = mu (C3) - mu (C4)
    Estimate for difference: -0.157292
    95% CI for difference: (-0.254935, -0.059649)
    T-Test of difference = 0 (vs not =): T-Value = -3.17  P-Value = 0.002  DF = 398
    Both use Pooled StDev = 0.4967

    Mean A = 13.7, Mean B = 14.4: not significant?
    Mean A = 13.5, Mean B = 13.7: significant?
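A sketch of why the larger-looking difference can be “not significant” while the smaller one is highly significant: the pooled standard error shrinks like 1/√n. Pure stdlib; the numbers approximate those on the slide:

```python
from math import sqrt

def pooled_t(mean1, mean2, sd_pooled, n):
    """Pooled two-sample t statistic for two groups of equal size n."""
    se = sd_pooled * sqrt(2.0 / n)  # standard error shrinks like 1/sqrt(n)
    return (mean1 - mean2) / se

# Numbers approximating the slide's two comparisons
t_small = pooled_t(13.652, 14.369, 0.5725, 5)    # big gap, tiny sample
t_large = pooled_t(13.510, 13.667, 0.4967, 200)  # small gap, big sample

print(round(t_small, 2), round(t_large, 2))
```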
  • WHAT SHOULD I DO WITH OUTLIERS?
    • Data point far away from the rest of the data
    • Don’t remove outliers to make data “look good”
    • Do you know why it is different?
    • If you do, remove it. If you don’t, leave it in
    • Could have a big impact on the analysis
    • Re-run the analysis without the outlier, and compare results
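The compare-with-and-without step can be sketched directly (stdlib only; the measurements are invented):

```python
from statistics import mean, stdev

measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 15.0]  # 15.0 is the suspected outlier

# Re-run the analysis without the outlier and compare
with_outlier = (round(mean(measurements), 2), round(stdev(measurements), 2))
without_outlier = (round(mean(measurements[:-1]), 2), round(stdev(measurements[:-1]), 2))

print("with:   ", with_outlier)
print("without:", without_outlier)
```

One wild point dominates both the mean and, especially, the standard deviation.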
  • “ REGRESSION” WITH EXCEL
    • Usually means drawing an X-Y plot, fitting a straight line and coming up with an R² value.
    • As long as R² is high, everything’s hunky-dory.
    • WRONG!
  • “ REGRESSION” WITH EXCEL Relationship is clearly not linear, and should not be presented as such
  • “ REGRESSION” WITH EXCEL
    • Regression model checking – in Excel?
    • Residual plots:
      • Normally distributed
      • Random pattern when plotted vs fitted values
    [Residual plot examples: OK | variance not homogeneous | model incorrect]
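The residuals give the game away even when R² looks excellent. A sketch (assuming NumPy; the data are invented and deliberately quadratic): the straight-line fit scores a high R², yet the residuals form a systematic U-shape instead of a random scatter:

```python
import numpy as np

x = np.arange(1.0, 11.0)
y = x ** 2  # clearly non-linear data

slope, intercept = np.polyfit(x, y, 1)  # straight-line fit anyway
residuals = y - (slope * x + intercept)

# R^2 looks impressive...
r_squared = 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r_squared, 3))

# ...but the residuals are positive at both ends and negative in the middle: the model is wrong
print(np.sign(residuals))
```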
  • PITFALLS OF REGRESSION ANALYSIS
    • Non-linear relationships: look at the data first
    • Influential points: look for outliers and large residuals; plot the regression model on the original data set
    • Extrapolating: predicting beyond the range of actual data
    • Lurking variables: unknown variables that influence both the explanatory and response variable; they may make a relationship appear strong when the variables are not directly related
    • Summary data: averaging a lot of data will make the strength of a relationship appear greater
    • Assuming causation: cause and effect can only be determined by a controlled experiment; here we have simply identified that a relationship exists
    • THAT’S (WITH REASONABLE PROBABILITY) THE END FOLKS!
    • And remember,
    • With statistics, you never have to say you’re certain!
    • THANK YOU FOR YOUR ATTENTION
    • ARE THERE ANY QUESTIONS?
    • GOOD LUCK!!