Measurement and confidence in OD

Transcript of "Measurement and confidence in OD"

1. How do we know what works?
   Ilmo van der Löwe
   CHIEF SCIENCE OFFICER, iOpener Institute for People and Performance
2. “To measure is to know.”
   Lord Kelvin, PHYSICIST
3. • OD interventions must be measured
     – Did the intervention have an impact?
     – Were the effects positive or negative?
     – What were the success factors?
4. A simple example
   • Question:
     – Does training managers create more productive workers?
   • Intervention:
     – Train 10 managers to be better leaders
   • Measurement plan:
     – Measure the productivity of the managers’ direct reports before and after the training (a total of 400 people)
5. Plan #1
   TIME: Pre-intervention measurement → Training → Training put into practice → Post-intervention measurement
   • If direct reports are more productive at work in the end, does it mean that the training worked?
6. Not necessarily...
   • Increased scores could be caused by:
     – Economy getting better, local team winning a championship, seasonal weather differences, a friendly new hire...
   • Decreased scores could be caused by:
     – Fear of layoffs, the coffee machine being broken, serious injuries to team members, recession...
7. Change over time
   • Outside factors other than training can change scores
   • Mere change in scores is not evidence of efficacy
     – Measurement must take outside factors into account
8. Control group
   • Revised plan
     – Include a control group that is similar to the experimental group in all aspects except the training
       • Ideally, same location, same work hours, same work, same tenure, same seniority, etc.
   • Rationale
     – If outside factors influence scores, their effect should be the same for both groups, because both experienced them
     – If training influences iPPQ scores, the scores of the control group should differ from the experimental group
9. Plan #2
   EXPERIMENTAL GROUP: Pre-intervention measurement → Training → Training put into practice → Post-intervention measurement
   CONTROL GROUP: Pre-intervention measurement → Business as usual → Post-intervention measurement
   • If the group scores differ, how can we tell if the difference is significant?
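The logic of Plan #2 can be sketched as a difference-in-differences calculation: outside factors that hit both groups equally cancel out of the estimate. The scores below are invented for illustration; the slides give no actual numbers.

```python
# Hypothetical pre/post mean productivity scores for the two groups
# in Plan #2 (invented numbers, not from the deck).
experimental = {"pre": 62.0, "post": 71.0}
control      = {"pre": 61.0, "post": 65.0}

def did(exp, ctrl):
    """Difference-in-differences: the experimental group's change minus
    the control group's change. Any outside factor that moved both
    groups by the same amount subtracts away."""
    return (exp["post"] - exp["pre"]) - (ctrl["post"] - ctrl["pre"])

print(did(experimental, control))  # 5.0: the change attributable to training
```

Here both groups improved, but only the 5-point excess over the control group's change is credited to the training.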
10. Statistical significance
   • Statistical significance is the confidence you have in your results
   • Statistics put confidence into precise terms
     – “There’s only one chance in a thousand this could have happened by coincidence.” (p < 0.001)
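The phrase “one chance in a thousand this happened by coincidence” can be made concrete with a permutation test, which estimates a p-value by repeatedly shuffling the group labels. This is a minimal stdlib sketch; the two score lists are invented example data, not figures from the deck.

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Estimate the probability that a mean difference at least as large
    as the observed one arises by coincidence, by shuffling which scores
    belong to which group."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical productivity scores: trained group vs. control.
trained = [72, 75, 78, 74, 77, 73, 76, 79]
control = [68, 70, 69, 71, 67, 70, 72, 69]
p = permutation_p_value(trained, control)
print(p)  # a small p means the gap is very unlikely to be coincidence
```

With a clear separation like this, almost no random relabeling reproduces the observed gap, so p comes out tiny; identical groups would give p near 1.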
11. confidence = (signal / noise) × sample size
   – Signal: how big a difference will training create between groups?
   – Noise: what other factors can create differences between groups?
   – Sample size: how many people in each group?
   • To maximize confidence
     – Increase intervention quality (boost signal)
     – Minimize other differences between groups (reduce noise)
     – Increase sample size
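The slide’s heuristic can be written as a toy function to see how each term moves confidence. Note this is the deck’s own simplified formula, not a formal test statistic: in a standard two-sample t-test the sample-size factor is the square root of n, but the direction of each effect is the same.

```python
def confidence(signal, noise, sample_size):
    """The slide's heuristic: confidence grows with intervention quality
    (signal) and sample size, and shrinks with noise between groups."""
    return (signal / noise) * sample_size

# Doubling the signal, halving the noise, or quadrupling the sample
# each raise confidence, as the bullets above claim:
print(confidence(2.0, 4.0, 10))  # 5.0   baseline
print(confidence(4.0, 4.0, 10))  # 10.0  stronger intervention
print(confidence(2.0, 2.0, 10))  # 10.0  less noise
print(confidence(2.0, 4.0, 40))  # 20.0  larger sample
```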
12. confidence = (signal / noise) × sample size
   • Is the sample size 10 or 400?
     – 10 managers get trained
     – 400 employees get surveyed
13. Although the employees’ productivity at work is being measured, it is the efficacy of the training intervention that matters.
14. Each manager is different and will put the training into practice differently.
15. Most managers will do an okay job.
16. Some will be exceptionally good.
17. Some will be exceptionally bad.
18. Each manager creates variability in data that cannot be controlled.
19. Thus, the effective sample size is 10, although 400 people are measured.
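The clustering argument can be checked by simulation: when manager-level variability dominates, surveying 400 reports under 10 managers gives roughly the precision of a sample of 10. The effect sizes below (manager standard deviation 5, individual noise 1) are assumptions chosen to make the contrast visible, not estimates from real data.

```python
import random
import statistics

rng = random.Random(42)

def company_mean(n_managers, reports_per_manager):
    """Simulate one study: each manager applies the training with a
    different effectiveness (large manager-level variability), and each
    direct report adds only small individual noise."""
    scores = []
    for _ in range(n_managers):
        manager_effect = rng.gauss(0, 5)  # dominates the variance
        for _ in range(reports_per_manager):
            scores.append(manager_effect + rng.gauss(0, 1))
    return statistics.mean(scores)

# Spread of the study-wide mean across many simulated studies.
means_10x40 = [company_mean(10, 40) for _ in range(500)]   # 400 people, 10 managers
means_400x1 = [company_mean(400, 1) for _ in range(500)]   # 400 people, 400 managers
print(statistics.stdev(means_10x40))  # theory predicts about 5/sqrt(10), roughly 1.6
print(statistics.stdev(means_400x1))  # theory predicts about sqrt(26)/20, roughly 0.25
```

With 10 managers, the 400 survey responses are not independent: the study mean wobbles almost as much as an average of just 10 manager effects would, which is the slide’s point.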
20. Small samples are more likely to be biased. (In a sample of three, you may have two bad ones and a mediocre one, for example.)
21. (Or the other way around.)
22. • Results should not change depending on who happens to respond.
    • The sample should be large enough to reduce unintended biases.
23. Plan #3
    EXPERIMENTAL GROUP: Pre-intervention measurement → Training → Training put into practice → Post-intervention measurement
    CONTROL GROUP: Pre-intervention measurement → Business as usual → Post-intervention measurement
    • To reduce the impact of manager variability, recruit a larger number of managers into both the experimental and control groups
      – With large numbers of managers, extremes cancel each other out
24. Getting close, but...
    • Even statistically significant differences between the experimental and control groups do not automatically speak for the efficacy of training
      – Placebo effect: belief in efficacy creates changes
      – Hawthorne effect: the special situation and treatment of the measurement creates changes
25. Plan #4
    EXPERIMENTAL GROUP: Pre-intervention measurement → Training → Training put into practice → Post-intervention measurement
    PLACEBO GROUP: Pre-intervention measurement → Fake training → Training put into practice → Post-intervention measurement
    CONTROL GROUP: Pre-intervention measurement → Business as usual → Post-intervention measurement
26. Three-way comparisons
    – Experimental group
      • If significantly different from the control group, outside factors did not account for the effect.
      • If significantly different from the placebo group, the effects were unique to the training, not just different treatment.
    – Control group
      • If not different from the experimental group, the training had no effect at all.
    – Placebo group
      • If not different from the experimental group, the training had no real effect beyond the special treatment given to the group.
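The decision logic above can be written out as a small function over the two comparisons that involve the experimental group. This is a sketch of the slide’s reasoning, not a statistical procedure; the significance flags would come from tests like the ones earlier in the deck.

```python
def interpret(differs_from_control, differs_from_placebo):
    """Read off the slide's three-way comparison logic for the
    experimental group, given two pairwise significance results."""
    if not differs_from_control:
        return "training had no effect at all"
    if not differs_from_placebo:
        return "no real effect beyond the special treatment"
    return "effect unique to training"

print(interpret(True, True))    # effect unique to training
print(interpret(True, False))   # no real effect beyond the special treatment
print(interpret(False, False))  # training had no effect at all
```

Only the first outcome, differing from both the control and the placebo group, supports a genuine training effect.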
27. Measurement in OD practice
    • Measurement is important
    • Measurement must be carefully planned and executed
    • The bare minimums are a proper control group and a large enough sample size