Measurement and confidence in OD
 

Presentation Transcript

• How do we know what works? Ilmo van der Löwe, Chief Science Officer, iOpener Institute for People and Performance
• “To measure is to know.” (Lord Kelvin, physicist)
• OD interventions must be measured:
  – Did the intervention have an impact?
  – Were the effects positive or negative?
  – What were the success factors?
• A simple example
  – Question: Does training managers create more productive workers?
  – Intervention: Train 10 managers to be better leaders.
  – Measurement plan: Measure the productivity of the managers’ direct reports before and after the training (a total of 400 people).
• Plan #1 (over time): pre-intervention measurement → training → training put into practice → post-intervention measurement
  – If direct reports are more productive at work in the end, does it mean that the training worked?
• Not necessarily...
  – Increased scores could be caused by: the economy getting better, the local team winning a championship, seasonal weather differences, a friendly new hire...
  – Decreased scores could be caused by: fear of layoffs, the coffee machine being broken, serious injuries to team members, a recession...
• Change over time
  – Outside factors other than training can change scores.
  – A mere change in scores is not evidence of efficacy; measurement must take outside factors into account.
• Control group
  – Revised plan: Include a control group that is similar to the experimental group in all respects except the training (ideally the same location, same work hours, same work, same tenure, same seniority, etc.).
  – Rationale: If outside factors influence scores, their effect should be the same for both groups, because both experienced them. If the training influences iPPQ scores, the experimental group’s scores should differ from the control group’s.
• Plan #2 (over time):
  – Experimental group: pre-intervention measurement → training → training put into practice → post-intervention measurement
  – Control group: business as usual, measured at the same two points
  – If the group scores differ, how can we tell whether the difference is significant?
• Statistical significance
  – Statistical significance is the confidence you have in your results.
  – Statistics put confidence into precise terms: “There’s only one chance in a thousand this could have happened by coincidence.” (p < 0.001)
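As a concrete illustration of putting confidence into precise terms, here is a minimal sketch of the Plan #2 comparison in Python, assuming post-intervention scores are available for both groups (all score values below are made up):

```python
from scipy import stats

# Hypothetical post-intervention productivity scores (made-up numbers)
experimental = [74, 81, 79, 85, 77, 83, 80, 78, 86, 82]
control      = [72, 75, 70, 74, 73, 71, 76, 72, 74, 73]

# Two-sample t-test: is the difference between group means significant?
t_stat, p_value = stats.ttest_ind(experimental, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p (e.g. p < 0.001) means the observed difference is
# unlikely to have happened by coincidence.
```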
• confidence = (signal / noise) × sample size
  – Signal: how big a difference will the training create between groups?
  – Noise: what other factors can create differences between groups?
  – Sample size: how many people are in each group?
  – To maximize confidence: increase intervention quality (boost the signal), minimize other differences between groups (reduce the noise), and increase the sample size.
• confidence = (signal / noise) × sample size
  – Is the sample size 10 or 400? 10 managers get trained, but 400 employees get surveyed.
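The formula on the slide is schematic; in a standard two-sample test the statistic actually grows with the square root of the sample size, but the qualitative tradeoffs are the same. A toy sketch of the slide's version, with made-up numbers:

```python
def confidence(signal, noise, n):
    """Schematic version of the slide's formula:
    confidence = (signal / noise) * sample size.
    (A real t-statistic grows with sqrt(n) rather than n,
    but the qualitative tradeoffs are identical.)"""
    return (signal / noise) * n

# Made-up numbers: a 5-point training effect against 10 points of noise.
print(confidence(signal=5, noise=10, n=10))    # 5.0   weak evidence
print(confidence(signal=5, noise=10, n=100))   # 50.0  more people, more confidence
print(confidence(signal=10, noise=5, n=10))    # 20.0  better signal, less noise
```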
• Although the employees’ productivity at work is being measured, it is the efficacy of the training intervention that matters.
• Each manager is different and will put the training into practice differently.
• Most managers will do an okay job.
• Some will be exceptionally good.
• Some will be exceptionally bad.
• Each manager creates variability in data that cannot be controlled.
• Thus, the effective sample size is 10, although 400 people are measured.
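One common way to honor the effective sample size in the analysis is to aggregate each manager's direct reports into a single manager-level mean, so the unit of analysis matches the unit of intervention. A sketch with hypothetical manager IDs and scores:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (manager_id, employee_score) pairs; in the slides'
# example there would be 10 managers and ~400 employees in total.
scores = [("m1", 78), ("m1", 82), ("m2", 71), ("m2", 69),
          ("m3", 90), ("m3", 88)]  # ...and so on

by_manager = defaultdict(list)
for manager, score in scores:
    by_manager[manager].append(score)

# One number per manager: the effective sample size is the number
# of managers, not the number of employees surveyed.
manager_means = {m: mean(s) for m, s in by_manager.items()}
print(manager_means)
```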
• Small samples are more likely to be biased. (In a sample of three, you may have two bad ones and a mediocre one, for example.)
• (Or the other way around.)
• Results should not change depending on who happens to respond; the sample should be large enough to reduce unintended biases.
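A quick simulation shows why larger samples reduce this risk: the means of small samples scatter much more widely, so any single small sample is more likely to land far from the truth. (The population parameters below are made up.)

```python
import random
from statistics import mean, stdev

random.seed(42)
# Hypothetical population of 10,000 scores, mean 75, sd 10.
population = [random.gauss(75, 10) for _ in range(10_000)]

# Draw 1,000 samples at each size and see how much their means vary.
for n in (3, 30, 300):
    sample_means = [mean(random.sample(population, n)) for _ in range(1000)]
    print(f"n={n}: spread of sample means = {stdev(sample_means):.2f}")
# The spread shrinks as n grows, so who happens to respond
# matters less and less.
```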
• Plan #3 (same design as Plan #2: experimental group trained, control group business as usual, both measured before and after):
  – To reduce the impact of manager variability, recruit a larger number of managers into both the experimental and control groups.
  – With large numbers of managers, the extremes cancel each other out.
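How large is "large enough" can be estimated with a standard power analysis. A sketch using statsmodels, where the assumed effect size (Cohen's d = 0.5) is an illustration, not a figure from the slides:

```python
from statsmodels.stats.power import TTestIndPower

# How many managers per group are needed to detect a medium-sized
# training effect (d = 0.5, an assumption) with the conventional
# 80% power at alpha = 0.05?
n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05,
                                          power=0.8)
print(round(n_per_group))  # roughly 64 managers per group
```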
• Getting close, but...
  – Even statistically significant differences between the experimental and control groups do not automatically speak for the efficacy of the training.
  – Placebo effect: belief in efficacy creates changes.
  – Hawthorne effect: the special situation and treatment involved in being measured creates changes.
• Plan #4 (over time):
  – Experimental group: pre-intervention measurement → training → training put into practice → post-intervention measurement
  – Placebo group: fake training → “training” put into practice, measured at the same points
  – Control group: business as usual, measured at the same points
• Three-way comparisons
  – Experimental group: If significantly different from the control group, outside factors did not account for the effect. If significantly different from the placebo group, the effects were unique to the training, not just to different treatment.
  – Control group: If not different from the experimental group, the training had no effect at all.
  – Placebo group: If not different from the experimental group, the training had no real effect beyond the special treatment given to the group.
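A sketch of how the three-way comparison might be run: a one-way ANOVA to check whether the three group means differ at all, followed by the pairwise tests described above (all scores are made up):

```python
from scipy import stats

# Hypothetical post-intervention scores (made-up numbers)
experimental = [82, 85, 79, 88, 84, 81, 86, 83]
placebo      = [76, 78, 74, 79, 77, 75, 78, 76]
control      = [71, 73, 70, 74, 72, 69, 73, 71]

# Omnibus test: do the three group means differ at all?
f_stat, p_overall = stats.f_oneway(experimental, placebo, control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_overall:.4f}")

# Pairwise comparisons from the slide:
for name, group in [("control", control), ("placebo", placebo)]:
    t, p = stats.ttest_ind(experimental, group)
    print(f"experimental vs {name}: p = {p:.4f}")
# experimental vs control significant -> outside factors ruled out
# experimental vs placebo significant -> effect unique to the training
```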
• Measurement in OD practice
  – Measurement is important.
  – Measurement must be carefully planned and executed.
  – The bare minimums are a proper control group and a large enough sample size.