
Systems Research (SoCS PI Meeting, 2012-06-19)



Overview of what makes good systems research, for the 2012 NSF Social Computing Systems (SoCS) PI Meeting held at the University of Michigan, Ann Arbor (Jun 17-19, 2012)


  1. James & Friends’ Systems How To: A Guide to Systems & Applications Research
     James Landay
     Short-Dooley Professor
     Computer Science & Engineering
     University of Washington
     2012 NSF SoCS PI Meeting
     University of Michigan
     June 19, 2012
  2. What Type of Researcher Are You?
     A - Discoverer
     B - Questioner
     C - Maker
  3. “With a Little Help From My UIST Friends”
  4. Questions Answered
     What are the key attributes of strong systems work?
     What are the best techniques to evaluate systems, and when do they make sense to use?
     Which HCI techniques do not make sense in systems research?
     How do you distinguish good research from bad?
     What are your favorite systems research projects, and why?
     What makes a good social computing systems research project, and what are your favorites?
  5. Key Attributes of Strong Systems Research
     Compelling Target
     • “Solves a concrete, compelling problem with demonstrated need”
       Strong motivation for the problem, with the need grounded in users, costs, or technical issues
     • “Solves a compelling set of problems using a unifying set of principles”
       The principles tie the set of problems together
     • “Explores how people will interact with computers in the future”
       Takes into account technical & usage trends
  6. Key Attributes of Strong Systems Research
     Technical Challenge
     • “Goes beyond routine software engineering”
       Requires novel, non-trivial algorithms or configuration of components
     Deployed When Possible
     • “System is deployed & intended benefits & unexpected outcomes documented”
       Not required, but the gold standard for most systems work
  7. “Everybody’s Got Something To Evaluate Except Me And My Monkey”
  8. Evaluation Methods for Systems Research
     “It depends upon the contribution”
     “Match the type of evaluation with how you expect the system to be used”
     “Multitude of metrics to give you a holistic view”
  9. Idea Evaluation
     Overall value of the system or application
     • If extremely novel, the fact that it works plus a logical argument to explore the “boundaries of value”
     • Real-world deployment (expensive in time & effort)
  10. Technical Evaluation
      Measure key aspects from a technical perspective
      1) Toolkit → expressiveness (“Can I build it?”), efficiency (“How long will it take?”), accessibility (“Do I know how?”)
      2) Performance improvement → benchmark (error, scale, efficiency, …)
      3) Novel component → controlled lab study*
      * may not generalize to real-world conditions
  11. Effectiveness Evaluation
      1) Usability improvement → controlled lab study*
      2) Conceptual understanding → case studies with a few real external users
  12. “Honey Don’t Use That Technique”
  13. HCI Techniques That Don’t Make Sense
      • Usability tests & A/B tests: “can’t tell much about complex systems”
      • Contextual inquiry: “good for today, but can’t predict tomorrow”
      • Traditional controlled empirical studies: “not meaningful to isolate a small number of variables”
  14. “I Want You”
  15. How Do You Tell Good From Bad?
      Good
      • “Combines a lot of existing ideas together in new ways … it really is a case of the sum being greater than the parts”
      • “Potential for impact”
      • “Tries to solve an important problem using novel technology. It is creative & raises new possibilities for human-computer interaction.”
      Bad
      • “Fails to justify the problem it addresses, uses off-the-shelf technology, or does not teach anything new about how people interact with computers.”
      • “Too many concepts—true insight has a simplicity to it”
      • “A feature, but not a product or a business”
  16. “I Want You”
  17. HydroSense
      Froehlich, Larson, Fogarty, Patel
      + crucial problems; surprising how well one can do with few sensors
  18. Prefab
      Dixon & Fogarty
      + “compelling, but not obvious best way … pushes as far as can”
  19. Whyline
      Ko & Myers
      + “based on studies of how people debug today”
      + “insight that almost all questions are in the form of ‘why’ or ‘why not’”
  20. $100 Interactive Whiteboard
      Johnny Lee
      + “repurposes current tools in a creative way to solve a problem that no one would have imagined possible before he did it”
  21. What Makes a Good Social Computing System?
      • “criteria above + involves social interaction as a main feature … facilitates new or enhanced forms of collaborative participation”
      • “combines good theory with good systems building”
      • “finds new ways of combining the best of people and computers together”
      • “good answers to why people will participate at scale”
      • “a model of individual user behavior; a model of aggregated social behavior; use that model to build a novel system”
      • “make the system work in the face of malicious behavior”
  22. Soylent
      Bernstein et al.
      + “innovative applications for growing trend (crowdsourcing)”
      + “led to new ideas for how to organize people & computers”
      + “contributed a general design pattern (Find-Fix-Verify)”
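The Find-Fix-Verify pattern credited to Soylent above splits a crowdsourced edit into three independent worker stages so no single worker's lazy or eager answer dominates. A minimal sketch of the pattern's structure, assuming hypothetical worker callables (`find_workers`, `fix_workers`, `verify_workers` are stand-ins for crowd requests, not Soylent's actual API):

```python
from collections import Counter

def find_fix_verify(text, find_workers, fix_workers, verify_workers,
                    agreement=2):
    """Sketch of Find-Fix-Verify: each stage uses independent workers."""
    # Find: each worker marks problem spans (start, end); keep spans
    # flagged by at least `agreement` workers.
    votes = Counter(span for w in find_workers for span in w(text))
    spans = [s for s, n in votes.items() if n >= agreement]

    results = {}
    for span in spans:
        # Fix: a separate set of workers proposes candidate rewrites.
        candidates = [w(text, span) for w in fix_workers]
        # Verify: a third set votes; the plurality candidate wins.
        ballots = Counter(v(text, span, candidates) for v in verify_workers)
        results[span] = ballots.most_common(1)[0][0]
    return results
```

The key design choice the pattern embodies is decoupling the stages: workers who spot a problem never fix it themselves, and fixes are vetted by voters who wrote none of them.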
  23. GroupLens / MovieLens
      Riedl, Herlocker, Lam, et al.
      + “built their own community & used it to develop a long list of compelling research results”
      + “incorporates lots of social science ideas, led to innovations in collaborative filtering, and has actual deployment & lots of use”
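Collaborative filtering, named above, predicts a user's rating of an unseen item from the ratings of similar users. A minimal user-based sketch under simplifying assumptions (cosine similarity over co-rated items; not necessarily the GroupLens/MovieLens production algorithm):

```python
import math

# Ratings are nested dicts: {user: {item: rating}}.

def cosine(a, b):
    # Similarity between two users, computed over co-rated items.
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm

def predict(ratings, user, item):
    # Similarity-weighted average of other users' ratings for the item;
    # returns None when no other user has rated it.
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        w = cosine(ratings[user], theirs)
        num += w * theirs[item]
        den += abs(w)
    return num / den if den else None
```

A user whose past ratings closely track yours thus pulls the prediction toward their rating more strongly than a dissimilar user does.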
  24. Many Eyes
      Heer, Viégas, Wattenberg
      + “recognized the social nature of people’s relationships to data visualizations & provided a platform for disseminating”
      + “significant real-world impact in introducing larger audiences to a variety of visualization techniques”
  25. Thanks to Contributors
      Ben Bederson, University of Maryland
      Ed H. Chi, Google Research
      Saul Greenberg, University of Calgary
      François Guimbretière, Cornell University
      Jeffrey Heer, Stanford University
      Jason Hong, Carnegie Mellon University
      Tessa Lau, IBM Research
      Dan Olsen, Brigham Young University