Software Practice 12 breakout - Tracking usage and impact of software
Presentation Transcript

  • Software reward, citation, attribution: Tracking usage and impact
    Neil Chue Hong, Alberto Di Meglio, Josh Greenberg, Juan Lalinde, Kevin Jorissen
  • Models of attribution
    • Traditional notion of citation: authority flows from paper to paper through citation chains
      – Lots of murkiness when it comes to software.
      – Citation is one way of measuring impact, but only one.
    • Papers are completed and published before people "use" them, so impact is always downstream
      – Software can be published multiple times.
      – You write a paper so someone else can read it; you only fix bugs in the pre-print.
      – You don't maintain the paper; you publish new work and new papers.
      – We don't check papers for their dependencies and revise them without new work.
    • Software is more like a long-term research project which has many versions (akin to results)
    • If you create things of higher quality, you have to be rewarded.
      – Helping out on forums has a huge impact, but recognition is zero.
      – The reward for the software itself should be more than for the paper that describes it.
      – The impact of software should be even greater than that of a single paper, because it provides tools for doing many things.
  • Ways in which we like to be rewarded
    • Money
      – Salary
      – Prizes
    • Recognition and respect
      – Academic
      – Peers
      – Public
    • Achievement of long-term platform funding
    • Promotion and tenure
    • Being featured by others
    • Being curated
    • Chocolate cake
  • Ways in which we can measure usage and impact
    • counting downloads
    • counting citations of related papers
    • counting direct citations of software
      – the about box should give a very clear citation that can be copied and pasted
    • counting the number of licenses granted
    • putting constraints into licenses asking for updates on usage
    • logging usage through checking for updates (e.g. in Zotero)
    • web analytics techniques
    • statistics from software catalogues, marketplaces, and science gateways (e.g. nanoHUB)
    • We want to measure how people are using the software, not just when they are using it
      – collect statistics manually through site administrators registering services at their sites (could be automatic)
      – citation of software that generates data when it is used (version used, authors, size of usage)
      – number of committers, contributors, and participants; vitality of the community
      – surveys, site visits, observation of scientists in their daily routine
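One mechanism on this slide, logging usage through update checks, can be sketched in a few lines of Python. Everything here is hypothetical for illustration: the log format, the field names, and the `tally_usage` helper are not taken from any real tool mentioned above.

```python
from collections import Counter

def tally_usage(log_lines):
    """Tally update-check requests by software version.

    Each log line is assumed (hypothetically) to look like:
        2012-02-08T10:15:00 check-update version=1.4.2
    """
    counts = Counter()
    for line in log_lines:
        for field in line.split():
            if field.startswith("version="):
                counts[field.split("=", 1)[1]] += 1
    return counts

log = [
    "2012-02-08T10:15:00 check-update version=1.4.2",
    "2012-02-08T11:02:13 check-update version=1.4.2",
    "2012-02-09T09:30:41 check-update version=1.3.0",
]
print(tally_usage(log))  # Counter({'1.4.2': 2, '1.3.0': 1})
```

A server-side tally like this answers "when and with which version", but, as the slide notes, not "how" the software is being used.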
  • Changes to make it easier to track usage and impact of software
    • A formal way of tracking
      – DOIs for software? Software citations.
    • Software depositories for reproducible papers (e.g. RunMyCode)
    • Better upstream practices, e.g. always using networked code repositories
    • A button in software for "prepare my results and other stuff for publication"
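As a sketch of what a formal, copy-and-pasteable software citation could look like, here is a machine-readable citation file in the Citation File Format (a `CITATION.cff` shipped in the repository root). All names, versions, dates, and the DOI below are placeholder values, not references to real software.

```yaml
# CITATION.cff: machine-readable citation metadata in the repository root.
# All values below are placeholders for illustration.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Example Analysis Toolkit"
version: "1.4.2"
doi: "10.5281/zenodo.0000000"
date-released: "2012-02-08"
authors:
  - family-names: "Doe"
    given-names: "Jane"
```

Pairing a file like this with a DOI minted per release would address both the "formal way of tracking" and the "clear citation that can be copied and pasted" points above.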
  • What are the biggest issues?
    • changing the culture surrounding the value and importance of software when looking at career progression (stopping the self-reinforcing process)
    • how do you relatively value someone's contribution and apportion credit (articulation of roles?)
    • do we understand the core community who can judge the value and impact?
    • understanding how to cite software so it can be tracked is difficult
  • Things we'd like to understand
    • What's the model of credit for the impact of software on the work it enables (i.e. what lets you rack up points?)
      – 1 point every time a paper cites you, or 50 points if a paper that uses you is cited 50 times?
    • Is there one scientific community, or many scientific communities?
      – From which communities do people want to get recognition, and from whom within those communities?
    • Are there examples where removing the "hierarchical value/weighting" or hyperdifferentiating (extreme differentiation of roles) models of attribution work well in the world of regular scholarly communication?
    • Should there be a differential weighting of the respect that an individual gives (TripAdvisor model vs "wise ones"/Faculty of 1000)?
      – Who is important in the community for giving out "respected" rewards?
    • Can we pick a handful of relatively complex pieces of software and ask the people involved in their development to assign relative values to each other's contributions? Does this change over time?
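The first question above floats a toy scoring rule. A minimal sketch, assuming the hypothetical "1 point per direct citation, plus 1 point per citation received by each paper that uses the software" model from the slide (the function name and inputs are illustrative):

```python
def credit_points(direct_citations, using_paper_citations):
    """Toy credit model from the breakout discussion:

    1 point per paper that cites the software directly, plus
    1 point per citation received by each paper that uses it
    (so a using paper cited 50 times contributes 50 points).
    """
    return direct_citations + sum(using_paper_citations)

# A package cited directly by 3 papers; two downstream papers
# that use it have been cited 50 and 7 times respectively.
print(credit_points(3, [50, 7]))  # 60
```

The point of the sketch is only that any such rule makes the credit model explicit and debatable, which is exactly what the slide is asking for.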