
ResEval Mashup Platform Talk at SeCo

We present a mashup platform for research evaluation. This talk was given at the 2nd Search Computing (SeCo) workshop in Como, Italy, on 27 May 2010.


ResEval Mashup Platform Talk at SeCo

  1. An Open Mashup Platform for Research Impact Evaluation
     By: Muhammad Imran
  2. EXISTING SOLUTIONS
     Web of Science, Publish or Perish Tool, DBLP
  3. PRIMARY USERS OF RESEARCH EVALUATION
     • External Entities (government agencies / funding organizations): funding of research projects; govt. policy for future investment
     • University Management (management, including committees): faculty recruitment and promotion
     • University Departments (institutional research, academic affairs, tech transfer, etc.): institute, department and individual evaluation
     • Individuals (faculty, staff, students): finding experts in some specific area
  4. DIMENSIONS IN RESEARCH EVALUATION
     • Different Things: individuals, groups, artifacts, universities, countries, …
     • Different Purposes: funding, future policy, finding experts, hiring, promotions, …
     • Different Data Sources: Google Scholar, DBLP, Scopus, ACM, the user's own data source (User's DS), …
     • Different Metrics: H-Index, G-Index, C-Index, N-Index, … (a metric computation sketch follows this slide)
     • Different Algorithms: different aggregations, different normalizations, self-citation handling, filters, …
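The metric and algorithm dimensions above reduce to fairly simple computations once per-paper citation counts are available. Below is a minimal sketch of two of the listed metrics, assuming nothing more than a plain list of citation counts; it illustrates the standard definitions and is not ResEval code.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    def g_index(citations):
        """Largest g such that the top g papers together have at least g*g citations."""
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    # Example: five papers with these citation counts give h = 3 and g = 4.
    print(h_index([10, 5, 3, 2, 1]))  # 3
    print(g_index([10, 5, 3, 2, 1]))  # 4

Variants such as the C-Index or N-Index, and algorithm choices such as self-citation removal or a different normalization, plug in the same way: transform the citation list first, then apply the ranking rule.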
  5. DIMENSIONS IN RESEARCH EVALUATION
     Different Things, Different Purposes, Different Data Sources, Different Metrics, Different Algorithms → Research Evaluation
  6. HOW IS RESEARCH EVALUATION DONE?
     • Available Data: number and value of grants awarded; number of awards (e.g. Nobel Prizes); number of patents; number of post-graduate researchers; publication counts; citation counts; bibliometric metrics; voting; reputation, reviews and ratings (Liquid Journal); and many more …
     • Peer Evaluation: expensive, time consuming
  7. UNIT OF EVALUATION
     Individuals, groups of researchers, artifacts (books, papers, articles, etc.), institutions, geographical units (city, region, country), … (a group-aggregation sketch follows this slide)
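Because the unit of evaluation ranges from an individual up to a department, institution, or country, the per-author figures listed under "Available Data" have to be rolled up. Below is a minimal sketch of two possible roll-ups for one hypothetical department; the author names and citation counts are invented for illustration, and the choice of roll-up is itself one of the "different algorithms" from the dimensions slide.

    from statistics import mean

    # Hypothetical per-author citation counts for one department (the unit of evaluation).
    department = {
        "author_a": [10, 5, 3, 2, 1],
        "author_b": [7, 7, 4],
        "author_c": [2, 1, 0],
    }

    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    # Two of many possible roll-ups for the same unit of evaluation:
    mean_h = mean(h_index(c) for c in department.values())             # average of individual h-indexes
    pooled_h = h_index([c for cs in department.values() for c in cs])  # h-index over all pooled papers

    print(mean_h, pooled_h)  # about 2.33 and 4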
  8. Complex Metric Computation Logic
     Diversity, Autonomy, Variability
  9. PROPOSED SOLUTION
     Putting everything together:
     • Common platform for various data sources (a hypothetical adapter sketch follows this slide)
     • Capability to define complex logic
     • Personalized metrics support
     • Evaluation of individuals, groups, …
     The slide sketches the flow: user input and data from the various sources feed the "mashup magic", which produces the data presentation back to the user.
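The "common platform for various data sources" can be read as an adapter layer: every source exposes the same small interface, and metrics are written against that interface only. Below is a minimal sketch under that assumption; CitationSource, DblpSource, and UserDataSource are hypothetical names used for illustration, not the actual ResEval components.

    from abc import ABC, abstractmethod

    class CitationSource(ABC):
        """Hypothetical common interface over Google Scholar, DBLP, Scopus, a user's own data, ..."""

        @abstractmethod
        def citations_per_paper(self, author):
            """Return one citation count per paper for the given author."""

    class DblpSource(CitationSource):
        def citations_per_paper(self, author):
            # A real adapter would query DBLP here; this sketch leaves it as a stub.
            raise NotImplementedError

    class UserDataSource(CitationSource):
        """Wraps data the user supplies directly (the "User's DS" case from the dimensions slide)."""
        def __init__(self, data):
            self.data = data

        def citations_per_paper(self, author):
            return self.data.get(author, [])

    def evaluate(source, author, metric):
        # A personalized metric only talks to the interface, never to a concrete source.
        return metric(source.citations_per_paper(author))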
  10. ARCHITECTURE
      Mashup UI, Mashup Exec Logic, ResEval Components, ResEval Domain Model, ResEval Mashup Language, Mashup Engine, Resource Space Management System (a pipeline sketch follows this slide)
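One way to picture the engine side of this architecture is a pipeline of components, described in the mashup language and executed in order by the Mashup Engine. The toy sketch below illustrates that idea only; the component functions and the run helper are hypothetical stand-ins, not ResEval's implementation.

    from functools import reduce

    # Hypothetical components: each takes and returns a plain dict standing in for the domain model.
    def fetch_publications(ctx):
        # A real component would call a data-source adapter; the counts are hard-coded here.
        ctx["citations"] = [10, 5, 3, 2, 1]
        return ctx

    def compute_h_index(ctx):
        ranked = sorted(ctx["citations"], reverse=True)
        ctx["h_index"] = sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)
        return ctx

    def run(pipeline, ctx=None):
        """Toy stand-in for the Mashup Engine: apply the components in order."""
        return reduce(lambda acc, component: component(acc), pipeline, ctx or {})

    print(run([fetch_publications, compute_h_index])["h_index"])  # 3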
  11. ResEval Mashup 1 (the vision)
  12. ResEval Mashup 2
  13. ResEval Mashup 3
  14. CONCLUSION & FUTURE WORK
      We proposed an architecture for personalized metric definition that is resource-oriented and provides impact evaluation for individuals and groups.
      Future work: a mashup platform for complex metric logic; components as web services.
  15. Thank you.
