Presentation about our community-driven approach for eliciting and estimating reputation, given at the Altmetrics Workshop during the WebSci 2011 conference held in Koblenz, Germany.
Reputation Management for Early Career Researchers - Micah Altman
In the rapidly changing world of research and scholarly communications, researchers are faced with a fast growing range of options to publicly disseminate, review, and discuss research—options which will affect their long-term reputation. Early career scholars must be especially thoughtful in choosing how much effort to invest in dissemination and communication, and what strategies to use.
Dr. Micah Altman briefly reviews a number of bibliometric and scientometric studies of quantitative research impact, a sampling of influential qualitative writings advising this area, and an environmental scan of emerging researcher profile systems. Based on this review, and on professional experience on dozens of review panels, Dr. Altman suggests some steps early career researchers may consider when disseminating their research and participating in public reviews and discussion.
Doctoral Symposium Slides from ACM International Conference on Interactive Surfaces and Spaces - Stacey Scott
The ISS Doctoral Symposium, held with the ACM International Conference on Interactive Surfaces and Spaces (ISS) 2016, is a forum in which PhD students can meet and discuss their work with each other and a panel of experienced Interactive Surface researchers in an informal and interactive setting.
To participate, students submit a paper that describes the problem that their thesis aims to address, their research methodology, the work they have completed thus far, and the plan for the full dissertation work. Doctoral Symposium papers are published in the ISS conference companion distributed at the conference and archived in the ACM Digital Library.
Accepted students present their work to a panel of senior researchers in the ISS field, and participate in an intensive workshop around ISS research and profession career development. They also obtain free conference registration.
Presentation by Philip Cohen on collaborative work with Micah Altman as part of the MIT CREOS research talk series. Presented in fall 2018, in Cambridge, MA.
Contemporary journal peer review is beset by a range of problems. These include (a) long delay times to publication, during which time research is inaccessible; (b) weak incentives to conduct reviews, resulting in high refusal rates as the pace of journal publication increases; (c) quality control problems that produce both errors of commission (accepting erroneous work) and omission (passing over important work, especially null findings); (d) unknown levels of bias, affecting both who is asked to perform peer review and how reviewers treat authors, and; (e) opacity in the process that impedes error correction and more systematic learning, and enables conflicts of interest to pass undetected. Proposed alternative practices attempt to address these concerns -- especially open peer review, and post-publication peer review. However, systemic solutions will require revisiting the functions of peer review in its institutional context.
Social Media in Science and Altmetrics - New Ways of Measuring Research Impact - Christoph Lutz
Social media are becoming more and more popular in scientific communication. Scientists use them for a range of purposes, from sharing publications, to blogging about their own or others’ research, conference tweeting, interpersonal communication and online participation, for example via Q&As on academic social network sites like ResearchGate and academia.edu. Moreover, many social media platforms can be used for impact measurement via so-called altmetrics. Altmetrics capture and aggregate social media metrics such as (re)tweets, Facebook likes, Mendeley bookmarks and Wikipedia cites. They can challenge or at least complement bibliometric impact measures, like the Journal Impact Factor and the h-index, which have been criticized on various grounds. This presentation first summarizes recent studies on social media adoption in science. It then focuses on altmetrics and summarizes key findings in that domain. Finally, it gives a hands-on introduction to altmetrics by demonstrating two prominent services: Impactstory and Altmetric.com.
Your Systematic Review: Getting Started - Elaine Lasda
Presentation for the University at Albany-SUNY community on best practices for conducting systematic reviews and other evidence synthesis methods.
TOWARDS A MULTI-FEATURE ENABLED APPROACH FOR OPTIMIZED EXPERT SEEKING - csandit
With the enormous growth of data, retrieving information from the Web has become more desirable and even more challenging because of Big Data issues (e.g., noise, corruption, and bad quality). Expert seeking, defined as returning a ranked list of expert researchers for a given topic, has been an active concern for the last 15 years. This kind of task comes in handy when building scientific committees, which requires identifying scholars' experience, among other factors, in order to assign them the most suitable roles. Because the Web holds so much data, there is an opportunity to collect many different kinds of expertise evidence. In this paper, we propose an expert seeking approach that specifies the most desirable features (i.e., the criteria on which a researcher is evaluated) along with techniques for estimating them. We use machine learning techniques in our system, and we aim to verify the effectiveness of incorporating influential features that go beyond publications.
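Since the paper is summarized only at this level, a minimal sketch may help make the idea concrete: rank candidates by a weighted combination of per-researcher features. The feature names and weights below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: rank candidate experts for a topic by a weighted
# combination of per-researcher features. Feature names and weights are
# illustrative, not the paper's actual feature set.

FEATURE_WEIGHTS = {
    "topic_publications": 0.4,      # publications matching the query topic
    "citations": 0.3,
    "keynotes": 0.2,                # evidence beyond publications
    "committee_memberships": 0.1,
}

def score(researcher: dict) -> float:
    """Linear combination of (normalized) feature values."""
    return sum(w * researcher.get(f, 0.0) for f, w in FEATURE_WEIGHTS.items())

def rank_experts(candidates: list) -> list:
    """Return candidates sorted from most to least expert."""
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"name": "A", "topic_publications": 0.9, "citations": 0.4, "keynotes": 0.7},
    {"name": "B", "topic_publications": 0.5, "citations": 0.9, "keynotes": 0.1},
]
for r in rank_experts(candidates):
    print(r["name"], round(score(r), 3))
```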
Research Data Sharing and Re-Use: Practical Implications for Data Citation Practices - SC CTSI at USC and CHLA
Date: Apr 4, 2018
Speaker: Hyoungjoo Park, PhD candidate, School of Information Studies, University of Wisconsin-Milwaukee, and Dietmar Wolfram, PhD
Overview: It is increasingly common for researchers to make their data freely available. This is often a requirement of funding agencies but also consistent with the principles of open science, according to which all research data should be shared and made available for reuse. Once data is reused, the researchers who have provided access to it should be acknowledged for their contributions, much as authors are recognised for their publications through citation. Hyoungjoo Park and Dietmar Wolfram have studied characteristics of data sharing, reuse, and citation and found that current data citation practices do not yet benefit data sharers, with little or no consistency in their format. More formalised citation practices might encourage more authors to make their data available for reuse.
Digital Scholar Webinar: Recruiting Research Participants Online Using Reddit - SC CTSI at USC and CHLA
This 50-minute presentation introduces r/SampleSize, a community on the website Reddit that allows for online participant recruitment without compulsory or immediate payment. It will provide an overview of best practices for recruiting participants on r/SampleSize. It will also compare r/SampleSize to Amazon Mechanical Turk (MTurk), a widely used crowdsourcing platform for recruiting research participants.
A brief introduction to Design Science for Information Systems by Paul Johannesson at KTH/Stockholm University. The presentation builds on the work by Alan Hevner and others.
Data Science: Origins, Methods, Challenges and the Future? - Cagatay Turkay
Slides for my talk at City Unrulyversity on 18.03.15 in London. I discuss the term Data Science, touch upon its origins and the types of data scientists, and include a longer discussion of the Data Science process and the challenges analysts face.
And here is the abstract of the talk:
Data Science ... the term is everywhere now: on the news, recruitment sites, technology boards. "Data scientist" has even been named the sexiest job title of the century. But what is it, really? Is it just hype, or a term that will be with us for some time?
This session will investigate where the term originated and how it relates to decades of research in established fields such as statistics, data mining, visualisation and machine learning. We will investigate how the field is evolving with the emergence of large, heterogeneous data resources, discuss the objectives, tools and challenges of data science as a practice, and look at examples from research and industrial applications.
This presentation was provided by Robert J. Sandusky of The University of Illinois at Chicago, during the NISO event "Next Generation Discovery Tools: New Tools, Aging Standards," held March 27 - March 28, 2008.
Reproducibility from an Informatics Perspective - Micah Altman
Scientific reproducibility is most often viewed through a methodological or statistical lens, and increasingly, through a computational lens. Over the last several years, I've taken part in collaborations that approach reproducibility from the perspective of informatics: as a flow of information across a lifecycle that spans collection, analysis, publication, and reuse.
These slides sketch this approach; they were presented at a recent workshop on reproducibility at the National Academy of Sciences, and at one of our Program on Information Science brown bag talks. See: informatics.mit.edu
DIY ERM (Do-It-Yourself Electronic Resources Management) for the Small Library - NASIG
Are you a lone electronic resources librarian at a small institution? Are you unable to implement an electronic resource management (ERM) system due to lack of financial or technical resources? Is your administrative information for e-resource subscriptions still recorded in a variety of physical print-outs, Word documents, Excel spreadsheets, staff wiki pages, etc., and you would like to organize it in one central location? Then this is the session for you! This program will describe the presenter's step-by-step approach to creating a homegrown electronic resources management (ERM) system using Microsoft Access 2010. The topics covered will include use-case analysis, data analysis, card sorting for database design, tables and relationships in databases, and how to use forms in Access to make the ERM database user-friendly. The presenter will also refer to free, online Access 2010 documentation that was referenced in the creation of her local ERM system. Presenter: Sarah Hartman-Caverly
Electronic Resources Manager, Delaware County Community College
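The session describes a homegrown ERM as a small relational database. A minimal sketch of the kind of schema such a system might use, in SQLite rather than Access; the table and column names are illustrative, not the presenter's actual design:

```python
# Minimal sketch of a homegrown ERM schema, using SQLite for portability.
# Tables and columns are illustrative, not the presenter's Access design.
import sqlite3

con = sqlite3.connect("erm.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS vendor (
    vendor_id    INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    support_url  TEXT
);
CREATE TABLE IF NOT EXISTS resource (
    resource_id  INTEGER PRIMARY KEY,
    title        TEXT NOT NULL,
    vendor_id    INTEGER REFERENCES vendor(vendor_id),
    admin_login  TEXT,
    renewal_date TEXT   -- ISO date of next subscription renewal
);
""")
con.execute("INSERT INTO vendor (name) VALUES (?)", ("Example Vendor",))
con.commit()
```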
Metadata and Metrics to Support Open Access - Micah Altman
This presentation, invited for a workshop on Open Access and Scholarly Books (sponsored by the Berkman Center and Knowledge Unlatched), provides a very brief overview of metadata design principles, approaches to evaluation metrics, and some relevant standards and exemplars in scholarly publishing. It is intended to provoke discussion on approaches to evaluation of the use, characteristics, and value of OA publications.
The Innovation Engine for Team Building – The EU Aristotele Approach From Ope... - ARISTOTELE
The ARISTOTELE approach was presented at the Innovation Adoption Forum for Industry and Public Sector within the 6th IEEE International Conference on Digital Ecosystem Technologies (IEEE DEST - CEE 2012). The presentation was given by Paolo Ceravolo and Ernesto Damiani (University of Milan) during the keynote "The Innovation Engine for Team Building – The EU Aristotele Approach". Learn more at http://www.aristotele-ip.eu/
Stepping out of the echo chamber - Alternative indicators of scholarly communication - Andy Tattersall
This set of slides was presented at Sheffield Hallam University and the London School of Hygiene and Tropical Medicine. It showcases the many ways academics can leverage digital scholarly communication tools to discover what is being said about their research, and how best to respond to that conversation.
Modern research metrics and new models of evaluation have risen high on the academic agenda in the last few years. In this session, two UK institutions that have adopted such metrics across their faculty will share their motivations and experiences of doing so, and explain how they are integrating these data into existing models of review and analysis.
Social CI: A Work Method and a Tool for Competitive Intelligence Networking - Comintelli
This presentation is from a webinar hosted by Comintelli and is a part of a project called CIBAS: a collaboration with the Department of Media Technology at Södertörn University in Stockholm, Sweden. A new notion called social CI is introduced, meaning competitive intelligence (CI) for the networking organization. A conceptual framework for social CI is presented that is based on a socio-technical perspective combining both social and technical aspects. The presented framework is related to notions such as Enterprise 2.0 and wikinomics. A research design prototype of a tool for collaborative CI, CoCI, is also demonstrated. CoCI is a tool that has been developed using the Social CI framework that demonstrates how CI methods and CI tools can be developed together using a socio-technical approach.
A Big Picture in Research Data Management - Carole Goble
A personal view of the big picture in Research Data Management, given at the GFBio - de.NBI Summer School 2018, "Riding the Data Life Cycle!", Braunschweig Integrated Centre of Systems Biology (BRICS), 03-07 September 2018.
IR Strangelove or: How I Learned to Stop Worrying and Love the Institutional Repository - OCLC Research
A view of the research support landscape and RLG partnership activities to help academic librarians provide better services. Given at the Spring CNI briefing in Minneapolis April 6, 2009.
By Ricky Erway, OCLC Research
In recent times, technology has been gaining prominence as a tool that facilitates democratic participation in decision making, as well as in processes of deliberation and innovation in the public sector. Civic technologies, as they are known, extend the reach of spaces for citizen participation and attracted more than 600 million dollars in investment between 2011 and 2014, capturing the interest of giants of the computing world, such as Microsoft and Google, which have also begun to bet on their development.
In this talk we explain the circumstances that allowed civic technologies to emerge, along with use cases and local (Paraguayan) examples of their application. We also review their potential and limitations, presenting in detail two civic technology prototypes that we have been working on over the last two years and that have facilitated real cases of citizen participation in democratic decision and innovation processes. The talk ends with ideas for future work.
Cristhian Parra / Jorge Saldivar
"...And suddenly, the memory revealed itself". The role of IT in supporting s...Cristhian Parra
Slides of my Ph.D. dissertation discussion, by which I became a Dr. in Information and Communication Technologies :-)
In my dissertation, I discuss the ageing phenomenon and the concept of active ageing (AA), present the state of the art of ICT for enabling AA, and describe the participatory action research approach we used to gain insights that later led to a participatory design process resulting in "Reminiscens", a tablet application for stimulating social reminiscence as a means of motivating intergenerational social interaction. The dissertation concludes with a comprehensive description, analysis and discussion of a 3-month longitudinal study that brought young volunteers together with older adults to share stories and digitize them using Reminiscens.
What's Up: Fostering Intergenerational Social Interactions - Cristhian Parra
Presentation of the paper "What's Up: Fostering Intergenerational Social Interactions" at the FoSIBLE Workshop of the 2012 COOP Conference, held in Marseille, France.
Enabling Community Participation of Senior Citizens - Cristhian Parra
Presentation of the paper "Enabling Community Participation of Senior Citizens through Participatory Design and ICT" at the Community Informatics Research Network Conference held in Prato, Italy in November 2012.
2011 06-14 cristhian-parra_u_count
1. UCount: A community-driven approach for measuring Scientific Reputation. Altmetrics Workshop / WebSci2011. Cristhian Parra, University of Trento, Italy. parra@disi.unitn.it
3. What is Scientific Reputation? Scientific reputation is the social evaluation (opinion) by the scientific community of a researcher or their contributions, given a certain criterion (scientific impact).
4. Main Goal: to understand how and why reputation is formed within and across scientific communities.
5. Motivation: Science is an Economy of Reputation [Whitley 2000]; the aim is to improve support for decision making. (Diagram: readership, affiliation, and bibliometrics as reputation inputs.)
9. Results. Surveys: the correlation between bibliometric indicators and reputation always falls in the range (-0.5, 0.5). Research position contests: the CNRS dataset shows the same result as the surveys; on the Italian dataset, all metrics predict outcomes with only around 50% effectiveness. Conclusion: bibliometrics are not a good descriptor of real reputation.
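For readers who want to reproduce this kind of check on their own data, a minimal sketch of the underlying computation: rank correlation between one indicator and survey-elicited reputation scores. The numbers are invented, not the talk's datasets.

```python
# Minimal sketch: rank correlation between a bibliometric indicator
# (e.g., h-index) and survey-elicited reputation scores. The numbers
# below are illustrative, not the CNRS or survey data from the talk.
from scipy.stats import spearmanr

h_index = [25, 12, 40, 8, 19, 33]             # one value per researcher
reputation = [4.1, 3.8, 3.9, 2.5, 4.4, 3.2]   # mean survey rating per researcher

rho, p_value = spearmanr(h_index, reputation)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# The talk reports |correlation| staying below 0.5 for all indicators tested.
```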
12. UCount: Eliciting Reputation. Done before: peer-review-based assessment (research position contests) and surveys. Our community-oriented approach: surveys plus peer review feedback.
13. UCount Surveys. Candidate lists: DBLP coauthorship graph (affinity computed via shortest path + Jaccard), ICST editorial boards, and Palsberg's top-H researchers. http://icst.org/UCount-Survey/ http://icst.org/icst-transactions/ http://www.cs.ucla.edu/~palsberg/h-number.html
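A minimal sketch of the affinity computation named on the slide (shortest path plus Jaccard over coauthor sets), assuming a toy coauthorship graph and an invented way of combining the two signals:

```python
# Sketch of the affinity idea on the slide: combine coauthorship graph
# distance (shortest path) with Jaccard overlap of coauthor sets.
# Graph contents and the exact combination are assumptions for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("parra", "casati"), ("casati", "daniel"),
                  ("parra", "birukou"), ("birukou", "casati")])

def affinity(g: nx.Graph, a: str, b: str) -> float:
    """Higher when two researchers are close and share many coauthors."""
    dist = nx.shortest_path_length(g, a, b)   # coauthorship distance
    na, nb = set(g[a]), set(g[b])             # coauthor sets
    jaccard = len(na & nb) / len(na | nb) if na | nb else 0.0
    return jaccard / (1 + dist)               # assumed combination

print(affinity(G, "parra", "daniel"))
```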
17. Reverse Engineering Approaches. Decision trees: no tree with more than 60% accuracy. Unsupervised methods: genetic algorithms applied to the CNRS dataset improved correlation by an average of 15% (running for only 5 minutes), and greatly improved correlation for the fields Research Management and Politics. Next: apply machine learning techniques, explore other techniques (e.g. neural networks), and obtain other types of features (e.g. keynotes, advisory networks). http://code.google.com/p/revengrep/ https://github.com/cdparra/melquiades/
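A minimal sketch of the decision-tree step, with placeholder features and labels (the talk's datasets are not reproduced here):

```python
# Sketch of the decision-tree reverse-engineering step: learn a tree that
# predicts elicited reputation classes from bibliometric features. Features
# and data are placeholders; the talk reports no tree exceeding 60% accuracy.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Rows: [h_index, num_publications, readership]; labels: reputation class.
X = [[25, 80, 300], [5, 10, 40], [40, 150, 90],
     [12, 30, 500], [8, 20, 60], [30, 90, 200]]
y = ["high", "low", "high", "high", "low", "high"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print("accuracy:", tree.score(X_te, y_te))
```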
18. Reverse Engineering Problem (2). Possible examples of combinations: a single feature with the highest correlation to reputation (e.g. H-Index for Databases, readership for Social Informatics); a linear combination of features; or a complex logic algorithm (e.g. a decision tree). A sketch of the linear-combination case follows.
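A minimal sketch of the second case, fitting a linear combination of features to elicited reputation scores by least squares; the data is invented:

```python
# Sketch of the "linear combination of features" hypothesis: fit weights w
# so that features @ w approximates elicited reputation. Data is invented.
import numpy as np

features = np.array([[25, 80, 300], [5, 10, 40],
                     [40, 150, 90], [12, 30, 500]], dtype=float)
reputation = np.array([4.2, 2.1, 4.5, 3.9])  # survey-elicited scores

w, *_ = np.linalg.lstsq(features, reputation, rcond=None)
print("weights:", w)
print("predicted:", features @ w)
```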
22. References
Lamont (2009). How Professors Think: Inside the Curious World of Academic Judgment.
Bollen et al. (2009). A principal component analysis of 39 scientific impact measures.
Sabater et al. (2005). Review on computational trust and reputation models.
Hirsch (2005). An index to quantify an individual's scientific research output.
Castelfranchi (2002). Social trust: A cognitive approach.
Priem et al. (2010). Alt-metrics: A manifesto.
Mann et al. (2006). Bibliometric impact measures leveraging topic analysis.
Mussi et al. (2010). Discovering Scientific Communities using Conference Network.
Nazri et al. (2007). Journal Impact Factor.
Bergstrom (2007). Eigenfactor: Measuring the value and prestige of scholarly journals.
Bar-Ilan (2008). Informetrics at the beginning of the 21st century: A review.
Jensen et al. (2009). Testing bibliometric indicators by their prediction of scientists' promotions.
Kulasegarah et al. (2010). Comparison of the h-index with standard bibliometric indicators to rank influential otolaryngologists in Europe and North America.
Katsaros et al. (2008). Evaluating Greek Departments of Computer Science/Engineering using Bibliometric Indices.
Whitley (2000). The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press.
41. First year in one slide :) Paper at CLEI, 2010. http://project.liquidpub.org/karaku http://project.liquidpub.org/resman
42-43. (Architecture diagram) Data crawling pulls from social networking services (Mendeley, CiteULike, Connotea, Delicious) and digital libraries (DBLP, Scopus, Xplore) via source dumps and the SRS API into a staging area; data loading and cleaning then populate the target DB, the SRS repository.
44. (Architecture diagram) A data acquisition layer supports both off-line and on-demand acquisition; an adapter layer connects each data source (DBLP, MAS, CiteULike, Delicious, Twitter) to storage. A sketch of the adapter idea follows.
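A minimal sketch of the adapter-layer pattern the diagram implies; class and method names are assumptions, not the project's actual code:

```python
# Sketch of the adapter-layer idea: one interface, one adapter per source
# (DBLP, CiteULike, ...), supporting both off-line (bulk) and on-demand
# acquisition. Class and method names are assumptions for illustration.
from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    @abstractmethod
    def fetch_researcher(self, name: str) -> dict:
        """On-demand acquisition for a single researcher."""

    @abstractmethod
    def bulk_dump(self) -> list:
        """Off-line acquisition of the whole source."""

class DBLPAdapter(SourceAdapter):
    def fetch_researcher(self, name: str) -> dict:
        # Placeholder: would query DBLP and normalize to the repository schema.
        return {"name": name, "source": "dblp", "publications": []}

    def bulk_dump(self) -> list:
        return []  # placeholder: would parse a DBLP XML dump

adapters = [DBLPAdapter()]
records = [a.fetch_researcher("Cristhian Parra") for a in adapters]
```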
45. How professors think [Lamont 2009]. Correlation experiments [Jensen 2009, Kulasegarah 2010, Katsaros 2008]. There is no direct study of reputation in the research evaluation process.
Editor's Notes
Good afternoon everyone. My name is Cristhian Parra, and today I will present the work we are pushing forward in Trento to first capture, and later estimate, reputation in academia.
The most basic definition of reputation goes as follows: reputation (in this case, scientific) is the social evaluation by a group of entities (the scientific community) of a person, group of persons, organization, or artifact (here, researchers and their contributions) on a certain criterion (most frequently, scientific impact). And why is this of any importance?
With this title, we want to refer to the two main elements of the proposal. The first element is "understanding", which refers to the main goal of the proposal: to understand the way reputation is formed within and across scientific communities. Very few people will doubt the reputation of people like Einstein in Physics, Turing in CS, or more recently Aho in CS (famous to us students for his Dragon Book). Their good reputation is safe, in a way. But few people will know how to precisely explain why this happens, or what exactly makes researchers hold such a good opinion of some of their peers. This leads us to the second element of our proposal, the fundamental problem we will need to solve in order to reach the goal: reverse engineering scientific reputation. How can we derive the main aspects that affect the reputation of researchers in the minds of people?
Because science is basically an Economy of Reputation, where the reward for contributing to science is fundamentally building up your reputation. And this reputation, mainly based on your scientific impact, is a multi-dimensional construct that cannot be adequately measured by any single indicator [9]. It might depend on features ranging from citation-based bibliometrics, to newer web-based readership or download counts, Twitter counts, or simply the reputation of your affiliation or collaborators. These features can reflect both objective (e.g. bibliometrics) and subjective (e.g. affiliation) criteria, and they are highly dependent on the communities; some communities might be more or less subjective than others. Researchers will understand the criteria behind their own reputation. Researchers will also understand how this reputation varies across communities. All this understanding will help to ease the pressure of the publish-or-perish culture. In general, it will improve support for decision making in evaluation processes.
Weak positive linear dependence with respect to the H-Index (with self-citations); medium positive linear dependence with respect to the number of publications.
Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (ICST)
Measure the difference in reputation across different communities; validation of results. The challenges are basically the following. First, we need to get reputation information; that is, we need to know the opinion researchers have about other researchers. Second, we need to understand what features characterize researchers or their work in computer science. Examples of features are indicators such as the total number of publications, and other information that can give an idea of the quality of a scientist's work (e.g. keynote talks, awards, grants, affiliation, etc.). Then, we need to find a way of representing and "collecting" these features; that is, we need to crawl the web, academic libraries, search engines, etc. looking for this information. Once we have all the data, the next step is to effectively "derive" and "represent" the reputation logic behind a particular ranking. And finally, the big challenge is to validate the work: to measure how much our derived reputation algorithms can actually help researchers make better decisions.
Possible examples of combinations: one single feature with the highest correlation to reputation (e.g. H-Index for Databases, readership for Social Informatics); a linear combination of features; a complex logic algorithm (e.g. a decision tree).
Now, I'm sure you are all thinking: "Why do we want to do this?" Yes, and no.
Researchers will understand the criteria behind their own reputation, allowing them to know what really matters when it comes to research impact: which indicators contribute most to the researchers' opinion of reputation. Researchers will also understand how this reputation varies across communities, giving an important input for the always difficult problem of cross-community comparisons. This understanding will rely on data sources that include traditional but also social indicators (e.g. LiquidPub, CiteULike, Mendeley, etc.), which means that our results will naturally extend metrics beyond citations, helping to identify ways to measure scientific reputation in accurate terms (i.e. closer to the real opinion of people). All this understanding will help to ease the pressure of the publish-or-perish culture and allow scientists to better focus on what is really important.
In our case, because we want to analyze reputation in the context of science, we need to understand research evaluation, because in order to come up with an opinion about a peer in science, what we do is evaluate them. In research evaluation, not only researchers are the subject of evaluation, but also their contributions (papers), the dissemination venues such as journals and conferences, and the institutions. To do so, we have been using two main methods: committees (such as those of peer review) and quantitative analysis (such as bibliometric indicators).