We present a mashup platform for research evaluation. This talk was given at the 2nd Search Computing workshop in Como, Italy, on 27 May 2010.
ResEval: Resource-oriented Research Impact Evaluation platform, by Muhammad Imran
This document proposes a new open and resource-oriented platform for research impact evaluation. It discusses problems with existing solutions like limited data sources and predefined metrics. The proposed solution features a common platform to access various scientific resources, support for personalized metrics, natural language queries, and evaluation of individuals and groups. The architecture defines three layers and prototypes have been implemented for individual/contribution evaluation and group comparison. Future work includes improving the language module and adding more prototype options.
Literature review and research methodology, by Raj Bhattarai
This document provides an overview of key concepts for conducting literature reviews and research methods. It discusses considerations for literature reviews such as the origin, nature, and limitations of knowledge. It also outlines various sources that can be reviewed, including books, articles, reports and websites. For research methods, the document describes basic concepts like research types, designs, populations and sampling techniques. It discusses variables, data sources, analysis methods and tools. The goal is to help identify the essential elements needed to structure a literature review or research study.
Beyond the Factor: Talking about Research Impact, by Claire Stewart
The document discusses the increasing interest in research metrics and impact from funders, publishers, and institutions for purposes such as hiring, promotion, and evaluating proposals, but notes there are significant limitations to current metrics like journal impact factors which vary widely between disciplines and do not capture the full breadth of research outputs and impacts. It advocates for using quantitative metrics to support, not replace, expert review and evaluation of research and capturing a richer array of data on outputs like publications, presentations, and other influences on knowledge and society to more fully understand a researcher's impact.
This document discusses content analysis methods for analyzing documents and textual data. It describes several types of documents that can be analyzed, including organized records and unorganized personal records. The document outlines the steps to conducting a content analysis, including selecting materials, defining coding units, establishing categories to measure, and developing a coding system. It also discusses issues of validity and reliability in content analysis and different methodological approaches like human coding, dictionary-based analysis, and supervised machine learning.
This presentation was provided by Rachel Lewellen of Harvard University during the NISO Webinar, Using Analytics to Extract Value from the Library's Data, Part Two, held on September 19, 2018.
The Cornell University CISER Data Archive contains over 27,000 numeric datasets covering topics such as demography, economics, health, labor, and surveys. It provides consulting services to help users find, access, and use appropriate data for their research needs. Cornell researchers can download publicly available datasets or access restricted data within the CISER computing environment. The archive also maintains a restricted data center for Cornell researchers to preserve and share their own research data.
This document discusses skills related to inquiry and evidence that effective education leaders use and that EdD candidates need. It notes that leaders request and use data from others rather than conducting their own research. EdD programs generally require a dissertation involving empirical research. The document proposes a three-course sequence at the University of Colorado Denver to teach necessary inquiry skills, covering conceiving studies, data collection, and analysis. This is intended to balance preparing students for leadership and completing a dissertation capstone project.
The document discusses peer review reform and proposes a new system called IOTA (I Owe the Academy Review). IOTA would allow academics to donate review tokens that represent a pledge to conduct peer reviews. These tokens could then be granted to various peer review initiatives in exchange for conducting experiments and sharing results openly. The goal is to better match reviewers with projects promoting high-quality scholarship, while generating more empirical evidence on effective peer review methods. Several example scenarios for how IOTA could work with different types of journals or research pools are provided.
Ontologies for music from a digital library practitioner's perspective, by Jenn Riley
Riley, Jenn. "Ontologies for music from a digital library practitioner's perspective." International Association of Music Libraries Archives and Documentation Centres Annual Conference 2006, June 18-23, 2006
The document discusses the Tudor Research Centre in Luxembourg which develops open source assessment platforms and provides online and offline assessment services. It summarizes some of the research projects and international collaborations. It also discusses the need to improve management of assessment resources through standard metadata sets and models to describe items, tests, and multimedia resources and better exchange of items and tests across platforms.
The PowerPoint was created by Mark Henry, Director of Advising and Transfer at Northampton Community College.
He presented to NCC students and Phi Theta Kappa Honor Society members in November 2011.
This document discusses bibliometrics and its use in research evaluation. It begins by defining bibliometrics as the use of bibliographic data sources to study research published in the scientific literature. It then discusses different bibliographic data sources and their coverage. The document outlines how bibliometrics can inform both indicator-based models and evaluation-based models of performance-based institutional funding. It emphasizes that bibliometrics should supplement, not replace, peer review in research evaluation. It concludes by discussing good practices for using bibliometrics, including the need for transparency, field normalization, and ensuring metrics align with strategic goals.
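To make the field-normalization point concrete, here is a toy illustration of the idea (my own example with invented numbers, not taken from the document): a publication's raw citation count is divided by the average citations of publications from the same field and year, so identical raw counts can signal very different relative impact.

```python
# Toy illustration (not from the document): field-normalized citation scores.
# Each publication's citations are divided by the world average for its field
# and publication year, so 1.0 means "cited as expected for its field".
pubs = [  # hypothetical records
    {"cites": 12, "field": "oncology", "year": 2018},
    {"cites": 12, "field": "mathematics", "year": 2018},
]
field_year_average = {("oncology", 2018): 24.0, ("mathematics", 2018): 4.0}

for p in pubs:
    expected = field_year_average[(p["field"], p["year"])]
    p["ncs"] = p["cites"] / expected

print([(p["field"], round(p["ncs"], 2)) for p in pubs])
# Same raw count, very different normalized impact: 0.5 vs 3.0.
```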
This document discusses data metrics and incentives for data sharing. It presents several models for data sharing, such as publishing datasets on personal websites or through data repositories and journals. It also conceptualizes different types of potential "data metrics" like citations, readership counts, downloads/views. However, data sharing is currently hindered by a "vicious circle" where researchers lack incentives to share data if they cannot get credit. The document recommends developing reward systems based on data metrics, standardizing data publications, reducing costs/negative perceptions, and coordinating technical infrastructure to break this circle and encourage data sharing.
V.3 poster current citations and a future with linked data, by Iliadis Dimitrios
1) Converting citation data to linked data has several advantages such as allowing other applications to use the citation data, describing the reasons publications were cited, and connecting citation information like authors and papers.
2) Linked data assigns unique identifiers (URIs) to citations and related information and describes relationships between cited and citing publications using RDF triples. This allows connecting citation data to other linked open data.
3) Projects that convert citation data to linked data use URIs, RDF triples, and ontologies like CiTO to describe citation intent. This enables advanced searches, citation network visualizations, and linking to other semantic data.
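To make the points above concrete, here is a minimal sketch of representing one citation as linked data with Python's rdflib and the CiTO/FaBiO vocabularies. The paper identifiers under example.org are invented, and the chosen CiTO properties are just one plausible way to type the citation.

```python
# Minimal sketch: one citation expressed as RDF triples with a typed CiTO
# relation describing why the work was cited. Paper URIs are invented examples.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

CITO = Namespace("http://purl.org/spar/cito/")
FABIO = Namespace("http://purl.org/spar/fabio/")
EX = Namespace("http://example.org/papers/")   # hypothetical identifier base

g = Graph()
g.bind("cito", CITO)
g.bind("fabio", FABIO)

citing, cited = EX["paper-123"], EX["paper-456"]
g.add((citing, RDF.type, FABIO.JournalArticle))
g.add((cited, RDF.type, FABIO.JournalArticle))
g.add((citing, CITO.cites, cited))             # plain citation link
g.add((citing, CITO.usesMethodIn, cited))      # the reason the work was cited

print(g.serialize(format="turtle"))
```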
Title IX: A Survey for Institutions of Higher Education, by HR ACUITY LLC
The HR Acuity Title IX Survey for Institutions of Higher Education gathers data on the impact Title IX investigations are having on higher education. How much are institutions spending on Title IX-related expenses? What technology platform are campuses using to document and track allegations? What does the Title IX investigation process look like and what are the top 3 challenges of managing it?
This document discusses assessment of reference and instruction services at a university library. It proposes a multifaceted assessment approach involving several elements: tracking reference questions by level of effort required to answer them, developing an instruction assessment plan drawing on models from other universities, creating reports on assessment results that include data policies and structure, and establishing a timeline for reviewing and revising the assessment plan. The goal is to evaluate reference and instruction services through multiple qualitative and quantitative measures.
Prediction-Improving Early Warning Systems With Categorized Course Resource U..., by Beste Ulus
This study aimed to find out how STEM college students' use of different course resources in an LMS predicts their final grades. The authors used logistic regression models to estimate the association between resource use and final grade. The results showed a significant positive association between use of exam-related resources and final grade, whereas students who relied on lecture-related resources were less likely to earn good grades.
Waddington, R. J., Nam, S., Lonn, S., Teasley, S. D. (2016). Improving early warning systems with categorized course resource usage. Journal of Learning Analytics, 3(3), 263–290.
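As a rough illustration of the kind of model described (a toy sketch with invented data and feature names, not the study's actual variables or results), a logistic regression relating per-category resource use to a binary grade outcome might look like this:

```python
# Toy sketch (invented data): predicting a binary course outcome from counts of
# LMS resource use, in the spirit of the study's logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-student counts of resource accesses by category.
exam_related = rng.poisson(5, n)
lecture_related = rng.poisson(8, n)
X = np.column_stack([exam_related, lecture_related])
# Synthetic outcome: 1 = "good grade", loosely tied to exam-related use.
y = (exam_related + rng.normal(0, 2, n) > 5).astype(int)

model = LogisticRegression().fit(X, y)
# A positive coefficient means higher use is associated with higher odds of a good grade.
print(dict(zip(["exam_related", "lecture_related"], model.coef_[0].round(2))))
```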
Towards Ranking in Folksonomies for Personalized Recommender Systems in E-Lea..., by Mojisola Erdt née Anjorin
CROKODIL provides support for collaborative learning by allowing users to semantically tag and organize web resources into activity hierarchies and share them with learner groups. The document also summarizes a paper that proposes using folksonomies to enhance personalized recommender systems for e-learning by ranking resources based on additional semantics found in folksonomies and integrating user feedback. It outlines applying and evaluating this approach within an application scenario.
This document describes a methodology for mining reference transaction data to aid in reference, instruction, outreach, and collection development activities. The methodology involves librarians recording data from reference interviews, cleaning the data, classifying questions by subject, and collating the data into spreadsheets and reports. This process has led to the creation of subject guides, targeted library instruction, and informed collection development decisions at their university library.
Semantometrics: Towards Fulltext-based Research Evaluation, by petrknoth
In recent years there has been growing interest in developing new scientometric measures that go beyond the traditional citation-based bibliometric measures. This interest is motivated on one side by the wider availability, or even emergence, of new information evidencing research performance, such as article downloads, views and Twitter mentions, and on the other side by the continued frustrations and problems surrounding the application of citation-based metrics to evaluate research performance in practice.
Semantometrics are a new class of research evaluation metrics which build on the premise that full text is needed to assess the value of a publication. This talk will present the results of an investigation into the properties of the semantometric contribution measure (Knoth & Herrmannova, 2014). We will provide a comparative evaluation of the contribution measure with traditional bibliometric measures based on citation counting.
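As a rough, unofficial reading of the contribution idea (not necessarily the exact formula from Knoth & Herrmannova, 2014): a publication that bridges semantically distant work should connect cited and citing full texts that are far apart on average. The sketch below uses TF-IDF cosine distance as a stand-in semantic measure over invented snippets.

```python
# Rough sketch, not the exact formula from the paper: score a publication by the
# average semantic distance between the texts it cites (A) and the texts citing
# it (B), using TF-IDF cosine distance as a stand-in semantic measure.
from itertools import product
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def contribution(cited_texts, citing_texts):
    texts = cited_texts + citing_texts
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    sim = cosine_similarity(tfidf)
    n_a = len(cited_texts)
    pairs = list(product(range(n_a), range(n_a, len(texts))))
    # Average pairwise distance between the "cited" and "citing" sets.
    return sum(1 - sim[i, j] for i, j in pairs) / len(pairs)

A = ["graph theory and network flows", "community detection in networks"]
B = ["epidemic spreading models on networks", "social contagion experiments"]
print(round(contribution(A, B), 3))
```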
Graduate & Post-graduate students' awareness about plagiarism, by Ata Rehman
The document summarizes a study on plagiarism among graduate and postgraduate students in Pakistan. It found that many students were unaware of plagiarism policies and detection systems. It recommends universities provide extensive education on plagiarism through courses, workshops and clear policies. It also suggests using plagiarism detection software and citation management tools to promote academic integrity.
The document discusses information literacy and its importance for students. It provides several key points:
- According to studies, only 13% of students are considered information literate, and students rely more on the internet than libraries for research. Students also spend little time evaluating online information.
- The Association of College and Research Libraries has established five standards for information literacy in higher education, which many accrediting organizations have adopted. The standards address determining information needs, accessing information, evaluating sources, using information, and understanding legal and ethical issues.
- Studies found students struggle with finding relevant information online, evaluating sources, and using library resources effectively. While they value libraries, library orientations are often not helpful. Information literacy is
Resource sharing opportunities among academic libraries, by Khalid Mahmood
This document discusses opportunities for resource sharing among academic libraries. It defines resource sharing and identifies the key resources in academic libraries that could be shared, including information sources, infrastructure, and human knowledge. Information sources that could be shared are collections, databases, inter-library loans, digitized materials, and document delivery. Infrastructure such as buildings, equipment, and ICT systems could also be shared. Additionally, the human knowledge base including staff expertise, training, and reference services presents opportunities for collaboration. The document outlines some factors enabling resource sharing in Pakistani academic libraries and the major hurdles to overcome, particularly lack of leadership and support from university administrations.
This document proposes domain-specific mashups that are tailored for specific application domains. It argues that generic mashup tools are difficult to design in a way that balances generality, expressiveness, and simplicity. The approach presented involves developing domain-specific mashup tools with domain concept models and mashup meta-models mapped to domain processes. This reduces generality but allows domain experts to develop mashups using domain terminology and processes. An architecture is presented involving shared memory and a mashup engine to execute components defined in domain-specific mashup models. Future work involves extending an existing mashup tool to support this domain-specific approach.
Jamie Oliver enjoys meeting people interested in food and sharing ideas about food with them. He notes that food is a global concept and there is always something new to learn about it.
The document summarizes Mesoamerican, South American, and North American indigenous art from pre-Columbian civilizations. It describes iconic Olmec colossal stone heads from Mexico and the large ceremonial structures of Teotihuacan. It also outlines the ballgame tradition throughout Mesoamerica as well as key architectural sites and artworks of the Maya civilization. Moche and Inca art from Peru is highlighted along with Mississippian earthworks like Serpent Mound in Ohio and Anasazi cliff dwellings of the American Southwest. A wide range of artistic traditions prior to European contact are concisely presented.
This document presents a series of quotes from various educational publications over 300 years that criticize students' dependence on new writing and calculating technologies of the time, from slates and paper to pens, ink, and calculators. The quotes express concerns that students will not know how to function without these technologies. The final statement notes that while these tools have changed over time, people will always find something new to complain about.
Prospectvision is a software-as-a-service that uses behavioral analysis to identify organizations interested in a company's products and services based on how they interact with its website. It ranks leads as hot, warm or cool based on interest level. Prospectvision also qualifies the focus of a lead's interest so the appropriate person can follow up. It provides a weekly report with contact information for actionable leads for sales teams to convert into sales. The goal is to deliver actionable insight for always-on lead generation and a ten times increase in lead generation ROI.
The late 14th century in Europe saw immense hardship and instability due to factors like the Black Death plague, famine, and the Hundred Years War. This period known as the "Four Horsemen" devastated the population. However, the arts began to gradually flourish with the rise of the middle class who had more wealth and education. Artworks from this period like Giotto's paintings showed more realistic and natural figures, laying the foundations for developments in realism and humanism that would characterize the Renaissance. This late Gothic period is sometimes called the Proto-Renaissance, as it set the stage for the artistic transformations that would follow.
Amate bark painting is a form of Mexican folk art that is created on paper made from boiled fig tree bark. The artworks often feature birds and use curved lines with bright colors and repeated shapes within borders. While considered folk art as it is made by untrained community artists, the piece highlights a debate around whether folk art is better or worse than fine art.
A Real-time Heuristic-based Unsupervised Method for Name Disambiguation in Di..., by Muhammad Imran
This paper addresses the baffling problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem emanates when multiple authors share a common name or when multiple name variations of an author appear in citation records. Name disambiguation is not trivial to solve, and most digital libraries do not provide an efficient way to accurately identify the citation records of an author. Furthermore, the lack of complete metadata in digital libraries hinders the existence of a generic algorithm applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also embraces user interaction, counting users' feedback in the disambiguation process. Moreover, the method exploits important features associated with an author and citation records, such as co-authors, affiliation, publication title and venue, and contrives a multilayer hierarchical clustering algorithm which tunes itself according to the available information and forms clusters of unambiguous records. Our experiments on a set of researchers contemplated to be highly ambiguous decisively produced high precision and recall results and affirm the viability of our algorithm.
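For intuition only, here is a much-simplified stand-in for the clustering step (not the paper's multilayer algorithm): treat each citation record of an ambiguous name as a point, define a distance from the overlap of co-authors, venue and affiliation, and cut an agglomerative clustering of those distances. The records, fields and weights below are invented.

```python
# Simplified sketch (not the paper's exact multilayer algorithm): cluster citation
# records sharing an ambiguous author name, using overlap of co-authors/venue/
# affiliation as similarity and agglomerative clustering on the distance matrix.
from scipy.cluster.hierarchy import fcluster, linkage

records = [  # hypothetical records for authors named "M. Imran"
    {"coauthors": {"f. casati", "m. marchese"}, "venue": "wise", "affil": "unitn"},
    {"coauthors": {"f. casati"}, "venue": "icwe", "affil": "unitn"},
    {"coauthors": {"a. khan", "s. ali"}, "venue": "icc", "affil": "nust"},
]

def distance(a, b):
    co = len(a["coauthors"] & b["coauthors"]) / max(1, len(a["coauthors"] | b["coauthors"]))
    venue = 1.0 if a["venue"] == b["venue"] else 0.0
    affil = 1.0 if a["affil"] == b["affil"] else 0.0
    return 1.0 - (0.6 * co + 0.2 * venue + 0.2 * affil)  # invented weights

n = len(records)
dm = [distance(records[i], records[j]) for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(dm, method="average"), t=0.6, criterion="distance")
print(labels)  # records with the same label are treated as the same author
```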
The Role of Social Media and Artificial Intelligence for Disaster Response, by Muhammad Imran
Keynote slides for ISCRAM 2016.
"Social Media platforms such as Twitter are invaluable sources of time-critical information. Information on social media communicated during emergencies convey timely and actionable information. For rapid crisis response, real-time insights are important for emergency responders. Although, many humanitarian organizations would like to use this information, however they struggle due a number of issues such as information overload, information vagueness, less credible and misinformation. In this talk, I will describe the role of social media and potential artificial intelligence computational techniques useful for humanitarian organizations and decision makers to make sense of social media data for rapid crisis response."
Creative Commons is a non-profit organization that works to increase the amount of creative works available for free sharing and reuse. It provides copyright licenses known as Creative Commons licenses that allow creators to select some rights they wish to reserve, while allowing certain uses of their work. Creative Commons licenses have been used to license over 500 million works and are available in over 50 countries through local jurisdiction licenses.
Coordinating Human and Machine Intelligence to Classify Microblog Communicatio..., by Muhammad Imran
An emerging paradigm for the processing of data streams involves human and machine computation working together, allowing human intelligence to process large-scale data. We apply this approach to the classification of crisis-related messages in microblog streams. We begin by describing the platform AIDR (Artificial Intelligence for Disaster Response), which collects human annotations over time to create and maintain automatic supervised classifiers for social media messages. Next, we study two significant challenges in its design: (1) identifying which elements must be labeled by humans, and (2) determining when to ask for such annotations to be done. The first challenge is selecting the items to be labeled by crowdsourcing workers to maximize the productivity of their work. The second challenge is to schedule the work in order to reliably maintain high classification accuracy over time. We provide and validate answers to these challenges by extensive experimentation on real-world datasets.
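For the first challenge, a generic uncertainty-sampling loop gives the flavor (this is a textbook active-learning sketch, not AIDR's actual selection or scheduling policy; the messages and the crowd stub are invented):

```python
# Generic uncertainty-sampling sketch (not AIDR's actual policy): repeatedly ask
# humans to label the messages the classifier is least sure about, then retrain.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def ask_crowd(msg):                      # stand-in for a crowdsourcing task
    return int("damage" in msg or "injured" in msg)

stream = ["bridge damage reported", "lovely weather today", "two people injured",
          "concert tickets on sale", "roof damage after the storm", "traffic is fine"]
vec = TfidfVectorizer().fit(stream)
X = vec.transform(stream)

labeled_idx = [0, 1]                                   # small seed set
labels = {i: ask_crowd(stream[i]) for i in labeled_idx}
for _ in range(2):                                     # two annotation rounds
    clf = LogisticRegression().fit(X[labeled_idx], [labels[i] for i in labeled_idx])
    unlabeled = [i for i in range(len(stream)) if i not in labels]
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1 - proba.max(axis=1)                # least-confident sampling
    pick = unlabeled[int(np.argmax(uncertainty))]
    labels[pick] = ask_crowd(stream[pick])             # request a human label
    labeled_idx.append(pick)

print(sorted(labels.items()))
```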
Artificial Intelligence for Disaster Response, by Muhammad Imran
AIDR is a free open-source platform that uses machine learning and crowdsourcing to automatically filter and classify relevant tweets during humanitarian crises. It collects tweets based on keywords, hashtags, location, and followed users. Classifiers then tag tweets with categories like donations, damage reports, or eyewitness accounts. The platform achieves around 75% accuracy in classification by training models on tagged tweets and leveraging random forest algorithms.
Tweet4act: Using Incident-Specific Profiles for Classifying Crisis-Related Me..., by Muhammad Imran
This summarizes our work presented at the ISCRAM 2013 conference. We presented the Tweet4act system, which detects and classifies crisis-related messages communicated over a microblogging platform. Our system relies on extracting content features from each message; these features, together with an incident-specific dictionary, allow us to determine the incident period that each message belongs to.
Summarizing Situational Tweets in Crisis Scenario, by Muhammad Imran
During mass convergence events such as natural disasters, microblogging platforms like Twitter are widely used by affected people to post situational awareness messages. These crisis-related messages are dispersed among multiple categories like infrastructure damage and information about missing, injured, and dead people. The challenge is to extract important situational updates from these messages, assign them appropriate informational categories, and finally summarize the big trove of information in each category. In this paper, we propose a novel framework which first assigns tweets to different situational classes and then summarizes those tweets. In the summarization phase, we propose a two-stage framework which first extracts a set of important tweets from the whole set of information through an integer linear programming (ILP) based optimization technique and then follows a word-graph and content-word based abstractive summarization technique to produce the final summary. Our method is time and memory efficient and outperforms the baseline in terms of quality, coverage of events and locations, effectiveness, and utility in disaster scenarios.
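A much-simplified sketch of the extractive ILP step, using PuLP with invented importance scores, token lengths and budget (the word-graph abstractive stage and the paper's actual objective and constraints are omitted):

```python
# Much-simplified sketch of the extractive ILP step (the abstractive word-graph
# stage is omitted). Importance scores, lengths, and the budget are invented.
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

tweets = ["bridge on 5th street collapsed", "need water in camp A",
          "stay safe everyone", "hospital requests blood donors"]
score = [0.9, 0.8, 0.1, 0.7]          # hypothetical importance scores
length = [5, 5, 3, 4]                 # token counts
budget = 10                           # max tokens in the summary

prob = LpProblem("tweet_summary", LpMaximize)
x = [LpVariable(f"x{i}", cat=LpBinary) for i in range(len(tweets))]
prob += lpSum(score[i] * x[i] for i in range(len(tweets)))             # maximize importance
prob += lpSum(length[i] * x[i] for i in range(len(tweets))) <= budget  # length budget
prob.solve()

print([t for t, v in zip(tweets, x) if v.value() == 1])
```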
Extracting Information Nuggets from Disaster-Related Messages in Social Media, by Muhammad Imran
This document discusses extracting useful information from social media messages during disasters. It outlines filtering disaster-related tweets, classifying them by type (e.g. caution/advice, casualties), and extracting key information within tweets (e.g. locations, needs). The approach is demonstrated on datasets from the 2011 Joplin tornado and 2012 Hurricane Sandy. Automatic classification achieves over 80% accuracy for some classes. Information extraction obtains up to 90% precision. Ongoing work includes providing these tools as a machine learning service to help during crises.
Introduction to Machine Learning: An Application to Disaster Response, by Muhammad Imran
Introduction to Machine Learning talk (part-2) focused on the applications of machine learning in the disaster response domain. In the first part of the talk, we presented different machine learning approaches.
Unveiling the Ecosystem of Science: How can we characterize and assess divers..., by Nicolas Robinson-Garcia
This document outlines a proposed valuation model for assessing individual scientists. It aims to address shortcomings of current assessment methods that focus only on excellence, outputs, and universal criteria. The model would combine expert judgment with metrics to evaluate multiple dimensions of scientists' work, including scientific engagement, social engagement, background, capacity building, and openness. Case studies of scientists would examine how reported activities fit within this model and relate to factors like seniority, diversity, and values not currently considered. The next step would be to test the model through an experimental structured expert judgment assessment. Feedback on the proposal is sought to help improve the model.
Reputation Management for Early Career Researchers, by Micah Altman
In the rapidly changing world of research and scholarly communications, researchers are faced with a fast growing range of options to publicly disseminate, review, and discuss research—options which will affect their long-term reputation. Early career scholars must be especially thoughtful in choosing how much effort to invest in dissemination and communication, and what strategies to use.
Dr. Micah Altman briefly reviews a number of bibliometric and scientometric studies of quantitative research impact, a sampling of influential qualitative writings advising this area, and an environmental scan of emerging researcher profile systems. Based on this review, and on professional experience on dozens of review panels, Dr. Altman suggests some steps early career researchers may consider when disseminating their research and participating in public reviews and discussion.
Providing Tools for Author Evaluation - A case study, by inscit2006
The document discusses tools in Scopus for author evaluation. It outlines challenges in author evaluation including author disambiguation and data limitations. Scopus addresses this through the Author Identifier which uses publication data to group documents by author, improving disambiguation. The Citation Tracker and H-index provide visual citation analysis tools for author evaluation. PatentCites and WebCites additionally track citations in patents and web sources. Quality author evaluation depends on underlying source data quality, and Scopus aims to make author information objective, quantitative, and globally comparative.
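Since the H-index features here, its standard definition is easy to state as a tiny function (my own sketch, not Scopus code): the largest h such that at least h of an author's papers have at least h citations each.

```python
# Tiny sketch of the standard h-index definition (not Scopus code): the largest
# h such that at least h of the author's papers have at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```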
This is a joint presentation by Jeroen Bosman and Bianca Kramer, given during a joint NISO-ICSTI webinar, held on Wednesday, October 26, on Enabling Innovation in Researcher Workflow and Scholarly Communication.
Introduction to Altmetrics for Medical and Special Librarians, by Linda Galloway
Altmetrics (or alternative citation metrics) provide new ways to track scholarly influence across a wide range of media and platforms. This presentation covers altmetric fundamentals, tips on connecting your users with altmetrics, and an overview of newly published research. Presented as part of the NN/LM MAR Boost Box Series; http://nnlm.gov/mar/training/boost_mar2014.pdf
This document discusses different methods for conducting needs assessments, including surveys, interviews, focus groups, and reviewing institutional data. It provides an overview of the types of data each method can collect and their strengths and limitations. The document also lists 12 steps for conducting a needs assessment from NOAA and provides examples of how needs assessment data from multiple sources can be triangulated to develop a more accurate understanding. Lastly, it provides several links to additional resources on needs assessments and program planning.
This presentation was provided by Holly Falk-Krzesinski of Elsevier during the NISO event, "Is This Still Working? Incentives to Publish, Metrics, and New Reward Systems," held on February 20, 2019.
Presented at the “Science evaluation in the 21st century and its impact on scientific formation” session of the Biased Science and Alternatives for the Publication System Symposium - Rio de Janeiro, September 15 2014.
TOWARDS A MULTI-FEATURE ENABLED APPROACH FOR OPTIMIZED EXPERT SEEKING, by csandit
With the enormous growth of data, retrieving information from the Web has become more desirable and even more challenging because of Big Data issues (e.g. noise, corruption, and bad quality). Expert seeking, defined as returning a ranked list of expert researchers for a given topic, has been a real concern over the last 15 years. This kind of task comes in handy when building scientific committees, where scholars' expertise must be identified in order to assign them the most suitable roles, in addition to other factors. Because the Web offers plenty of data, there is an opportunity to collect different kinds of expertise evidence. In this paper, we propose an expert seeking approach that specifies the most desirable features (i.e. criteria on which a researcher's evaluation is done) along with their estimation techniques. We utilize machine learning techniques in our system and aim to verify the effectiveness of incorporating influential features that go beyond publications.
Research evaluation is relevant to librarians because they can provide expertise and data to various stakeholders evaluating research performance. Key stakeholders include university rankings, research funders, institutions, and researchers themselves. There are several tools and data sources librarians can leverage, such as journal rankings and metrics, citation data from databases, and altmetrics. Librarians can advise on using these evaluation methods and managing research information and outputs through repositories and current research information systems.
This document discusses how Thomson Reuters and bibliometric data and tools can support research institutions. It describes the Web of Science database and InCites platform for benchmarking and analyzing research productivity and impact. Examples are provided of how the University of Toronto uses bibliometric data and tools for reporting, promoting excellence, grant applications, and research management. The document concludes by promoting a complimentary research report and additional resources from Thomson Reuters.
LITA’s Altmetrics and Digital Analytics Interest Group is proud to present Heather Coates, Richard Naples, and Lauren Collister in our second free webinar of the season. Heather will introduce the concept of altmetrics with a quick "Altmetrics 101," Richard will discuss the Smithsonian's implementation of Altmetric, and Lauren will share the University of Pittsburgh's experience with Plum Analytics.
CHiR presentation measuring scholarly and public impact, by Plethora121
American University Library's Conference for High Impact Research presentation, Measuring Scholarly and Public Impact, given May 15, 2017. It discusses bibliometrics and altmetrics, focusing on use cases, current trends, and disciplinary considerations.
Scholarly Metrics in Specialized Settings, by Elaine Lasda
Presentation for the Bibliometric and Research Impact Community (BRIC) of Canada on case studies of research impact in specialized settings. The focus on Michigan Publishing is by co-presenter Rebecca Welzenbach.
Slides from Keynote presentation at the University of Southern California's 2015 Teaching with Technology annual conference.
"9:15 am – ANN Auditorium
Keynote: What Do We Mean by Learning Analytics?
Leah Macfadyen, Director for Evaluation and Learning Analytics, University of British Columbia
Executive Board, SoLAR (Society for Learning Analytics Research)
Leah Macfadyen will define and explore the emerging and interdisciplinary field of learning analytics in the context of quantified and personalized learning. Leah will use actual examples and case studies to illustrate the range of stakeholders learning analytics may serve, the diverse array of questions they may be used to address, and the potential impact of learning analytics in higher education."
Early Career Tactics to Increase Scholarly Impact, by Elaine Lasda
Workshop for Ph.D. candidates, postdocs and faculty on how bibliometrics, altmetrics, open access, ORCID, and other resources enable greater visibility of research output.
The document discusses enhancements to the DMPTool to further streamline the data management planning process. DMPTool2 will add new features like co-ownership of plans, self-service administration, and optional plan review. It will have improved governance and be jointly developed by additional partners. The goal is to better support the creation of data management plans, which are increasingly required for funding and publication.
This document discusses methods for measuring the impact of citizen science projects online. It describes the development of a framework called MICS (Measuring Impact of Citizen Science) for assessing citizen science impact. MICS includes indicators for different domains like society, science, economy, environment and governance. The framework provides characteristics for each indicator such as its name, description, data type, and how data should be collected and analyzed. Case studies are being used to help implement and refine the MICS framework.
Processing Social Media Messages in Mass Emergency: A Survey, by Muhammad Imran
Millions of people use social media to share information during disasters and mass emergencies. Information available on social media, particularly in the early hours of an event when few other sources are available, can be extremely valuable for emergency responders and decision makers, helping them gain situational awareness and plan relief efforts. Processing social media content to obtain such information involves solving multiple challenges, including parsing brief and informal messages, handling information overload, and prioritizing different types of information. These challenges can be mapped to information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. This work highlights these challenges and presents state of the art computational techniques to deal with social media messages, focusing on their application to crisis scenarios.
Damage Assessment from Social Media Imagery Data During Disasters, by Muhammad Imran
Rapid access to situation-sensitive data through social media networks creates new opportunities to address a number of real-world problems. Damage assessment during disasters is a core situational awareness task for many humanitarian organizations that traditionally takes weeks and months. In this work, we analyze images posted on social media platforms during natural disasters to determine the level of damage caused by the disasters. We employ state-of-the-art machine learning techniques to perform an extensive experimentation of damage assessment using images from four major natural disasters. We show that domain-specific fine-tuning of deep Convolutional Neural Networks (CNN) outperforms other state-of-the-art techniques such as Bag-of-Visual-Words (BoVW). High classification accuracy under both event-specific and cross-event test settings demonstrates that the proposed approach can effectively adapt deep-CNN features to identify the severity of destruction from social media images taken after a disaster strikes.
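A generic fine-tuning recipe conveys the idea (a sketch assuming a recent torchvision, not the authors' exact network, data or class labels): freeze an ImageNet-pretrained backbone, replace the classification head with one sized for the damage-severity labels, and train on disaster imagery. Random tensors stand in for a real image loader below.

```python
# Generic fine-tuning recipe (not the authors' exact setup): adapt an
# ImageNet-pretrained CNN to damage-severity classes. Classes are assumed labels.
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES = 3  # e.g. severe / mild / no damage (assumed labels)

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random tensors, standing in for a real image loader.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```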
Image4Act: Online Social Media Image Processing for Disaster Response, by Muhammad Imran
We present an end-to-end social media image processing system called Image4Act. The system aims at collecting, denoising, and classifying imagery content posted on social media platforms to help humanitarian organizations in gaining situational awareness and launching relief operations. The system combines human computation and machine learning techniques to process high-volume social media imagery content in real time during natural and human-made disasters. To cope with the noisy nature of the social media imagery data, we use a deep neural network and perceptual hashing techniques to filter out irrelevant and duplicate images. Furthermore, we present a specific use case to assess the severity of infrastructure damage incurred by a disaster. The evaluations of the system on existing disaster datasets as well as a real-world deployment during a recent cyclone prove the effectiveness of the system.
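The de-duplication idea alone can be sketched with perceptual hashes (using the imagehash library; the synthetic images and the threshold below are illustrative, and this is not Image4Act's full pipeline): near-duplicate images hash to values within a small Hamming distance and can be dropped.

```python
# Sketch of the de-duplication idea only (not Image4Act's full pipeline):
# images whose perceptual hashes are within a small Hamming distance are
# treated as near-duplicates. Images and threshold are synthetic/illustrative.
from PIL import Image, ImageDraw
import imagehash

THRESHOLD = 6  # max Hamming distance to count as a near-duplicate

def make_img(extra_box=False):
    img = Image.new("RGB", (128, 128), "white")
    d = ImageDraw.Draw(img)
    d.rectangle([20, 20, 100, 100], fill="gray")
    if extra_box:
        d.rectangle([5, 5, 15, 15], fill="black")  # small change, near-duplicate
    return img

a, b = make_img(), make_img(extra_box=True)
ha, hb = imagehash.phash(a), imagehash.phash(b)
print("hamming distance:", ha - hb)
print("near-duplicate" if ha - hb <= THRESHOLD else "distinct")
```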
AIDR Tutorial (Artificial Intelligence for Disaster Response), by Muhammad Imran
This document provides an overview of the AIDR (Artificial Intelligence for Disaster Response) system, including how it collects Twitter data through keywords, geographic regions, and following users. It also describes how AIDR allows users to classify collected data by defining classifiers and labels, and how the classifiers are generated based on human-tagged tweets.
A Robust Framework for Classifying Evolving Document Streams in an Expert-Mac..., by Muhammad Imran
An emerging challenge in the online classification of social media data streams is to keep the categories used for classification up-to-date. In this paper, we propose an innovative framework based on an Expert-Machine-Crowd (EMC) triad to help categorize items by continuously identifying novel concepts in heterogeneous data streams often riddled with outliers. We unify constrained clustering and outlier detection by formulating a novel optimization problem: COD-Means. We design an algorithm to solve the COD-Means problem and show that COD-Means will not only help detect novel categories but also seamlessly discover human annotation errors and improve the overall quality of the categorization process. Experiments on diverse real data sets demonstrate that our approach is both effective and efficient.
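For intuition only, a plain k-means variant that sets aside the farthest points as outliers each iteration captures part of the idea (a simplified stand-in, not the actual COD-Means formulation, and it ignores the clustering constraints):

```python
# Simplified stand-in for the intuition only (NOT the actual COD-Means algorithm,
# and without its constraints): k-means that treats the farthest points as
# outliers each iteration so they do not distort the cluster centroids.
import numpy as np

def kmeans_with_outliers(X, k, n_outliers, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        nearest = d.min(axis=1)
        outliers = np.argsort(nearest)[-n_outliers:]       # farthest points
        for j in range(k):
            members = np.setdiff1d(np.where(assign == j)[0], outliers)
            if len(members):
                centers[j] = X[members].mean(axis=0)
    return assign, outliers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(5, .3, (30, 2)), [[20, 20]]])
assign, outliers = kmeans_with_outliers(X, k=2, n_outliers=1)
print("outlier index:", outliers)   # should flag the lone far-away point
```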
Slide 23, Dimensions in Research Evaluation: different things, different purposes, different data sources, different metrics, different algorithms.
Slide 38, Proposed Solution: putting everything together; a common platform for various data sources, support for user-personalized metrics, and the capability to define complex logic and evaluate individuals, groups, and more (diagram: user input, data, presentation, mashup).
Slide 39, Architecture: Mashup UI, Mashup Execution Logic, ResEval Components, ResEval Domain Model, ResEval Mashup Language, Mashup Engine, Resource Space Management System.
Slide 43, Conclusion & Future Work: we proposed a resource-oriented architecture for personalized metric definition and impact evaluation of individuals and groups; future work includes a mashup platform for complex metric logic and components as web services.
General problems with existing approaches: data incompleteness (limited data sources), predefined evaluation metrics, and queries restricted to fixed interfaces.
The proposed solution offers a common platform to access various kinds of scientific resources, supports the definition of personalized metrics, allows complex queries using Research Evaluation Mash Queries, and can evaluate individual researchers, contributions, and groups of researchers. The main challenges addressed are defining personalized metrics, answering complex queries, customizing user queries, combining different data sources, and group formation and composition.
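To make the "personalized metrics over multiple data sources" idea concrete, here is a small illustration (my own sketch, not ResEval code, with invented source data): merge citation counts per paper across sources, filter by the user's criteria, and compute the metric on the merged view.

```python
# Illustration only (not ResEval code): a "personalized metric" composed from
# several hypothetical data sources, e.g. an h-index computed over the maximum
# citation count reported per paper across sources, restricted to recent papers.
def h_index(counts):
    return sum(1 for rank, c in enumerate(sorted(counts, reverse=True), 1) if c >= rank)

# Hypothetical per-source citation data: {paper_id: (year, citations)}
scholar_source = {"p1": (2008, 40), "p2": (2009, 12), "p3": (2006, 3)}
dblp_based_source = {"p1": (2008, 35), "p2": (2009, 15), "p4": (2009, 6)}

def personalized_metric(sources, since_year):
    merged = {}
    for src in sources:                       # take the max count seen per paper
        for pid, (year, cites) in src.items():
            if year >= since_year:
                merged[pid] = max(merged.get(pid, 0), cites)
    return h_index(merged.values())

print(personalized_metric([scholar_source, dblp_based_source], since_year=2008))
```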