Networks of citations are a key tool for referencing, disseminating and evaluating research results. The task of characterising the functional role of citations in scientific literature is very difficult, not only for software agents but for humans, too. The main problem is that the mental models of different annotators hardly ever converge to a single shared opinion. The goal of this paper is to investigate how an existing reference model for classifying citations, namely CiTO (Citation Typing Ontology), is interpreted and used by annotators of scientific literature. We present an experiment capturing the cognitive processes behind subjects’ decisions in annotating papers with CiTO, and we provide initial ideas to refine future releases of CiTO.
Systematic Literature Reviews and Systematic Mapping Studies - alessio_ferrari
Lecture slides on Systematic Literature Reviews and Systematic Mapping Studies in software engineering. The slides describe the different steps, discuss the differences between the two methods, and give guidelines on how to conduct these types of studies.
Lecture on case study design and reporting in empirical software engineering. The lecture touches on the topics of units of analysis, data collection, data analysis, validity procedures, and collaboration with industries.
Knowledge Patterns for the Web: extraction, transformation, and reuse - Andrea Nuzzolese
KPs are an abstraction of frames as introduced by Fillmore and Minsky. KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics in the Web (i.e., the knowledge soup problem) and the difficulty of drawing a relevant boundary around data so as to capture the knowledge that is meaningful with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these two problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts (i.e., top-down defined artifacts that can be compared to KPs, such as FrameNet frames or Ontology Design Patterns) into KPs formalised as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analysing how data are organised in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method is based on a purely syntactic transformation of the original source to RDF, followed by a refactoring step whose aim is to add semantics to RDF by selecting meaningful RDF triples. The second method draws boundaries around RDF in Linked Data by analysing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes of the path. Unfortunately, type paths are not always available. In fact, Linked Data is a knowledge soup because of the heterogeneous semantics of its datasets and because of the limited intensional as well as extensional coverage of ontologies (e.g., the DBpedia ontology, YAGO) or other controlled vocabularies (e.g., SKOS, FOAF). Thus, we propose a solution for enriching Linked Data with additional axioms (e.g., rdf:type axioms) by exploiting the natural language available, for example, in annotations (e.g., rdfs:comment) or in corpora on which datasets in Linked Data are grounded (e.g., DBpedia is grounded on Wikipedia). We then present K∼ore, a software architecture conceived as the basis for developing KP discovery systems and designed according to two software architectural styles, i.e., Component-based and REST. K∼ore is the architectural binding of a set of tools, the K∼tools, which implement the methods for KP transformation and extraction. Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool which exploits KPs for performing entity summarisation.
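The notion of a type path can be illustrated with a small, self-contained sketch: follow a chain of properties from a starting node and record the rdf:type of each node reached. The triples, names, and helper functions below are invented toy examples, not taken from the thesis or from any real dataset.

```python
# Toy triple store: (subject, predicate, object). All names are hypothetical.
TRIPLES = {
    ("ex:Andrea", "ex:bornIn", "ex:Bologna"),
    ("ex:Bologna", "ex:locatedIn", "ex:Italy"),
    ("ex:Andrea", "rdf:type", "ex:Person"),
    ("ex:Bologna", "rdf:type", "ex:City"),
    ("ex:Italy", "rdf:type", "ex:Country"),
}

def rdf_type(node):
    """Return the rdf:type of a node, or None if the node is untyped."""
    for s, p, o in TRIPLES:
        if s == node and p == "rdf:type":
            return o
    return None

def type_path(start, properties):
    """Follow a chain of properties from `start` and return the sequence of
    (property, type-of-object) pairs: the type path."""
    path, node = [], start
    for prop in properties:
        nxt = next((o for s, p, o in TRIPLES if s == node and p == prop), None)
        if nxt is None:  # path broken: no such edge from the current node
            break
        path.append((prop, rdf_type(nxt)))
        node = nxt
    return path

print(type_path("ex:Andrea", ["ex:bornIn", "ex:locatedIn"]))
# [('ex:bornIn', 'ex:City'), ('ex:locatedIn', 'ex:Country')]
```

The sketch also shows why untyped nodes are a problem: if a node lacks an rdf:type assertion, the type path contains a `None`, which is exactly the gap the proposed natural-language enrichment aims to fill.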
SHELDON is the first true hybridization of NLP machine reading and the Semantic Web. It is a framework that builds upon a machine reader for extracting RDF graphs from text, so that the output is compliant with Semantic Web and Linked Data patterns. It extends the current human-readable web by using Semantic Web practices and technologies in a machine-processable form. Given a sentence in any language, it provides different semantic functionalities (frame detection, topic extraction, named entity recognition, resolution and coreference, terminology extraction, sense tagging and disambiguation, taxonomy induction, semantic role labeling, type induction, sentiment analysis, citation inference, relation and event extraction) as well as visualization tools which make use of the JavaScript InfoVis Toolkit and RelFinder, and a knowledge enrichment component that extends machine reading to Semantic Web data. The system can be freely used at http://wit.istc.cnr.it/stlab-tools/sheldon.
The Open Knowledge Extraction Challenge focuses on the production of new knowledge aimed at either populating and enriching existing knowledge bases or creating new ones. This means that the defined tasks focus on extracting concepts, individuals, properties, and statements that do not necessarily already exist in a target knowledge base, and on representing them according to Semantic Web standards so that they can be directly injected into linked datasets and their ontologies. The OKE challenge has the ambition to advance a reference framework for research on Knowledge Extraction from text for the Semantic Web by re-defining a number of tasks (typically from information and knowledge extraction) while taking into account specific SW requirements. The Challenge is open to everyone from industry and academia.
This work presents some experiments in letting humans annotate citations according to CiTO, an OWL ontology for describing the function of citations. We introduce a comparison of the performance of different users, and show strengths and difficulties that emerged when using that particular model to characterise citations of scholarly articles.
Immersive Recommendation incorporates cross-platform and diverse personal digital traces into recommendations. Our context-aware topic modeling algorithm systematically profiles users' interests based on their traces from different contexts, and our hybrid recommendation algorithm makes high-quality recommendations by fusing users' personal profiles, item profiles, and existing ratings. The proposed model showed significant improvement over the state-of-the-art algorithms, suggesting the value of using this new user-centric recommendation model to improve recommendation quality, including in cold-start situations.
I won't be #BulliedIntoBadScience! - Laurent Gatto - OpenCon 2017 - Right to Research
This presentation by Laurent Gatto was part of OpenCon 2017's Next-Generation Initiatives Advancing Open panel.
The #BulliedIntoBadScience campaign was initiated after several attempts to influence publishing practices at the University of Cambridge and in the UK. However, it seems at times impossible for academics, early stage and more senior, to change a broken system that is, sadly, just accepted by most. During this OpenCon 2017 Panel, Laurent shared some of the background of the #BulliedIntoBadScience campaign and reflected on early career researchers' challenges in fighting for a more ethical environment.
Quantitative Research Methods
1. What is scientific research? What is quantitative research?
2. Why do we need research?
3. Who conducts research?
4. What is the research process?
5. What is the language of research?
Information to Wisdom: Commonsense Knowledge Extraction and Compilation - Part 3 - Dr. Aparna Varde
This is the 3rd part of the tutorial on commonsense knowledge (CSK) at ACM WSDM 2021 by Simon Razniewski, Niket Tandon and Aparna Varde. It focuses on the evaluation of the acquired knowledge, both intrinsic and extrinsic, as well as highlights and outlook, with a brief perspective on COVID and open issues for further research.
Abstract: Commonsense knowledge is a foundational cornerstone of artificial intelligence applications. Whereas information extraction and knowledge base construction for instance-oriented assertions, such as Brad Pitt’s birth date, or Angelina Jolie’s movie awards, has received much attention, commonsense knowledge on general concepts (politicians, bicycles, printers) and activities (eating pizza, fixing printers) has only been tackled recently. In this tutorial we present state-of-the-art methodologies towards the compilation and consolidation of such commonsense knowledge (CSK). We cover text-extraction-based, multi-modal and Transformer-based techniques, with special focus on the issues of web search and ranking, as of relevance to the WSDM community.
Towards the automatic identification of the nature of citations - University of Bologna
The reasons why an author cites other publications are varied: an author can cite previous works to gain assistance of some sort in the form of background information, ideas, or methods, or to review, critique or refute previous works. The problem is that the most reliable way to retrieve the nature of citations is very time consuming: one would have to read each article and assign a particular characterisation to every citation. In this work we propose an algorithm, called CiTalO, to automatically infer the function of citations by means of Semantic Web technologies and NLP techniques. We also present some preliminary experiments and discuss some strengths and limitations of this approach.
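The kind of inference such an algorithm might perform can be sketched as a toy cue-word classifier. This is not the actual CiTalO pipeline; the cue lists and labels below are invented examples, loosely based on the citation functions the abstract mentions (background, method use, critique, refutation).

```python
# Hypothetical cue words for a few citation functions; a real system would
# use NLP and ontology alignment rather than substring matching.
CUES = {
    "critiques": ["however", "fails", "overlooks", "criticise", "criticize"],
    "refutes": ["refute", "contradict", "disprove"],
    "uses_method": ["we use", "following the method", "as described in", "we apply"],
}

def classify_citation(sentence):
    """Return the first citation function whose cue words appear in the
    sentence containing the citation, defaulting to 'background'."""
    s = sentence.lower()
    for label, cues in CUES.items():
        if any(cue in s for cue in cues):
            return label
    return "background"

print(classify_citation("We apply the parser of [12], as described in their paper."))
# uses_method
print(classify_citation("However, the approach of [7] fails on noisy input."))
# critiques
```

Even this crude sketch makes the abstract's difficulty concrete: a single sentence can contain cues for several functions, which is exactly where human annotators disagree.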
With the progress towards open science, scientific communication is facing a new wave of innovations towards more openness and faster research publication, which will deeply affect the way the peer review function is carried out and the overall role of journals in assuring quality and adding value to manuscripts.
Several initiatives are promoting the generalized adoption of open access preprints as a formal beginning stage of research publication, a practice common in the physics community since the 1990s. In the last decade, new ways to carry out the evaluation of manuscripts have emerged, either to replace or to improve the traditional methods, which are widely criticized as being slow and expensive, in addition to lacking transparency.
Quality nonprofit journals from emerging and developing countries have succeeded in following the main innovations brought by the Internet. In addition to the technicalities of digital publishing, there is wide adoption of Open Access in the international flow of scientific information. The new wave of innovations that affect the peer review function and the changing role of journals pose new challenges to emerging and developing countries with regard to scientific publishing. The adoption of these innovations is essential for the progress of SciELO as a leading open access program to enhance scientific communication.
This workshop aims at an in-depth analysis and discussion of the state of the art and the main trends of the peer review function, the modalities of carrying it out, as well as the increasing adoption of mechanisms to speed up publication, such as preprints, and how they affect and potentially renew the role of journals. The resulting recommendations will guide SciELO policies on manuscript evaluation and on the adoption of preprint publications.
Semantic Knowledge and Privacy in the Physical Web - Prajit Kumar Das
In the past few years, the Internet of Things has started to become a reality; however, its growth has been hampered by privacy and security concerns. One promising approach is to use Semantic Web technologies to mitigate privacy concerns in an informed, flexible way. We present CARLTON, a framework for managing data privacy for entities in a Physical Web deployment using Semantic Web technologies. CARLTON uses context-sensitive privacy policies to protect privacy of organizational and personnel data. We provide use case scenarios where natural language queries for data are handled by the system, and show how privacy policies may be used to manage data privacy in such scenarios, based on an ontology of concepts that can be used as rule antecedents in customizable privacy policies.
Aemoo: Linked Data Exploration based on Knowledge Patterns - Andrea Nuzzolese
This paper presents a novel approach to Linked Data exploration that uses Encyclopaedic Knowledge Patterns (EKPs) as relevance criteria for selecting, organising, and visualising knowledge. EKPs are discovered by mining the linking structure of Wikipedia and evaluated by means of a user-based study, which shows that they are cognitively sound as models for building entity summarisations. We implemented a tool named Aemoo that supports EKP-driven knowledge exploration and integrates data coming from heterogeneous resources, namely static and dynamic knowledge as well as text and Linked Data. Aemoo is evaluated by means of controlled, task-driven user experiments in order to assess its usability and its ability to provide relevant and serendipitous information as compared to two existing tools: Google and RelFinder.
Conference Linked Data: the ScholarlyData project - Andrea Nuzzolese
The Semantic Web Dog Food (SWDF) is the reference linked dataset of the Semantic Web community about papers, people, organisations, and events related to its academic conferences. In this paper we analyse the existing problems of generating, representing and maintaining Linked Data for the SWDF. With this work (i) we provide a refactored and cleaned SWDF dataset; (ii) we use a novel data model which improves the Semantic Web Conference Ontology, adopting best ontology design practices and (iii) we provide an open source workflow to support a healthy growth of the dataset beyond the Semantic Web conferences.
More Related Content
Similar to Evaluating citation functions in CiTO: cognitive issues
BREEDING METHODS FOR DISEASE RESISTANCE.pptx - RASHMI M G
Plant breeding for disease resistance is a strategy to reduce crop losses caused by disease. Plants have an innate immune system that allows them to recognize pathogens and provide resistance. However, breeding for long-lasting resistance often involves combining multiple resistance genes.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... - Wasswaderrick3
In this book, we use conservation-of-energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our energy-conservation techniques to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium, and at the general equation of terminal velocity.
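Stokes' terminal velocity for a small sphere in the laminar regime, one of the results the book demonstrates, can be sketched numerically. The function name and the material values below are illustrative assumptions, not taken from the book; the formula itself is the standard Stokes result v_t = 2 r^2 g (rho_s - rho_f) / (9 mu).

```python
def stokes_terminal_velocity(radius, rho_sphere, rho_fluid, mu, g=9.81):
    """Terminal velocity (m/s) of a small sphere in the Stokes (laminar)
    regime, where drag F = 6*pi*mu*r*v balances net gravity minus buoyancy."""
    return 2.0 * radius**2 * g * (rho_sphere - rho_fluid) / (9.0 * mu)

# Example: a steel ball (rho ~ 7800 kg/m^3) of radius 1 mm falling in
# glycerin (rho ~ 1260 kg/m^3, mu ~ 1.5 Pa*s):
v = stokes_terminal_velocity(1e-3, 7800.0, 1260.0, 1.5)
print(f"{v:.4f} m/s")  # ≈ 0.0095 m/s
```

Note the sketch only holds at low Reynolds number; the turbulent-flow case the book also treats needs a different drag law.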
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx - MAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation, makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia's nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larvae. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represent another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
What are greenhouse gases and how do they affect the Earth? - moosaasad1975
An overview of what greenhouse gases are, how they affect the Earth and its environment, what the future of the environment and the Earth may be, and how weather and climate are affected.
Richard's adventures in two entangled wonderlands - Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... - University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Seminar of U.V. Spectroscopy by SAMIR PANDASAMIR PANDA
Spectroscopy is a branch of science dealing the study of interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflect spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light received by the analyte.
ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptxRASHMI M G
Abnormal or anomalous secondary growth in plants. It defines secondary growth as an increase in plant girth due to vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes
on Io’s surface have been monitored from both spacecraft and ground-based telescopes.
Here, we present the highest spatial resolution images of Io ever obtained from a groundbased telescope. These images, acquired by the SHARK-VIS instrument on the Large
Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images
show that a plume deposit from a powerful eruption at Pillan Patera has covered part
of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive
optics at visible wavelengths.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
Evaluating citation functions in CiTO: cognitive issues
1. STLab, University of Bologna, #eswc2014, Ciancarini
Evaluating citation functions in CiTO: cognitive issues
28 May 2014 - Heraklion, Crete
Paolo Ciancarini (1,2), Angelo Di Iorio (1), Andrea Giovanni Nuzzolese (1,2), Silvio Peroni (1) and Fabio Vitali (1)
(1) Department of Computer Science and Engineering, University of Bologna, Italy
(2) STLab, Institute of Cognitive Science and Technology, National Research Council, Rome, Italy
2. Outline
• Motivations
• CiTO
• Experiment
• Evaluation
• Lessons learnt and conclusions
3. Motivations and goals
• Bibliographic citations can be seen as tools for linking, disseminating, exploring and evaluating research
• The task of characterising the functional role of citations in scientific literature is very difficult, for software agents and humans alike
• We investigate how an existing reference model for classifying citations, i.e., CiTO, is interpreted and used by human annotators
• We want to study humans' behaviour in order to simulate it within CiTalO, a tool that automatically classifies citations with CiTO
4. CiTO
• An OWL ontology for describing the factual as well as rhetorical functions of citations in scholarly articles
• Defines:
  • a top-level property cites (and its inverse isCitedBy)
  • 41 sub-properties of cites that allow users to characterise precisely the semantics of a citation act
• Has been successfully used in large projects, such as CiteULike, data.open.ac.uk and the Open Citation Corpus
• Several tools have been developed to annotate citations with CiTO, e.g., Chrome and WordPress plug-ins
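As an illustration (not from the slides), a CiTO citation act is simply an RDF triple whose predicate is one of the sub-properties of cites. A minimal Python sketch, with hypothetical paper IRIs; only the CiTO namespace is real:

```python
# Minimal sketch of a CiTO citation act as an RDF-style triple.
# The example paper IRIs are hypothetical; the CiTO namespace is real.
CITO = "http://purl.org/spar/cito/"

def annotate(citing, cited, cito_property):
    """Return a (subject, predicate, object) triple typing a citation act."""
    return (citing, CITO + cito_property, cited)

triple = annotate("http://example.org/paper-a",
                  "http://example.org/paper-b",
                  "extends")
print(triple)
# ('http://example.org/paper-a', 'http://purl.org/spar/cito/extends', 'http://example.org/paper-b')
```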
6. Users' adoption of CiTO
• The richness of properties in CiTO (CiTO-Ps) is:
  • a key feature of CiTO: this aspect has contributed to the adoption of the ontology by the Semantic Publishing community
  • a hindrance: most tools actually employ a subset of the CiTO properties, e.g., 6 CiTO-Ps enabled for user annotation by Pensoft Publishers and 9 in the Chrome plug-in
12. CiTO annotations and mental models
[Diagram] The author writes: "It extends the research outlined in earlier work [3]". A user annotates the citation through two steps: interpretation of the author's text and understanding of CiTO. The resulting annotation may be cito:extends, while other users produce cito:citesForInformation, cito:givesSupportTo, and so on. Author, annotators and CiTO each rely on a different mental model: the mental models of different annotators hardly ever converge to a single shared opinion.
14. What we did
We performed an experiment to investigate how humans use CiTO to annotate citations with a type:
• 20 (lucky) subjects
• 105 citations chosen from the seventh volume of the proceedings of the Balisage Conference Series
15. The experiment
• The experiment had one independent variable, i.e., the number of CiTO-Ps available to subjects for the annotation:
  • Condition T41: 10 subjects used all 41 CiTO-Ps
  • Condition T10: 10 subjects used a subset of 10 CiTO-Ps, i.e., citesAsDataSource, citesAsPotentialSolution, citesAsRecommendedReading, citesAsRelated, citesForInformation, credits, critiques, includesQuotationFrom, obtainsBackgroundFrom, usesMethodIn
• The T10 CiTO-Ps were chosen among those that had shown a moderate inter-rater agreement (Fleiss' κ > 0.33) in a preliminary experiment on the same data sample
16. Evaluation framework
[Screenshot of the annotation interface, showing the citation context, the CiTO-Ps, and explanations and examples of the usage of the CiTO-Ps]
Available online at http://www.cs.unibo.it/~nuzzoles/cito_1/?user=r
17. Target questions
1. Which properties have been used by subjects during the experiment?
2. Which were the most used properties?
3. What was the global inter-rater agreement of the subjects?
4. Did the number of available choices bias the global inter-rater agreement?
5. Which properties showed an acceptable positive agreement among subjects?
6. Could properties be organised according to their similarity in subjects' annotations?
7. What was the perceived usability of the CiTO-Ps?
8. Which were the features of CiTO-Ps that subjects perceived as most useful or problematic?
18. Results: "Which properties have been used by subjects during the experiment?"
• Condition T41:
  • subjects used 37 different CiTO-Ps out of 41 (avg: 21.7 CiTO-Ps per subject)
  • 4 properties were not selected by any subject, i.e., parodies, plagiarizes, repliesTo and ridicules
• Condition T10:
  • subjects used all the 10 CiTO-Ps
19. Results: "Which were the most used properties?"
[Chart of per-property usage frequencies; not recoverable from the transcript]
23. Data evaluation
"What was the global inter-rater agreement of the subjects?" / "Did the number of available choices bias the global inter-rater agreement?" / "Which properties showed an acceptable positive agreement among subjects?"
• Condition T41:
  • Global Fleiss' κ = 0.13
  • 5 CiTO-Ps with moderate local positive agreement (κ > 0.5), i.e., citesAsPotentialSolution (0.66), citesAsRecommendedReading (0.6), agreesWith (0.54), citesAsDataSource (0.52), usesMethodIn (0.54)
• Condition T10:
  • Global Fleiss' κ = 0.15
  • 4 CiTO-Ps with moderate local positive agreement, i.e., citesAsPotentialSolution (0.71), citesAsDataSource (0.63), citesAsRecommendedReading (0.52), includesQuotationFrom (0.69)
Take-aways: the global agreement is very low; the number of CiTO-Ps does not affect the agreement; the set of CiTO-Ps with moderate local positive agreement is little affected by the number of CiTO-Ps.
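For reference, Fleiss' κ (used on this slide) is computed from an items × categories count matrix, where each item is rated by the same number of raters. A minimal sketch with made-up toy counts, not the experiment's data:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a list of per-item category counts.

    counts[i][j] = number of raters who assigned item i to category j;
    every item must be rated by the same number of raters.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item observed agreement P_i.
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_items) / n_items
    # Chance agreement P_e from the marginal category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Two raters, two categories: perfect agreement gives kappa = 1.0,
# systematic disagreement gives kappa = -1.0.
print(fleiss_kappa([[2, 0], [0, 2]]))  # 1.0
print(fleiss_kappa([[1, 1], [1, 1]]))  # -1.0
```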
24. Clustering CiTO-Ps
"Could properties be organised according to their similarity in subjects' annotations?"
• We applied the Chinese Whispers clustering algorithm
• Input: 2 graphs built by combining all the pairs of different CiTO-Ps as annotated by subjects for each citation
  • Gr takes into account repetitions in annotations for each CiTO property, e.g., "extends", "extends" and "updates" on a citation generate (extends, updates) and (extends, updates)
  • Gn does not take repetitions into account, e.g., "extends", "extends" and "updates" generate (extends, updates)
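The Gr/Gn edge construction described above can be sketched as follows, reusing the slide's own example; the function name is ours:

```python
from itertools import combinations

def edge_pairs(annotations, with_repetitions=True):
    """Build edges between different CiTO-Ps co-annotated on one citation.

    Gr keeps every co-occurring pair (with_repetitions=True);
    Gn collapses repeated pairs into a single edge (with_repetitions=False).
    """
    pairs = [tuple(sorted(p))
             for p in combinations(annotations, 2)
             if p[0] != p[1]]                      # only pairs of different CiTO-Ps
    return pairs if with_repetitions else sorted(set(pairs))

anns = ["extends", "extends", "updates"]
print(edge_pairs(anns, with_repetitions=True))   # [('extends', 'updates'), ('extends', 'updates')]
print(edge_pairs(anns, with_repetitions=False))  # [('extends', 'updates')]
```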
26. Clustering CiTO-Ps: results
"Could properties be organised according to their similarity in subjects' annotations?"
[Cluster diagrams for Gr and Gn; one example cluster groups disputes, critiques, derides, refutes, confirms, credits and obtainsSupportFrom]
Some sort of relation (e.g., taxonomical, equivalence) exists among the CiTO-Ps of each cluster.
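The clustering step can be approximated by iterative label propagation over the co-annotation graph. A deterministic sketch in the spirit of Chinese Whispers (the actual algorithm visits nodes in random order and breaks ties randomly; here we use a fixed order and break ties by smallest label), on a hypothetical graph built from property names on this slide:

```python
def propagate_labels(edges, max_iters=20):
    """Deterministic label propagation in the spirit of Chinese Whispers."""
    nodes = sorted({n for e in edges for n in e})
    neigh = {n: [] for n in nodes}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    labels = {n: i for i, n in enumerate(nodes)}  # every node starts in its own cluster
    for _ in range(max_iters):
        changed = False
        for n in nodes:
            # Adopt the most frequent label among neighbours (ties: smallest label).
            counts = {}
            for m in neigh[n]:
                counts[labels[m]] = counts.get(labels[m], 0) + 1
            best = min(l for l in counts if counts[l] == max(counts.values()))
            if labels[n] != best:
                labels[n] = best
                changed = True
        if not changed:
            break
    return labels

# Two disjoint triangles end up in two clusters.
edges = [("disputes", "critiques"), ("critiques", "derides"), ("derides", "disputes"),
         ("confirms", "credits"), ("credits", "obtainsSupportFrom"),
         ("obtainsSupportFrom", "confirms")]
labels = propagate_labels(edges)
print(len(set(labels.values())))  # 2
```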
28. Measuring the usability of CiTO-Ps
"What was the perceived usability of the CiTO-Ps?"
• We computed the System Usability Scale (SUS)
[Bar chart comparing SUS mean, usability mean and learnability mean for T41 and T10, on a 0-100 scale]
Only the usability score approaches statistical significance.
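For reference, the standard SUS score is derived from ten 1-5 Likert responses: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range. A sketch with a made-up response vector, not the experiment's data:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses."""
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Best possible answers (5 on positive items, 1 on negative items) give 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```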
29. Grounded theory analysis
"Which were the features of CiTO-Ps that subjects perceived as most useful or problematic?"
• The subjects filled in a final free-text questionnaire aimed at capturing positive and negative aspects of the CiTO-Ps
• We used the text answers to perform a grounded theory analysis, a method used in social science to extract relevant concepts from unstructured text
31. Conclusions
• Lessons learnt and suggestions to improve CiTO:
  • Reduce the number of less-used properties
  • Identify the most-used neutral properties
  • Investigate motivations for low inter-rater agreement
  • Define explicit relations between CiTO properties
  • Add support for customised properties
  • Extend examples, labels and explanations
• Future work:
  • Improve CiTalO, a tool for automatically identifying the nature of citations, e.g., by investigating cognitive architectures in order to simulate humans' behaviour