This presentation is about the combination of data publishing and post-publication reviews, and it covers some recent work on nanopublications for data publishing and reputation mechanisms for Web-scale quality metrics of scientific contributions.
Semantic Publishing and Nanopublications Tobias Kuhn
Making available and archiving scientific results is for the most part still considered the task of classical publishing companies, despite the fact that classical forms of publishing centered around printed narrative articles no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this talk, I outline how we can design scientific data publishing as a Web-based bottom-up process, without top-down control of central authorities. I present a protocol and a server network to decentrally store and archive data in the form of nanopublications, a format based on Semantic Web techniques to represent scientific data with formal semantics. Such nanopublications can be made verifiable and immutable by applying cryptographic methods with identifiers called Trusty URIs. I show how this approach allows researchers to produce, publish, retrieve, address, verify, and recombine datasets and their individual nanopublications in a reliable and trustworthy manner, and I discuss how the current small network can grow to handle the large amounts of structured data that modern science is producing and consuming.
Semantic Publishing with Nanopublications Tobias Kuhn
Classical forms of publishing centered around printed narrative articles do not seem well-suited for publishing scientific datasets, which have become increasingly important for science. In this talk, I will introduce the concept of nanopublications, an approach to enable provenance-aware data publishing. I will present our work to make nanopublications more general by allowing for informal assertions and meta-nanopublications. By relying on Semantic Web technologies, they allow us to organize and structure scientific knowledge and discourse, and thereby to improve the efficiency of scientific processes. Using cryptographic hash values in their identifiers, we can make nanopublications trustworthy and reliable even in an open and decentralized environment. I will present our proposal of Trusty URIs and a preliminary implementation of a decentralized server network to publish, archive, and retrieve nanopublications in a reliable and efficient manner.
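The hash-in-identifier idea behind Trusty URIs can be sketched in a few lines. This is an illustrative simplification, not the official Trusty URI algorithm (which first transforms the RDF content into a normalized serialization before hashing, and defines specific module codes); the URI and content below are made-up examples.

```python
import base64
import hashlib

def make_trusty_uri(base_uri: str, content: bytes) -> str:
    """Append a hash-based suffix to a URI so the identifier itself
    can be used to verify the content it points to. (Sketch only:
    the real spec normalizes RDF before hashing.)"""
    digest = hashlib.sha256(content).digest()
    # URL-safe base64 without padding keeps the suffix URI-friendly
    suffix = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    return f"{base_uri}.RA{suffix}"

def verify(trusty_uri: str, content: bytes) -> bool:
    """Recompute the hash from the content and compare identifiers."""
    base, _, _ = trusty_uri.rpartition(".RA")
    return make_trusty_uri(base, content) == trusty_uri

uri = make_trusty_uri("http://example.org/np1", b"nanopub content")
```

Because the identifier commits to the content, any server in the network can hand out a nanopublication and the client can check it without trusting that server: `verify(uri, b"nanopub content")` succeeds, while any tampered copy fails verification.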
Network visualization: Fine-tuning layout techniques for different types of n... Nees Jan van Eck
An important issue in network visualization is the problem of obtaining a good layout for a network. For a given network, which may be either weighted or unweighted, the problem is to position the nodes in the network in a two-dimensional space in such a way that an attractive layout is obtained. Many layout techniques have been proposed [1]. In the visualization of bibliometric networks, multidimensional scaling and the layout technique of Kamada and Kawai [2] have for instance been used a lot. More recently, the VOS (visualization of similarities) layout technique [3], implemented in our VOSviewer software (www.vosviewer.com) [4], is often used for bibliometric network visualization.
There is no layout technique that is generally considered to give optimal results. One reason for this is that comparisons between layouts produced by different techniques involve a lot of subjectiveness. Someone may consider one layout to be more attractive than another, but someone else may have an opposite opinion on this. In addition, the attractiveness of a layout may depend on the type of visualization that is needed. For instance, some layouts may be more attractive for interactive visualizations (e.g., in a software tool with zooming functionality), while other layouts may be more attractive for static visualizations. Furthermore, different types of networks may benefit from different layout techniques.
In recent studies [5, 6], the idea of parameterized layout techniques has been introduced. Parameterized layout techniques produce different types of layouts depending on the values chosen for their parameters. In this research, we present a comprehensive study of a parameterized version of our VOS layout technique. Two parameters are included. Like in [5], these are referred to as attraction and repulsion parameters. We compare the layouts obtained for different parameter values. Comparisons are made both subjectively using the VOSviewer software (i.e., which layout do we find most appealing?) and more objectively using so-called meta-criteria [6, 7]. Sensitivity to local optima is taken into account as well. Comparisons are made for all important types of bibliometric networks, in particular co-authorship, citation, co-citation, bibliographic coupling, and co-occurrence networks. Both smaller and larger networks are considered.
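The role of the two parameters can be illustrated with a toy force-directed layout. This is not the actual VOS implementation (which uses a different objective function and optimizer); it only shows how an attraction exponent and a repulsion exponent shape the resulting family of layouts. All names, defaults, and the example graph are illustrative; the exponent pairs corresponding to VOS, LinLog, or Fruchterman-Reingold are discussed in the cited studies.

```python
import math
import random

def layout(edges, n, attraction=2.0, repulsion=0.0, iters=300, step=0.05):
    """Toy parameterized force-directed layout. Linked nodes attract
    with strength distance**attraction; all pairs repel with strength
    distance**repulsion. Changing the two exponents produces the
    different members of the layout family."""
    random.seed(42)
    pos = [(random.random(), random.random()) for _ in range(n)]
    linked = {frozenset(e) for e in edges}
    for _ in range(iters):
        forces = []
        for i in range(n):
            fx, fy = 0.0, 0.0
            for j in range(n):
                if i == j:
                    continue
                dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy) or 1e-9
                f = -(d ** repulsion)          # repulsion: all pairs
                if frozenset((i, j)) in linked:
                    f += d ** attraction       # attraction: linked pairs
                fx, fy = fx + f * dx / d, fy + f * dy / d
            forces.append((fx, fy))
        pos = [(x + step * fx, y + step * fy)
               for (x, y), (fx, fy) in zip(pos, forces)]
    return pos

# nodes 0-1-2 form a chain; node 3 is unconnected and drifts away
pos = layout([(0, 1), (1, 2)], 4)
```

Running the same graph with different exponent pairs and comparing the results, by eye or via meta-criteria, is exactly the kind of comparison the study performs at scale.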
Slides accompanying the OpenAIRE Research Graph consultation webinar as held on January 30th 2020.
Presenter: Andrea Mannocci
Recording: https://youtu.be/PCwXMDQb3r8
To ensure that publications are assigned to clusters in a meaningful way, we introduce the notion of stable clusters. Essentially, a cluster is stable if it is insensitive to small changes in the underlying data. Bootstrapping is used to make small changes in the data.
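The bootstrap idea can be sketched as follows: resample the data with replacement, recluster each resample, and check how consistently a pair of items lands in the same cluster. The clustering routine and data below are simple placeholders, not the actual method used in the work.

```python
import random

def kmeans1d(xs, k=2, iters=20):
    """Tiny one-dimensional k-means, standing in for whatever
    clustering algorithm is actually used."""
    centers = sorted(random.sample(xs, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    # return an assignment function for the fitted centers
    return lambda x: min(range(k), key=lambda c: abs(x - centers[c]))

def pair_stability(xs, i, j, n_boot=50):
    """Fraction of bootstrap resamples in which items i and j are
    assigned to the same cluster: a pairwise notion of stability."""
    same = 0
    for _ in range(n_boot):
        resample = [random.choice(xs) for _ in xs]  # small change in the data
        assign = kmeans1d(resample)
        same += assign(xs[i]) == assign(xs[j])
    return same / n_boot

random.seed(0)
data = [0.1, 0.2, 0.3, 9.1, 9.2, 9.3]  # two well-separated groups
```

Here `pair_stability(data, 0, 1)` comes out much higher than `pair_stability(data, 0, 3)`: the first pair sits in a cluster that is insensitive to resampling, the second does not.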
CitNetExplorer is a software tool for visualizing and analyzing citation networks of scientific publications. The tool allows citation networks to be imported directly from the Web of Science database. Citation networks can be explored interactively, for instance by drilling down into a network and by identifying clusters of closely related publications.
This project analyses the relations on Twitter between politicians and journalists in the triangle of political communication in a hybrid media system (Chadwick, 2013).
Linked Open Graph: browsing multiple SPARQL entry points to build your own LO... Paolo Nesi
A number of accessible RDF stores are populating the Linked Open Data world, and navigating the reticular relationships in these data is becoming more relevant every day. Several knowledge bases provide links to common vocabularies, while many more links remain to be discovered, increasing the reasoning capabilities of our knowledge base applications. In this paper, the Linked Open Graph, LOG, is presented: a web tool for collaborative browsing and navigation over multiple SPARQL entry points. The paper presents an overview of the major problems to be addressed, a comparison with state-of-the-art tools, and some details of the LOG graph computation that copes with the high complexity of large Linked Open Data graphs. The LOG.disit.org tool is also presented by means of a set of examples involving multiple RDF stores, highlighting the newly provided features and advantages, using DBpedia, Getty, Europeana, GeoNames, etc. The LOG tool is free to use, and it has been adopted, developed, and/or improved in multiple projects, such as ECLAP for social media cultural heritage, Sii-Mobility for smart cities, ICARO for cloud ontology analysis, and OSIM for competence/knowledge mining and analysis. Keywords: LOD, LOD browsing, knowledge base browsing, SPARQL entry points.
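A core task of such a tool is merging the triples returned by several SPARQL endpoints into one browsable graph. The sketch below shows only that merging step on precomputed results; a real client would obtain each triple list by sending a CONSTRUCT or DESCRIBE query to the endpoint. The endpoint URLs and triples are made-up examples.

```python
def merge_endpoint_results(results_by_endpoint):
    """Merge triples from several SPARQL endpoints into one node/edge
    graph. Input: {endpoint_url: [(subject, predicate, object), ...]}."""
    nodes, edges = {}, set()
    for endpoint, triples in results_by_endpoint.items():
        for s, p, o in triples:
            # remember every endpoint that knows a node, so the UI can
            # offer "expand this node at endpoint X"
            nodes.setdefault(s, set()).add(endpoint)
            nodes.setdefault(o, set()).add(endpoint)
            edges.add((s, p, o))
    return nodes, edges

nodes, edges = merge_endpoint_results({
    "https://dbpedia.org/sparql": [
        ("dbr:Florence", "dbo:country", "dbr:Italy")],
    "https://example.org/geo/sparql": [
        ("dbr:Florence", "ex:population", "382258")],
})
```

Because both endpoints mention `dbr:Florence`, the merged graph lists it once, tagged with both endpoints, which is what lets a user hop from one RDF store to another while browsing.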
2013 Report on Angel Investing Activity in Canada: Accelerating the Asset Class Ioana Stoica
The "2013 Report on Angel Investing Activity in Canada: Accelerating the Asset Class", the fourth of its kind, was released in partnership with the Government of Canada, KPMG Enterprise and BDC Venture Capital. The report captures 199 investments in 2013 totaling $89.0 million made through 29 Angel groups. Over 2,100 investors were represented, 40% in Western Canada, 5% in Eastern Canada, and 55% in Central Canada.
A Multilingual Semantic Wiki Based on Controlled Natural Language Tobias Kuhn
This presentation introduces AceWiki-GF, a semantic wiki based on controlled natural language that makes its knowledge base viewable and editable in different languages by applying high-quality rule-based machine translation.
Controlled Natural Language and Opportunities for Standardization Tobias Kuhn
(CC Attribution License does not apply to included third-party material; see the paper for the references: http://attempto.ifi.uzh.ch/site/pubs/papers/kuhn2013cl.pdf )
How to Evaluate Controlled Natural Languages Tobias Kuhn
(CC Attribution License does not apply to third-party material on slides 5 and 6; see paper for details: http://attempto.ifi.uzh.ch/site/pubs/papers/cnl2009main_kuhn.pdf )
Underspecified Scientific Claims in Nanopublications Tobias Kuhn
(CC Attribution License does not apply to included third-party material on slides 3 and 4; see the paper for the references: http://www.tkuhn.ch/pub/kuhn2012wole.pdf )
nanopub-java: A Java Library for Nanopublications Tobias Kuhn
The concept of nanopublications was first proposed about six years ago, but it lacked openly available implementations. The library presented here is the first one that has become an official implementation of the nanopublication community. Its core features are stable, but it also contains unofficial and experimental extensions: for publishing to a decentralized server network, for defining sets of nanopublications with indexes, for informal assertions, and for digitally signing nanopublications. Most of the features of the library can also be accessed via an online validator interface.
Finding and Accessing Diagrams in Biomedical Publications Tobias Kuhn
(CC Attribution License does not apply to included third-party material on slides 3, 6, 12, and 19; see the paper for the references: http://www.tkuhn.ch/pub/kuhn2012amia.pdf )
A Decentralized Network for Publishing Linked Data — Nanopublications, Trusty... Tobias Kuhn
Making available and archiving scientific results is for the most part still considered the task of classical publishing companies, despite the fact that classical forms of publishing centered around printed narrative articles no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this talk, I outline how we can design scientific data publishing as a Web-based bottom-up process, without top-down control of central authorities. I present a protocol and a server network to decentrally store and archive data in the form of nanopublications, a format based on Semantic Web techniques to represent scientific data with formal semantics. Such nanopublications can be made verifiable and immutable by applying cryptographic methods with identifiers called Trusty URIs. I show how this approach allows researchers to produce, publish, retrieve, address, verify, and recombine datasets and their individual nanopublications in a reliable and trustworthy manner, and I discuss how the current small network can grow to handle the large amounts of structured data that modern science is producing and consuming.
(CC license does not apply to third-party content)
Broadening the Scope of NanopublicationsTobias Kuhn
(CC Attribution License does not apply to included third-party material on slide 3; see the paper for the references: http://www.tkuhn.ch/pub/kuhn2013eswc.pdf )
In this lightning talk, I introduce an approach to publish RDF data in a provenance-aware and reliable manner. This approach is based on the concept of nanopublications, which we can give unique and verifiable identifiers using cryptographic hash values. Based on that, I present ongoing work to establish a decentralized server network for publishing, archiving, and retrieving Linked Data in a reliable and trustworthy way.
Publishing without Publishers: a Decentralized Approach to Dissemination, Ret... Tobias Kuhn
Making available and archiving scientific results is for the most part still considered the task of classical publishing companies, despite the fact that classical forms of publishing centered around printed narrative articles no longer seem well-suited in the digital age. In particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science.
Here we propose to design scientific data publishing as a Web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data.
We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used for the Semantic Web in general. Evaluation of the current small network shows that this system is efficient and reliable.
Enabling Data-Intensive Science Through Data Infrastructures LIBER Europe
These slides are from a talk given at LIBER's 42nd annual conference by Carlos Morais Pires of the European Commission.
In light of the current data deluge, and plans by the European Commission to harness this deluge through the implementation of e-infrastructures for data driven science under Horizon 2020, Pires issued a call to action to libraries to engage in the data infrastructure and bring their own unique, and now much needed competencies, to bear in bringing meaning to, and spreading the word about, data-driven science.
Published on Jan 29, 2016 by PMR
Keynote talk to LEARN (LERU/H2020 project) on research data management. Emphasizes that the problems are cultural, not technical. Promotes modern approaches such as Git and continuous integration, announces DAT, asserts that the Right to Read is the Right to Mine, and calls for widespread development of content mining (TDM).
The Culture of Research Data, by Peter Murray-Rust LEARN Project
1st LEARN Workshop. Embedding Research Data as part of the research cycle. 29 Jan 2016. Presentation by Peter Murray-Rust, ContentMine.org and University of Cambridge
Linked Data Publishing with Nanopublications Tobias Kuhn
When we think about scientific publishing, we mainly think about papers published in journals or proceedings. This publishing model, however, doesn't seem to work very well for datasets, which have become increasingly important in many areas of science. Instead of treating datasets like papers, couldn't we put the data first by allowing scientists to directly publish data entries that represent their results? Nanopublications are an approach in this direction that builds upon the recent maturation of Linked Data technologies and proposes an entirely new paradigm for the future of scholarly communication.
Drowning in information – the need of macroscopes for research funding Andrea Scharnhorst
Andrea Scharnhorst (2015) Drowning in information – the need of macroscopes for research funding. Presentation at the international conference: PLANNING, PREDICTION, SCENARIOS - Using Simulations and Maps - 2015 Annual EA Conference - 11–12 May 2015 Bonn
Slides of the presentation given as part of the Europeana Research panel "Cultural Heritage Data for Research: A Europeana Research Panel" at DH Benelux 2017 in Utrecht.
Many scholars have pointed out that the classical way of publishing scientific articles is ill-suited to deal with the rapid growth in both the volume and the complexity of scientific contributions. To overcome these problems, next-generation scientific publishing has to respond to the increasing importance of datasets and software, and needs to provide methods to automatically organize and aggregate reported scientific findings. Perhaps the most important shortcoming of the current publication system is that scientific papers do not come with formal semantics that could be processed, aggregated, and interpreted in an automated fashion.
Semantic publishing is a general approach to tackle this problem using the concepts and tools of the Semantic Web and related fields.
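To illustrate what "formal semantics" means in practice: a nanopublication packages a claim as RDF triples in named graphs for the assertion itself, its provenance, and publication information. The sketch below writes such a structure as N-Quads lines; it omits the head graph that wires the parts together, assumes URI-valued objects (literals would need quoting), and all URIs are hypothetical examples.

```python
def nanopub_nquads(np_uri, assertion, provenance, pubinfo):
    """Serialize the three content graphs of a nanopublication as
    N-Quads lines. Each argument is a list of (subject, predicate,
    object) triples with URI-valued objects."""
    def quads(graph_uri, triples):
        return [f"<{s}> <{p}> <{o}> <{graph_uri}> ." for s, p, o in triples]
    lines = []
    lines += quads(np_uri + "#assertion", assertion)    # the claim
    lines += quads(np_uri + "#provenance", provenance)  # how it was obtained
    lines += quads(np_uri + "#pubinfo", pubinfo)        # who published it, when
    return "\n".join(lines)

out = nanopub_nquads(
    "http://example.org/np1",
    [("http://example.org/malaria", "http://example.org/isTransmittedBy",
      "http://example.org/mosquito")],
    [("http://example.org/np1#assertion",
      "http://www.w3.org/ns/prov#wasDerivedFrom", "http://example.org/study42")],
    [("http://example.org/np1", "http://purl.org/dc/terms/creator",
      "https://orcid.org/0000-0000-0000-0000")],
)
```

Because every part is a machine-readable triple in a named graph, the claim, its evidence trail, and its publication metadata can all be queried and aggregated automatically, which narrative PDFs do not allow.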
The Controlled Natural Language of Randall Munroe’s Thing Explainer Tobias Kuhn
It is rare that texts or entire books written in a Controlled Natural Language (CNL) become very popular, but exactly this has happened with a book published last year. Randall Munroe's Thing Explainer uses only the 1,000 most frequently used words of the English language, together with drawn pictures, to explain complicated things such as nuclear reactors, jet engines, the solar system, and dishwashers. This restricted language is a very interesting new case for the CNL community. I describe here its place in the context of existing approaches to Controlled Natural Languages, and I provide a first analysis from a scientific perspective, covering the word production rules and word distributions.
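The core constraint of such a restricted language is easy to operationalize: check every word of a text against the allowed vocabulary. The word list below is a tiny stand-in for the book's 1,000 words, and a real checker would also have to handle inflected forms and compounds.

```python
def check_cnl(text, allowed_words):
    """Return the words in text that fall outside the allowed
    vocabulary (empty list means the text conforms)."""
    words = [w.strip(".,!?;:()\"'").lower() for w in text.split()]
    return [w for w in words if w and w not in allowed_words]

# toy vocabulary standing in for the 1,000-word list
allowed = {"the", "sun", "is", "a", "big", "ball", "of", "burning", "air"}
```

For example, `check_cnl("The Sun is a big ball of burning air.", allowed)` returns `[]`, while `check_cnl("The Sun is a fusion reactor.", allowed)` flags `["fusion", "reactor"]` as outside the controlled vocabulary.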
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects of interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior composed of discrete entities. I will also discuss how deep behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
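The simplest of these exploration strategies can be sketched directly: enumerate the valid configurations of a (toy) feature model and draw a uniform sample from them. Real feature models are far too large for brute-force enumeration, which is precisely what motivates the sampling and learning techniques mentioned above; the features and constraint here are invented for illustration.

```python
import itertools
import random

def valid_configs(features, constraints):
    """Enumerate all valid configurations of boolean features under
    the given constraints (each constraint is a predicate on a
    configuration dict). Brute force: feasible only for tiny models."""
    for values in itertools.product([False, True], repeat=len(features)):
        config = dict(zip(features, values))
        if all(check(config) for check in constraints):
            yield config

# toy model: three features, one cross-tree constraint
features = ["gui", "ssl", "debug"]
constraints = [lambda c: c["ssl"] or not c["gui"]]  # gui requires ssl

configs = list(valid_configs(features, constraints))  # 6 of the 8 combinations
random.seed(1)
sample = random.sample(configs, 3)  # uniform sample over the valid space
```

Each sampled configuration would then be built and measured, and the resulting (configuration, property) pairs feed the transfer-learning and feature-selection methods used to tame the variability space.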
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
This presentation gives a brief overview of the structural and functional attributes of nucleotides and the structure and function of genetic material, along with the impact of UV rays and pH upon them.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to get this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What are greenhouse gasses how they affect the earth and its environment what is the future of the environment and earth how the weather and the climate effects.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Richard's aventures in two entangled wonderlandsRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Toxic effects of heavy metals : Lead and Arsenicsanjana502982
Heavy metals are naturally occuring metallic chemical elements that have relatively high density, and are toxic at even low concentrations. All toxic metals are termed as heavy metals irrespective of their atomic mass and density, eg. arsenic, lead, mercury, cadmium, thallium, chromium, etc.
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people want natural and preventative health solutions more and more, this industry is increasing quickly. Further driving market expansion are product formulation innovations and the use of cutting-edge technology for customized nutrition. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant chances for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
1. Data Publishing and
Post-Publication Reviews
Tobias Kuhn
http://www.tkuhn.ch
@txkuhn
ETH Zurich
PEERE Meeting: Peer Review: Past, Present and Future
ETH Zurich
22 April 2015
2. Peer Review in the Digital Age
Traditionally, peer review has two purposes:
(1) Assessment of the validity, accuracy, relevance, and quality of
presentation of the given scientific work ...
but this assessment is not made public (except for the fact that
some papers are accepted)
(2) Distribution of a scarce resource, namely pages in printed
journals, ...
but this scarce resource no longer exists (except for presentation
slots at conferences)
Is the current peer review system broken?
It’s at least terribly old-fashioned and inefficient.
Tobias Kuhn, ETH Zurich Data Publishing and Post-Publication Reviews 2 / 19
3. Publishing has Gone from Print to Online
4. The Increasing Importance of Digital Data Sets
(London Underground staff sorting 4M used tickets to analyse line use in 1939)
http://www.telegraph.co.uk/travel/picturegalleries/9791007/The-history-of-the-Tube-in-pictures-150-years-of-London-Underground.html?frame=2447159
5. Approaches for the
Future of Scientific Publishing
Short term:
• Open post-publication reviews
• Data publishing
• ...
Longer term:
• Publish results and reviews as Linked Data
• Global reputation system that includes artificial agents (“bots”)
• ...
6. Open Post-Publication Reviews
• Publish first — collect reviews afterwards
• Make reviews public: Open Evaluation (similar to Open Access)
Such open post-publication reviews have existed for a long time in the case
of books and book reviews:
See e.g.: Kriegeskorte. Open evaluation: a vision for entirely transparent post-publication peer review and rating for
science. Frontiers in Computational Neuroscience 6, 2012.
7. Benefits of Open Post-Publication Reviews
• Fast publication
• Transparent reviewing
• Publicly accessible detailed assessments of scientific works (as
compared to just knowing that a paper has been accepted for a
given journal)
• Incentives for reviewers
• Possibility for a multitude of evaluation metrics to be used in
parallel
8. Data Publishing
Data sets are becoming increasingly important for most scientific
disciplines, but so far they are not well integrated into the publishing
and reviewing process.
Current developments:
• Supplementary material for papers
• Data journals
• Data repositories: Figshare, Dryad, ...
9. Publishing and Reviewing Scientific Data
Scientific data that should be published and reviewed:
• Input/output data of scientific processes (raw/aggregated)
• Scientific interpretations/conclusions (e.g. causal relation
between gene and disease)
• Meta-data (e.g. involved researchers, citations, used methods)
• All of the above should ideally be formatted as Linked Data
• Source code of software
10. Nanopublications:
Provenance-Aware Semantic Publishing
Nanopublications are small pieces of scientific data with their
provenance information, represented in a machine-interpretable
language (RDF).
(Diagram: a nanopublication consists of three parts — an assertion, its provenance, and publication info.)
http://nanopub.org / @nanopub_org
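Using the nanopub schema, a minimal nanopublication in TriG could look like this (a sketch: the gene–disease assertion, the example.org names, and the timestamp are all illustrative, not from an actual dataset):

```trig
@prefix : <http://example.org/np1#> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

:Head {
  : a np:Nanopublication ;
    np:hasAssertion :assertion ;
    np:hasProvenance :provenance ;
    np:hasPublicationInfo :pubinfo .
}
:assertion {
  # The actual scientific statement, as a single small RDF graph:
  :gene-x :is-associated-with :disease-y .
}
:provenance {
  # Where the assertion comes from:
  :assertion prov:wasDerivedFrom :experiment-42 .
}
:pubinfo {
  # Metadata about the nanopublication itself:
  : prov:generatedAtTime "2015-04-22T00:00:00Z"^^xsd:dateTime .
}
```

The three named graphs correspond directly to the three parts shown on the slide, with a head graph tying them together.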
11. Vision: Changing Scholarly Communication
Now
Narrative articles at the center
Future
Nanopublications at the center
Images from Mons et al. The value of data. Nature genetics, 43(4):281–283, 2011
13. How to Publish Data?
Published data should be:
• Verifiable (Is this really the data I am looking for?)
• Immutable (Can I be sure that it hasn’t been modified?)
• Permanent (Will it be available in 1, 5, 20 years from now?)
• Trustworthy (Can I trust the source?)
• Reliable (Can it be efficiently retrieved whenever needed?)
• Granular (Can I refer to individual data entries?)
These points are important (much more so than for papers) because
data can and will be consumed and produced automatically by
algorithms (“bots”).
14. Trusty URIs: Cryptographic Hash Values for
Verifiable and Immutable Web Identifiers
Example:
http://example.org/r1.RA5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70
(Diagram: a graph of resources linked via Trusty URIs; hash-based identifiers that reference each other extend the range of verifiability across linked artifacts.)
Kuhn, Dumontier. Making Digital Artifacts on the Web Verifiable and Reliable. IEEE Transactions on Knowledge and
Data Engineering. To appear. / Kuhn, Dumontier. Trusty URIs: Verifiable, Immutable, and Permanent Digital Artifacts
for Linked Data. ESWC 2014.
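The idea behind the artifact code can be sketched in a few lines of Python (a simplified illustration: real Trusty URIs of module `RA` hash a normalized form of the RDF content rather than raw bytes, so this is the principle, not the specification):

```python
import base64
import hashlib


def make_trusty_uri(base_uri: str, content: bytes) -> str:
    """Append a hash-based artifact code to a URI (simplified sketch)."""
    digest = hashlib.sha256(content).digest()
    # Base64url encoding keeps the code safe to embed in a URI.
    code = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    return f"{base_uri}.RA{code}"


def verify(trusty_uri: str, content: bytes) -> bool:
    """Anyone holding the content can re-derive the code and compare."""
    base_uri = trusty_uri.rsplit(".RA", 1)[0]
    return trusty_uri == make_trusty_uri(base_uri, content)
```

Because the identifier is derived from the content, any modification of the artifact breaks verification — this is what makes Trusty URIs verifiable and immutable.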
15. Nanopublication Indexes
Sets of nanopublications can be described as indexes that are
nanopublications themselves. This allows us to define arbitrary sets,
while keeping the individual addressability of the data entries:
(Diagram: indexes (a)–(f) connected by "has element", "has sub-index", and "appends to" relations.)
Kuhn et al. Publishing without Publishers: a Decentralized Approach to Dissemination, Retrieval, and Archiving of
Data. arXiv:1411.2749.
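The index structure can be sketched as follows (an in-memory illustration with invented names; real indexes are themselves nanopublications identified by Trusty URIs):

```python
from dataclasses import dataclass, field


@dataclass
class Index:
    """A nanopublication index (in-memory sketch).

    An index lists nanopublication URIs directly ("has element"),
    includes whole sub-indexes ("has sub-index"), and may extend an
    earlier index ("appends to"), allowing a set to grow over time.
    """
    elements: set = field(default_factory=set)
    sub_indexes: list = field(default_factory=list)
    appends_to: "Index | None" = None

    def resolve(self) -> set:
        """All nanopublications reachable from this index."""
        result = set(self.elements)
        for sub in self.sub_indexes:
            result |= sub.resolve()
        if self.appends_to is not None:
            result |= self.appends_to.resolve()
        return result
```

For example, an update that appends three new entries to an existing index yields a new index whose resolved set is the union of both, while every individual entry remains addressable on its own.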
16. Nanopublication Server Network
(Diagram: nanopublications with Trusty URIs flow through the server network via publication, retrieval, and propagation/archiving; network status at http://npmonitor.inn.ac)
Kuhn et al. Publishing without Publishers: a Decentralized Approach to Dissemination, Retrieval, and Archiving of
Data. arXiv:1411.2749.
17. How to Review Data?
• Large data sets only allow for samples of data entries to be
manually reviewed
• Data is often produced and consumed by algorithms: How can
these data be linked to the algorithms? How should the data and
the algorithms be reviewed?
• Certain kinds of technical reviewing can be automated!
• Can all this be achieved in a fashion that is decentralized,
open, and real-time?
18. Bots and Reputation Mechanisms
Robust automatic calculation of reputation metrics in a decentralized
and open system:
(Diagram: a graph of contributions and assessments, with edges "gives positive assessment for" and "is contributed by"; node labels show eigenvector centrality scores (0–100), computed once on the assessment edges alone and once with bidirectional contribution edges.)
Kuhn. Science Bots: A Model for the Future of Scientific Computation? SAVE-SD 2015.
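The reputation scores on the slide are eigenvector centralities of the assessment graph. A minimal power-iteration sketch (the graph, node names, and normalization to 0–100 are illustrative; the paper's exact metric may differ):

```python
def eigenvector_centrality(edges, nodes, iterations=100):
    """Power-iteration sketch of eigenvector centrality.

    An edge (a, b) means "a gives a positive assessment for b",
    so b inherits weight from a's own score. Scores are rescaled
    to 0-100 as on the slide.
    """
    score = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        new = {n: 0.0 for n in nodes}
        for a, b in edges:
            new[b] += score[a]
        # Normalize by the maximum to keep values bounded.
        norm = max(new.values()) or 1.0
        score = {n: v / norm for n, v in new.items()}
    return {n: round(100 * v) for n, v in score.items()}
```

Such a metric is robust in an open system because a node's score depends on the scores of its assessors, not merely on the raw number of assessments it receives.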
19. Thank you for your attention!
Questions?