Presentation given at the DASISH Workshop on PID services, December 8-9, 2014, GESIS Leibniz Institute of Social Sciences, Cologne.
Presentation of DataCite, its history, structure, and services, followed by a short introduction of the DOI registration service for social and economic data (da|ra).
DataCite is a global consortium that provides standards and services to make research data findable, accessible, and citable. It assigns digital object identifiers (DOIs) to datasets and other research works to improve how they are located and referenced. DataCite was founded in 2009 and is supported by many research institutions and libraries around the world. It develops best practices for data publishing and sharing and operates metadata and search services to help researchers discover and access data.
Possibilities of Digital Analysis of Charter corpora – Georg Vogeler
Vogeler shows how boundaries of both nation and media are collapsing into the World Wide Web, and demonstrates the collaborative possibilities of combined diplomatic corpora.
Presentation at the International Medieval Congress in Leeds, July 2009.
This document summarizes a presentation about Freebase, a large structured database, and its use of human computation through a system called RABJ (Redundant Array of Brains in a Jar). RABJ allows Freebase to leverage human judgments at scale to reconcile data from different sources, handling over 3.1 million judgments from 500+ queues across 20+ applications. It provides an abstraction layer that matches judges to available judgment tasks in a dynamic and low-latency way to improve data quality in Freebase.
Modelling of Multi-Scale Phenomena in Nano-Suspensions – EUDAT
This document summarizes research using multi-scale modeling and simulations to understand phenomena in nano-suspensions. It discusses (1) motivations for studying nano-suspensions for applications like biomedical, mechanical, chemical, and energy uses; (2) the multi-scale modeling approach using molecular dynamics and Brownian dynamics; (3) use of HPC facilities through PRACE and data management with EUDAT; (4) results on potential of mean force and hydration layers; and (5) perspectives on self-assembly of patchy nanoparticles and importance of managing data with EUDAT.
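As an illustration of the Brownian-dynamics side of such a multi-scale approach, here is a minimal overdamped Langevin (Euler–Maruyama) integrator for a single particle. This is only a sketch of the general technique, not the study's actual model; all parameter values are placeholders.

```python
import math
import random

def brownian_trajectory(n_steps, dt, gamma, kT, f_ext=0.0, seed=0):
    """Overdamped Langevin dynamics in 1D:
    gamma * dx/dt = F_ext + random thermal force.
    Discrete update: x += (F/gamma)*dt + sqrt(2*kT*dt/gamma) * N(0, 1).
    """
    rng = random.Random(seed)
    x = 0.0
    traj = [x]
    noise_amplitude = math.sqrt(2.0 * kT * dt / gamma)  # fluctuation-dissipation relation
    for _ in range(n_steps):
        x += (f_ext / gamma) * dt + noise_amplitude * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Placeholder (reduced) units: friction gamma = 1, thermal energy kT = 1.
traj = brownian_trajectory(n_steps=1000, dt=1e-3, gamma=1.0, kT=1.0)
print(len(traj))  # 1001 positions, including the start
```

In a real multi-scale workflow, the effective forces fed into such a step (e.g. the potential of mean force mentioned above) would come from the finer-grained molecular dynamics simulations.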
The document provides feedback on a pitch for a new product or service. The feedback notes several strengths of the pitch, including a clear understanding of the target audience and well-structured presentation. Suggestions for improvement include developing the business model and value proposition further to better convey how the idea would generate revenue. Samples of materials produced were requested to gain a clearer sense of what was being proposed.
The document discusses GBIF's (Global Biodiversity Information Facility) goals of facilitating open access to biodiversity data worldwide to support scientific research. GBIF shares over 200 million biodiversity records through data publishers and resources. The document proposes a Data Publishing Framework to improve data mobilization and cultural acceptance of open data sharing. It describes challenges to the framework and its potential impacts, such as increased data usage and quality through incentives like data papers and a Data Usage Index.
1) The document provides guidance on assigning Digital Object Identifiers (DOIs) through DataCite. It discusses decisions that must be made, such as what objects to assign DOIs to and DOI construction.
2) Maintaining DOIs requires ensuring a correct URL and metadata for the object. DOIs also commit an institution to long-term storage of the object for a minimum of 10 years.
3) The quality of the DOI system relies on objects being cite-worthy, having well described metadata, and the institution committing to long-term storage. Metadata must be provided in an XML file and displayed on a landing page.
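The decisions above (DOI construction, minimal metadata in XML) can be sketched programmatically. The following is a hypothetical illustration, not DataCite's own tooling: the prefix 10.5072 is a conventional test prefix, the suffix is invented, and the element names follow the general shape of the DataCite Metadata Schema.

```python
import re
import xml.etree.ElementTree as ET

def make_doi(prefix: str, suffix: str) -> str:
    """Construct a DOI name; the suffix scheme is an institutional decision."""
    if not re.fullmatch(r"10\.\d{4,9}", prefix):
        raise ValueError("prefix must look like 10.NNNN")
    return f"{prefix}/{suffix}"

def minimal_metadata(doi, creator, title, publisher, year):
    """Build a minimal XML metadata record of the kind registered with a DOI."""
    root = ET.Element("resource", xmlns="http://datacite.org/schema/kernel-4")
    ET.SubElement(root, "identifier", identifierType="DOI").text = doi
    creators = ET.SubElement(root, "creators")
    ET.SubElement(ET.SubElement(creators, "creator"), "creatorName").text = creator
    titles = ET.SubElement(root, "titles")
    ET.SubElement(titles, "title").text = title
    ET.SubElement(root, "publisher").text = publisher
    ET.SubElement(root, "publicationYear").text = str(year)
    return ET.tostring(root, encoding="unicode")

doi = make_doi("10.5072", "example-dataset-001")
xml = minimal_metadata(doi, "Doe, Jane", "Example Survey Data", "Example Archive", 2014)
print(doi)  # 10.5072/example-dataset-001
```

The XML produced this way would be registered alongside the DOI and its content displayed on the dataset's landing page.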
This document summarizes research on using crowdsourcing via social bookmarking systems like CiteULike to evaluate articles and journals. It finds that tags assigned by readers provide additional perspectives beyond traditional metrics like citations. The researchers collected tags from social bookmarking systems and compared them to author keywords and controlled vocabularies for over 700 articles. They found tags had only around 20% overlap with other metadata, indicating tags reflect different views than authors or intermediaries. Analysis of tags over time can also reveal shifts in thematic focus areas of journals. Crowdsourcing evaluation through tags thus provides a multidimensional approach to understanding reader perception.
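The roughly 20% overlap reported is essentially a set-similarity measure between tag sets and keyword sets. A small sketch of how such an overlap could be computed, using invented tags and keywords (the study's actual data and exact metric may differ):

```python
def overlap(tags, keywords):
    """Jaccard-style overlap between reader tags and author keywords
    (case-insensitive): |intersection| / |union|."""
    t = {s.lower() for s in tags}
    k = {s.lower() for s in keywords}
    if not (t | k):
        return 0.0
    return len(t & k) / len(t | k)

# Invented example: one shared term out of six distinct terms overall.
reader_tags = {"bibliometrics", "web2.0", "tagging", "evaluation"}
author_keywords = {"Bibliometrics", "social bookmarking", "citation analysis"}
print(round(overlap(reader_tags, author_keywords), 2))  # 0.17
```

A low value of this measure, aggregated over many articles, is what supports the conclusion that reader tags reflect perspectives distinct from those of authors or indexers.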
DataCite - services and support for opening up research data – Herbert Gruttemeier
This document provides an overview of DataCite, a global consortium that provides services and support for opening up research data. DataCite maintains the DataCite Metadata Store (MDS) where members can mint Digital Object Identifiers (DOIs) and register metadata for datasets and other research objects. It also operates services like the DataCite Metadata Search, an OAI provider for harvesting metadata, and statistics on DOI registration and resolution. DataCite aims to improve the scholarly infrastructure around research data by establishing standards, best practices, and working with data centers and repositories.
This document summarizes a NISO webinar on guidelines and resources for developing data access plans in response to the Office of Science and Technology Policy's (OSTP) 2013 memo. The memo directs large federal funding agencies to develop public access plans for research results. The webinar outlines the required elements of these plans and provides existing guidelines and resources that can help agencies meet digital data requirements, such as standards for data dissemination, description, and long-term preservation. Speakers from the Interuniversity Consortium for Political and Social Research discuss how agencies can leverage existing infrastructure and best practices to develop plans that maximize access to and reuse of federal research data.
DataCite is a global consortium that provides persistent identifiers (DOIs) for scientific data to make it easily discoverable and citable. It aims to put datasets on the same level as research articles. DataCite has over 1.7 million DOIs registered and many member organizations worldwide. It develops standards and infrastructure like its metadata schema and search portal to help data archives and researchers globally.
2013 DataCite Summer Meeting - Making Research better
DataCite. Co-sponsored by CODATA.
Thursday, 19 September 2013 at 13:00 - Friday, 20 September 2013 at 12:30
Washington, DC. National Academy of Sciences
http://datacite.eventbrite.co.uk/
Riding the wave - Paradigm shifts in information access – datacite
The document discusses the paradigm shifts in scientific information access over time from empirical observation to computational simulation. It outlines the challenges libraries now face in providing access to non-textual scientific content like research data and simulations. The document also introduces DataCite, a global consortium that issues digital object identifiers (DOIs) to datasets to help make them accessible, citable, and traceable like scholarly articles.
Preparing for the UK Research Data Registry and Discovery Service – Repository Fringe
The document discusses the UK Research Data Registry and Discovery Service (RDRDS) project. It provides an overview of the project's vision and progress, including participating data repositories in the initial pilot phase. It also discusses what participation in RDRDS means for data repositories, including requirements for metadata and options for syndicating metadata through harvesting. The goals of the second phase of the project are outlined as further defining use cases, evaluating platform options, and testing the system usability.
Using Neo4j for exploring the research graph connections made by RD-Switchboard – amiraryani
In this talk, Jingbo Wang (NCI) and Amir Aryani (ANDS) presented Neo4j queries that help data managers explore the connections between datasets, researchers, grants, and publications using the graph model and the Research Data Switchboard. In addition, they discussed the paper "Graph connections made by RD-Switchboard using NCI's metadata", presented at the Reproducible Open Science workshop in Hannover, September 2016.
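The kind of connection query described (dataset to researcher to grant to publication) can be illustrated outside Neo4j with a toy adjacency-list graph and a breadth-first search. All node names and edges below are invented; in the actual talk the equivalent traversal would be a Cypher MATCH pattern against the Research Data Switchboard graph.

```python
from collections import deque

# Toy research graph: undirected edges between datasets, researchers,
# grants, and publications (all identifiers are invented examples).
edges = [
    ("dataset:nci-climate", "researcher:j-wang"),
    ("researcher:j-wang", "grant:arc-001"),
    ("grant:arc-001", "publication:doi-10.1234-abc"),
    ("dataset:nci-ocean", "researcher:a-aryani"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def connected(start, goal):
    """Breadth-first search: is there any chain of connections?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(connected("dataset:nci-climate", "publication:doi-10.1234-abc"))  # True
```

A graph database makes exactly this kind of multi-hop question cheap to ask at scale, which is what motivates the use of Neo4j for exploring research-graph connections.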
Today libraries face new and growing challenges in enabling access to information. The increasing amount of information, combined with new non-textual media types, demands constant adaptation of established workflows and standards. Knowledge, as published in the scientific literature, is the last step in a process that originates from primary scientific data. These data are analysed, synthesised, and interpreted, and the outcome of this process is published as a scientific article. Access to the original data as the foundation of knowledge has become an important issue throughout the world, and various projects have started to find solutions.
Nevertheless, science itself is international: scientists are involved in global unions and projects, they share their scientific information with colleagues all over the world, and they use national as well as foreign information providers.
When facing the challenge of increasing access to research data, a possible approach should be global cooperation for data access via national representatives:
* a global cooperation, because scientists work globally, scientific data are created and accessed globally.
* with national representatives, because most scientists are embedded in their national funding structures and research organisations.
DataCite was officially launched on December 1, 2009 in London and has 12 information institutions and libraries from nine countries as members. By assigning DOI names to data sets, data becomes citable and can easily be linked to from scientific publications.
Data integration with text is an important aspect of scientific collaboration. DataCite takes global leadership in promoting the use of persistent identifiers for datasets, to satisfy the needs of scientists. Through its members, it establishes and promotes common methods, best practices, and guidance. The member organisations work independently with data centres and other holders of research data sets in their own domains. Building on the work of the German National Library of Science and Technology (TIB) as the first DOI registration agency for data, DataCite has registered over 850,000 research objects with DOI names, thus starting to bridge the gap between data centres, publishers, and libraries.
This presentation will introduce the work of DataCite and give examples of how scientific data can be included in library catalogues and linked to from scholarly publications.
Approximation and Self-Organisation on the Web of Data – Kathrin Dentler
This document discusses using computational intelligence techniques like evolutionary computing and collective intelligence to handle challenges posed by the growing Web of Data. It describes how these techniques can provide adaptive, scalable, and robust approaches to tasks like ontology mapping, query answering, and reasoning. Evolutionary computing is proposed for optimization problems, while collective intelligence approaches may enable emergent behaviors from decentralized data flows and reasoning. While computational intelligence loses precision, it gains properties like adaptation, simplicity, scalability, and interactive behavior that are well-suited to the dynamic, distributed nature of the Web of Data.
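As a toy illustration of the evolutionary-computing idea applied to a mapping task, here is a (1+1) evolutionary algorithm searching for a concept-to-concept mapping between two small, invented "ontologies", scored by label similarity. This is a generic sketch of the technique, not the approach from the document.

```python
import random
from difflib import SequenceMatcher

# Invented toy ontologies: the goal is to map each source concept
# to the most similar target concept.
source = ["Author", "Article", "Journal"]
target = ["Writer", "Paper", "Periodical", "Publisher"]

def fitness(mapping):
    """Sum of string similarities between mapped label pairs."""
    return sum(SequenceMatcher(None, s, target[mapping[i]]).ratio()
               for i, s in enumerate(source))

def one_plus_one_ea(generations=200, seed=1):
    """(1+1)-EA: keep one candidate, mutate one gene per generation,
    accept the offspring if it is at least as fit."""
    rng = random.Random(seed)
    parent = [rng.randrange(len(target)) for _ in source]
    best = fitness(parent)
    for _ in range(generations):
        child = parent[:]
        child[rng.randrange(len(source))] = rng.randrange(len(target))
        f = fitness(child)
        if f >= best:
            parent, best = child, f
    return parent, best

mapping, score = one_plus_one_ea()
print(score)
```

The point made in the abstract holds even in this toy: the search gives up any guarantee of an exact optimum, but it stays simple, scales with problem size, and adapts as the fitness landscape changes.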
Adopting and adapting SeaDataNet services for EMODnet Chemistry – EUDAT
The document summarizes how EMODnet Chemistry is adopting and adapting services from SeaDataNet to unlock fragmented marine chemistry data across Europe. Key points include:
- EMODnet Chemistry aggregates chemistry data from over 100 organizations across 27 countries to produce pan-European data products on topics like eutrophication, ocean acidification, and contaminants.
- It is using SeaDataNet standards for metadata, vocabularies, data formats, and tools to allow discovery, access, and visualization of harmonized chemistry data.
- Services being adopted include the CDI interface, ODV software, DIVA maps, and the Ocean Browser. New vocabularies are also being developed for EMODnet Chemistry.
THOR: Connecting People, Places, and Things – Markus Stocker
The document describes Project THOR, a 30-month H2020 project aiming to connect people, places, and things through Persistent Identifiers (PIDs). The goals are to place PIDs at researchers' fingertips by integrating them into existing research services, and to ensure PIDs are embedded in research outputs across various focus areas like biology, earth sciences, and social sciences. Project THOR will establish interoperability between PID infrastructures, develop services to capture PIDs and ORCIDs, and stimulate PID implementation through outreach events and training materials.
Introduction to DataCite and its Infrastructure for new Members – Frauke Ziedorn
This document provides an overview of DataCite and its infrastructure. It discusses DataCite's history and growth since its founding in 2009, its members and structure as a nonprofit association, its technical infrastructure including services like metadata storage, search, and content negotiation, and its working groups focused on business practices, metadata, and technology. It also outlines DataCite's cooperation with organizations like CrossRef, ORCID, and RDA.
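One of the services mentioned, content negotiation, lets a client ask the DOI resolver for a machine-readable representation of a dataset's metadata (e.g. BibTeX) instead of the human-readable landing page. A minimal sketch of what such a request looks like; the DOI below is a made-up test-prefix example, and the request is only constructed here, not sent.

```python
from urllib.request import Request

doi = "10.5072/example-dataset-001"  # hypothetical test-prefix DOI

# Asking the resolver for BibTeX rather than the landing page.
# Other media types exist, e.g. a DataCite XML representation.
req = Request(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/x-bibtex"},
)
print(req.get_header("Accept"))  # application/x-bibtex
```

Sending this request with a real DataCite DOI would return a ready-made citation entry, which is one way the infrastructure supports embedding datasets in the scholarly record.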
This document provides an overview of research data and the data lifecycle. It discusses the creation, processing, analysis, preservation, and reuse of data. It also addresses metadata, data repositories, and challenges around long-tail and big data. The key points are: research data goes through stages from creation to reuse; metadata is critical for documenting and defining data; data repositories curate data and facilitate access and preservation; and there are differences between standardized big data and more bespoke long-tail data. Effectively managing both is important for reproducibility and trust in scientific results.
This document provides an overview of research data and the data lifecycle. It discusses the creation, processing, analysis, preservation, and reuse of data. It also addresses metadata, data repositories, and challenges around long-tail and big data. The key points are: research data goes through stages from creation to reuse; metadata is critical for documenting and defining data; data repositories curate data and facilitate access and preservation; and there are differences between standardized big data and more bespoke long-tail data. Reproducibility, transparency, and ensuring data is well cared for are important responsibilities for scientists.
re3data.org – Registry of Research Data Repositories – Heinz Pampel
Heinz Pampel | GFZ German Research Centre for Geosciences, LIS
Maxi Kindling | Humboldt-Universität zu Berlin, Berlin School of Library and Information Science
Frank Scholze | Karlsruhe Institute of Technology, KIT Library
RDA-Deutschland-Treffen 2015 | Potsdam, November 26, 2015
PaNOSC Overview - ExPaNDS kick-off meeting - September 2019 – PaNOSC
This presentation gives an overview of the H2020 INFRAEOSC PaNOSC project, showcasing its activities and expected results, as well as its vision: to create a PaN scientific commons.
PHIDIAS HPC – Building a prototype for Earth Science Data and HPC Services – Phidias
High-Performance Computing (HPC) technology is becoming increasingly important as a key driver of European economic growth and scientific research. It is a comprehensive tool that can support the development of a wide array of scientific domains (such as Big Data, Earth observation, and ocean studies) and address societal challenges as well.
The webinar aims to introduce the Phidias HPC initiative to the European HPC and research community, including its main features, expected impact, and advantages for the research and HPC ecosystem. The project is paving the way to increasing the HPC and data capacities of the European Data Infrastructure by pursuing the following objectives:
- Building a prototype for earth scientific data
- Enabling Open Access to HPC Services
- Strengthening FAIRisation
- Creating a framework combining computing, dissemination and archiving resources.
Similar to Hausstein data cite-dara-dasish2014 (20)
The cost of acquiring information by natural selectionCarl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
1. DataCite DOI names for research data
Brigitte Hausstein
DASISH Workshop on PID Services
December 8-9, 2014
GESIS Cologne
2. Prologue
1998: Paper on the idea of using DOI names for data citation (by the German GFZ, Potsdam)
2000: 17th International CODATA Conference, Italy => German CODATA
2000-2002: German working group funded by the German Research Foundation (DFG)
2003-2007: Project "Citability of primary data" (DFG funded) – Partners: WDC MARE Bremen (PANGAEA), WDC CLIMATE Hamburg, GFZ Potsdam, TIB Hannover
3. 2004: First DOI names for research data
First DOI names for data from project partners:
18.03.2004: doi:10.1594/WDCC/EH4_OPYC_SRES_A2 (DOI #1)
22.07.2004: doi:10.1594/GFZ/ICDP/KTB/KTB-GEOCH-GASCHR-P
14.12.2004: doi:10.1594/PANGAEA.119754
End of 2004: about 30 DOI names registered and referenced in the TIB library catalogue
4. Think global – act local
Science is global
• it needs global standards
• global workflows
• cooperation of global players
Science is carried out locally
• by local scientists
• being part of local infrastructures
• having local funders
5. DataCite
• founded on December 1st, 2009 in London
• focuses on improving the scholarly infrastructure around datasets and related information
• working with data centers and organizations that hold content
• providing standards, workflows and best practice
• a global consortium carried by local institutions
• initially, but not exclusively, based on the DOI system
7. DataCite members
1. Technische Informationsbibliothek (TIB), Germany
2. Canada Institute for Scientific and Technical Information (CISTI)
3. California Digital Library, USA
4. Purdue University, USA
5. Office of Scientific and Technical Information (OSTI), USA
6. Library of TU Delft, The Netherlands
7. Technical Information Center of Denmark
8. The British Library
9. ZB MED, Germany
10. ZBW, Germany
11. GESIS, Germany
12. Library of ETH Zürich
13. L'Institut de l'Information Scientifique et Technique (INIST), France
14. Swedish National Data Service (SND)
15. Australian National Data Service (ANDS)
16. Conferenza dei Rettori delle Università Italiane (CRUI)
17. National Research Council of Thailand (NRCT)
18. The Hungarian Academy of Sciences
19. University of Tartu, Estonia
20. Japan Link Center (JaLC)
21. South African Environmental Observation Network (SAEON)
22. European Organisation for Nuclear Research (CERN)
Affiliated members:
1. Digital Curation Centre (UK)
2. Microsoft Research
3. Interuniversity Consortium for Political and Social Research (ICPSR)
4. Korea Institute of Science and Technology Information (KISTI)
5. Beijing Genomics Institute (BGI)
6. IEEE
7. Harvard University Library
8. World Data System (WDS)
9. GWDG
8. DataCite
Anything that is the foundation of further research is research data.
Data is evidence!
9. [Figure: sample datasets — sediment core profiles of IRD (grav/10 cm³), sand (%), CaCO3 (%), TOC (%), radiolarians (%/sand) and smectite (%/clay) versus age (up to ~233 kyr) for stations PS1389-3, PS1390-3, PS1431-1, PS1640-1 and PS1648-1, plus a map of grain size classes and geochemistry in the western Baltic Sea (scale 1:2695194 at latitude 0°). Source: Baltic Sea Research Institute, Warnemünde.]
Earthquake events => doi:10.1594/GFZ.GEOFON.gfz2009kciu
Climate models => doi:10.1594/WDCC/dphase_mpeps
Sea bed photos => doi:10.1594/PANGAEA.757741
Distributed samples => doi:10.1594/PANGAEA.51749
Medical case studies => doi:10.1594/eaacinet2007/CR/5-270407
Survey data => doi:10.4232/1.11004
Computational models => doi:10.4225/02/4E9F69C011BC8
Audio recordings => doi:10.7477/4:2:1
Grey literature => doi:10.2314/GBV:489185967
Videos => doi:10.3207/2959859860
What type of data are we talking about?
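All of the examples above share the same DOI anatomy: a registrant prefix (e.g. `10.1594` for PANGAEA/WDCC/GFZ, `10.4232` for GESIS) and a suffix chosen by the data centre, resolved through the global proxy at doi.org. A minimal sketch of splitting a DOI name and forming its resolver URL (the helper names here are illustrative, not part of any DataCite API):

```python
def parse_doi(doi_name: str) -> tuple[str, str]:
    """Return (prefix, suffix) of a DOI name, tolerating a 'doi:' label.

    The prefix identifies the registrant; the suffix is chosen by the
    data centre and may itself contain slashes.
    """
    name = doi_name.removeprefix("doi:")
    prefix, _, suffix = name.partition("/")
    return prefix, suffix

def resolver_url(doi_name: str) -> str:
    """URL at which the doi.org proxy resolves the DOI to its landing page."""
    prefix, suffix = parse_doi(doi_name)
    return f"https://doi.org/{prefix}/{suffix}"

prefix, suffix = parse_doi("doi:10.1594/PANGAEA.757741")
print(prefix)   # 10.1594
print(suffix)   # PANGAEA.757741
print(resolver_url("doi:10.4232/1.11004"))  # https://doi.org/10.4232/1.11004
```

Note that only the first slash separates prefix from suffix, so names like `10.1594/GFZ/ICDP/KTB/KTB-GEOCH-GASCHR-P` keep their full hierarchical suffix intact.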
10. DataCite in 2014
~3,980,000 DOI names registered so far
350 data centers (publication agents)
14,000,000 resolutions so far in 2014
New website launched (1 December 2014): http://www.datacite.org/
DataCite Metadata Schema published (in cooperation with all members): http://schema.datacite.org
DataCite Metadata Search: http://search.datacite.org
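The metadata schema mentioned above defines a small set of mandatory properties for every registered DOI: identifier, creator, title, publisher, and publication year. A minimal sketch of assembling such a record as XML with the standard library — the field values are invented for illustration, XML namespaces are omitted for brevity, and real records must validate against the schema published at schema.datacite.org:

```python
import xml.etree.ElementTree as ET

def minimal_record(doi: str, creator: str, title: str,
                   publisher: str, year: int) -> str:
    """Build a bare-bones metadata record with the mandatory DataCite
    properties (namespaces omitted; values are illustrative)."""
    resource = ET.Element("resource")
    identifier = ET.SubElement(resource, "identifier", identifierType="DOI")
    identifier.text = doi
    creators = ET.SubElement(resource, "creators")
    creator_el = ET.SubElement(creators, "creator")
    ET.SubElement(creator_el, "creatorName").text = creator
    titles = ET.SubElement(resource, "titles")
    ET.SubElement(titles, "title").text = title
    ET.SubElement(resource, "publisher").text = publisher
    ET.SubElement(resource, "publicationYear").text = str(year)
    return ET.tostring(resource, encoding="unicode")

# Hypothetical example record (the DOI and values are made up):
xml_record = minimal_record("10.1594/EXAMPLE.1", "Doe, Jane",
                            "Example survey dataset", "GESIS", 2014)
print(xml_record)
```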
11. OAI and Statistics
OAI harvester: http://oai.datacite.org
DataCite statistics (resolution and registration): http://stats.datacite.org
Content negotiation (with CrossRef): http://www.crosscite.org/cn/
DOI Citation Formatter: http://crosscite.org/citeproc/
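Both services above speak standard protocols: content negotiation returns machine-readable citation formats when a DOI URL is requested with an appropriate HTTP Accept header, and the harvester answers standard OAI-PMH verbs. A sketch that only constructs the requests without sending them (the example DOI is taken from an earlier slide; the `/oai` endpoint path is an assumption based on common OAI-PMH deployments):

```python
import urllib.request
import urllib.parse

# Content negotiation: asking doi.org for BibTeX instead of the
# landing page by setting the Accept header.
req = urllib.request.Request(
    "https://doi.org/10.1594/PANGAEA.757741",
    headers={"Accept": "application/x-bibtex"},
)
print(req.get_header("Accept"))  # application/x-bibtex

# OAI-PMH: a ListRecords request for Dublin Core metadata against
# the DataCite harvester (endpoint path assumed).
params = urllib.parse.urlencode(
    {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
)
oai_url = f"http://oai.datacite.org/oai?{params}"
print(oai_url)
```

Sending either request requires network access; the point here is only the shape of the protocol: a plain DOI URL plus an Accept header on one side, a base URL plus standardized query parameters on the other.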
12. Latest developments
• ODIN project with ORCID: http://datacite.labs.orcid-eu.org/
• MoU with Thomson Reuters to cooperate on the Data Citation Index
• DataCite plugin for the next DSpace release
• Agreement with re3data and Databib to include their services (in 2016)
• MoU with RDA to become an organizational affiliate
• Joint Declaration of Data Citation Principles: https://www.force11.org/datacitation
14. da|ra – Registration agency for social and economic data
• a special service for the social sciences and economics
• provided by GESIS and ZBW (since 2011)
• started as a service for German data providers => growing number of international users
17. da|ra services
• web service (da|ra publication agents, DataCite)
• da|ra test system
• support for metadata generation (web-based entry mask, controlled vocabularies)
• OAI-PMH harvesting (beta)
• export/registration plug-in for OJS
• linking data and publications (Infolis)
• metadata search (API)
=> da|raSearchNet (DFG 2014-2017)
Since its foundation, DataCite has been joined by many more organizations from around the world, all of whom share the vision of citing and sharing data. The DataCite family now extends to 30 members in 19 countries on 4 continents.
Members are not-for-profit organizations that use DataCite's DOI registration agency. DataCite members provide local representation and support for researchers in many countries. Data providers located outside an identified DataCite service area can contact the DataCite Managing Agent, which will make sure the request is forwarded to a DataCite member ready to assist.
Organizations wishing to become members of DataCite are welcome to apply. There are two types of membership: full and affiliated.
ORCID and DataCite Interoperability Network
ODIN works on interoperability between open identifiers for data and contributors across different infrastructures.
Proof-of-concept interoperability solutions; use cases at CERN and the British Library (social sciences).
The two organizations are working together to improve and harmonize the DataCite and ORCID metadata schemas for data and researchers, to develop new interoperable and open-source APIs for their services, and to create complementary user services.
What is ODIN?
ODIN – ORCID and DataCite Interoperability Network - is a two-year project which started in September 2012, funded by the European Commission’s ‘Coordination and Support Action’ under the FP7 programme.
Partners in ODIN are innovators in science, information science and the publishing industry: CERN, the British Library, ORCID, DataCite, Dryad, arXiv and the Australian National Data Service (see Partners).
The ODIN mission
ODIN will build on the ORCID and DataCite initiatives to uniquely identify scientists and data sets and connect this information across multiple services and infrastructures for scholarly communication. It will address some of the critical open questions in the area:
Referencing a data object
Tracking of use and re-use
Links between a data object, subsets, articles, rights statements and every person involved in its life-cycle.
Recommendations on gaps in the PID infrastructure
This is the DataCite Community. There are four German full DataCite members. Two of them have decided to work together and provide a specific service.
It is a service for the social sciences and economics, provided since 2010 and based on the DataCite membership of GESIS and ZBW.
da|ra started as a German service.
Meanwhile the number of international users is increasing.