Metadata for Data Rescue and Data at Risk (Nico Carver)
1. The document discusses the design of a metadata scheme for describing data that is at risk of being lost, unused, or destroyed.
2. It outlines the major questions and principles that informed the design, including what essential metadata is needed to aid in data rescue efforts across scientific disciplines.
3. The proposed metadata scheme is then described; it includes elements such as research area, physical form of data, content and context, current holder, dates, and risk level. A case study testing the scheme is also summarized.
DataONE Education Module 01: Why Data Management? (DataONE)
Lesson 1 in a set of 10 created by DataONE on Best Practices for Data Management. The full module can be downloaded from the DataONE.org website at: http://www.dataone.org/education-modules. Released under a CC0 license; attribution and citation requested.
Data collection is the process of systematically gathering information to answer research questions. Accurate data collection is essential to maintaining research integrity. Issues that can compromise integrity include errors in data collection instruments or procedures. Quality assurance and quality control help ensure integrity. Quality assurance occurs before data collection through standardized protocols and manuals. Quality control occurs during and after collection through review and validation of data. Maintaining integrity supports accurate conclusions and prevents wasted resources.
Keeping up to date with information retrieval research: Summarized Research i... (Patrice Chalon)
The SuRe Info project aims to keep health technology assessment (HTA) information specialists up to date with information retrieval research. It has a team of international experts who collaborate online. The methodology involves identifying relevant studies, appraising them, and publishing chapters summarizing the research. To date, the project has appraised 34 studies and published 7 chapters, with 4 more in preparation. The goal is for SuRe Info to serve as an evidence-based resource for HTA information specialists and a platform for international collaboration.
A basic course on Research data management, part 4: caring for your data, or ... (Leon Osinski)
A basic course on research data management for PhD students, consisting of 4 parts. The course was given at Eindhoven University of Technology (TUe) on 24-01-2017.
Research Data for universities and information producers (Alain Frey, Incisive_Events)
Research data is growing exponentially but is disparate and challenging to understand fully. Universities face challenges in managing research data to meet funding and standards requirements. Thomson Reuters launched the Data Citation Index to make research data discoverable, accessible, and citable by bringing important data from diverse repositories into one searchable index. This addresses the need for a single access point for quality research data across disciplines and locations.
This document provides an introduction to data management. It discusses why data management is important, covering key aspects like developing data management plans, file organization, documentation and metadata, storage and backup, legal and ethical considerations, sharing and reuse, and preservation. Effective data management is critical for research success as it supports reproducibility, sharing, and preventing data loss. The document outlines best practices and resources like the library that can help with developing strong data management strategies.
This slideshow was used in an Introduction to Research Data Management course taught for the Mathematical, Physical and Life Sciences Division, University of Oxford, on 2015-02-09. It provides an overview of some key issues, looking at both day-to-day data management, and longer term issues, including sharing, and curation.
This document provides step-by-step instructions for conducting academic library research. It outlines choosing a topic and keywords, constructing a search strategy, choosing appropriate research tools like books, articles, primary sources, and datasets, running searches and evaluating results. Key tips include using synonyms, limiting or expanding search terms, combining terms with "and" or "or", trying different databases and subject headings, and getting full text or requesting items through interlibrary loan when not available locally.
This document provides an overview of data management for librarians. It defines data and data management, which involves helping researchers cite, store, catalogue, and provide access to their data. A data management plan is a plan for how a researcher will handle their data during and after their research. Many funding agencies now require a data management plan. The document encourages librarians to refer researchers to their institution's data services team for help with data management tasks and creating data management plans. It also emphasizes that data management is now an important part of the librarian's role to support researchers.
A basic course on Research data management, part 1: what and why (Leon Osinski)
A basic course on research data management for PhD students, consisting of 4 parts. The course was given at Eindhoven University of Technology (TUe) on 24-01-2017.
A presentation on research data management presented at the Utah Library Association conference in May 2015. Main topics included federal mandates, data repositories, metadata, and file naming conventions. Presenters: Rebekah Cummings, Elizabeth Smart, Becky Thoms, and Brit Faggerheim.
Data and Donuts: The Impact of Data Management (C. Tobin Magle)
Good data management practices are becoming increasingly important in the digital age. Because we now have the technology to freely share research data and also because funding agencies want to do more with decreasing research funds, many funding agencies and journals require authors and grantees to share their research data. To provide training in this area, Tobin Magle, the Morgan Library's Data Management Specialist, is putting on a series of data management workshops called "Data and Donuts". Join us to learn about data management topics throughout the research data lifecycle.
This is the PowerPoint for my "Data Management for Undergraduate Researchers" workshop for the Office of Undergraduate Research Seminar and Workshop Series. Major topics include motivations behind good data management, file naming, version control, metadata, storage, and archiving.
Data and Donuts: How to write a data management plan (C. Tobin Magle)
This presentation describes best practices for how to write a data management plan for your research data. Additionally, it provides information about finding funder requirements, metadata standards, and repositories.
Oxford DTP - Sansone - Data publications and Scientific Data - Dec 2014 (Susanna-Assunta Sansone)
- The document discusses the need for open and accessible data in research. It notes that over 50% of studies are not published due to selective reporting of results.
- There is a movement for "FAIR data" in life and medical sciences, where data is findable, accessible, interoperable, and reusable. However, not much data currently meets these standards.
- Publishers can play a role in incentivizing data sharing by implementing policies requiring data availability and format standards for publishing research. This includes supporting data citations and data journals.
Rebecca Raworth presented a workshop on research data management. The presentation covered:
- Why research data management plans are important, such as satisfying funder requirements and increasing research efficiency.
- Current requirements for data management plans in Canada.
- Tools for research data management, including Portage for creating data management plans and Dataverse for data storage and access.
- Best practices for organizing, documenting, storing and sharing research data, including using metadata standards, file naming conventions, and choosing appropriate data repositories.
Responsible conduct of research: Data Management (C. Tobin Magle)
A presentation for the Food and Nutrition Science Responsible conduct of research class on data management best practices. Covers material in the context of writing a data management plan.
This document provides an introduction to data management. It discusses the importance of data management and introduces best practices. These include making a data management plan, properly organizing and naming files, adding descriptive metadata, securely storing and backing up data, considering legal and ethical issues, enabling sharing and reuse, and ensuring long-term preservation. Effective data management is important across all disciplines and throughout the entire data lifecycle from creation to archiving.
Next generation data services at the Marriott Library (Rebekah Cummings)
This document discusses next generation data services at the Marriott Library. It begins by asking how data needs in the social sciences and humanities may change over the next five years, and how libraries can partner with faculty on data needs. The document then discusses the library's role in data curation, challenges, and examples of data services like research data consultation, metadata assistance, and repository services. It provides examples of collaborations like embedded librarianship and a project with the UCLA Civil Rights Project to archive publications and datasets. The discussion emphasizes the changing landscape and growing importance of data sharing and management.
This document provides strategies and resources for finding statistics and data for research. It discusses the differences between data and statistics, and examples of types of statistics such as point-in-time measurements, time series, and geographic areas. Sources of statistics mentioned include static tables, databases, and poll data from organizations like Gallup. The document recommends strategies like searching indexes, statistics databases, and library guides, as well as mining the references of relevant secondary sources.
Presentation for Northwestern University's first Computational Research Day, April 22, 2014. http://www.it.northwestern.edu/research/about/campus-events/research-day/agenda.html . By Cunera Buys, e-Science Librarian, and Claire Stewart, Director, Center for Scholarly Communication and Digital Curation and Head, Digital Collections
Managing, Sharing and Curating Your Research Data in a Digital Environment (philipdurbin)
This document discusses research data management and curation. It describes how data sharing has increased as open science mandates have promoted data availability. Research data is now often shared alongside research articles through bi-directional linking. Self-curation repositories are being developed to help researchers publish and share their data. The benefits of open access include increased visibility, new discoveries through wider collaboration, and compliance with funder mandates. Key requirements for open data include availability, access, redistribution and reuse. Dataverse is presented as a solution for research data management that facilitates data sharing, preservation, citation, exploration and analysis. It issues persistent identifiers and supports various data formats and protocols. Challenges of data management include meaningful aggregation and privacy concerns.
There are many online and in-person courses available for librarians to learn about research data management, data analysis, and visualization, but after you have taken a course, how do you go about applying what you have learned? While it is possible to just start offering classes and consultations, your service will have a better chance of becoming relevant if you consider stakeholders and review your institutional environment. This lecture will give you some ideas to get started with data services at your institution.
This document summarizes Susanna-Assunta Sansone's presentation on open access and open data at Nature Publishing Group. Some key points discussed include:
- The benefits of open data including reducing errors/fraud and increasing return on investment in research. However, barriers also exist such as lack of incentives and standards.
- Recent initiatives at NPG to improve data/reproducibility such as requiring data behind figures and expanding methods sections.
- The role of data journals in increasing credit/visibility for shared data and promoting standards/best practices.
- Market research found researchers want increased visibility, usability, and credit for sharing their data.
FAIR for the future: embracing all things data (ARDC)
FAIR for the future: embracing all things data, by Natasha Simons, Keith Russell and Liz Stokes. Presented at the Taylor & Francis Scholarly Summits in Sydney on 11 Feb 2019 and Melbourne on 14 Feb 2019.
This presentation was provided by Chris Erdmann of Library Carpentries and by Judy Ruttenberg of ARL during the NISO virtual conference, Open Data Projects, held on Wednesday, June 13, 2018.
The slides that will accompany my live webcast for OpenCon 2014 attendees, all about open data in research. The benefits, the how to (both legally & technically), examples, pitfalls, and the future of open research data.
Simon Hodson discusses key aspects of open science including open access to research outputs, FAIR data principles, and engaging society. Open science requires addressing technical, funding, skills, and mindset challenges. While data created with public funds should be open by default, legitimate exceptions exist for commercial interests, privacy, and security. Criteria for data appraisal, selection and preservation need input from disciplines. Barriers to data sharing include concerns over misuse and lack of credit, while benefits include advancing research and building institutional reputation. Open science governance is needed to balance openness with other priorities like intellectual property, and define roles and responsibilities among stakeholders.
A presentation offering an introduction to managing and sharing research data given at the Czech Open Science days as part of the EC-funded FOSTER project.
The document summarizes Susanna-Assunta Sansone's presentation on enabling FAIR (Findable, Accessible, Interoperable, Reusable) digital resources. It discusses the driving forces behind FAIR including reproducibility crises, new data types, and changing publishing. It then outlines community efforts to develop standards, policies, and tools to improve metadata and data sharing according to FAIR principles. These include domain-specific standards, the FAIRsharing registry, metrics to assess FAIRness, and ongoing work to provide FAIR guidance and services.
Aim: to show how research data management can contribute to the success of your PhD.
* What is research data and why is it important?
* The research data lifecycle
* Research data – more than just your results
* FAIR data and Open Research
* DMP online tool
Research Data Sharing and Re-Use: Practical Implications for Data Citation Pr... (SC CTSI at USC and CHLA)
Date: Apr 4, 2018
Speaker: Hyoungjoo Park, PhD candidate, School of Information Studies, University of Wisconsin-Milwaukee, and Dietmar Wolfram, PhD
Overview: It is increasingly common for researchers to make their data freely available. This is often a requirement of funding agencies but also consistent with the principles of open science, according to which all research data should be shared and made available for reuse. Once data is reused, the researchers who have provided access to it should be acknowledged for their contributions, much as authors are recognised for their publications through citation. Hyoungjoo Park and Dietmar Wolfram have studied characteristics of data sharing, reuse, and citation and found that current data citation practices do not yet benefit data sharers, with little or no consistency in their format. More formalised citation practices might encourage more authors to make their data available for reuse.
Data Science Meets Biomedicine, Does Anything Change (Philip Bourne)
Data science is driving major changes in biomedical research by enabling new types of integrative, multi-scale analyses. However, biomedical research may no longer lead data science due to a lack of comprehensive data infrastructure and cultural barriers. Responsible data science that balances openness, ethics, and benefiting patients could help establish biomedicine's continued leadership role. Major challenges include limited resources, attracting diverse talent, and prioritizing strategic initiatives over conforming to traditional models of research.
This document provides an overview of research data and the role of libraries in supporting research data services. It discusses that research data takes many forms and differs across disciplines. Libraries can help with research data in several ways, including learning about data practices in their organizations, identifying gaps, and helping researchers find and manage data through various services and skills like data analysis and visualization. The document outlines potential areas libraries can provide support and ways to continue building data skills, such as through online courses and conferences.
Open science curriculum for students, June 2019 (Dag Endresen)
Living Norway seminar on Open Science in Trondheim 12th June 2019.
https://livingnorway.no/2019/04/26/living-norway-seminar-2019/
https://www.gbif.no/events/2019/living-norway-seminar.html
This document discusses the need for critical infrastructure to promote data synthesis and evidence-based nutrient management. It outlines 10 steps for real-time data uptake, analysis, and customized nutrient recommendations. Key challenges include data standards, minimum data sets, provenance, and repositories. The Purdue University Research Repository is presented as a solution, providing preservation, curation, and publication of agricultural data. Hands-on support from librarians and agronomists is discussed to help researchers transition data and ensure best practices.
Presentation during the 14th Association of African Universities (AAU) Conference and African Open Science Platform (AOSP)/Research Data Alliance (RDA) Workshop in Accra, Ghana, 7-8 June 2017.
This document discusses open science and FAIR data principles. It begins by outlining the benefits of open data, including enabling reproducibility, avoiding replication gaps, and allowing data reuse and reinterpretation. Open data practices have transformed areas like genomics and astronomy. FAIR data principles help enable large-scale data use and machine analysis. The document then defines open science, including open access, open data, FAIR data principles, and engagement with society. It discusses frameworks for developing open data strategies at the national and institutional levels, including policies, incentives, skills training, and data infrastructure. While open data brings benefits, it also requires investment and cultural change to fully realize, and stakeholders such as governments and research institutions stand to benefit.
This paper was presented at the European Survey Research Association 2013 conference, in the session Research Data Management for Re-use: Bringing Researchers and Archivists Closer.
The Research Data Alliance aims to build social and technical infrastructure for data sharing worldwide. It brings together members in Working Groups and Interest Groups to develop solutions to specific data infrastructure challenges. Recent Working Group deliverables include recommendations for dynamically citing changing datasets, a prototype metadata standards directory, and a common framework for wheat data terminology. The Data Citation Working Group focused on identifying and citing subsets of large, dynamic datasets in a machine-readable way through approaches like data versioning and timestamping.
Presentation at the “Open Science: connecting the actors” event on 21 November 2022. The event aims to share best practices, foster community, and encourage knowledge-sharing on Open Science. At the heart of the Open Access Belgium community is the ambition to open up the way we organize and conduct scientific research. The Open Science teams of the Belgian universities have developed and tested a wide range of training methods, training materials, networking activities, and data solutions to facilitate and foster Open Science. Achievements, tools, and lessons learned by different institutions will be shared at this networking event.
Programme: https://openaccess.be/2022/10/04/open-science-connecting-the-actors/
More information on the community of practice: https://www.openaire.eu/cop-training
- In late 2020, Research Data Officers from universities in the French-speaking part of Belgium got together to establish a network to work on issues like data management plans, training, awareness, and data repositories.
- In 2021, they launched a community of Data Ambassadors within these universities to increase peer-to-peer support on research data management and help disseminate good practices.
- Testimonials from Data Ambassadors highlight the benefits of networking and knowledge sharing across disciplines, and their plans to advocate for better research data management within their departments.
20221121_KU Leuven Research Data Repository_OpenScienceBelgium.pptx, by OpenAccessBelgium
The OpenAIRE Research Graph is a massive collection of metadata and links connecting research entities such as articles, datasets, software, and other research outputs.
Openaccess.be is the central information space for Open Science in Belgium. Open Access Belgium is a collaboration between the Open Science teams of the Belgian universities. Apart from keeping this webpage up to date and writing blogposts about Open Science in Belgium, we also organise a yearly event during Open Access Week for the Belgian Open Science community.
The OpenAIRE project, in the vanguard of the open access and open data movements in Europe, was commissioned by the EC to support its nascent Open Data policy by providing a catch-all repository for EC-funded research. CERN, an OpenAIRE partner and a pioneer in open source, open access, and open data, provided this capability, and Zenodo was launched in May 2013.
In support of its research programme, CERN has developed tools for Big Data management and extended Digital Library capabilities for Open Data. Through Zenodo, these Big Science tools can be effectively shared with the long tail of research.
To address problems with the peer-review process, many journals have experimented with different types of peer-review models. Open peer review was adopted by several journals in order to encourage transparency in the process, and there are now a number of different ways in which this is implemented. By Axel Cleeremans (ULB), Chief Editor for Frontiers in Psychology, and Louisa Flintoft, Executive Editor, BMC In-House Journals.
• Introduction by Emilie Menz
This section provides an overview of the open science requirements stipulated by the F.N.R.S. and how to comply with them. Presentation by Sandrine Brognaux (UMons).
This document discusses open data, FAIR data principles, and research data management. It provides the following key points:
1) Open/FAIR data aims to make research data available for reuse, shifting away from traditional models in which data is undervalued. Data can have varying degrees of sharing, from open to restricted to closed.
2) The FAIR data principles describe attributes that enable data to be findable, accessible, interoperable, and reusable by humans and machines. Data can adhere to the FAIR principles to varying degrees.
3) FAIR data does not necessarily mean data needs to be openly shared: data can be both FAIR and restricted. Good research data management is needed to plan, collect, document, and preserve data throughout its lifecycle.
JAMES WEBB STUDY THE MASSIVE BLACK HOLE SEEDS, by Sérgio Sacani
The pathway(s) to seeding the massive black holes (MBHs) that exist at the heart of galaxies in the present and distant Universe remains an unsolved problem. Here we categorise, describe and quantitatively discuss the formation pathways of both light and heavy seeds. We emphasise that the most recent computational models suggest that, rather than a bimodal-like mass spectrum between light and heavy seeds, with light at one end and heavy at the other, a continuum exists, with light seeds being more ubiquitous and the heavier seeds becoming less and less abundant due to the rarer environmental conditions required for their formation. We therefore examine the different mechanisms that give rise to different seed mass spectra. We show how and why the mechanisms that produce the heaviest seeds are also among the rarest events in the Universe and are hence extremely unlikely to be the seeds for the vast majority of the MBH population. We quantify, within the limits of the current large uncertainties in the seeding processes, the expected number densities of the seed mass spectrum. We argue that light seeds must be at least 10³ to 10⁵ times more numerous than heavy seeds to explain the MBH population as a whole. Based on our current understanding of the seed population, this makes heavy seeds (M_seed > 10³ M⊙) a significantly more likely pathway, given that heavy seeds have an abundance pattern that is close to, and likely in excess of, 10⁻⁴ compared to light seeds. Finally, we examine the current state-of-the-art in numerical calculations and recent observations and plot a path forward for near-future advances in both domains.
Describing and Interpreting an Immersive Learning Case with the Immersion Cube, by Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Mechanisms and Applications of Antiviral Neutralizing Antibodies, by Creative-Biolabs
Neutralizing antibodies, pivotal in immune defense, specifically bind and inhibit viral pathogens, thereby playing a crucial role in protecting against and mitigating infectious diseases. In this slide, we will introduce what antibodies and neutralizing antibodies are, the production and regulation of neutralizing antibodies, their mechanisms of action, classification and applications, as well as the challenges they face.
Discovery of an Apparent Red, High-Velocity Type Ia Supernova at z = 2.9 with JWST, by Sérgio Sacani
We present the JWST discovery of SN 2023adsy, a transient object located in the host galaxy JADES-GS+53.13485−27.82088, with a host spectroscopic redshift of 2.903 ± 0.007. The transient was identified in deep James Webb Space Telescope (JWST)/NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) program. Photometric and spectroscopic follow-up with NIRCam and NIRSpec, respectively, confirm the redshift and yield UV-NIR light-curve, NIR color, and spectroscopic information all consistent with a Type Ia classification. Despite its classification as a likely SN Ia, SN 2023adsy is both fairly red (E(B−V) ∼ 0.9), despite a host galaxy with low extinction, and has a high Ca II velocity (19,000 ± 2,000 km/s) compared to the general population of SNe Ia. While these characteristics are consistent with some Ca-rich SNe Ia, particularly SN 2016hnk, SN 2023adsy is intrinsically brighter than the low-z Ca-rich population. Although such an object is too red for any low-z cosmological sample, we apply a fiducial standardization approach to SN 2023adsy and find that its luminosity distance measurement is in excellent agreement (≲ 1σ) with ΛCDM. Therefore, unlike low-z Ca-rich SNe Ia, SN 2023adsy is standardizable and gives no indication that SN Ia standardized luminosities change significantly with redshift. A larger sample of distant SNe Ia is required to determine whether SN Ia population characteristics at high z truly diverge from their low-z counterparts, and to confirm that standardized luminosities nevertheless remain constant with redshift.
Authoring a personal GPT for your research and practice: How we created the QUAL-E Immersive Learning Thematic Analysis Helper, by Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
SDSS1335+0728: The awakening of a ∼10⁶ M⊙ black hole, by Sérgio Sacani
Context. The early-type galaxy SDSS J133519.91+072807.4 (hereafter SDSS1335+0728), which had exhibited no prior optical variations during the preceding two decades, began showing significant nuclear variability in the Zwicky Transient Facility (ZTF) alert stream in December 2019 (as ZTF19acnskyy). This variability behaviour, coupled with the host-galaxy properties, suggests that SDSS1335+0728 hosts a ∼10⁶ M⊙ black hole (BH) that is currently in the process of ‘turning on’. Aims. We present a multi-wavelength photometric analysis and spectroscopic follow-up performed with the aim of better understanding the origin of the nuclear variations detected in SDSS1335+0728. Methods. We used archival photometry (from WISE, 2MASS, SDSS, GALEX, eROSITA) and spectroscopic data (from SDSS and LAMOST) to study the state of SDSS1335+0728 prior to December 2019, and new observations from Swift, SOAR/Goodman, VLT/X-shooter, and Keck/LRIS taken after its turn-on to characterise its current state. We analysed the variability of SDSS1335+0728 in the X-ray/UV/optical/mid-infrared range, modelled its spectral energy distribution prior to and after December 2019, and studied the evolution of its UV/optical spectra. Results. From our multi-wavelength photometric analysis, we find that: (a) since 2021, the UV flux (from Swift/UVOT observations) is four times brighter than the flux reported by GALEX in 2004; (b) since June 2022, the mid-infrared flux has risen more than two times, and the W1−W2 WISE colour has become redder; and (c) since February 2024, the source has begun showing X-ray emission. From our spectroscopic follow-up, we see that (i) the narrow emission line ratios are now consistent with a more energetic ionising continuum; (ii) broad emission lines are not detected; and (iii) the [OIII] line increased its flux ∼3.6 years after the first ZTF alert, which implies a relatively compact narrow-line-emitting region.
Conclusions. We conclude that the variations observed in SDSS1335+0728 could be explained either by a ∼10⁶ M⊙ AGN that is just turning on or by an exotic tidal disruption event (TDE). If the former is true, SDSS1335+0728 is one of the strongest cases of an AGN observed in the process of activating. If the latter were found to be the case, it would correspond to the longest and faintest TDE ever observed (or another class of still-unknown nuclear transient). Future observations of SDSS1335+0728 are crucial to further understand its behaviour. Key words: galaxies: active – accretion, accretion discs – galaxies: individual: SDSS J133519.91+072807.4
PPT on Sustainable Land Management presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
HUMAN EYE By-R.M Class 10 phy best digital notes.pdf
Introduction to open-data
1. @openaire_eu
The Case of Open Research Data
Emilie Hermans, Ghent University
Based on Mounce, R. (2014), “The State of Open Research Data”, talk for OpenCon 2014 (Washington, D.C.).
https://www.slideshare.net/rossmounce/open-con-mouncedata?qid=d8f441d1-c968-4c4a-ab4d-eb2d04d7fc3a&v=&b=&from_search=24
2. Side note…
Whenever I talk about data in this talk, assume I’m talking about non-sensitive data, e.g.:
NOT sensitive medical data
NOT bio-weapons research data
etc.
4. Adapted original source: The University of California, Santa Cruz, Data Management LibGuide, Research Data Management Lifecycle, diagram, viewed 5 May 2018, http://guides.library.ucsc.edu/datamanagement
Challenge: from a linear process to a research data lifecycle!
5. What is open data?
Open data: anyone can freely access, use, modify, and share it for any purpose.
Data sharing: restricted access for a limited number of people under certain conditions.
8. Data sharing (upon request)
e.g. “The full profile listings are on floppy disks which are available upon request”*
* Fernholz et al. (1989) A survey of measurements and measuring techniques in rapidly distorted compressible turbulent boundary layers.
10. Data sharing in certain disciplines
Community agreements: the Bermuda Principles for sharing DNA sequence data
• Automatic release of sequence assemblies larger than 1 kb (preferably within 24 hours).
• Immediate publication of finished annotated sequences.
• Aim to make the entire sequence freely available in the public domain.
Data online as supplementary material
11. Data by default and data papers
Data papers
• A searchable metadata document, describing a particular dataset or a group of datasets, published as a peer-reviewed article.
• Primary purpose: to describe data and collection, rather than to report hypotheses and conclusions.
Journal policy
• Journals are increasingly asking for associated data to be deposited (PLOS, Springer, Nature, BMC, BMJ…), and funders (EC, FWO) increasingly require it.
15. Research integrity
“It was a mistake in a spreadsheet that could have been easily overlooked: a few rows left out of an equation to average the values in a column. The spreadsheet was used to draw the conclusion of an influential 2010 economics paper: that public debt of more than 90% of GDP slows down growth. This conclusion was later cited by the International Monetary Fund and the UK Treasury to justify programmes of austerity that have arguably led to riots, poverty and lost jobs.”
18. Data management and open data
Prevents data loss. Maximizes usefulness. Write a data paper. Credit and a longer shelf life.¹ Increases transparency. Promotes integrity. Influences society.
1. e.g. Piwowar HA, Vision TJ (2013) Data reuse and the open data citation advantage. PeerJ 1:e175. https://doi.org/10.7717/peerj.175; Piwowar HA, Day RS, Fridsma DB (2007) Sharing Detailed Research Data Is Associated with Increased Citation Rate. PLoS ONE 2(3): e308. doi:10.1371/journal.pone.0000308
22. Where to deposit data?
• Best practice: a research data repository that matches your data needs.
• Disciplinary or institutional data repository.
• Zenodo: a cost-free data repository.
• Directory of data repositories: www.re3data.org
23. FAIR data principles
• Findable: how to discover your data? Metadata, persistent identifier, naming convention, keywords.
• Accessible: where to find your data, and can people access it? Data repository.
• Interoperable: open standards, vocabulary.
• Reusable: how to understand your data? Versioning, software and documentation, methodologies, licensing.
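One way to make this checklist concrete is a minimal metadata record. The sketch below is illustrative only: the field names are loosely inspired by common repository metadata, not taken from any formal FAIR specification, and the DOI is a placeholder.

```python
# Minimal FAIR-style metadata record; field names are illustrative only.
record = {
    # Findable: persistent identifier, descriptive metadata, keywords
    "identifier": "doi:10.5281/zenodo.0000000",  # placeholder DOI
    "title": "Boundary-layer velocity profiles, 1989 survey",
    "keywords": ["turbulence", "boundary layer", "compressible flow"],
    # Accessible: where the data can be retrieved, and under what access level
    "repository": "https://zenodo.org/",
    "access": "open",
    # Interoperable: open formats and shared vocabularies
    "format": "text/csv",
    # Reusable: license and documentation
    "license": "CC-BY-4.0",
    "documentation": "README describing columns and units",
}

REQUIRED = {"identifier", "repository", "format", "license"}

def fair_gaps(rec):
    """Return required fields that are missing or empty."""
    return sorted(f for f in REQUIRED if not rec.get(f))

print(fair_gaps(record))  # → []
```

A record passing this trivial check is not automatically FAIR, but each field maps to one of the four questions on the slide, which makes the checklist auditable.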
24. FAIR is best practice
• (Open) licenses for data can help you greatly; recommended: e.g. Creative Commons.
• Can be time-consuming, especially when not incorporated in the research process.
• Importance of commonly used standards, open file formats, and metadata.
25. Aim for the (near?) future
1. It’s somewhere, in some form
2. It’s somewhere, in a structured form
3. It’s somewhere, in an open format
4. And you can POINT at it!
5. It can even TALK (to other data)
5-star deployment scheme for Open Data: 5stardata.info
26. Hope for the (near) future?
• Research institutions will significantly improve research data management training for ALL staff & students, old and new alike.
• Research funding bodies will tighten up their rules to ensure immediate post-publication data sharing. No embargoes, no bullshit.
• If no published data comes from your funded research, it will negatively affect your future chances of funding.
• Good journals will strictly enforce mandatory data sharing. Journals that don’t will get a bad reputation for irreproducible research.
27. @openaire_eu
Alternative… Imagine a world where no-one shared their data (post-publication).
How would we know what was truth and what was lies / fraud / error?
Imagine the waste of time and resources if everyone had to re-generate data de novo every time.
How would we make progress? We would be in the dark…
Idea – experiment – data analysis and paper writing – finally time for some pizza while the paper gets reviewed – paper published: yay! – and all your hard work disappears.
FROM DATA IN A SCIENTIFIC PIPELINE TO RESEARCH DATA LIFECYCLE
Managing data in a research project is a process that runs throughout the project. Good data management is one of the foundations for reproducible research. Good management is essential to ensure that data can be preserved and remain accessible in the long-term, so it can be re-used and understood by future researchers. Begin thinking about how you’ll manage your data before you start collecting it.
Open data is data that is free to access, reuse, repurpose, and redistribute. The Open Research Data Pilot aims to make the research data generated by selected Horizon 2020 projects accessible with as few restrictions as possible, while at the same time protecting sensitive data from inappropriate access.
Data sharing: data released under restrictions to specific organisations or individuals. Access to this data is usually restricted because it is sensitive in some way, either because it is personal or because its general release might cause security problems.
Expiration dates of media and data.
GenBank is a sequence database first released in 1982, making it one of the earliest bioinformatics community projects on the Internet.
The Bermuda Principles set out rules for the rapid and public release of DNA sequence data. The Human Genome Project, a multinational effort to sequence the human genome, generated vast quantities of data about the genetic make-up of humans and other organisms. But, in some respects, even more remarkable than the impressive quantity of data generated by the Human Genome Project is the speed at which that data has been released to the public. At a 1996 summit in Bermuda, leaders of the scientific community agreed on a groundbreaking set of principles requiring that all DNA sequence data be released in publicly accessible databases within twenty-four hours after generation. These “Bermuda Principles” (also known as the "Bermuda Accord") contravened the typical practice in the sciences of making experimental data available only after publication. These principles represent a significant achievement of private ordering in shaping the practices of an entire industry and have established rapid pre-publication data release as the norm in genomics and other fields.
The three original principles were:
Automatic release of sequence assemblies larger than 1 kb (preferably within 24 hours).
Immediate publication of finished annotated sequences.
Aim to make the entire sequence freely available in the public domain for both research and development in order to maximise benefits to society.
Innovation and progress: a collaborative effort to find the biological markers that show the progression of Alzheimer’s disease in the human brain.
“But we all realized that we would never get biomarkers unless all of us parked our egos and intellectual-property noses outside the door and agreed that all of our data would be public immediately.” At first, the collaboration struck many scientists as worrisome: they would be giving up ownership of data, and anyone could use it, publish papers, maybe even misinterpret it and publish information that was wrong.
1. Prevents data loss: 80% of data is lost after 10 years. Data is fragile, and reproducibility is very difficult without it.
2. Maximizes usefulness and lets others build much more efficiently on previous work: organize data, make it understandable and reusable, and avoid duplication. Stop drowning in irrelevant material. (Reproducibility crisis.)
3. Fosters creativity, interdisciplinary use of data, and meta-analysis.
4. Enables public participation in scientific research.
5. Promotes integrity and increases transparency: managing data is part of good research and avoids accusations of sloppy science.
6. Data tend to have a (much!) longer shelf life than interpretation.
After accounting for other factors affecting citation rate, we find a robust citation benefit from open data.1
Interoperability: how can my data be combined with other datasets and used in other fields?
Licensing: who can access my data, and for what purpose can it be used?
3 stars: you can manipulate the data in any way you like.
4 stars: you can link to it, bookmark it, reuse parts of the data, and combine it with other data.
5 stars: you can discover more related data.
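Since the 5-star scheme is cumulative (each star presumes the previous ones), rating a dataset amounts to walking up the ladder until a level fails. A minimal sketch, with made-up property names that are not taken from 5stardata.info:

```python
# 5-star Open Data rating: each level presumes all previous ones.
# Property names here are illustrative, not from 5stardata.info.
STEPS = [
    ("on_web_open_license", "available on the web under an open license"),
    ("structured", "machine-readable structured data (e.g. a spreadsheet)"),
    ("open_format", "non-proprietary open format (e.g. CSV instead of XLS)"),
    ("uses_uris", "uses URIs so others can point at your data"),
    ("linked", "linked to other data to provide context"),
]

def star_rating(dataset):
    """Count how many consecutive levels the dataset satisfies."""
    stars = 0
    for prop, _desc in STEPS:
        if not dataset.get(prop):
            break
        stars += 1
    return stars

# A CSV file on the web under an open license, but with no URIs or links:
csv_on_web = {"on_web_open_license": True, "structured": True, "open_format": True}
print(star_rating(csv_on_web))  # → 3
```

The early `break` is what encodes the cumulative nature of the scheme: an open-format file that is not on the web under an open license still rates zero stars.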