A large part of the NECDMC curriculum uses case studies to teach best practices in data management for many different science disciplines. This presentation goes through the methodology of a case study, how to develop a case study, and presents an actual example of a research case study.
This document introduces the Brain Imaging Data Structure (BIDS) standard for organizing neuroimaging data. BIDS aims to standardize how neuroimaging data is organized to facilitate data sharing, processing, and analysis. It specifies a common file and folder structure that encodes metadata in filenames and separate metadata files. Key principles are adopting existing standards where possible, including enough metadata for most experiments while allowing extensions, and making the format simple to implement.
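To illustrate how BIDS encodes metadata directly in filenames as underscore-separated key-value pairs (e.g. sub-01_task-rest_bold.nii.gz), here is a minimal Python sketch that splits such a filename into its entities and suffix. This is a simplified illustration of the naming convention only, not a full BIDS validator:

```python
import re

def parse_bids_filename(filename):
    """Split a BIDS-style filename into its key-value entities and suffix.

    BIDS filenames are underscore-separated "key-value" pairs followed by
    a modality suffix, e.g. sub-01_ses-02_task-rest_bold.nii.gz.
    """
    # Strip the extension(s), e.g. ".nii.gz"
    stem = re.sub(r"\.[a-zA-Z0-9.]+$", "", filename)
    entities = {}
    suffix = None
    for part in stem.split("_"):
        if "-" in part:
            key, value = part.split("-", 1)
            entities[key] = value
        else:
            suffix = part  # the trailing modality suffix, e.g. "bold"
    return entities, suffix

entities, suffix = parse_bids_filename("sub-01_ses-02_task-rest_bold.nii.gz")
# entities: {"sub": "01", "ses": "02", "task": "rest"}; suffix: "bold"
```

Because the metadata lives in the filename itself, any tool that understands this pattern can discover subjects, sessions, and tasks without opening the files.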
This document outlines the development of a research data management curriculum. It describes a four-phase process: 1) Planning, 2) Content Development, 3) Piloting, and 4) Evaluation. During the planning phase, needs were assessed through student and faculty interviews. In phase 2, modules and teaching cases were created covering topics such as data types, storage, and sharing. The curriculum was piloted in 2013 and train-the-trainer sessions were held. Evaluation focuses on ensuring the content remains useful across different teaching contexts. The goal is to educate researchers and librarians on best practices for managing research data.
This document discusses reproducible research and provides guidance on how to conduct research in a reproducible manner. It covers:
1. The importance of reproducible research due to large datasets, computational analyses, and the potential for human error. Ensuring reproducibility requires new expertise and infrastructure.
2. Key aspects of reproducible research include data management plans, version control, use of file formats and software/tools that allow reproducibility, and publishing data and code to allow others to replicate results.
3. Reproducible research benefits the scientific community by increasing transparency and allows researchers to re-analyze their own data in the future. Journals and funders are increasingly requiring reproducibility.
Towards automated phenotypic cell profiling with high-content imaging - Ola Spjuth
Presentation by Ola Spjuth (Uppsala University and Scaleout) at the Chemical Biology Seminar Series, February 6th, at Karolinska Institutet and Science for Life Laboratory, Stockholm, Sweden.
ABSTRACT
Phenotypic profiling of cells with high-content imaging is emerging as an important methodology with high predictive power. The true power of these methods comes when integrated into automated, robotized systems that can be run continuously and not restricted to batch analysis. One of the main challenges then becomes how to manage and continuously analyze the large amounts of data produced. In this talk I will present our efforts to establish an automated lab for cell profiling of drugs using multiplexed fluorescence imaging (Cell Painting). I will describe our computational and lab infrastructure as well as the systems, tools and methods we are developing to sustain continuous profiling of cells and continuous AI modeling. A key objective of the group is to improve screening and toxicity assessment, but also to explore predictions of mechanisms and pathways. The long-term goal is to build a closed-loop system where results from analyses are used by an AI system to design the next round of experiments and iteratively improve the confidence in predictions. Research website: https://pharmb.io
RARE and FAIR Science: Reproducibility and Research Objects - Carole Goble
Keynote at JISC Digifest 2015 on Reproducibility and Research Objects in Scholarly Communication
Includes hidden slides
All material except maybe the IT Crowd screengrab reusable
This document provides information and recommendations for preventing data loss through proper storage, organization, and backup of research files. It discusses developing a consistent file naming convention and folder structure for projects. The document also recommends storing multiple copies of important files in different locations and using version control software to track changes over time. Activities are included to help attendees evaluate their current practices and develop improved plans for organizing, backing up, and locking important versions of their data and files.
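The naming-convention and multiple-copies advice above can be sketched in a few lines of Python. The specific project_description_date_version pattern and the helper names below are one reasonable convention chosen for illustration, not something the document prescribes:

```python
import datetime
import pathlib
import shutil

def versioned_name(project, description, version, ext, date=None):
    """Build a filename following a project_description_date_version pattern.

    ISO dates sort chronologically and zero-padded versions sort numerically,
    which keeps file listings in a sensible order.
    """
    date = date or datetime.date.today().isoformat()
    return f"{project}_{description}_{date}_v{version:02d}{ext}"

def back_up(src, backup_dirs):
    """Copy a file into each backup location, creating directories as needed."""
    src = pathlib.Path(src)
    copies = []
    for d in backup_dirs:
        dest = pathlib.Path(d) / src.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)  # copy2 preserves timestamps as well
        copies.append(dest)
    return copies

name = versioned_name("thesis", "survey-data", 3, ".csv", date="2013-12-03")
# name: "thesis_survey-data_2013-12-03_v03.csv"
```

In practice the backup directories would be on physically separate media or locations (e.g. a network drive and an external disk), per the multiple-locations recommendation.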
Responsible Conduct of Research: Data Management - Kristin Briney
This presentation was given by me and Brad Houston (http://www.slideshare.net/herodotusjr) for UWM's Responsible Conduct of Research (RCR) series in Fall 2013. It covers data management plans and practical data management tips. The corresponding handout is also available on Slideshare: http://www.slideshare.net/kbriney/rcr-data-management-handout
Lab Notebooks as Data Management (SLA Winter Virtual Conference 2012) - Kristin Briney
This talk, aimed at librarians, describes the data management issues surrounding paper and electronic lab notebooks. It offers several ways for librarians to support good practices and the transition from paper to electronic.
Week 1 lecture for High School Bioinformatics course; covers why we need to use computers in biology, what bioinformatics/computational biology is, an introduction to machine learning, and examples from current research
TechSmith Morae was used on a laptop computer to conduct usability testing of the recently revised WVU Libraries Database web application, using test questions from the first round of testing that were still relevant to the web application. This round of usability testing was internal and focused on undergraduate student employees.
Database Web Application Usability Testing - Tim Broadwater
TechSmith Morae was used on a laptop computer to conduct usability testing of the newly designed WVU Libraries Database web application. This round of usability testing was internal and focused on WVU Libraries' primary target audience.
Talk given for UW-Madison Ebling Library and School of Medicine and Public Health on 3 Dec 2013. It covers electronic laboratory notebooks and what to look for in the software.
This document discusses the importance of lab notebooks for scientific data management, both currently and in the future. It identifies that lab notebooks are a critical tool for organizing pre-publication research data but practices vary widely. Ideal notebooks would contain all raw data, metadata, analyses, and citations in an electronic, searchable format. The document outlines how librarians can help by developing resources on best practices for organizing digital data and recording this in notebooks, as well as instruction on electronic notebook software. It recognizes that notebooks are shifting to fully digital formats and this will further impact data management.
Workshop - finding and accessing data - Cambridge August 22 2016Fiona Nielsen
Finding and accessing human genomic data for research
University of Cambridge, United Kingdom | Seminar Room G
Monday, 22 August 2016 from 10:00 to 12:00 (BST)
Charlotte, Nadia and Fiona presented an overview of data sources around the world where you can find genomics data for your research and gave examples of the data access application for dbGaP and EGA with specific details relevant for University of Cambridge researchers.
Using electronic laboratory notebooks in the academic life sciences: a group ... - SC CTSI at USC and CHLA
This document summarizes a webinar on using electronic laboratory notebooks (eLNs). The webinar featured a presentation by Dr. Ulrich Dirnagl on his experience using eLNs to make research teams more efficient. He believes paper notebooks are outdated and that eLNs can help address the reproducibility crisis in research. The webinar covered the benefits of eLNs like collaboration, data sharing, and compliance with regulations. It also reviewed different types of eLNs and pricing models. While implementation challenges exist, eLNs were found to improve oversight, record keeping, and transparency if selected and supported properly.
Presentation given at Organization for Human Brain Mapping Annual Meeting in Singapore 2018
Video recording: https://www.pathlms.com/ohbm/courses/8246/sections/12538/video_presentations/116214
Practical Data Management - ACRL DCIG Webinar - Kristin Briney
This document summarizes a webinar on practical data management. It discusses best practices for file organization, naming conventions, documentation, storage, backups, and ensuring future usability. Key recommendations include organizing files logically by project or type, using consistent naming conventions, thoroughly documenting data collection and analysis methods, storing data in multiple locations both on and off-site, backing up data regularly including testing backups, and future-proofing data through file format conversion and migration to new media. Resources for further information on data management best practices are also provided.
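The webinar's recommendation to test backups, not just make them, can be illustrated with a checksum comparison: a backup only counts if it matches the original byte for byte. This is a generic sketch, not a tool from the webinar:

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(original, backup):
    """Check that a backup copy is byte-identical to the original."""
    return sha256_of(original) == sha256_of(backup)
```

Running a check like this periodically, and after every backup run, catches silent corruption and incomplete copies before the original is lost.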
This document provides guidance on creating a teaching case study for a final project. It explains that a teaching case presents a research problem and questions for users to consider solutions within acceptable practices. The case study should craft a narrative based on an interview with a researcher, describing their project and data management challenges. It should highlight specific areas or "teachable moments" around research data management practices. The document provides tips for structuring the case narrative, identifying teaching points linked to data principles, and crafting discussion questions. The final product will be a teaching case presented in a 30-minute presentation, in order to educate others about data management best practices.
Data and donuts: how to write a data management plan - C. Tobin Magle
This document provides guidance on how to write a data management plan (DMP). It discusses what a DMP is, why researchers should care about data management, and where data management fits into the research cycle. It also covers the key components of a successful DMP, including a data inventory, a strategy for describing the data, a plan for long-term data preservation, and methods for making the data accessible. The document provides examples and exercises to help researchers develop the sections of a DMP for their own research projects.
Being FAIR: FAIR data and model management, SSBSS 2017 Summer School - Carole Goble
Lecture 1:
Being FAIR: FAIR data and model management
In recent years we have seen a change in expectations for the management of all the outcomes of research, that is, the "assets" of data, models, codes, SOPs, and workflows. The "FAIR" (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship [1] have proved to be an effective rallying cry. Funding agencies now expect plans for the management, retention, and accessibility of data (and increasingly of software). Journals are raising their expectations of the availability of data and codes for pre- and post-publication. The multi-component, multi-disciplinary nature of Systems and Synthetic Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Our FAIRDOM project (http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety. The FAIRDOM Platform has been installed by over 30 labs or projects. Our public, centrally hosted Asset Commons, the FAIRDOMHub.org, supports the outcomes of 50+ projects.
Now established as a grassroots association, FAIRDOM has over 8 years of experience of practical asset sharing and data infrastructure at the researcher coal-face ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (Germany's de.NBI and Systems Medicine of the Liver; Norway's Digital Life) and European Research Infrastructures (ISBE) as well as in PI's labs and Centres such as the SynBioChem Centre at Manchester.
In this talk I will explore how FAIRDOM has been designed to support Systems Biology projects and show examples of its configuration and use. I will also explore the technical and social challenges we face.
I will also refer to European efforts to support public archives for the life sciences. ELIXIR (http://www.elixir-europe.org/) is the European Research Infrastructure of 21 national nodes and a hub, funded by national agreements to coordinate and sustain key data repositories and archives for the life science community, improve access to them and related tools, support training, and create a platform for dataset interoperability. As the Head of the ELIXIR-UK Node and co-lead of the ELIXIR Interoperability Platform I will show how this work relates to your projects.
[1] Wilkinson et al., The FAIR Guiding Principles for scientific data management and stewardship, Scientific Data 3 (2016), doi:10.1038/sdata.2016.18
This document provides a guide to practicing open science. It discusses transparency, reproducibility, and collaboration as key principles of open science. It then provides recommendations for openly sharing data, code, and papers. For data, it recommends using structured formats like BIDS and sharing in curated repositories. For code, it recommends using version control systems like GitHub and containerization tools like Docker. For papers, it recommends practices like preregistration, publishing preprints, and openly reviewing other works. The overall message is that open science promotes collaboration and reproducible research even if adopted incrementally.
Presentation from a University of York Library workshop on research data management. The workshop provides an introduction to research data management, covering best practice for the successful organisation, storage, documentation, archiving, and sharing of research data.
The case for cloud computing in Life Sciences - Ola Spjuth
This document summarizes Ola Spjuth's background and research interests related to cloud computing in life sciences. Spjuth is an associate professor who manages bioinformatics resources at SciLifeLab and UPPMAX. His research focuses on developing e-infrastructure, automation methods, and applied e-science using tools like Docker and Kubernetes. He is working on projects applying these technologies to problems in drug discovery and predictive modeling of image data.
This document summarizes a seminar on data management for undergraduate researchers. It discusses what data is, why it needs to be managed, and key aspects of the data management process such as data organization, metadata, storage, and archiving. Topics covered include file naming best practices, version control, documentation, metadata standards, storage options, and long-term archiving. The goal is to help researchers organize and document their data so it can be understood, preserved, and reused.
Data Literacy: Creating and Managing Research Data - cunera
This document discusses best practices for creating and managing research data. It covers defining data, the importance of data management, developing a data management plan, file naming conventions, metadata, data sharing and preservation. Key points include making a data management plan addressing types of data, standards, access and sharing policies; using descriptive file names with dates; storing multiple versions of data; and including metadata to explain the data. Resources for data management support are provided.
Introduction to Research Data Management for postgraduate students - Marieke Guy
The document provides an introduction to research data management for postgraduate students, outlining what research data is, the research process, what research data management involves and why it is important, and how students can start thinking about good research data management practices. It discusses defining and organizing data, storage and security, and maintaining findable and understandable data throughout the research lifecycle. The goal is to explain the importance of research data management and the roles students play in effective data management.
Genome sharing projects around the world, Nijmegen, Oct 29 2015 - Fiona Nielsen
Genome sharing projects across the world
Did you ever wonder what happened to the exponential increase in genome sequencing data? It is out there around the world and a lot of it is consented for research use. This means that if you just know where to find the data, you can potentially analyse gigabytes of data to power your research.
In this talk Fiona will present community genome initiatives, the genome sharing projects across the world, how you can benefit from this wealth of data in your work, and how you can boost your academic career by sharing and collaboration.
by Fiona Nielsen, Founder and CEO of DNAdigest and Repositive
With a background in software development, Fiona pursued her career in bioinformatics research at Radboud University Nijmegen. Now a scientist-turned-entrepreneur, Fiona founded DNAdigest and its social enterprise spin-out Repositive Ltd. Both the charity and the company focus on efficient and ethical sharing of genetics data for research, to accelerate diagnostics and cures for genetic diseases.
This document discusses the need for critical infrastructure to promote data synthesis and evidence-based nutrient management. It outlines 10 steps for real-time data uptake, analysis, and customized nutrient recommendations. Key challenges include data standards, minimum data sets, provenance, and repositories. The Purdue University Research Repository is presented as a solution, providing preservation, curation, and publication of agricultural data. Hands-on support from librarians and agronomists is discussed to help researchers transition data and ensure best practices.
Data Management for Postgraduate Students by Lynn Woolfrey
This document discusses research data management for postgraduates. It explains that research data management refers to storing, accessing, and preserving research data. It notes that funders and universities now require data management plans for funding proposals and research. The document provides reasons for doing research data management, such as ensuring long-term data preservation, preventing fraud, and enabling data reuse. It outlines elements to include in a data management plan and resources for writing plans. The document advises that data services can help take the burden of research data management off researchers.
This document discusses the need to make research data more discoverable and usable by connecting disparate data through metadata. Currently, the majority of research data is stored in isolated locations like personal hard drives, resulting in lost opportunities for analysis across experiments. The document advocates for culture change where researchers curate and share their data in centralized repositories to enable new insights from aggregating and comparing data in connected ways. This would help address challenges like variability between specimens and complexity in living systems that reductionist approaches cannot capture alone. Ensuring long-term sustainability of data repositories and defining roles for libraries and institutions are also discussed.
Data and Donuts: How to Write a Data Management Plan by C. Tobin Magle
This presentation describes best practices for how to write a data management plan for your research data. Additionally, it provides information about finding funder requirements, metadata standards, and repositories.
This document summarizes an interactive workshop on using Pinterest effectively. It introduces Pinterest as a photo-sharing site where users create boards to organize interests and collections. It then covers key aspects of using Pinterest like signing up, creating boards, pinning content, following others, and strategies for business and marketing uses. The workshop agenda aims to provide starters on using Pinterest and tips for curating engaging content and growing an audience.
Automation and Integrated Library Systems by Julie Goldman
Simmons LIS 489: Technology Foundations for Information Science
Social and Professional Aspects Final Presentation: Automation and Integrated Library Systems. Focuses on two different automation systems used by libraries.
This document discusses how libraries can use Pinterest to engage patrons. It provides an overview of Pinterest, including how to sign up and the basic terminology. It then covers strategies for libraries, such as creating boards focused on specific topics, following other users and pinning content from other sites. The document recommends evaluating other library profiles for ideas and tips for marketing a library's presence on Pinterest through specific boards, descriptions and comments. It also briefly mentions using a business account and Pinterest apps.
Zebrafish and Data Management Midterm Project by Julie Goldman
LIS 532G: Scientific Research Data Management
Midterm Project Presentation: Zebrafish and Data Management
Research project at The Ohio State University; data interview with a research graduate student; data management plan and evaluation; about data management at Ohio State.
Zebrafish and Data Management Final Project by Julie Goldman
LIS 532G: Scientific Research Data Management
Final Project Presentation: Zebrafish and Data Management
Teaching Case Study focuses on model organisms in a neuroscience research lab: Using Zebrafish as a Model System for Studying Motor Axon Guidance & Motoneuron Disease
Data Interview and Data Management Plans by Julie Goldman
LIS 532G: Midterm Project Presentation
Data Interview: What is it and how do you do it?
Preparation for Midterm Project to design and conduct a Data Interview, and to create a Data Management Plan.
This document provides an overview of the Dataverse Network Project, which is a repository for research data hosted at Harvard University. It allows researchers to deposit, share, and organize their data in a curated network. Key features include long-term preservation of data and metadata, access and sharing capabilities, and archiving best practices to promote data access and reproducibility. Researchers can create individual dataverses to organize their studies and deposit data through a web interface or via software installation. The network supports various file types and formats and provides data citation and version control.
Developing a Research Case Study
1. Developing a Research
Case Study
Julie Goldman, MLIS
@jgolds2
Library Fellow
Lamar Soutter Library
UMass Medical School
New England Collaborative
Data Management Curriculum
Scientific Research Data Management by Lamar Soutter Library, University of Massachusetts Medical
School is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
#teachingNECDMC
2. Outline
• Case Study Methodology
• Developing Case Study
• NECDMC Research Case Study
• Teaching NECDMC
4. “The case method packs more experience into every
hour of learning than any other instructional approach.”
Harvard Business Publishing,
Hints for Case Teaching
http://www.expand2web.com/blog/marketing-case-study-how-weight-watchers-dominated-the-weight-loss-industry/
5. Case Study Method
• Present problems
• Focus on a topic or
“teachable moment”
• Prepare users
http://blog.tradeshift.com/tradeshift-case-study-broadway-design-company/
6. Creating a Case Study
• Background Research
• Data Interview
• Case Narrative
• Discussion Questions
• Data Management Plan
http://amitkaps.com/bring-the-right-brain-at-work/
13. Using Zebrafish as a Model
System for Studying Motor Axon
Guidance & Motoneuron Disease
14. Research Questions
• What is the biological basis of the
motoneuron disease SMA?
• How can modeling ALS in zebrafish be
useful as a tool for drug and genetic
screening?
• What genes define motor axon
outgrowth?
16. Data Interview
• Draw out information about project
• Questions focus on the data story
https://www.youtube.com/user/nealsciencebootcamp/feed
17. Tips and Reminders
• Do your homework
• Use follow-up questions
• Make the meeting about the
researcher not the library
• Establishes relationships
• Associates the library with data
https://yellowdoggraphics.wordpress.com/page/3/
19. Initial Interview
1. As a research focused university
what kind of NIH grant do you have?
2. What other funding sources do
you have?
3. How long has this research project
been going on?
4. What is the overarching purpose
of this research?
5. What is your role in the research
process?
6. What kinds of experiments are
you doing with the zebrafish?
7. Who else works on this project?
8. What types of data products are
being produced?
9. What file formats are your data
produced in?
10. How is data analyzed?
11. How is the data managed?
12. Does your lab have naming
conventions for files/data?
13. Where and how long is data and
notebooks stored?
14. What kind of backup and security
protocols does your lab have?
15. What is shared publicly and with
the neuroscience community?
16. Who is allowed access to the data?
17. Are there security concerns within
the lab?
18. Who owns and is responsible for the
research data?
19. With a long research project, there is
personnel turnover within the lab. How
is data passed down among the research
team?
20. How is your lab ensuring long-term
preservation of your research and data?
20. Follow up Email
1. What lab instruments do you use?
2. You mentioned .TIFF files, what
other file formats do these
instruments create?
3. Do you have to change file formats
to make them accessible to everyone?
4. Any idea how many files are being
produced daily?
5. In terms of metadata, is there a
data dictionary to go along with it?
6. Who exactly is reusing the research
data? Other OSU labs? US labs?
International?
7. You told me about your lab notebook
and I am aware your lab is very low tech,
but has the university or the lab thought
about implementing electronic lab
notebooks?
8. How much interaction is there between
your PI and you and everyone else working
in the lab?
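Question 5 above asks whether the lab keeps a data dictionary alongside its metadata. As a sketch of what one might look like for the kind of image data this lab produces, here is a minimal version in Python; every field name is hypothetical, not taken from the actual lab's records:

```python
import json

# Hypothetical data dictionary for the lab's image files; the field
# names and definitions are illustrative assumptions, not the lab's.
data_dictionary = {
    "fish_id": "Unique identifier for the zebrafish (stock number + tank)",
    "dob": "Date of birth of the fish, YYYY-MM-DD",
    "genotype": "Genetic line, e.g. SOD1 G93A or G85R transgenic",
    "image_file": "TIFF or JPEG file name from the fluorescent microscope",
    "axon_score": "Motor axon defect score: severe, moderate, mild, or none",
    "scored_by": "Initials of the researcher who scored the image",
}

def describe(field):
    """Look up the definition of a field, or flag it as undocumented."""
    return data_dictionary.get(field, "UNDOCUMENTED FIELD: " + field)

# Serializing to JSON lets the dictionary travel alongside the data files.
print(json.dumps(data_dictionary, indent=2))
```

Even a flat key-to-definition file like this gives a new lab member (or a reuser of the data) a fighting chance of interpreting the files.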
21. Second Interview
1. When you collect data about your fish,
how do you make sure that information is
linked to that fish?
2. Where do all of the image files you
are collecting end up? Only in your lab
notebook and on your personal
computer?
3. Can you explain the scoring system
you use to quantify the defect in the
motor neurons that are imaged on
the fluorescent microscope?
4. Do you know how often the programs you
use are updated for new versions?
5. Only zebrafish genetic lines are sent to
the international registry. What happens
with all the other data about the fish?
6. Does your lab submit data or publications
to Ohio State University’s institutional
repository?
7. Do you know if Ohio State has policies for
data sharing or data preservation?
24. Research
• Motoneuron diseases SMA and ALS
• Genetic and molecular cues
• Genetic models of zebrafish
• Research since 1996
https://science.nichd.nih.gov/confluence/display/zfig/Home
26. SMA
• Spinal muscular atrophy
• Caused by mutations in the
survival motoneuron gene (SMN)
• SMN protein is critical to the
health and survival of nerve cells in
the spinal cord responsible for
muscle contraction
• Occurs early in life
http://en.wikipedia.org/wiki/Spinal_muscular_atrophy
27. SMA
• Protein knockdown technology
• What function of SMN leads to motoneuron dysfunction
• Cell death in SMA caused by motor neuron defects
• Use scoring system on fluorescent microscope images
http://www.smasupportuk.org.uk/blog/research/sma-support-uk-at-the-cure-sma-conference-2013
28. ALS
• Amyotrophic lateral sclerosis
or Lou Gehrig’s disease
• Muscle weakness and atrophy
• Defect on chromosome 21
which codes for superoxide
dismutase (SOD1) enzyme http://en.academic.ru/dic.nsf/enwiki/17085
29. ALS
• Genetic mutation :
SOD1 gene to
generate SOD G93A
and G85R transgenic
zebrafish
• Drug screens with
zebrafish larva
• Rescue motor neurons
early in development
http://oncampus.osu.edu/article.php?id=1645
33. Zebrafish Facility
• Facility supports three labs
• 1200 sq ft
• 1234 tanks & 40,000 fish
• Tank labels : researcher’s name,
fish name, DOB, stock number
http://medicine.osu.edu/neuroscience/neuroscience-core-services/core-b-genetics/ii-zebrafish-and-genome-manipulation-facility/pages/index.aspx
34. General Lab Work
• PCR : polymerase chain reaction
• Agarose gel electrophoresis: separate DNA
• Western Blot : detect protein levels in tissue
• Microscopy : scoring system (axon morphology)
doi: 10.1083/jcb.200303168
35. Equipment and Products
• Bio-Rad (RT)-qPCR : Microsoft™ Excel™ files
• Thermo Scientific™ NanoDrop™ : Excel™ files
• Western Blots : film developed in a dark room
• Agarose gels : read on a gel box and
printed/scanned for densitometry quantification
• Microscopes : TIFF and JPEG files
• Data analysis : Excel™ or SPSS™
36. Programs
• SPSS® : statistics software
• ImageJ™ : public domain, Java™-based image
processing program developed by NIH
• Adobe® Photoshop® : photo editing
• Microsoft® Office : Word™, Excel™, PowerPoint™
37. Data Flow
• Data produced on old computers attached to equipment
• Transferred to the big (old) lab computer for processing
and data analysis
Example: fluorescent microscopy images
are saved on the computer attached to
the microscope which are then printed
out and sent to other computers
www.labx.com
https://u.osu.edu/beattie.24/
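The transfer step described above (instrument computer to the shared lab computer) is done by hand. A sketch of how it could be scripted, assuming TIFF images are gathered into dated folders; the folder layout and file names are illustrative assumptions, not the lab's actual setup:

```python
import datetime
import shutil
import tempfile
from pathlib import Path

def collect_images(instrument_dir, lab_dir):
    """Copy microscope image files into a dated folder on the lab computer.

    One possible automation of the manual transfer; the
    lab_dir/YYYY-MM-DD/ layout is an assumption for illustration.
    """
    copied = []
    for image in Path(instrument_dir).glob("*.tif*"):
        stamp = datetime.date.fromtimestamp(image.stat().st_mtime).isoformat()
        target_dir = Path(lab_dir) / stamp
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(image, target_dir / image.name)
        copied.append(target_dir / image.name)
    return copied

# Demo with throwaway directories standing in for the two computers.
src = Path(tempfile.mkdtemp())
dst = Path(tempfile.mkdtemp())
(src / "embryo_48hpf.tiff").write_bytes(b"fake image data")
copied = collect_images(src, dst)
print(copied)
```

A scheduled script like this would also remove the single point of failure of images living only on the aging instrument computers.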
38. File Naming Conventions
• No standardization
• Personal
• Become more professional when sent
to the PI and goes to publication
http://dilbert.com/strip/2011-04-23
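Since the lab has no naming standard, here is a hedged illustration of one common convention (researcher_date_experiment_sample.ext); the pattern and all example values are invented for illustration, not drawn from the lab:

```python
import re
from datetime import date

def standard_name(researcher, experiment, sample, ext, day=None):
    """Build a filename like 'goldman_20150329_sod1-g93a_embryo-12.tiff'.

    The researcher_date_experiment_sample.ext pattern is one common
    convention, offered as a sketch; the case-study lab has no standard.
    """
    day = day or date.today()
    parts = [researcher, day.strftime("%Y%m%d"), experiment, sample]
    # Lowercase and strip characters that cause trouble across platforms.
    safe = [re.sub(r"[^a-z0-9-]+", "-", p.lower()).strip("-") for p in parts]
    return "_".join(safe) + "." + ext.lstrip(".").lower()

print(standard_name("Goldman", "SOD1 G93A", "embryo 12", ".TIFF",
                    day=date(2015, 3, 29)))
```

The point is less the exact pattern than that it is documented, sortable by date, and survives the hand-off to the PI and publication without renaming.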
39. Lab Notebooks
• Paper lab notebooks for non-digital data
• Personal data keeping techniques
• Records detailed descriptions of experiments
• Notebooks stay in the lab
http://2012.igem.org/Team:LMU-Munich/Lab_Notebook
40. Backup and Security
• Use personal computers
• Responsible for keeping
external hard drives
• Security: passwords and
key access to lab
http://d7.library.gatech.edu/research-data/home
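Copying files to an external hard drive is more trustworthy if each copy is verified. A sketch of a checksum-verified backup of a single file, with throwaway paths standing in for the researcher's computer and external drive:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def backup_verified(source, backup_dir):
    """Copy a file to the backup location and confirm the copy is intact.

    Illustrative only; the paths and file names are stand-ins, not the
    lab's actual backup setup.
    """
    def sha256(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    dest = Path(backup_dir) / Path(source).name
    shutil.copy2(source, dest)
    # A matching checksum means the bytes on the backup drive are identical.
    return sha256(source) == sha256(dest)

# Demo with a throwaway file and an "external drive" directory.
work = Path(tempfile.mkdtemp())
drive = Path(tempfile.mkdtemp())
scan = work / "western_blot_2015-03-29.tiff"
scan.write_bytes(b"scanned film image")
ok = backup_verified(scan, drive)
print("backup verified:", ok)
```

Verification matters here because each lab member is individually responsible for their own backups, with no central check.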
41. Sharing
• Sharing via Dropbox™ and Google Drive™
• Data from previous graduate students
passed down through the use of CDs
http://www.creativewomenscircle.com.au/social-media-using-dropbox/
42. Access
• Once published: public access to data
• Anyone can ask for reagents and animals
• Fish genetic lines are submitted to
international database for zebrafish
http://www.ru.nl/library/services/research/researchdata/finding/
43. • Nature
• Science
• PubMed
• Anyone can ask for reagents,
antibodies, enzymes, and/or fish that
were used in a published study
• OSU: get anything pre-publication
45. Preservation
• Archive: duration of the grant
• NIH: right to access data for 3 years
https://grants.nih.gov/grants/policy/data_sharing/data_sharing_guidance.htm
NIH Data Sharing Policy and Implementation Guidance
http://wiki.dpconline.org/
53. Data Management Plan
• Breakout Activity
• Use SDMP
• Create DMP
http://www.ru.nl/library/services/research/researchdata/dmp/
54. Creating a Case Study
• Background Research
• Data Interview
• Case Narrative
• Discussion Questions
• Data Management Plan
http://amitkaps.com/bring-the-right-brain-at-work/
55. Develop Your Own
• Use the case study methodology
• Understand the process and steps involved
• Follow this format: teaching points, narrative
and discussion questions
• NECDMC research case
http://www.dtpli.vic.gov.au/planning/urban-design-and-development/design-case-studies
56. Next Up…
• Research Case Study
• Identify Data Management Needs
• Create Data Management Plan
• Teaching with NECDMC
57. References
Ferguson (2012) Lurking in the Lab: Analysis of Data from Molecular Biology Laboratory Instruments:
http://escholarship.umassmed.edu/jeslib/vol1/iss3/5/
The Beattie Lab at OSU, Department of Neuroscience:
https://u.osu.edu/beattie.24/
Ohio State University Neuroscience Graduate Program:
http://ngsp.osu.edu/
Ohio State University Library:
http://library.osu.edu/staff/admin-plus/AdminPlusNotes_20110427.pdf
Johns Hopkins University Data Management Services:
http://dmp.data.jhu.edu/sites/default/files/Questionnaire.doc
National Institute of Environmental Health Sciences:
http://www.niehs.nih.gov/news/newsletter/2013/9/science-ntptalk/
58. Developing a Research
Case Study
Julie Goldman, MLIS
@jgolds2
Library Fellow
Lamar Soutter Library
UMass Medical School
New England Collaborative
Data Management Curriculum
Scientific Research Data Management by Lamar Soutter Library, University of Massachusetts Medical
School is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
#teachingNECDMC
Editor's Notes
Current library fellow
Graduate of Simmons – took scientific research data management taught by UMMS
Contributed to the curriculum – biological lab case study
Joined NECDMC teaching team
Methodology – what is it and how it works for NECDMC
Developing – background, interview, narrative, discussion questions, DMP
Case Study – zebrafish developed case study which will be used for the breakout activity
Teaching – Donna will then expand on the case study method and how it relates to data management education and teaching using NCDMC case studies and materials
A large part of the NECDMC curriculum uses case studies to teach best practices in data management for many different science disciplines
Used for education in many areas – science, business, law
Present problems users must attempt to solve within acceptable practices
Teachable moment useful for educating others
highlights specific data management practices or needs of a specific discipline or type of research
builds an understanding of scientific research
Prepare users for similar situations in the future
First step is to identify the environment: background information on institution, researcher, research project (who, what, when, where, why)
Hold a data interview with a researcher
Write narrative of research project
NECDMC case studies include the narrative, questions and highlight data management issues
Develop discussion questions
Craft a mock DMP
Typical research lifecycle through a project – data flow in the in center
Librarians typically downstream – collecting books/journals/datasets – data discovery/archiving
Should be more involved upstream at the iteration/project start – project planning with data collection/organization/description
Case study example in NECDMC
Research project using model organisms in a neuroscience research lab
My researcher (at the time) was a graduate student in the neuroscience program at OSU
Specifically interested in neurological disorders and gene therapy
Research lab at the Ohio State University
The Beattie Lab at OSU
Use zebrafish as model organism for research
Using zebrafish as a model organism for studying motor axons in motoneuron diseases
Research Questions:
Biological basis of SMA
Modeling ALS in zebrafish for drug and genetic screening
Identifying genes for motor axon growth
Go through the steps in setting up, conducting and analyzing a data interview
Use this information so the librarian can address data lifecycle issues and begin to implement a data management plan
Acquire as much information as possible from the researcher
Questions on data story, purpose, and life span
Great resource from New England Science Boot Camp – videos featuring UMMS librarians discussing “how to talk to researchers”
Homework on researcher and their research – impress them with your knowledge
Follow up during and after the interview – ask more questions to fully understand the project workflow
Make the meeting about the researcher – not about the library and what the library can do (yet)
Establishes relationships with institutional researchers
Researchers will see that librarians are interested in the research process and understanding research needs/issues/challenges
Associates the library with data – once the librarian understands the research project – make recommendations more how the library can help with the data needs…
Create an interview instrument – lots of options for templates
Digital Curation Centre’s Checklist for a Data Management Plan
Purdue Data Information Literacy Interview Instruments
University of Virginia Data Interview Initiative
Johns Hopkins
In addition to using a template – create own questions related to specific researcher/research
One question at a time
Avoid yes/no questions – open ended (#12 & 17 bad)
Limit questions to CORE aspects (I might have had too many questions…)
Set up interview – I asked for 30 minutes of her time to talk about her research
Skype interview and recorded
Use follow-up questions – #10 (You have a lot of digital files being created, what file formats are those generated in? → Is there non-digital data?)
Offer check-ins/copy of interview transcript
More follow up – multiple meetings
I sent very specific questions via email
More follow-up
Make sure you fully understand the research – shows your dedication to the project but do not become overbearing/disruptive
Questions really tailored to the research projects
This is what I learned: could have done more investigation beforehand!
Who what when where why
Construct narrative from the data interview transcript
Telling the story of the research project and the research data lifecycle
Include as many details as possible and point out missing pieces/challenges researcher expresses
Now share my research case…
Neuroscience research lab
Investigating the biological basis of motoneuron diseases SMA and ALS
Genetic and molecular cues that guide motor axons to their target muscle
Using zebrafish as a genetic model for these diseases (click)
Research project since 1996 – multiple grants
NIH R01 – award made to support a discrete, specified, circumscribed project
Government is strict about data keeping and can ask to see data and notebooks any time
NIH has the legal right to audit and examine records relevant to any research grant award
Private funding: SMA & ALS families, foundations/organizations and private companies
Overview of SMA helps to understand the research experiments and data collected
Low levels of “survival of motor neuron” (SMN) protein leads to muscle atrophy and weakness
Occurs early in life - is the leading genetic cause of death in infants and toddlers
How zebrafish serve as genetic model of SMA
Protein knockdown technology in zebrafish development
Cell death in SMA caused by motor neuron defects during early development
Use scoring system on fluorescent microscope images to determine conditions and development of motor neurons (severe, moderate, mild, no defect)
Overview of ALS to understand experiments and data collection
Muscle weakness and atrophy throughout the body due to degeneration of the upper and lower motor neurons
Superoxide dismutase 1, soluble (SOD1) is a gene on chromosome 21 responsible for an enzyme that protects the body from free radicals
Free radical accumulation can damage DNA and proteins produced within cells
Looking at 2 specific gene mutations – correcting the effects of the mutant SOD1 gene
Use zebrafish larva to understand defects in the early stages of neuron development
Example of zebrafish microscopy images:
Optineurin (OPTN) is a 577-amino-acid protein with versatile functions that interacts with a variety of proteins
Mutations in OPTN gene have been associated with ALS
OPTN interacts with aggregating proteins (SOD1 and G93A) involved in ALS
OPTN depletion in zebrafish causes motor axonopathy and mutant SOD1 increases motor axonopathy
Images shows:
Zebrafish injected with OPTN-specific translation blocking (OPTN ATG-AMO) morpholino showed a phenotype (curved tail indicated by the arrow)
This research project is just focusing on SOD1
Overexpressing SOD1 G93A at 48 hours after fertilization, in comparison with non-injected zebrafish or zebrafish injected with a control morpholino (control AMO)
Looking at the interaction of these proteins in motor axon development – causing axonopathy or axon degradation – curved spinal cord and loss of mobility
From the previous image you can see that zebrafish larva are easy to use as an experimental model organism – easy to grow and see through
Used for many kinds of research
Why zebrafish?
Model established
Genome fully sequenced
Well-understood, easily observable
Testable developmental behaviors
Rapid embryonic development
Large, robust, transparent embryos
Develop outside mother
Similar to mammalian models and humans
Important factor in this research project is the zebrafish facility
Facility supports three research labs – 10-12 people using fish for different projects
Each person has their own fish – labeled accordingly
Common stock for breeding and controls
Facility manager oversees breeding, the facility, and IACUC compliance (Institutional Animal Care and Use Committee)
Google Drive database for logging information and updates on fish/experiments – used to have a paper log book in the facility
Researcher described “general lab work”:
PCR: polymerase chain reaction amplifies copies of a particular DNA sequence
Running agarose gel electrophoresis - DNA manipulation/separation
Western blots to detect protein levels
Microscope imaging
Bio-Rad (RT)-qPCR machine – amplifies a single or a few copies of a piece of DNA across several orders of magnitude, produce Microsoft™ Excel™ files
Thermo Scientific™ NanoDrop™ spectrophotometer – measures light transmittance or reflection intensity as a function of the light source wavelength
Western blot images developed are scanned
Agarose gels are read on a gel box and images are scanned and printed for densitometry quantification, the measure of light absorption through the medium
Microscopes produce Tagged Image File Format (TIFF) and JPEG (.jpeg) files
The team uses Excel™ and SPSS™ for data analysis
SPSS™ statistical software for data analysis
Edits image files in ImageJ™ - Java™-based image processing program developed by the NIH - or Adobe™ Photoshop™
Uses Portable Document Format (PDF) (.pdf) and PowerPoint™ (.pptx) files for figures and publications.
Typical data flow involves analysis of direct microscope images of manipulated fish samples, or gel images of DNA/protein analysis
These image files are produced on computers attached to the lab equipment
Files are analyzed on a computer and/or sent to the experimenter’s personal computer
The researcher always prints out a hard copy of any images and pastes them in her paper lab notebook
Lab does not have any standardized way to document its data
No naming conventions for saving and locating documents and images
No data dictionary
Files usually just involve the person’s name and a title or description meaningful to that person
These files are re-named for the PI and/or publication
Pastes hard copy images (gels, microscope) in notebook
Graduate student feels her notebook is comprehensible, easy to follow, and better than the post doc’s
Team uses their personal computers for lab work
She feels each person is responsible for his or her own data management
She backs up the lab files on her computer to an external hard drive
Security: passwords and key access to building and lab
When the graduate student began working in the lab - given CDs with previous data
CDs containing images and data analysis
Now the lab team shares its data with each other using Google Drive and Dropbox on university server
Ultimately PI: responsible for data
Graduate student feels she can get any data she needs pre-publication
The graduate student feels that once the research is published, then anyone who wants it has access to the relevant data in the article
Anyone can ask for reagents and animals used in published study
Places of publication – science/nature/pubmed
Anyone can ask for reagents, antibodies, enzymes, and/or fish that were used in any published study
Share and use data via repositories that house data on genetically modified zebrafish
ZFIN – NIH-funded zebrafish model organism database
Zebrafish Gene Collection (ZGC) – NIH initiative supports the production of cDNA libraries, clones and sequences of expressed genes for zebrafish – publicly accessible to the biomedical research community – all ZGC sequences are deposited in GenBank
ZF-HEALTH – is a Large-scale Integrating Project funded by the European Commission
Publications are considered the primary “electronic” form of data conservation in her lab
At the time – NIH’s Data Sharing Policy – no formal data management plan beyond sharing
Expected to archive for the duration of the project/grant as the NIH could ask for data/lab notebooks/etc.
Case study on site
Teaching points
Case narrative
Discussion Questions
Teaching points – integrate the data story into the simplified data management plan & highlight where the researcher is doing something well, not doing something, or doing something that is not of best practice
Narrative – overview of research project (background, researcher) & data flow (collection, storage, sharing, preservation)
Discussion questions – highlight the data management topics and needs within a research case, help when teaching with the case study, prompt people to identify the data flow within a project, understand the components of a SDMP and what areas librarians can help researchers
Case study example in NECDMC
Breakout activity after lunch
Create a data management plan using the case study (case narrative, teaching points, discussion questions) and the SDMP that Donna will expand on in her presentation
First step is to identify the environment: background information on institution, researcher, research project (who, what, when, where, why)
Hold a data interview with a researcher
Write narrative
Develop discussion questions
Craft a mock DMP
NECDMC case studies include the narrative, questions and highlight data management issues
Now that you have the methodology for why case studies are useful, and the steps to create one – create your own
As you can see this case is not long
Go through the process –identify a researcher at your institution or other place, understand the institution or environment, conduct an interview, create a narrative, create discussion questions to highlight the data management needs and challenges
We can add your research case to our list of examples on the NECDMC site – we are always looking for new areas of science and disciplines
After the research story and understanding the data flow throughout the project…
Librarians can identify the data management flaws/challenges and needs
Make recommendations for fixing/avoiding problems
Use the Simplified Data Management Plan to develop a formalized plan for the research team to follow
Donna will work though another research case study and talk about how to identify the data management needs, then how to develop a DMP, and how to teach with NECDMC…