Data management is a key skill in the age of large, complex data sets. Collaborative research makes the process of managing research data harder. This presentation will cover some key features of the Open Science Framework that facilitate collaborative research.
DataONE Education Module 03: Data Management Planning - DataONE
Lesson 3 in a set of 10 created by DataONE on Best Practices for Data Management. The full module can be downloaded from the DataONE.org website at: http://www.dataone.org/education-modules. Released under a CC0 license; attribution and citation requested.
A basic course on Research data management, part 1: what and why - Leon Osinski
A basic course on research data management for PhD students. The course consists of 4 parts. The course was given at Eindhoven University of Technology (TUe), 24-01-2017
Combining Explicit and Latent Web Semantics for Maintaining Knowledge Graphs - Paul Groth
A look at how the thinking about Web Data and the sources of semantics can help drive decisions on combining latent and explicit knowledge. Examples from Elsevier and lots of pointers to related work.
Responsible conduct of research: Data Management - C. Tobin Magle
A presentation for the Food and Nutrition Science Responsible conduct of research class on data management best practices. Covers material in the context of writing a data management plan.
With the explosion of interest in both enhanced knowledge management and open science, the past few years have seen considerable discussion about making scientific data “FAIR” — findable, accessible, interoperable, and reusable. The problem is that most scientific datasets are not FAIR. When left to their own devices, scientists do an absolutely terrible job creating the metadata that describe the experimental datasets that make their way into online repositories. The lack of standardization makes it extremely difficult for other investigators to locate relevant datasets, to re-analyse them, and to integrate those datasets with other data. The Center for Expanded Data Annotation and Retrieval (CEDAR) has the goal of enhancing the authoring of experimental metadata to make online datasets more useful to the scientific community. The CEDAR workbench for metadata management will be presented in this webinar. CEDAR illustrates the importance of semantic technology for driving open science. It also demonstrates a means for simplifying access to scientific datasets and enhancing the reuse of the data to drive new discoveries.
Our regular Introduction to Data Management (DM) workshop (90-minutes). Covers very basic DM topics and concepts. Audience is graduate students from all disciplines. Most of the content is in the NOTES FIELD.
A basic course on Research data management, part 4: caring for your data, or ... - Leon Osinski
A basic course on research data management for PhD students. The course consists of 4 parts. The course was given at Eindhoven University of Technology (TUe), 24-01-2017
Dr. Dennis Wang discusses possible ways to enable ML methods to be more powerful for discovery and to reduce ambiguity within translational medicine, allowing data-informed decision-making to deliver the next generation of diagnostics and therapeutics to patients more quickly, at lower cost, and at scale.
The talk by Dr. Dennis Wang was followed by a panel discussion with Mr. Albert Wang, M. Eng., Head, IT Business Partner, Translational Research & Technologies, Bristol-Myers Squibb.
ISMB/ECCB 2013 Keynote Goble Results may vary: what is reproducible? why do o... - Carole Goble
Keynote given by Carole Goble on 23rd July 2013 at ISMB/ECCB 2013
http://www.iscb.org/ismbeccb2013
How could we evaluate research and researchers? Reproducibility underpins the scientific method: at least in principle, if not in practice. The willing exchange of results and the transparent conduct of research can only be expected up to a point in a competitive environment. Contributions to science are acknowledged, but not if the credit is for data curation or software. From a bioinformatics viewpoint, how far could our results be reproducible before the pain is just too high? Is open science a dangerous, utopian vision or a legitimate, feasible expectation? How do we move bioinformatics from a field where results are post-hoc "made reproducible" to one where they are pre-hoc "born reproducible"? And why, in our computational information age, do we communicate results through fragmented, fixed documents rather than cohesive, versioned releases? I will explore these questions drawing on 20 years of experience in both the development of technical infrastructure for Life Science and the social infrastructure in which Life Science operates.
A basic course on Research data management: part 1 - part 4 - Leon Osinski
Slides belonging to a basic course on research data management. The course consists of 4 parts:
Part 1: what and why
1.1 data management plans
Part 2: protecting and organizing your data
2.1 data safety and data security
2.2 file naming, organizing data (TIER documentation protocol)
Part 3: sharing your data
3.1 via collaboration platforms (during research)
3.2 via data archives (after your research)
Part 4: caring for your data, or making data usable
4.1 tidy data
4.2 documentation/metadata
4.3 licenses
4.4 open data formats
These are the slides presented by Denis Engemann in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. You can find the original version on https://speakerdeck.com/dengemann/mne-hcp-pitch-biomag-2018
Donders Repository - removing barriers for management and sharing of research... - Robert Oostenveld
This is the presentation I gave at the monthly meeting of the Donders Institute PhD council. It briefly explains the Donders Repository, but mainly addresses how to deal with directly and indirectly identifying personal data, with anonymization, pseudonymization and de-identification, and with blurring of research data prior to sharing.
Software is very special. It is a grand, spectacular, regenerative, and perpetual source of value, like nothing else we know.
Perhaps for this very reason it is misused and wasted. By cooperatively REUSING ALL ARTIFACTS of software, we can reap unheard-of benefits repeatedly. Here is an outline of how we can do it. That is ReSAR. Let's start.
Slides for presentation of "A reuse repository with automated synonym suppor... - Laust Rud Jacobsen
Having a code reuse repository available can be a great asset for a programmer. But locating components can be difficult if only static documentation is available, due to vocabulary mismatch. Identifying informal synonyms used in documentation can help alleviate this mismatch. The cost of creating a reuse support system is usually fairly high, as much manual effort goes into its construction.
This project has resulted in a fully functional reuse support system with clustering of search results. By automating the construction of a reuse support system from an existing code reuse repository, and giving the end user a familiar interface, the system constructed in this project makes the desired functionality available. The constructed system has an easy-to-use interface, due to a familiar browser-based front-end. An automated method called LSI (latent semantic indexing) is used to handle synonyms, and to some degree polysemous words, in indexed components.
In the course of this project, the reuse support system has been tested using components from two sources; its retrieval performance was measured and found acceptable. Clustering usability was evaluated and clusters were found to be generally helpful, even though some fine-tuning still has to be done.
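The LSI mentioned above is standard latent semantic indexing: factor a TF-IDF term-document matrix into a low-rank space so that documentation using different words for the same concept lands close together. A minimal, hypothetical Python sketch of that retrieval idea (corpus, query, and rank are illustrative; nothing here is taken from the thesis's implementation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy documentation strings for three reusable components.
docs = [
    "open a file and read its contents",
    "load the contents of a document from disk",
    "sort a list of integers in ascending order",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)          # TF-IDF term-document matrix

# LSI: low-rank projection where informal synonyms
# ("file"/"document", "read"/"load") end up near each other.
lsi = TruncatedSVD(n_components=2, random_state=0)
X_lsi = lsi.fit_transform(X)

query_lsi = lsi.transform(vectorizer.transform(["read a document"]))
scores = cosine_similarity(query_lsi, X_lsi)[0]
for doc, score in sorted(zip(docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

In this toy run, the query "read a document" matches both the "file" and the "document" components well, even though it shares different words with each; that is the vocabulary-mismatch bridging the thesis describes.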
Masters Thesis: A reuse repository with automated synonym support and cluster... - Laust Rud Jacobsen
Improving Support for Researchers: How Data Reuse Can Inform Data Curation - OCLC
Presented at Strategic Conversations at Harvard Library, 9 June 2016
Details are here: http://library.harvard.edu/hlsc
In this talk, Ixchel Faniel from OCLC discussed data reuse practices within academic communities as a means to inform data curation. Knowledge of data reuse and curation processes can shape the activities and services of researchers, librarians, and other information professionals in order to enhance data reuse and accelerate research discoveries.
Ixchel M. Faniel is a Research Scientist at OCLC Research.
Evaluation of full brain parcellation schemes using the NeuroVault database o... - Krzysztof Gorgolewski
Slides from a talk given at SfN 2016.
The task of dividing the human brain into regions has been captivating scientists for many years. In the following work we revisit this challenge and introduce a new evaluation technique that works for both cortical and subcortical parcellations. Our approach is based on data from a diverse set of cognitive experiments that employs nonparametric methods to account for smoothness and parcel size biases.
As reported before, parcel variance was a function of parcel size, in that smaller parcels were more likely to be homogeneous (even in random data). However, when we used map-specific null distributions to account for both the smoothness of statistical maps and the number of parcels in atlases, unbiased estimates became apparent. Both the Yeo et al. and Collins et al. parcellations produce scores for random data similar to those derived from real data. In contrast, Shen et al., AAL, and Gordon et al. show lower within-parcel variance when applied to real data than when applied to random data (but no distinction can be made between them).
In addition to looking at within-parcel variance, we also applied a novel metric based on the intuition that different parts of the brain should not only be homogeneous, but also different from each other. To quantify this we calculated the ratio of between- to within-parcel variances (standardized using individual null models). This approach indirectly penalizes parcellations with too many unnecessary parcels. Using this measure we show that the Yeo et al. parcellation fits the data better (Figure 1) than the Collins et al. atlas despite having fewer parcels (7 vs 10).
We present a novel approach to evaluating atlases and parcellations of the human brain that captures diverse patterns observed across many cognitive studies. Our testing methodology overcomes biases introduced by parcel size and the smoothness of the input data and, in contrast to previous methods, can be applied to whole-brain volumetric data. We have found that, in contrast to previous reports based on resting-state cortico-cortical connectivity, the Shen et al. and AAL atlases can delineate brain regions with above-average accuracy.
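A toy sketch of the scoring idea described in this abstract: rate an atlas by the ratio of between- to within-parcel variance and standardize it against a null distribution built from random maps. The atlas, the data, and the white-noise nulls below are synthetic placeholders (the actual method uses smoothness-matched nulls):

```python
import numpy as np

def parcel_score(values, labels):
    """Variance of parcel means divided by mean within-parcel variance."""
    parcels = np.unique(labels)
    means = np.array([values[labels == p].mean() for p in parcels])
    within = np.mean([values[labels == p].var() for p in parcels])
    return means.var() / within

rng = np.random.default_rng(0)
n_voxels = 10_000
labels = rng.integers(0, 50, size=n_voxels)   # hypothetical 50-parcel atlas

# Fake "real" data: noise plus some structure aligned with the parcels.
real_map = rng.normal(size=n_voxels) + 0.5 * (labels % 5)
score = parcel_score(real_map, labels)

# Null distribution: the same atlas scored on random maps.
null = np.array([parcel_score(rng.normal(size=n_voxels), labels)
                 for _ in range(200)])
z = (score - null.mean()) / null.std()
print(f"score = {score:.3f}, z vs. null = {z:.1f}")
```

The standardization step is the point: a raw score rewards atlases with many small parcels even on noise, while the z-score against the atlas-specific null does not.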
Researcher data management shared service for the UK – John Kaye, Jisc
Hydra - Tom Cramer, Stanford University and Chris Awre, University of Hull
Addressing the preservation gap at the University of York - Jenny Mitcham, University of York
Emulation developments - David Rosenthal, Stanford University
Jisc and CNI conference, 6 July 2016
A First Attempt at Describing, Disseminating and Reusing Methodological Knowl... - ariadnenetwork
Presentation by Cesar Gonzalez-Perez (Incipit) and Patricia Martín-Rodilla.
Spanish National Research Council (CSIC)
EAA 2013 in the 'New Digital Developments in Heritage Management and Research' session
Pilsen, Czech Republic
5 September 2013
Meeting Federal Research Requirements for Data Management Plans, Public Acces... - ICPSR
These slides cover evolving federal research requirements for sharing scientific data. Provided are updates on federal agency responses to the 2013 OSTP memo, guidance on data management plans, resources for data management and curation training for staff/researchers, and tips for evaluating public data-sharing services. ICPSR's public data-sharing service, openICPSR, is also presented. Recording of this presentation is here: https://www.youtube.com/watch?v=2_erMkASSv4&feature=youtu.be
Presentation by Prof. Dr. Henning Müller.
Overview:
- Medical image retrieval projects
- Image analysis and 3D texture modeling
- Data science evaluation infrastructures (ImageCLEF, VISCERAL, EaaS – Evaluation as a Service)
- What comes next?
On March 23, 2016, Prof. Henning Müller (HES-SO Valais-Wallis and the Martinos Center) presented "Medical image analysis and big data evaluation infrastructures" at Stanford Medicine.
Reproducibility in human cognitive neuroimaging: a community-driven data sha... - Nolan Nichols
Access to primary data and the provenance of derived data are increasingly recognized as an essential aspect of reproducibility in biomedical research. While productive data sharing has become the norm in some biomedical communities, human brain imaging has lagged in open data and descriptions of provenance. The overarching goal of my dissertation was to identify barriers to neuroimaging data sharing and to develop a fundamentally new, granular data exchange standard that incorporates provenance as a primitive to document cognitive neuroimaging workflow.
For my dissertation research, I led the development of the Neuroimaging Data Model (NIDM), an extension to the W3C PROV standard for the domain of human brain imaging. NIDM provides a language to communicate provenance by representing primary data, computational workflow, and derived data as bundles of linked Agents, Activities, and Entities. Similar to the way a sentence conveys a standalone thought, a bundle contains provenance statements that parsimoniously express the way a given piece of data was produced. To demonstrate a system that implements NIDM, I developed a modern, semantic Web application platform that provides neuroimaging workflow as a service and captures provenance statements as NIDM bundles. The course of this work necessitated interaction with an international community, which adopted and extended central elements of this work into prevailing brain imaging software. My dissertation contributes neuroinformatics standards to advance the current state of computational infrastructure available to the cognitive neuroimaging community.
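The Agent/Activity/Entity bundles NIDM builds on are plain W3C PROV, so the core pattern can be shown with the Python `prov` package. A minimal sketch with hypothetical names and namespace (this is generic PROV, not the NIDM vocabulary itself):

```python
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/neuroimaging/")

raw = doc.entity("ex:raw-bold-scan")            # primary data
stat_map = doc.entity("ex:statistical-map")     # derived data
analysis = doc.activity("ex:first-level-glm")   # computational workflow
researcher = doc.agent("ex:researcher")

# Provenance statements linking derived data back to primary data.
doc.used(analysis, raw)
doc.wasGeneratedBy(stat_map, analysis)
doc.wasAssociatedWith(analysis, researcher)
doc.wasDerivedFrom(stat_map, raw)

print(doc.get_provn())  # serialize the statements in PROV-N notation
```

Each such document is the "standalone thought" the abstract describes: a self-contained record of how one piece of data was produced.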
Slides from Wednesday 1st August - Data in the Scholarly Communications Life Cycle course, which is part of the FORCE11 Scholarly Communications Institute.
Presenter - Natasha Simons
Genome sharing projects around the world, Nijmegen, Oct 29, 2015 - Fiona Nielsen
Genome sharing projects across the world
Did you ever wonder what happened to the exponential increase in genome sequencing data? It is out there around the world and a lot of it is consented for research use. This means that if you just know where to find the data, you can potentially analyse gigabytes of data to power your research.
In this talk Fiona will present community genome initiatives, the genome sharing projects across the world, how you can benefit from this wealth of data in your work, and how you can boost your academic career by sharing and collaboration.
by Fiona Nielsen, Founder and CEO of DNAdigest and Repositive
With a background in software development, Fiona pursued her career in bioinformatics research at Radboud University Nijmegen. Now a scientist-turned-entrepreneur, Fiona founded DNAdigest and its social enterprise spin-out Repositive Ltd. Both the charity and the company focus on efficient and ethical sharing of genetics data for research, to accelerate diagnostics and cures for genetic diseases.
Data Communities - reusable data in and outside your organization - Paul Groth
Data is critical both to the functioning of an organization and as a product. How can you make that data more usable for both internal and external stakeholders? There are a myriad of recommendations, advice, and strictures about what data providers should do to facilitate data (re)use. It can be overwhelming. Based on recent empirical work (analyzing data reuse proxies at scale, understanding data sensemaking, and looking at how researchers search for data), I talk about which practices are a good place to start for helping others to reuse your data. I put this in the context of the notion of data communities, which organizations can use to help foster the use of data both internally and externally.
Data and Donuts: How to write a data management plan - C. Tobin Magle
This presentation describes best practices for how to write a data management plan for your research data. Additionally, it provides information about finding funder requirements, metadata standards, and repositories.
"Open Science, Open Data" training for participants of Software Writing Skills for Your Research - Workshop for Proficient, Helmholtz Centre Potsdam - GFZ German Research Centre for Geosciences, Telegrafenberg, December 16, 2015
Data Landscapes: The Neuroscience Information Framework - Maryann Martone
Overview of how to use the Neuroscience Information Framework for data discovery, presented at the Genetics of Addiction Workshop, held at Jackson Lab, Aug 28 - Sept 1, 2014.
Presentation given at Organization for Human Brain Mapping Annual Meeting in Singapore 2018
Video recording: https://www.pathlms.com/ohbm/courses/8246/sections/12538/video_presentations/116214
No research is done in a void: science is constantly expanding previous hypotheses, building upon past knowledge. We live in a digital age where information is ubiquitous, yet we struggle to preserve accurate, machine-readable, quantitative descriptions of our research, compromising our capacity to use them in our inferences. In the following talk I will show how and why we incorporate assumptions in our studies, based on three experiments we have conducted: (i) dissociating metacognitive subdomains in medial and lateral anterior prefrontal cortex, (ii) relating reading comprehension to individual differences in the default mode network, and (iii) exploring neural correlates of the content and form of self-generated thoughts. This will be followed by introducing a new inference method - probabilistic Regions of Interest (pROI) - which allows the use of prior knowledge in the form of a probabilistic map. This approach provides a middle ground between ROI and full-brain analysis by giving researchers more flexibility in formalizing priors. The quality of prior probability maps based on the literature can be improved by using unthresholded statistical maps instead of peak coordinates. To facilitate this we have created NeuroVault.org - a community-wide effort to collect unthresholded statistical maps. Taking the initiative a step further, I will describe the concept of data papers - publications purely dedicated to datasets. Together these three mechanisms (pROI, NeuroVault.org and data papers) are small but significant steps towards better, more reusable and reproducible science.
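To make the pROI intuition concrete: instead of averaging a statistic within a hard, binary mask, weight every voxel by a prior probability map (for instance, one aggregated from unthresholded maps on NeuroVault.org). The arrays below are synthetic stand-ins, and the weighted average is only an illustration of the middle ground between ROI and whole-brain analysis, not the published pROI method:

```python
import numpy as np

rng = np.random.default_rng(1)
stat_map = rng.normal(size=5000)    # subject-level statistic per voxel
prior = rng.beta(2, 8, size=5000)   # P(voxel belongs to the region)

# Binary-ROI estimate: hard-threshold the prior, then average inside it.
roi = prior > 0.5
binary_estimate = stat_map[roi].mean() if roi.any() else float("nan")

# pROI-style estimate: a prior-weighted average over the whole brain.
weighted_estimate = np.average(stat_map, weights=prior)

print(binary_estimate, weighted_estimate)
```

The weighted version never discards voxels outright; it lets the prior decide how much each voxel contributes, which is the flexibility in formalizing priors described above.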
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... - Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R_1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
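For intuition only: unbinned, forward-modeled luminosity-function inference is commonly written as a Poisson point-process likelihood in which each candidate's photometric-redshift posterior is marginalized over. The generic form below is an illustrative assumption, not necessarily this paper's exact likelihood:

```latex
% \phi(M \mid z, \theta): UV luminosity function with parameters \theta;
% \varepsilon(M, z): survey completeness/selection; p_i(z): photo-z posterior
% of candidate i; dV/dz: comoving volume element.
\ln \mathcal{L}(\theta) =
  \sum_i \ln \left[ \int \mathrm{d}z \, p_i(z)\,
      \phi\bigl(M_i(z) \mid z, \theta\bigr) \right]
  - \int \mathrm{d}z \int \mathrm{d}M \,
      \varepsilon(M, z)\, \phi(M \mid z, \theta)\, \frac{\mathrm{d}V}{\mathrm{d}z}
```

The first term rewards parameters that make the observed candidates likely (averaged over their redshift uncertainty); the second penalizes parameters that predict detectable galaxies the survey did not find, which is how non-detections enter.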
This presentation briefly explores the structural and functional attributes of nucleotides, the structure and function of genetic materials, and the impact of UV rays and pH upon them.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... - Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini... - Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for ultra-fast, high-resolution imaging of cellular processes over time and space, in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, response to treatments, or developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, as well as vascularization and tumor metastasis with exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... - University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
What are greenhouse gases, and how many gases affect the Earth? - moosaasad1975
What greenhouse gases are, how they affect the Earth and its environment, what the future holds for the Earth and its environment, and how they influence weather and climate.
6. Human Connectome Project
• > 500 subjects (will reach 1200)
– Young and healthy (22-35 yrs)
– 200 twins!
• 1 hour's worth of MRI scanning:
– State of the art sequences – high temporal and spatial resolution
– Resting-state fMRI (R-fMRI)
– Task-evoked fMRI (T-fMRI)
• Working Memory
• Gambling
• Motor
• Language
• Social Cognition
• Relational Processing
• Emotion Processing
– Diffusion MRI (dMRI)
– MEG and EEG
– 7T coming soon
7. Human Connectome Project
• Rich phenotypical data
– Cognition, personality, substance abuse, etc.
• Genotyping! (not yet available)
• Methodological developments
– Fine tuned sequences
– Innovative field inhomogeneity corrections
– New preprocessing techniques
• Ready to use preprocessed data
12. FCP/INDI Usage Survey
Survey courtesy of Stan Colcombe & Cameron Craddock
FCP/INDI data usage (% of respondents):
– Master's thesis research: 11.94%
– Doctoral dissertation research: 38.81%
– Teaching resource (projects or examples): 13.43%
– Pilot data for grant applications: 16.42%
– Research intended for publication: 76.12%
– Independent study (e.g., teach self about analysis): 37.31%
FCP/INDI users; 10% response rate
16. Data sharing saves money
$878,988: the cost of reacquiring data for each of the reuses of OpenfMRI datasets
17. Data sharing fears
• Fear of being scooped
• Fear of someone finding a mistake
• Misconceptions about the ownership of the data
18. Studies sharing data have higher statistical quality
Wicherts JM, Bakker M, Molenaar D (2011) Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results. PLoS ONE 6(11): e26828. doi:10.1371/journal.pone.0026828
23. Baby steps
• Everything is a question of cost and benefit
– If we keep the cost low, even a small benefit (or just the conviction that data sharing is GOOD) will suffice
24. NeuroVault.org - simple data sharing
• Minimize the cost!
• We just want your statistical maps with a minimum description (DOI)
– If you want, you can add more metadata, but you don't have to
• We streamline the login process (Google, Facebook)
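NeuroVault also exposes a public REST API, which keeps the cost of reusing shared maps as low as the cost of sharing them. A minimal read-only sketch with Python `requests`; the endpoint path follows NeuroVault's documented API root (https://neurovault.org/api/), but treat the query behavior and response fields used here as assumptions to verify:

```python
import requests

# List a few public NeuroVault collections (read-only, no login needed).
resp = requests.get("https://neurovault.org/api/collections/", timeout=30)
resp.raise_for_status()

for collection in resp.json().get("results", [])[:5]:
    print(collection.get("id"), collection.get("name"))
```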
28. Benefits - other
• Private collections
• Multiple contributors to one collection
• Sharable persistent URLs
• Viewer embeddable on your lab's website or your private blog
• Improved exposure of your research
• Improved reusability of your results
• Long-term storage in the Stanford Digital Repository
29. Using NeuroVault…
• Improves collaboration
• Makes your paper more attractive
• Shows you care about transparency
• Takes only five minutes
• Gives you a warm and fuzzy feeling that you helped future meta-analyses
37. Solution – data papers
• Authors get recognizable credit for their work.
– Even smaller contributors such as RAs can be included.
• Acquisition methods are described in detail.
• Quality of metadata is controlled by peer review.
39. Where to publish data papers?
• Neuroinformatics (Springer)
• GigaScience (BGI, BioMed Central)
• Scientific Data (Nature Publishing Group)
• F1000Research (Faculty of 1000)
• Data in Brief (Elsevier)
• Journal of Open Psychology Data (Ubiquity Press)
44. What makes a good data paper?
• Clear and accurate description of the acquisition protocol.
• Good data organization.
• Ease of access to data.
• Data quality description.
• Fair credit attribution.
45. How to improve the impact of your dataset?
• Provide preprocessed data.
• Reach out to your peers…
– …and people outside of your field (ML)
• Build a community around the data.
47. Repositories
• Field specific
– OpenfMRI.org (task based fMRI)
– FCP/INDI (resting state fMRI)
– COINS
• Field agnostic
– DataVerse (Harvard)
– Figshare (only small datasets)
– DataDryad (fees may apply)
48. OpenfMRI
• Will host any MRI dataset
• No fees
• Curated and uncurated datasets
• Recommended by many journals (including Scientific Data)
51. Prepare in advance
• Make sure your consent form includes data sharing
• Decide which database you want to send your data to in advance
– Organize your data according to their requirements
• Work on anonymized data as much as you can
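For the "work on anonymized data" step, a minimal sketch using `pydicom` to blank obvious identifying tags before analysis. The tag list and file paths are illustrative assumptions; real de-identification should follow a complete profile (e.g., DICOM PS3.15) rather than this three-tag example:

```python
import pydicom

def anonymize(in_path: str, out_path: str) -> None:
    """Blank a few directly identifying DICOM tags and save a copy."""
    ds = pydicom.dcmread(in_path)
    for keyword in ("PatientName", "PatientID", "PatientBirthDate"):
        if keyword in ds:          # only touch tags that are present
            setattr(ds, keyword, "")
    ds.save_as(out_path)

anonymize("raw/scan0001.dcm", "anon/scan0001.dcm")  # hypothetical paths
```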
53. Ultimate consent form
• Inform participants about your intention to share data
• Explain the benefits
• Discuss the risks
open-brain-consent.readthedocs.org
54. If I haven’t convinced you yet
• Why to share data:
– It's the ethical thing to do (Brakewood and Poldrack 2013)
– The journal might require it (PLoS).
– Your funders might require it (NIH).
– A track record of data sharing can improve your chances of getting your next grant.
55. Sharing data is related to higher citation rate
Piwowar, Day & Fridsma (2007); Piwowar & Vision (2013)
56. Acknowledgements
Russell A. Poldrack
Jean-Baptiste Poline
Tal Yarkoni
Michael Milham
Daniel Margulies
Yannick Schwartz
Gael Varoquaux
Joseph Wexler
Gabriel Rivera
Camille Maumet
Vanessa Sochat
Thomas Nichols
MPI CBS Resting state group
Poldrack Lab
INCF Data Sharing Task Force