The Brain Imaging Data Structure and its use for fNIRS - Robert Oostenveld
These slides were prepared for the NIRS toolkit course at the Donders, which was postponed due to the corona crisis. The slides present BIDS, explain how fNIRS often involves multiple signals, and relate the two to synchronization and data management.
This is a presentation for the Erwin Hahn Institute in Essen, explaining the background, functional design and technical architecture of the Donders Repository. Furthermore, it explains how it aligns with the DCCN project management and with the researchers' workflow.
CuttingEEG - Open Science, Open Data and BIDS for EEG - Robert Oostenveld
Starting with education, inception of research questions, planning, acquisition, analysis and reporting, there are multiple points where Open Science should play a role. In my presentation at the CuttingEEG conference in Paris, I argue that we should not only be sharing primary outcomes as Open Access publications, but that openness involves the full research cycle. Specifically, I will be sharing my experience with Open Data, privacy challenges and possibilities under the GDPR, Open Source for sharing analysis methods, dealing with imperfections in science and versioning of data, code and results. Finally, I will introduce BIDS for EEG, a new effort to increase the impact of shared and well-documented EEG data.
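To make the BIDS organization mentioned above concrete, here is a minimal Python sketch that composes BIDS-style EEG filenames from their entities. The helper function and the example subject/task values are purely illustrative and not part of any BIDS tool; only the entity naming convention (sub-, ses-, task-) follows the standard.

```python
# Sketch: composing a BIDS-style EEG basename from its entities.
# The function is a hypothetical helper for illustration only.

def bids_basename(subject, task, session=None, suffix="eeg", extension=".vhdr"):
    """Build a BIDS-style basename such as sub-01_task-rest_eeg.vhdr."""
    parts = [f"sub-{subject}"]
    if session is not None:
        parts.append(f"ses-{session}")          # session entity is optional
    parts.append(f"task-{task}")
    parts.append(suffix)                        # modality suffix, e.g. eeg
    return "_".join(parts) + extension

print(bids_basename("01", "rest"))             # sub-01_task-rest_eeg.vhdr
print(bids_basename("02", "oddball", "01"))    # sub-02_ses-01_task-oddball_eeg.vhdr
```

The point of the fixed entity ordering is that both humans and software can parse the filename unambiguously, which is what makes shared EEG datasets machine-readable.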
Donders Repository - removing barriers for management and sharing of research... - Robert Oostenveld
This is the presentation I gave at the monthly meeting of the Donders Institute PhD council. It briefly explains the Donders Repository, but mainly addresses how to deal with directly and indirectly identifying personal data, with anonymization, pseudonymization and de-identification, and with blurring of research data prior to sharing.
These are the slides presented by Denis Engemann in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. You can find the original version on https://speakerdeck.com/dengemann/mne-hcp-pitch-biomag-2018
IDCC Workshop: Analysing DMPs to inform research data services: lessons from ... - Amanda Whitmire
A workshop as part of the International Digital Curation Conference 2016 on DMP development and support. This presentation demonstrates how we can use data management plans as a source of information to better understand researcher data stewardship practices and how to support them. Be sure to see the slide notes to better understand the presentation (most slides are just photos/icons).
Our regular Introduction to Data Management (DM) workshop (90-minutes). Covers very basic DM topics and concepts. Audience is graduate students from all disciplines. Most of the content is in the NOTES FIELD.
University of Bath Research Data Management training for researchers - Jez Cope
Slides from a workshop on Research Data Management for research staff and students at the University of Bath.
Part of the Research360 project (http://blogs.bath.ac.uk/research360).
Authors: Cathy Pink and Jez Cope, University of Bath
A basic course on Research data management: part 1 - part 4 - Leon Osinski
Slides belonging to a basic course on research data management. The course consists of 4 parts:
Part 1: what and why
1.1 data management plans
Part 2: protecting and organizing your data
2.1 data safety and data security
2.2 file naming, organizing data (TIER documentation protocol)
Part 3: sharing your data
3.1 via collaboration platforms (during research)
3.2 via data archives (after your research)
Part 4: caring for your data, or making data usable
4.1 tidy data
4.2 documentation/metadata
4.3 licenses
4.4 open data formats
Introduction to research data management; Lecture 01 for GRAD521 - Amanda Whitmire
Lesson 1: Introduction to research data management. From a series of lectures from a 10-week, 2-credit graduate-level course in research data management (GRAD521, offered at Oregon State University).
The course description is: "Careful examination of all aspects of research data management best practices. Designed to prepare students to exceed funder mandates for performance in data planning, documentation, preservation and sharing in an increasingly complex digital research environment. Open to students of all disciplines."
Major course content includes: Overview of research data management, definitions and best practices; Types, formats and stages of research data; Metadata (data documentation); Data storage, backup and security; Legal and ethical considerations of research data; Data sharing and reuse; Archiving and preservation.
See also, "Whitmire, Amanda (2014): GRAD 521 Research Data Management Lectures. figshare. http://dx.doi.org/10.6084/m9.figshare.1003835. Retrieved 23:25, Jan 07, 2015 (GMT)"
Drug Repurposing using Deep Learning on Knowledge Graphs - Databricks
Discovering new drugs is a lengthy and expensive process. This means that finding new uses for existing drugs can help create new treatments in less time and at lower cost. The difficulty is in finding these potential new uses.
How do we find these undiscovered uses for existing drugs?
We can unify the available structured and unstructured data sets into a knowledge graph. This is done by fusing the structured data sets, and performing named entity extraction on the unstructured data sets. Once this is done, we can use deep learning techniques to predict latent relationships.
In this talk we will cover:
Building the knowledge graph
Predicting latent relationships
Using the latent relationships to repurpose existing drugs
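As a toy illustration of link prediction on a knowledge graph, the sketch below scores a candidate drug pair by the overlap of their known indications. This uses a simple neighbourhood (Jaccard) score as a stand-in for the deep-learning embedding models the talk describes, and all drug and disease entities are made up for the example.

```python
# Toy drug-disease knowledge graph as a set of (entity, entity) edges.
# Entities and edges are invented for illustration only.
edges = {
    ("aspirin", "inflammation"), ("aspirin", "fever"),
    ("ibuprofen", "inflammation"), ("ibuprofen", "fever"),
    ("ibuprofen", "migraine"),
}

def neighbours(node):
    """All entities directly connected to `node`, in either direction."""
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

def jaccard(u, v):
    """Neighbourhood overlap: a crude proxy for a latent-relationship score."""
    nu, nv = neighbours(u), neighbours(v)
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

# Drugs that share many known indications are candidates for sharing the rest:
score = jaccard("aspirin", "ibuprofen")
print(round(score, 2))  # 0.67 -> migraine becomes a repurposing hypothesis for aspirin
```

In the real pipeline the score would come from embeddings learned over a much larger fused graph, but the repurposing logic is the same: rank unobserved drug-disease edges by predicted plausibility.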
Developing data services: a tale from two Oregon universities - Amanda Whitmire
While the generation or collection of large, complex research datasets is becoming easier and less expensive all the time, researchers often lack the knowledge and skills that are necessary to properly manage them. Having these skills is paramount in ensuring data quality, integrity, discoverability, integration, reproducibility, and reuse over time. Librarians have been preserving, managing and disseminating information for thousands of years. As scholarly research is increasingly carried out digitally, and products of research have expanded from primarily text-based manuscripts to include datasets, metadata, maps, software code etc., it is a natural expansion of scope for libraries to be involved in the stewardship of these materials as well. This kind of evolution requires that libraries bring in faculty with new skills and collaborate more intimately with researchers during the research data lifecycle, and this is exactly what is happening in academic libraries across the country. In this webinar, two researchers-turned-data-specialists, both based in academic libraries, will share their experiences and perspectives on the development of research data services at their respective institutions. Each will share their perspective on the important role that libraries can play in helping researchers manage, preserve, and share their data.
Being FAIR: Enabling Reproducible Data Science - Carole Goble
Talk presented at Early Detection of Cancer Conference, OHSU, Portland, Oregon USA, 2-4 Oct 2018, http://earlydetectionresearch.com/ in the Data Science session
Talk at the JISC Repositories conference, intended for repository managers or research managers, on some of the issues involved. The talk originally had to be given unaided because of a technology problem!
Being FAIR: FAIR data and model management, SSBSS 2017 Summer School - Carole Goble
Lecture 1:
Being FAIR: FAIR data and model management
In recent years we have seen a change in expectations for the management of all the outcomes of research – that is, the “assets” of data, models, code, SOPs and workflows. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship [1] have proved to be an effective rallying-cry. Funding agencies expect data (and increasingly software) management, retention and access plans. Journals are raising their expectations of the availability of data and code both pre- and post-publication. The multi-component, multi-disciplinary nature of Systems and Synthetic Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Our FAIRDOM project (http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety. The FAIRDOM Platform has been installed by over 30 labs or projects. Our public, centrally hosted Asset Commons, the FAIRDOMHub.org, supports the outcomes of 50+ projects.
Now established as a grassroots association, FAIRDOM has over 8 years of experience of practical asset sharing and data infrastructure at the researcher coal-face ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (Germany's de.NBI and Systems Medicine of the Liver; Norway's Digital Life) and European Research Infrastructures (ISBE) as well as in PI's labs and Centres such as the SynBioChem Centre at Manchester.
In this talk I will explore how FAIRDOM has been designed to support Systems Biology projects and show examples of its configuration and use. I will also explore the technical and social challenges we face.
I will also refer to European efforts to support public archives for the life sciences. ELIXIR (http://www.elixir-europe.org/) is the European Research Infrastructure of 21 national nodes and a hub, funded by national agreements, that coordinates and sustains key data repositories and archives for the Life Science community, improves access to them and related tools, supports training, and creates a platform for dataset interoperability. As the Head of the ELIXIR-UK Node and co-lead of the ELIXIR Interoperability Platform, I will show how this work relates to your projects.
[1] Wilkinson et al., The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3 (2016). doi:10.1038/sdata.2016.18
New Developments in H2O: April 2017 Edition - Sri Ambati
H2O presentation at Trevor Hastie and Rob Tibshirani's Short Course on Statistical Learning & Data Mining IV: http://web.stanford.edu/~hastie/sldm.html
PDF and Keynote version of the presentation available here: https://github.com/h2oai/h2o-meetups/tree/master/2017_04_06_SLDM4_H2O_New_Developments
Implementation of GPU-based bioinformatic tools at the ENCODE DCC - ENCODE-DCC
An overview of the assays performed and distributed by the ENCODE DCC as well as a summary of the uniform processing pipelines that are being implemented by the ENCODE Consortium. Here, we talk about the impact using GPUs has on speed of running the ChIP-seq pipeline.
State-Of-The-Art Machine Learning Algorithms and How They Are Affected By Nea... - inside-BigData.com
In this deck from the HPC Knowledge Portal 2017 Conference, Rob Farber from TechEnablement presents: State-Of-The Art Machine Learning Algorithms and How They Are Affected By Near-Term Technology Trends.
"Industry and Wall Street projections indicate that Machine Learning will touch every piece of data in the data center by 2020. This has created a technology arms race and algorithmic competition as IBM, NVIDIA, Intel, and ARM strive to dominate the retooling of the computer industry to support ubiquitous machine learning workloads over the next 3-4 years. Similarly, algorithm designers compete to create faster and more accurate training and inference techniques that can address complex problems spanning speech, image recognition, image tagging, self-driving cars, data analytics and more. The challenges for researchers and technology providers encompass big data, massive parallelism, distributed processing, and real-time processing. Deep-learning and low-precision inference (based on INT8 and FP16 arithmetic) are current hot topics."
Watch the video: https://wp.me/p3RLHQ-i2K
Learn more: http://www.hpckp.org/index.php/conference/2017
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Discovery Engines for Big Data: Accelerating Discovery in Basic Energy Sciences - Ian Foster
Argonne’s Discovery Engines for Big Data project is working to enable new research modalities based on the integration of advanced computing with experiments at facilities such as the Advanced Photon Source (APS). I review science drivers and initial results in diffuse scattering, high energy diffraction microscopy, tomography, and ptychography. I also describe the computational methods and infrastructure that we leverage to support such applications, which include the Petrel online data store, ALCF supercomputers, Globus research data management services, and Swift parallel scripting. This work points to a future in which tight integration of DOE’s experimental and computational facilities enables both new science and more efficient and rapid discovery.
BDSE 2015 Evaluation of Big Data Platforms with HiBench - t_ivanov
We evaluate the performance of DataStax Enterprise (DSE) using the HiBench benchmark suite and compare it with the corresponding Cloudera’s Distribution of Hadoop (CDH) results. Both systems, DSE and CDH were stress tested using CPU-bound (WordCount), I/O-bound (Enhanced DFSIO) and mixed (HiveBench) workloads.
This contains the agenda of the Spark Meetup I organised in Bangalore on Friday, the 23rd of Jan 2014, along with the slides for the talk I gave on distributed deep learning over Spark.
Similar to "Organizing EEG data using the Brain Imaging Data Structure"
BIOMAG2018 - Tzvetan Popov - HCP from a user's perspective - Robert Oostenveld
These are the slides presented by Tzvetan Popov in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also https://www.humanconnectome.org/study/hcp-young-adult
These are the slides presented by Vladimir Litvak in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also https://www.frontiersin.org/research-topics/5158
These are the slides presented by Jan-Mathijs Schoffelen in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also https://cobidas.wordpress.com
These are the slides presented by Darren Price in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also http://www.cam-can.org
Using Open Science to accelerate advancements in auditory EEG signal processing - Robert Oostenveld
In this presentation at the AESoP conference in Leuven, I will provide arguments for more open research methods. Open Science and Open Data are not only expected from us by our funding agencies, but actually start making more and more sense from the perspective of the individual researcher. Specifically, I will introduce BIDS as a new initiative to organize and share EEG data.
This presentation is for the data stewards of the Radboud University. It explains the design and daily usage of the Data Repository of the Donders Institute.
This short set of slides explains how "time" is to be understood in "real-time". Furthermore it shows the effect of block size differences on the jitter.
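The block-size effect mentioned above can be made concrete with a small sketch: a real-time pipeline delivers samples in blocks, so the age of a sample at delivery time varies by up to one block duration. The sampling rate and block size below are assumed values for illustration only.

```python
# Sketch: how block size translates into delivery latency and jitter
# in a real-time acquisition pipeline. Values are illustrative assumptions.

fs = 1000          # sampling rate in Hz (assumed)
block_size = 250   # samples delivered per block (assumed)

block_duration = block_size / fs   # seconds between block deliveries
max_jitter = block_duration        # newest vs. oldest sample within a block

print(f"block every {block_duration * 1000:.0f} ms, "
      f"jitter up to {max_jitter * 1000:.0f} ms")
```

Halving the block size halves both the delivery interval and the worst-case jitter, at the cost of more frequent processing calls; that trade-off is what the slides illustrate.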
These are the slides that I presented at the first Brain Control Club hackathon in Paris, see http://cri-paris.org/scientific-clubs/brain-control-club/
On the large scale of studying dynamics with MEG: Lessons learned from the Hu... - Robert Oostenveld
As part of the Human Connectome Project (HCP), which includes high-quality fMRI, anatomical MRI, DTI and genetic data from 1200 subjects, we have scanned and investigated a subset of 100 subjects (mostly comprised of pairs of twins) using MEG. The raw data acquired in the HCP has been analyzed using standard pipelines [ref1], and both raw data and results at various levels of processing have been shared through ConnectomeDB [ref2].
Throughout the process of the HCP we have not only analyzed (resting state) MEG data, but also have developed the data analysis protocols, the software and the strategies to achieve reproducible MEG connectivity results. The MEG data analysis software is based on FieldTrip, an open source toolbox [ref3], and is shared alongside the data to allow the analyses to be repeated on independent data.
In this presentation I will outline what the HCP MEG team has learned along the way and I will provide recommendations on what to do and what to avoid in making MEG studies on (resting state) connectivity more reproducible.
1. Larson-Prior LJ, Oostenveld R, Della Penna S, Michalareas G, Prior F, Babajani-Feremi A, Schoffelen JM, Marzetti L, de Pasquale F, Di Pompeo F, Stout J, Woolrich M, Luo Q, Bucholz R, Fries P, Pizzella V, Romani GL, Corbetta M, Snyder AZ; WU-Minn HCP Consortium. Adding dynamics to the Human Connectome Project with MEG. Neuroimage, 2013. doi:10.1016/j.neuroimage.2013.05.056
2. Hodge MR, Horton W, Brown T, Herrick R, Olsen T, Hileman ME, McKay M, Archie KA, Cler E, Harms MP, Burgess GC, Glasser MF, Elam JS, Curtiss SW, Barch DM, Oostenveld R, Larson-Prior LJ, Ugurbil K, Van Essen DC, Marcus DS. ConnectomeDB-Sharing human brain connectivity data. Neuroimage, 2016. doi:10.1016/j.neuroimage.2015.04.046
3. Oostenveld R, Fries P, Maris E, Schoffelen JM. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Comput Intell Neurosci. 2011. doi:10.1155/2011/156869
Presentation for the Biomag 2016 satellite meeting "From raw MEG to publication - How to perform MEG group analysis with free academic software"
Corresponding MATLAB scripts can be found on https://github.com/fieldtrip/Wakeman-and-Henson-2015
I presented this in the MEG/EEG meeting at the Donders Centre for Cognitive Neuroimaging, which is part of the Donders Institute, Radboud University, NL.
EEGSynth is a hardware platform and codebase for real-time sonification/visualisation of EEG, ECG and EMG for the purpose of artistic and educational performances. It features a distributed open-source development by neuroscientists and engineers, and is developed together with artists to function ‘out of the box’ in electronic music situations.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... - Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their quality of enabling complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Nucleophilic Addition of carbonyl compounds.pptx - SSR02
Nucleophilic addition is the most important reaction of carbonyls. Not just aldehydes and ketones, but also carboxylic acid derivatives in general.
Carbonyls undergo addition reactions with a large range of nucleophiles.
Comparing the relative basicity of the nucleophile and the product is extremely helpful in determining how reversible the addition reaction is. Reactions with Grignards and hydrides are irreversible. Reactions with weak bases like halides and carboxylates generally don’t happen.
Electronic effects (inductive effects, electron donation) have a large impact on reactivity.
Large groups adjacent to the carbonyl will slow the rate of reaction.
Neutral nucleophiles can also add to carbonyls, although their additions are generally slower and more reversible. Acid catalysis is sometimes employed to increase the rate of addition.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx - MAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation, makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people want natural and preventative health solutions more and more, this industry is increasing quickly. Further driving market expansion are product formulation innovations and the use of cutting-edge technology for customized nutrition. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant chances for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Richard's aventures in two entangled wonderlandsRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
ESR spectroscopy in liquid food and beverages.pptx
Organizing EEG data using the Brain Imaging Data Structure
1. Organizing EEG data using the
Brain Imaging Data Structure
Robert Oostenveld
Donders Institute, Radboud University, Nijmegen, NL
Karolinska Institutet, Stockholm, SE
r.oostenveld@donders.ru.nl
4. Open Science – infrastructure and tools
Git and GitHub, Gitlab, BitBucket
Work together on code for analyses
Open Science Framework (osf.io)
Work together on documenting
DataVerse, Zenodo, etc
Sharing of data
Code Ocean, Microsoft Azure, Anaconda Cloud
Cloud-based computational reproducibility platform
Past - Black-and-white version of article printed on paper and stored in library
Present - PDF for download, sometimes online supplementary material
Future - Online notebooks that allow full reproduction of results (and more)
Lab notebook
Science is getting more exciting – but also requires new skills
5. Outline of this talk
How is BIDS extended to EEG?
What is specific for EEG?
File formats
Channels and electrodes
Coordinates
Other details
Re-using BIDS EEG datasets
Creating BIDS EEG datasets
6. Brain Imaging Data Structure
https://www.nature.com/articles/sdata201644
https://www.nature.com/articles/sdata2018110
8. BIDS for MRI, MEG, EEG, iEEG, etc.
data/README
CHANGES
dataset_description.json
participants.tsv
/sub-01/anat/…
/sub-01/meg/…
/sub-01/eeg/sub-01_task-auditory_eeg.edf
/sub-01/eeg/sub-01_task-auditory_eeg.json
/sub-01/eeg/sub-01_task-auditory_channels.tsv
/sub-01/eeg/sub-01_task-auditory_events.tsv
/sub-01/eeg/sub-01_electrodes.tsv
/sub-01/eeg/sub-01_coordsystem.json
raw EEG data
general
metadata
electrode
positions
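The directory layout above can be scaffolded with a few lines of Python. This is a minimal sketch, not an official tool: the subject label, task name and metadata values are placeholder examples, and the EEG files are created empty.

```python
import json
from pathlib import Path

def scaffold_bids_eeg(root, sub="01", task="auditory"):
    """Create the skeleton of a minimal BIDS EEG dataset."""
    root = Path(root)
    eeg_dir = root / f"sub-{sub}" / "eeg"
    eeg_dir.mkdir(parents=True, exist_ok=True)

    # Top-level metadata files
    (root / "README").write_text("Example EEG dataset\n")
    (root / "dataset_description.json").write_text(
        json.dumps({"Name": "Example", "BIDSVersion": "1.2.0"}, indent=2)
    )
    (root / "participants.tsv").write_text(
        "participant_id\tage\tsex\n" f"sub-{sub}\tn/a\tn/a\n"
    )

    # Per-subject EEG files (empty placeholders here)
    stem = f"sub-{sub}_task-{task}"
    for suffix in ("_eeg.edf", "_eeg.json", "_channels.tsv", "_events.tsv"):
        (eeg_dir / (stem + suffix)).touch()
    return root

scaffold_bids_eeg("bids_example")
```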
9. File formats for EEG data in BIDS
Recommended
BrainVision Core file format (*.vhdr, *.vmrk, *.eeg)
European Data Format (*.edf)
Allowed
Biosemi (*.bdf)
EEGLAB (*.set) -> MATLAB, HDF5-based
Limited number of formats that should …
1) address the needs of a large portion of the global EEG community
2) be a FAIR format, with a focus on Interoperability
3) meet the technical requirements of neuroscientific workflows
https://www.brainproducts.com/productdetails.php?id=21&tab=5
https://www.biosemi.com/faq/file_format.htm
https://www.edfplus.info
https://www.hdfgroup.org/solutions/hdf5/ and EEGLAB
10. EEG data sidecar file
Capture information that is not present in the data file
• What was the task or instruction to the subject?
• Which electrode cap was used?
• Which amplifier was used?
• Is line noise at 50 Hz or 60 Hz?
• Where was the reference electrode placed?
• …
Makes some information more easily available
• Number of channels with EEG, EOG, etc.
• …
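A sidecar answering the questions above could look like the sketch below. The field names follow the BIDS EEG specification; all values are invented examples, not taken from a real recording.

```python
import json

# Example metadata that is typically not stored inside the EEG data file itself
sidecar = {
    "TaskName": "auditory",                 # what the subject was asked to do
    "TaskDescription": "Passive listening to auditory stimuli",
    "CapManufacturer": "EasyCap",           # which electrode cap was used
    "Manufacturer": "BrainProducts",        # which amplifier was used
    "PowerLineFrequency": 50,               # line noise at 50 Hz (Europe)
    "EEGReference": "placed on FCz",        # where the reference electrode was
    "SamplingFrequency": 500,
    "EEGChannelCount": 64,                  # makes channel counts easy to query
    "EOGChannelCount": 2,
}

with open("sub-01_task-auditory_eeg.json", "w") as f:
    json.dump(sidecar, f, indent=2)
```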
11. Channels and electrodes
Electrode
A single point of contact between the acquisition
system and the recording site
Channel
A single analog-to-digital converter in the
recording system that regularly samples the value
of a transducer, which results in the signal being
represented as a time series in the digitized data.
Note that GND and REF are electrodes, but usually not
channels.
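The distinction shows up in two separate TSV files: channels.tsv describes the recorded time series, electrodes.tsv the physical contact points. A minimal sketch using only the standard library; the column names follow the BIDS EEG specification, the positions are invented.

```python
channels = [
    # one row per recorded time series: name, type, units
    ("Cz",   "EEG", "uV"),
    ("HEOG", "EOG", "uV"),
]
electrodes = [
    # one row per physical contact: name, x, y, z (in meters here);
    # GND and REF appear here even though they are usually not channels
    ("Cz",   0.000, 0.020, 0.095),
    ("REF",  0.010, 0.015, 0.090),
    ("GND", -0.010, 0.015, 0.090),
]

with open("sub-01_task-auditory_channels.tsv", "w") as f:
    f.write("name\ttype\tunits\n")
    for name, ctype, units in channels:
        f.write(f"{name}\t{ctype}\t{units}\n")

with open("sub-01_electrodes.tsv", "w") as f:
    f.write("name\tx\ty\tz\n")
    for name, x, y, z in electrodes:
        f.write(f"{name}\t{x}\t{y}\t{z}\n")
```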
12. Coordinates
If EEG electrode positions are digitized and specified,
then the coordinate system should also be specified.
Fiducials
objects with a well defined location used to facilitate the localization of electrodes and
co-registration with other geometric data such as the participant's own T1 weighted
magnetic resonance head image, a T1 weighted template head image, or a spherical
head model.
Anatomical landmarks
locations on a research subject such as the nasion, which is the intersection of the
frontal bone and two nasal bones of the human skull.
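The coordinate system and the anatomical landmarks are declared in a JSON sidecar next to electrodes.tsv. A hedged sketch: the field names and the "CapTrak" coordinate system come from the BIDS EEG specification, the landmark coordinates are made up.

```python
import json

coordsystem = {
    "EEGCoordinateSystem": "CapTrak",   # how x/y/z in electrodes.tsv are defined
    "EEGCoordinateUnits": "m",
    # anatomical landmarks used for co-registration, e.g. with a T1 image
    "AnatomicalLandmarkCoordinates": {
        "NAS": [0.0, 0.095, 0.0],       # nasion
        "LPA": [-0.075, 0.0, 0.0],      # left pre-auricular point
        "RPA": [0.075, 0.0, 0.0],       # right pre-auricular point
    },
}

with open("sub-01_coordsystem.json", "w") as f:
    json.dump(coordsystem, f, indent=2)
```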
13. Task and event details
http://www.hedtags.org
http://wiki.cogpo.org
https://www.cognitiveatlas.org
14. Outline of this talk
How is BIDS extended to EEG?
What is specific for EEG?
File formats
Channels and electrodes
Coordinates
Other details
Re-using BIDS EEG datasets
Creating BIDS EEG datasets
15. Re-using BIDS EEG datasets
…. just use your standard tools!
EEG data in BIDS is stored in formats supported by virtually all EEG software.
Metadata is in human and machine-readable format (tsv, json).
In the future it is likely that (semi-)automatic analysis
pipelines will be implemented to process complete datasets.
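Because the metadata are plain TSV and JSON, even the Python standard library is enough to inventory a dataset. A minimal sketch, assuming the directory layout shown earlier; here it builds its own throw-away example to demonstrate.

```python
import csv
from pathlib import Path

def list_eeg_recordings(bids_root):
    """Return (participant_id, filename) pairs for all EEG files in a BIDS dataset."""
    bids_root = Path(bids_root)
    recordings = []
    with open(bids_root / "participants.tsv") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            sub = row["participant_id"]          # e.g. "sub-01"
            for eeg_file in sorted((bids_root / sub / "eeg").glob("*_eeg.*")):
                recordings.append((sub, eeg_file.name))
    return recordings

# Build a throw-away example dataset to demonstrate the function
root = Path("demo_bids")
(root / "sub-01" / "eeg").mkdir(parents=True, exist_ok=True)
(root / "participants.tsv").write_text("participant_id\nsub-01\n")
(root / "sub-01" / "eeg" / "sub-01_task-auditory_eeg.edf").touch()

found = list_eeg_recordings(root)
```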
16. Creating BIDS EEG datasets
http://www.fieldtriptoolbox.org/example/bids/
Tools are also currently being developed in
EEGLAB, BrainStorm, SPM, MNE-Python, etc.
BrainVision is also extending their
software to be more BIDS friendly.
18. Why share your EEG data in BIDS?
Sharing your data (externally) increases your research impact.
Archiving your data properly (internally) increases its future value.
Document for your “future self”, so that you don’t forget the details
(and that others will know them as well).
Plan your experiment, acquire the data, organize it in BIDS,
and only then start analyzing it!
19. Share/publish your research output
Peer-reviewed publication is the primary output!
More details in your analysis than fit in your “Methods” section.
Not possible to describe all details in human-oriented text.
Raw data
EEG, lab notes, etc
Analysis details
MATLAB, Python, R, SPSS, Bash, …
Analysis results
Output of the computations
-> BIDS for raw EEG data
-> BIDS derivatives (work-in-progress)
-> possibly BIDS apps, otherwise
github or similar
Imagine 10 years from now, when quite some of us will be working at different places and might not be using our current tools any more: we want to archive and publish data such that it remains accessible.
A channel can be connected to two electrodes (to measure the potential difference between them), to a magnetic field or magnetic gradient sensor, a temperature sensor, an accelerometer, etc.