These are the slides presented by Darren Price in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also http://www.cam-can.org
Our project aims to build a system that makes it easy for doctors to examine cells and diagnose disease.
What we will deliver:
A – Identifying and targeting infections that can harm the body of any animal.
B – Simplifying the diagnosis of diseases that damage cells.
C – Reducing the time and effort researchers spend accomplishing their tasks.
D – Speeding up resistance to new infections.
E – Helping students interested in veterinary medicine progress in their studies.
F – Keeping pace with the leading countries in this technology.
What is Deep Learning and how does it help the Healthcare Sector? (Cogito Tech LLC)
This presentation explains what deep learning is and how it helps the healthcare sector, showing the top use cases of deep-learning-backed systems, applications, and machines in the healthcare industry. It covers the definition of deep learning and how it is changing healthcare. The deck is presented by Cogito, which provides high-accuracy training data sets for deep learning and machine learning.
Visit: http://bit.ly/2QRrSc2
Adaptive Real Time Data Mining Methodology for Wireless Body Area Network Bas... (acijjournal)
As the population grows, the need for high-quality, efficient healthcare, both at home and in hospital, is becoming more pressing. This paper presents an innovative wireless-sensor-network-based Mobile Real-time Health care Monitoring (WMRHM) framework capable of giving online health predictions based on continuously monitored real-time vital body signals. Developments in sensors, miniaturization of low-power microelectronics, and wireless networks present a significant opportunity for improving the quality of health care services. Physiological signals such as ECG, EEG, SpO2, and BP can be monitored through wireless sensor networks and analyzed with data mining techniques. Because these real-time signals are continuous and can change abruptly, efficient, concept-adapting real-time data-stream mining techniques are needed to make intelligent health care decisions online. Given the high speed and huge volume of data streams, traditional classification technologies are no longer applicable; the key challenge is to solve the real-time stream mining problem under "concept drift" efficiently. The paper surveys the state of the art in this growing field, introduces methods for detecting concept drift in data streams, and summarizes existing approaches to the problem. The work focuses on applying these real-time stream mining algorithms to vital signals of the human body in a Wireless Body Area Network (WBAN) based health care environment.
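The notion of concept drift in a vital-sign stream can be illustrated with a minimal sliding-window detector. This is only an illustrative sketch, not the paper's method; production systems would use dedicated detectors such as ADWIN or DDM, and the simulated heart-rate values below are made up.

```python
from collections import deque

def drift_detector(stream, window=50, threshold=3.0):
    """Flag points where a value deviates sharply from the recent window.

    A simple z-score test against a sliding window; after a drift is
    flagged, the window is cleared so the model re-learns the new regime.
    """
    history = deque(maxlen=window)
    drifts = []
    for i, x in enumerate(stream):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((h - mean) ** 2 for h in history) / window
            std = var ** 0.5 or 1e-9      # avoid division by zero
            if abs(x - mean) / std > threshold:
                drifts.append(i)
                history.clear()           # restart adaptation after drift
        history.append(x)
    return drifts

# Simulated heart-rate stream: stable around 70 bpm, then jumps to ~110 bpm.
stream = [70.0 + 0.5 * ((i * 7) % 5 - 2) for i in range(100)] + \
         [110.0 + 0.5 * ((i * 7) % 5 - 2) for i in range(100)]
print(drift_detector(stream))  # the jump at index 100 is flagged
```

The clear-on-drift step is the crude analogue of the "concept adapting" behaviour the abstract calls for: once the distribution shifts, old statistics are discarded rather than averaged into the new regime.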
In this deck from the 2014 HPC User Forum in Seattle, Jack Collins from the National Cancer Institute presents: Genomes to Structures to Function: The Role of HPC.
Watch the video presentation: http://wp.me/p3RLHQ-d28
The Royal Society of Chemistry hosts large-scale data collections and provides access to the data to the chemistry community. The largest RSC data set of wide interest to the community offers access to tens of millions of compounds. The host platform, ChemSpider, is limited in that it is a structure-centric hub only. A new architecture, the RSC data repository, has been developed that extends support to reactions, spectral data, crystallography data, and related property data. It is also the architecture underlying a series of exemplar projects for managing data for a number of diverse laboratories. The adoption of data standards for the integration and distribution of data has been essential. Specific standards include molecular structure formats such as molfiles and InChIs, and spectral data formats such as JCAMP. This presentation will report on our development of the data repository, the importance of utilizing standards for data integration, the flexible nature of the architecture to deliver solutions for various laboratories, and our efforts to develop new large data collections. This includes text-mining efforts to extract large spectrum-structure collections from large corpora.
Next generation genomics: Petascale data in the life sciences (Guy Coates)
Keynote presentation at OGF 28.
The year 2000 saw the release of "The" human genome, the product of the combined sequencing effort of the whole planet. In 2010, single institutions are sequencing thousands of genomes a year, producing petabytes of data. Furthermore, many of the large-scale sequencing projects are based around international collaboration and consortia. The talk will explore how Grid and Cloud technologies are being used to share genomics data around the planet, revolutionizing life science research.
SafeguardAI and Surprise Based Learning -- Protect your AI solutions from Uni... (NAVER Engineering)
Presenter: 류봉균 (CEO, EpiSys Science, Inc.)
Date: March 2018
In this talk, I will introduce Surprise Based Learning (SBL), a novel machine learning algorithm founded on the concepts of Complementary Discrimination Learning, first introduced by Dr. Wei-Min Shen and Nobel laureate Professor Herbert Simon. In contrast to most competitive learning algorithms, which focus on structure learning (e.g., Bayesian Networks, Hidden Markov Models) or parameter learning (e.g., Neural Networks, Deep Learning), SBL offers the best of both worlds: it is capable of learning both the structure (i.e., number of states) and the parameters (i.e., input/output correlation) of a system.
EpiSys Science (EpiSci) is adapting SBL for several application domains. One of them is SafeguardAI, which identifies when what the DL model observes is unfamiliar and communicates to a human supervisor: "I'm not sure what to do." The key insight is to embed a set of well-positioned intelligent agents inside the neural nets during the DL training process. These agents continuously live inside the trained DL model at runtime and report out-of-distribution inputs as "surprises" or unusual behaviors. DL solutions can use these intelligent agents to safeguard their decision-making process.
I will present several experimental results that demonstrate the effectiveness of SafeguardAI, and discuss other application areas.
High Performance Computing and the Opportunity with Cognitive Technology (IBM Watson)
With the ability to reduce “time to insight” and accelerate research breakthroughs by providing immense computational power, high performance computing is becoming increasingly important in the marketplace. Meanwhile, cognitive technology has risen to prominence, similarly accelerating new insight, but through a very different approach - by analyzing previously ignored unstructured data, which accounts for 80% of new data created today.
By combining the immense computing power of the HPC market with the machine learning, natural language processing, and even computer vision techniques found within cognitive technology, there is a huge opportunity to accelerate breakthroughs and enable better decision making than ever before.
Watch the replay of the webinar: https://www.youtube.com/watch?v=Hxgieboj3W0
Master's Thesis - deep genomics: harnessing the power of deep neural networks... (Enrico Busto)
The Human Genome Project [1], an international scientific research project with the goal of determining the sequence of nucleotide base pairs that make up human DNA, lasted roughly 15 years and cost $5 billion (adjusted for inflation). With recent advances in genome sequencing technology, that cost has now fallen to a few hundred dollars [2], and sequencing can be done overnight.
Access to this kind of information may have a deep impact on the way complex diseases are treated: physicians will shift from general-purpose treatments to specific ones, tailored to the individual patient's genomic features. This approach is referred to as precision medicine.
There are, however, several caveats. First, due to the nature of the problem, knowledge of both the biomedical and the computer science domains is required to approach it correctly. Second, unlike more classical scenarios such as image classification or object detection, it is much more difficult to determine the accuracy of the system, owing to the multifactorial nature of complex diseases such as cancer and neurodegenerative conditions.
Moreover, a black-box solution is unlikely to be of any use, for legal and ethical reasons: interpretability of the model is more crucial than ever.
The goal of this thesis is to explore the possibilities and the limits of techniques based on deep neural networks for the analysis of biomolecular data, experimenting with publicly available datasets.
User Behaviour Modelling - Online and Offline Methods, Metrics, and Challenges (Telefonica Research)
Network flows, social networking, smart devices, the Internet-of-Things. These innovations carry no deep value in themselves. Value invariably comes from understanding, obtaining accurate measurements, predicting, and controlling. This vision, however, rests on the application of machine intelligence and data mining techniques that can tackle large and diverse data collections, but also on our capacity to operationalise interdisciplinary research at the intersection of many domains, such as statistics, signal processing, neuroscience, privacy, and security, to name a few. In this talk, through the narration of my involvement in past and recent projects, I share my experience in the domain of user behaviour analysis and predictive modelling. I discuss offline and online experimental methods (and how they can be brought together), present current practices in measuring human behaviour in the online world, and highlight research challenges and opportunities that I have encountered.
Similar to BIOMAG2018 - Darren Price - CamCAN
This is a presentation for the Erwin Hahn Institute in Essen, explaining the background, functional design, and technical architecture of the Donders Repository. Furthermore, it explains how it aligns with the DCCN project management and with the researchers' workflow.
The Brain Imaging Data Structure and its use for fNIRS (Robert Oostenveld)
These slides were prepared for the NIRS toolkit course at the Donders, which has been postponed due to the Corona crisis. The slides present BIDS, explain how fNIRS often involves multiple signals, and relate the two to synchronization and data management.
These are the slides presented by Denis Engemann in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. You can find the original version on https://speakerdeck.com/dengemann/mne-hcp-pitch-biomag-2018
BIOMAG2018 - Tzvetan Popov - HCP from a user's perspective (Robert Oostenveld)
These are the slides presented by Tzvetan Popov in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also https://www.humanconnectome.org/study/hcp-young-adult
These are the slides presented by Vladimir Litvak in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also https://www.frontiersin.org/research-topics/5158
These are the slides presented by Jan-Mathijs Schoffelen in the Open Science Panel discussion at the BIOMAG 2018 meeting in Philadelphia. See also https://cobidas.wordpress.com
CuttingEEG - Open Science, Open Data and BIDS for EEG (Robert Oostenveld)
Starting with education, inception of research questions, planning, acquisition, analysis and reporting, there are multiple points where Open Science should play a role. In my presentation at the CuttingEEG conference in Paris, I argue that we should not only be sharing primary outcomes as Open Access publications, but that openness involves the full research cycle. Specifically, I will be sharing my experience with Open Data, privacy challenges and possibilities under the GDPR, Open Source for sharing analysis methods, dealing with imperfections in science and versioning of data, code and results. Finally, I will introduce BIDS for EEG, a new effort to increase the impact of shared and well-documented EEG data.
Using Open Science to accelerate advancements in auditory EEG signal processing (Robert Oostenveld)
In this presentation at the AESoP conference in Leuven, I will provide arguments for more open research methods. Open Science and Open Data are not only expected from us by our funding agencies, but actually start making more and more sense from the perspective of the individual researcher. Specifically, I will introduce BIDS as a new initiative to organize and share EEG data.
Donders Repository - removing barriers for management and sharing of research... (Robert Oostenveld)
This is the presentation I gave at the monthly meeting of the Donders Institute PhD council. It briefly explains the Donders Repository, but mainly addresses how to deal with directly and indirectly identifying personal data, with anonymization, pseudonymization and de-identification, and with blurring of research data prior to sharing.
This presentation is for the data stewards of the Radboud University. It explains the design and daily usage of the Data Repository of the Donders Institute.
4. CC700 (~100 per decade across ages 18-88, selected from CC3000): ~7 hours of cognitive tests, 1 hour of MRI, 30 minutes of MEG. Shafto et al. (2014), BMC Neurology, 14(204). http://www.cam-can.org/
[Recruitment flowchart, starting 2010]
Invitation letters to adults aged 18+: 10,525
Home interview (health and lifestyle, demographics, etc.): 3,000 participants
Refusals/exclusions: 2,300 removed
Stage 2 (three sessions: MRI, MEG, and cognitive tasks outside the scanner; saliva sample): 700 participants
Excluded: 420
Stage 3 (three fMRI and MEG sessions: attention, language, motor learning, memory and emotion): 280 participants
Data available now.
6. MEG Data Pipeline
[Pipeline diagram] MEG data are processed with MaxFilter 2.2, run both with and without movement compensation (MOVECOMP / NO MOVECOMP), then converted; the anatomical and functional MRI are converted for SPM. ICA components are provided, but artefacts are not removed. N = ~650. See website for details.
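The processing chain on this slide can be sketched as an ordered sequence of named steps. The step functions and the dictionary payload below are purely illustrative stand-ins for the real MaxFilter, conversion, and ICA stages; the subject ID is hypothetical.

```python
# Illustrative stand-ins for the slide's stages: MaxFilter 2.2 (with or
# without movement compensation), conversion for SPM, and ICA component
# estimation (components are provided but artefacts are not removed).

def maxfilter(data, movecomp=True):
    data["maxfiltered"] = True
    data["movecomp"] = movecomp
    return data

def convert(data):
    data["format"] = "SPM"                 # converted for SPM analysis
    return data

def estimate_ica(data):
    data["ica_estimated"] = True           # components kept, not removed
    return data

def run_pipeline(data):
    # Apply each stage in the order shown on the slide, keeping a log.
    for name, step in [("maxfilter", maxfilter),
                       ("convert", convert),
                       ("ica", estimate_ica)]:
        data = step(data)
        data.setdefault("log", []).append(name)
    return data

result = run_pipeline({"subject": "sub-001"})  # hypothetical subject ID
print(result["log"])                           # stages applied in order
```

Keeping an explicit log of applied stages mirrors good provenance practice for shared datasets: a downstream user can see exactly which preprocessing a file has been through.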
11. Application Step 1. Browse the website
Imaging Data (N = number of data sets)
Unprocessed data (de-faced, converted to gzipped 4D NIfTI format):
– Structural: T1 (653), T2 (653), Diffusion Weighted Imaging (DWI) (650)
– Functional: Field Map (649), Resting State (649), Movie Watching (649), Sensori-motor task (649)
Processed data (DARTEL-warped to sample template and then to MNI space in SPM12):
– Segmented images (GM, WM) from T1 + T2 (NIfTI / ROI values)
– Diffusion metrics (NIfTI images / ROI values): Fractional Anisotropy (FA), Mean Diffusivity (MD), Diffusion Kurtosis (DKI), Magnetisation Transfer Ratio (MTR)
– Functional (movement- and slice-time-corrected, smoothed): Resting State, Movie Watching, Sensori-motor task
Magnetoencephalography (MEG): Resting State (652), Sensori-Motor task (652), Sensory (passive) task (652)
13. Application Step 2. Develop Proposal
• Do
– Develop a well-thought-out research question
– Include a hypothesis
– Request only what is needed
– Refer to the requested data in the proposal
– Read the Terms and Conditions
• Don't
– Data-mine (data mining is discouraged)
– Apply for "all variables"
16. Good Example
• We request access to subsets of the neuroimaging, behavioral, and home interview data to undertake two retrospective studies:
• 1. Analysis of T1w and T2w volumes using radiomic feature analysis (http://www.radiomics.io/) and correlation of derived features with behavioral, memory, and cardiovascular risk factor scores, correcting for educational and socioeconomic status. Here we aim to decode age-related radiographic changes in the brain from basic MRI images, in particular subtle changes in image texture that cannot be detected using standard 'atrophy-based' techniques (i.e. Voxel Based Morphometry). Observed changes will be compared against those occurring in small vessel disease, vascular dementia, and Alzheimer's disease cohorts to build a radiographic signature of age- and dementia-related cognitive decline.
• 2. Apply microstructural modeling to the multi-angle diffusion MRI data. Three models will be applied and compared for sensitivity: a) NODDI (Neurite Orientation and Dispersion Imaging; https://www.ncbi.nlm.nih.gov/pubmed/22484410), b) ODI (Orientation Dispersion Index), c) Parametric Spherical Deconvolution. These techniques will provide highly specific microstructural measurements and may enable more subtle age-related GM and WM changes to be detected. As in the radiomic study above, we will correlate measures against behavioral, memory, and cardiovascular risk factor scores, correcting for educational and socioeconomic status.
• We have recently validated such microstructural models against immunohistochemistry in animal models and have shown strong relationships of derived measures with cell density. Therefore, by applying such models to this dataset we hope to gain insight into the causes of atrophy in the aging brain.
17. Application Step 4. Wait
• Unfortunately, this takes some time.
• Applications must be approved by the CamCAN PIs, which takes at least two weeks.
• If rejected, don't be dejected.
– Normally revise and resubmit
18. Application Step 5. Access
• Access details are supplied by email (password sent separately)
• Instructions are in the email
– rsync
– WinSCP
• 99% of problems are with the user's university firewall. Check on another computer (e.g. at home) first.
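The rsync route mentioned in the instructions can be scripted. The sketch below only assembles the command rather than running it, since the real hostname, username, and remote path arrive in the access email; every connection detail shown here is a placeholder.

```python
import subprocess

def build_rsync_cmd(user, host, remote_path, dest, dry_run=True):
    """Assemble an rsync invocation for pulling released data.

    All connection details are placeholders; use the ones supplied in
    the access email. --partial lets interrupted transfers resume.
    """
    cmd = ["rsync", "-avz", "--partial"]
    if dry_run:
        cmd.append("--dry-run")   # preview the file list before downloading
    cmd += [f"{user}@{host}:{remote_path}", dest]
    return cmd

cmd = build_rsync_cmd("myuser", "data.example.org", "/release/mri/", "./camcan/")
print(" ".join(cmd))
# To actually run it (ideally from outside a restrictive university firewall):
# subprocess.run(cmd, check=True)
```

Starting with a dry run is a cheap way to confirm the credentials and path work before committing to a multi-gigabyte transfer.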
19. Disclaimer
• The data come without a warranty
• It is the user's responsibility to check the data
• Please report back any problems
• Home interview forms will be made available on request
20. Future Data Releases
FreeSurfer recon-all output
Genetic data coming soon, thanks to the Max Planck research centre; a hypothesis is especially important here
Stage 3 data with cognitive tasks