The document summarizes the CBS tools, which are image processing algorithms designed for high resolution brain data up to 7T. The tools are built as plug-ins for the MIPAV and JIST software and provide functions such as segmentation of cortical and sub-cortical structures, cortical surface extraction and normalization to MNI space. MIPAV is a medical image processing software developed at NIH, while JIST provides a pipeline interface and has been developed collaboratively. The tools have been tested and validated in studies comparing scan-rescan data, and are freely available online along with documentation and user support.
This document discusses neuroinformatics, which combines neuroscience and information science. It provides an agenda for the topics to be covered, including an introduction to neuroinformatics, database development and management, an overview of neuroimaging techniques, computational neuroscience modeling, current research applications, and challenges. Single neuron modeling approaches like Hodgkin-Huxley and cable theory are explained. Current areas of research discussed are brain-gene ontology, human brain mapping atlases, and brain-computer interfaces.
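The Hodgkin-Huxley approach mentioned above can be made concrete with a minimal simulation. The sketch below integrates the classic 1952 squid-axon equations with forward Euler in Python; the injected current, time step, and duration are illustrative choices, not values from the document:

```python
import math

# Standard Hodgkin-Huxley parameters (squid giant axon, 1952 fit).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387          # mV

# Voltage-dependent rate functions for the gating variables n, m, h.
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations; returns the voltage trace (mV)."""
    V, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting state
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
        I_K = g_K * n**4 * (V - E_K)          # potassium current
        I_L = g_L * (V - E_L)                 # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace.append(V)
    return trace

trace = simulate()
```

Injecting a suprathreshold current (here 10 µA/cm²) produces repetitive spiking, with the membrane potential overshooting 0 mV on each action potential.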
The Human Brain Project aims to build advanced informatics and modeling technologies to simulate and understand the human brain through establishing multidisciplinary programs and facilities for gathering and analyzing brain data, developing exascale supercomputing capabilities, deriving novel technologies, and addressing related ethical issues. The goal is to gain insights into brain function and diseases, develop new clinical tools, and create a new generation of intelligent technologies by gaining a deeper understanding of the brain's organizing principles through highly detailed brain simulations and models.
Efficient implementations of machine vision algorithms using a dynamically ty... - Jan Wedekind
This document presents an approach to implementing machine vision algorithms using a dynamically typed programming language (Ruby). It discusses developing a Ruby library for efficient I/O and array operations to facilitate implementations of common machine vision algorithms. The author demonstrates that with a just-in-time compiler for real-time performance and integrated I/O devices, Ruby preserves expressiveness while providing good run-time performance for machine vision applications.
Supporting image-based meta-analysis with NIDM: Standardized reporting of neu... - Camille Maumet
Because little of the underlying data is shared when neuroimaging results are reported, most neuroimaging meta-analyses are based on peak coordinate data. Best practice, however, is image-based meta-analysis, which combines the full effect-estimate and standard-error images derived from each study.
The Neuroimaging Data Model (NIDM) is an ongoing effort, supported by the INCF, to provide a domain-specific extension of the W3C PROV-DM.
In this talk, I will review our recent progress in extending NIDM to share the statistical results of a neuroimaging study and our interactions with existing software packages (SPM, FSL, AFNI, Neurovault.org).
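The difference between coordinate-based and image-based meta-analysis comes down to what is combined per voxel. Below is a minimal Python sketch of the fixed-effects, inverse-variance combination that underlies image-based meta-analysis; the function name and the per-study numbers are hypothetical, and in practice this runs over every voxel of the effect and standard-error maps:

```python
import math

def fixed_effects_meta(effects, std_errors):
    """Inverse-variance (fixed-effects) combination of per-study effect
    estimates at one voxel: studies with smaller standard errors get
    larger weights. Returns the pooled effect, its standard error, and z."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se, pooled / pooled_se

# Hypothetical effect estimates and standard errors from three studies:
effect, se, z = fixed_effects_meta([0.8, 1.1, 0.9], [0.3, 0.4, 0.25])
```

A coordinate-based meta-analysis has no access to the standard errors, which is why sharing the full statistical images (the goal of NIDM export) matters.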
This document discusses the top 10 signs in gastroenterology. It describes each sign, providing details on:
1. McBurney's sign - tenderness at a specific point indicating appendicitis.
2. Rovsing's sign - tenderness in the right lower quadrant upon palpation of the left lower quadrant, also indicating appendicitis.
3. Additional signs include Murphy's sign for cholecystitis, Blumberg's sign for peritonitis, Kehr's sign for splenic injury, and Dance's sign for intussusception.
Each sign is named after the physician who identified and described it, with background provided on the historical figures.
This talk provides a review of the current status of research related to self-assembling DNA nanotechnology (particularly DNA nanostructures, synthetic biology, and DNA origami scaffolding structures) and how the self-assembly of artificial systems might be applied in the context of neuro-nanomedicine. One application might be new forms of brain-computer interfaces (BCIs) that are less invasive than current computer chip-based hardware solutions. Another might be high-resolution neocortical recording devices in which synthetic molecules would assemble a DNA signature every time a neuron fired.
Fireside chat: Newton Howard, Director of the MIT Synthetic Intelligence Lab ... - Codiax
This document summarizes the work and background of Dr. Athanasios Tsanas. It discusses two of his projects: 1) using sensors and machine learning to remotely assess motor symptoms in Parkinson's disease patients, and 2) using smartphone sensors and self-reports to continuously monitor mental health conditions. It then outlines his vision for the future of telemedicine, including integrating smartphones with biosensors, cloud computing, and deep learning to enable personalized remote health monitoring. Other areas discussed include the potential for minimally invasive brain implants to both read and stimulate neurons using optogenetics for various neurological conditions.
2pg Biomedical Eng Resume - Trevor Davis
Trevor Davis is a biomedical engineer and data scientist with a M.S. in Biomedical Engineering from UCLA. His research and work experience have given him skills in signal processing, machine learning, data science, and medical imaging. He has led multiple research projects involving computational modeling, medical devices, and neuroengineering. Davis is proficient in programming languages, software, and lab techniques relevant to engineering and data analysis.
Presented at the International Workshop on Frontiers of Neuroengineering, Brain-Machine Interfaces & Neural Prostheses, Zhejiang University, Hangzhou, China, March 29, 2011.
MseqDR consortium: a grass-roots effort to establish a global resource aimed ... - Human Variome Project
The success of whole exome sequencing (WES) for highly heterogeneous disorders, such as mitochondrial disease, is limited by substantial technical and bioinformatics challenges to correctly identify and prioritize the extensive number of sequence variants present in each patient. The likelihood of success can be greatly improved if a large cohort of patient data is assembled in which sequence variants can be systematically analysed, annotated, and interpreted relative to known phenotype. This effort has engaged and united more than 100 international mitochondrial clinicians, researchers, and bioinformaticians in the Mitochondrial Disease Sequence Data Resource (MSeqDR) consortium that formed in June 2012 to identify and prioritize the specific WES data analysis needs of the global mitochondrial disease community. Through regular web-based meetings, we have familiarized ourselves with existing strengths and gaps facing integration of MSeqDR with public resources, as well as the major practical, technical, and ethical challenges that must be overcome to create a sustainable data resource. We have now moved forward toward our common goal by establishing a central data resource (http://mseqdr.org/) that has both public access and secure web-based features that allow the coherent compilation, organization, annotation, and analysis of WES and mtDNA genome data sets generated in both clinical- and research-based settings of suspected mitochondrial disease patients. The most important aims of the MSeqDR consortium are summarized in the MSeqDR portal within the Consortium overview sections. Consortium participants are organized in 3 working groups that include (1) Technology and Bioinformatics; (2) Phenotyping, databasing, IRB concerns and access; and (3) Mitochondrial DNA specific concerns. The online MSeqDR resource is organized into discrete sections to facilitate data deposition and common reannotation, data visualization, data set mining, and access management. 
With the support of the United Mitochondrial Disease Foundation (UMDF) and the NINDS/NICHD U54-supported North American Mitochondrial Disease Consortium (NAMDC), the MSeqDR prototype has been built. Current major components include: common data upload and reannotation using a novel HBCR-based annotation tool that has also been made publicly available through the website; MSeqDR GBrowse, which allows ready visualization of all public and MSeqDR-specific data, including lab-specific aggregate data visualization tracks; an MSeqDR-LSDB instance of nearly 1250 mitochondrial disease and mitochondrially localized genes based on the Locus Specific Database model; exome data set mining in individuals or families using the GEM.app tool; and Account & Access Management. Within MSeqDR GBrowse it is now possible to explore data derived from MitoMap, HmtDB, ClinVar, UCSC-NumtS, ENCODE, 1000 Genomes, and many other resources that bioinformaticians recruited to the project are organizing.
Cognitive Computing at University Osnabrück - Steven Miller
This document discusses cognitive computing from the Institute of Cognitive Science. It describes how cognitive computing uses social media analysis, data science methods, and IBM's Watson AI to better predict disease spread, such as influenza. By fusing real-time social media data with slower but more reliable CDC data, cognitive systems can improve predictions. The institute also researches neuromorphic hardware and reservoir computing techniques inspired by the brain to enable new kinds of fault-tolerant computing.
Univ of Miami CTSI: Citizen science seminar; Oct 2014 - Richard Bookman
The University of Miami's Clinical & Translational Science Institute runs a seminar course for MS students.
This talk surveys 8 citizen science projects, reviews NIH's current activities, and identifies issues for attention, particularly with ethical, legal and social implications.
1) The study investigated the role of dopamine in initial memory consolidation and two distinct novelty systems in rodents.
2) It was found that non-canonical release of dopamine from locus coeruleus neurons to the hippocampus may be responsible for enhancing memory of distinct novel experiences.
3) Optogenetic activation of locus coeruleus neurons mimicked the effect of novelty on memory persistence in a dopamine D1/D5 receptor-dependent manner in the hippocampus.
The document discusses the Blue Brain project, which aims to create a virtual human brain through supercomputer-based digital reconstruction and simulation. It seeks to upload the complete information from a human brain into a computer in order to preserve knowledge and intelligence even after death. The project involves creating software to integrate building and simulating digital brain models, as well as systematically searching for basic brain principles and behaviors. A comparison is provided between natural human brains and simulated virtual brains in terms of input, interpretation, output, memory, and processing.
A virtual brain, a machine that can function like the human brain and would keep working even after the person's death, is called the Blue Brain.
This topic covers the functionality of the Blue Brain, its advantages and disadvantages, and what a virtual brain actually is.
A virtual brain, a machine that can function like the human brain and would keep working even after the person's death, is called the Blue Brain.
Under this topic I cover the functionality of the Blue Brain, its advantages and disadvantages, and what a virtual brain actually is.
Much research is under way in this field, with results expected after roughly 2020. It is a new technology that requires a good knowledge of the brain, its internal parts, and their functions. The aim is to upload a human brain into a machine so that no effort is needed for thinking or remembering; even after death, the virtual brain would act in the person's place.
The document discusses the Blue Brain project, which aims to create a virtual model of the human brain through detailed biological reconstruction and simulation of the neocortex on supercomputers. The project involves collecting data about neurons from brain tissue samples, developing computational models of neurons and networks, and running large-scale simulations involving millions of neurons on IBM's Blue Gene supercomputer. The goal is to gain a complete understanding of the brain and enable faster development of treatments for brain diseases.
The document provides information about the SPIE Medical Imaging conference to be held February 15-20, 2014 in San Diego, California. It calls for submissions of abstracts by August 12, 2013 on topics related to medical imaging technologies and their biomedical applications. The conference will cover all aspects of medical imaging including physics, image processing, computer-aided diagnosis, image-guided procedures, and various imaging modalities. Authors are encouraged to present their latest research on imaging physics, systems, applications, and image analysis.
Automated Analysis of Microscopy Images using Deep Convolutional Neural Network - AdetayoOkunoye
This document summarizes research on using deep convolutional neural networks to automatically analyze microscopy images. The goals are to expedite the analysis of high-content microscopy data and automate tasks like cell counting and classification. The researchers trained and tested models using TensorFlow on microscopy images to classify cells, achieving over 75% accuracy. This level of automation could benefit biological research by reducing human errors and speeding up analysis of large image datasets.
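The core operation such a network applies to each microscopy image is a learned 2-D convolution followed by a nonlinearity and pooling. The study used TensorFlow; the NumPy sketch below re-implements just that building block so the mechanics are visible (the edge-detecting kernel and toy image are illustrative, not from the study):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: the core operation a CNN layer applies
    to an image patch before nonlinearity and pooling."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

def relu(x):
    # Elementwise nonlinearity: keep positive responses, zero the rest.
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    # Downsample by taking the maximum over non-overlapping size x size blocks.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# A vertical-edge kernel applied to a toy 8x8 "cell image" (right half bright):
image = np.zeros((8, 8)); image[:, 4:] = 1.0
feature_map = max_pool(relu(conv2d(image, np.array([[-1.0, 1.0]]))))
```

In a real network, many such kernels are learned from labeled cell images and stacked into layers, with a final classifier acting on the pooled feature maps.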
In this deck from the 2014 HPC User Forum in Seattle, Jack Collins from the National Cancer Institute presents: Genomes to Structures to Function: The Role of HPC.
Watch the video presentation: http://wp.me/p3RLHQ-d28
In its first year, the HBP achieved significant progress across its subprojects. Key accomplishments included: developing the initial architecture and specifications for the HBP platforms; generating strategic mouse and human brain data through techniques like single-cell transcriptomics and electron microscopy; studying brain regions and circuits through models and experiments; applying mathematical techniques to produce simplified neuron models; and beginning construction of two neuromorphic computing systems inspired by the brain's circuitry. Reporting to the EC was completed on time and new partner institutions joined many subprojects.
How can we harness the Human Brain Project to maximize its future health a... - SharpBrains
In early 2013, the European Union selected the Human Brain Project, coordinated by Lausanne’s Federal Institute of Technology (EPFL), to receive over 1 billion euros (1.3 billion dollars) over the next ten years. How can the research agenda of this major initiative, and of closely related ones, be organized and augmented through partnerships with the private sector and cross-sector stakeholders? How can we start building brain health innovation platforms and delivery systems at the intersection of neuroscience, IT, and engineering?
- Chair: Hilal Lashuel, Associate Professor at the Swiss Federal Institute of Technology-Lausanne (EPFL), YGL Class of 2012
- Sean Hill, co-Director of the Blue Brain Project and co-Director of Neuroinformatics in the Human Brain Project (HBP) at the Swiss Federal Institute of Technology-Lausanne (EPFL)
This session took place at the 2013 SharpBrains Virtual Summit: http://sharpbrains.com/summit-2013/agenda/
This document outlines a research proposal on medical image fusion. It discusses radiotherapy treatment planning which involves target volume delineation using fused images from modalities like PET, CT and MRI. The proposal discusses techniques for image decomposition, fusion and reconstruction. It reviews literature on various fusion methods like multi-resolution analysis, multi-scale geometric analysis and color based methods. It identifies research gaps in appropriate decomposition levels and contouring. The proposal discusses implementing a fusion method using soft computing techniques to differentiate between edge and non-edge regions.
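The decompose-fuse-reconstruct scheme the proposal reviews can be sketched in a few lines. Below is a deliberately simple two-scale version in NumPy: each image is split into a low-pass base and a detail residual, the bases are averaged, and the larger-magnitude detail wins at each pixel. This is a stand-in for the multi-resolution and edge-aware rules surveyed in the proposal, not the proposal's own method; the function names and the box-blur decomposition are illustrative:

```python
import numpy as np

def box_blur(img, k=3):
    """Crude low-pass filter via a k x k box average (same-size output)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy+img.shape[0], dx:dx+img.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b):
    """Two-scale fusion of co-registered images (e.g. CT and MRI slices):
    bases are averaged, and at every pixel the detail layer with the
    larger magnitude is kept, preserving edges from either modality."""
    base_a, base_b = box_blur(img_a), box_blur(img_b)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    fused_base = (base_a + base_b) / 2.0
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                            detail_a, detail_b)
    return fused_base + fused_detail
```

A wavelet or multi-scale geometric decomposition would replace the box blur in the methods the proposal reviews, and the soft-computing step would adapt the detail-selection rule near edges.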
Cognitive Computing by Professor Gordon Pipa - diannepatricia
Professor Dr. Gordon Pipa of the University of Osnabrueck, Germany, gave this presentation for the Cognitive Systems Institute Speaker Series on May 26, 2016.
This talk provides a review of the current status of research related to self-assembling DNA nanotechnology (particularly DNA nanostructures, synthetic biology, and DNA origami scaffolding structures) and how the self-assembly of artificial systems might be applied in the context of neuro-nanomedicine. One application of self-assembling DNA nanotechnology might be new forms of brain-computer interfaces (BCIs) that are less invasive than current computer chip-based hardware solutions. Another application of self-assembling DNA nanotechnology might be high-resolution neocortical recording devices where synthetic molecules would assemble a DNA signature every time a neuron was fired.
Fireside chat: Newton Howard, Director of the MIT Synthetic Intelligence Lab ...Codiax
This document summarizes the work and background of Dr. Athanasios Tsanas. It discusses two of his projects: 1) using sensors and machine learning to remotely assess motor symptoms in Parkinson's disease patients, and 2) using smartphone sensors and self-reports to continuously monitor mental health conditions. It then outlines his vision for the future of telemedicine, including integrating smartphones with biosensors, cloud computing, and deep learning to enable personalized remote health monitoring. Other areas discussed include the potential for minimally invasive brain implants to both read and stimulate neurons using optogenetics for various neurological conditions.
2pg Biomedical Eng Resume - Trevor DavisTrevor Davis
Trevor Davis is a biomedical engineer and data scientist with a M.S. in Biomedical Engineering from UCLA. His research and work experience have given him skills in signal processing, machine learning, data science, and medical imaging. He has led multiple research projects involving computational modeling, medical devices, and neuroengineering. Davis is proficient in programming languages, software, and lab techniques relevant to engineering and data analysis.
Presented at International Workshop on
Frontiers of Neuroengineering,
Brain-machine Interfaces
& Neural Prostheses
Zhejiang University, Hangzhou, China
March 29, 2011
MseqDR consortium: a grass-roots effort to establish a global resource aimed ...Human Variome Project
The success of whole exome sequencing (WES) for highly heterogeneous disorders, such as mitochondrial disease, is limited by substantial technical and bioinformatics challenges to correctly identify and prioritize the extensive number of sequence variants present in each patient. The likelihood of success can be greatly improved if a large cohort of patient data is assembled in which sequence variants can be systematically analysed, annotated, and interpreted relative to known phenotype. This effort has engaged and united more than 100 international mitochondrial clinicians, researchers, and bioinformaticians in the Mitochondrial Disease Sequence Data Resource (MSeqDR) consortium that formed in June 2012 to identify and prioritize the specific WES data analysis needs of the global mitochondrial disease community. Through regular web-based meetings, we have familiarized ourselves with existing strengths and gaps facing integration of MSeqDR with public resources, as well as the major practical, technical, and ethical challenges that must be overcome to create a sustainable data resource. We have now moved forward toward our common goal by establishing a central data resource (http://mseqdr.org/) that has both public access and secure web-based features that allow the coherent compilation, organization, annotation, and analysis of WES and mtDNA genome data sets generated in both clinical- and research-based settings of suspected mitochondrial disease patients. The most important aims of the MSeqDR consortium are summarized in the MSeqDR portal within the Consortium overview sections. Consortium participants are organized in 3 working groups that include (1) Technology and Bioinformatics; (2) Phenotyping, databasing, IRB concerns and access; and (3) Mitochondrial DNA specific concerns. The online MSeqDR resource is organized into discrete sections to facilitate data deposition and common reannotation, data visualization, data set mining, and access management. 
With the support of the United Mitochondrial Disease Foundation (UMDF) and the NINDS/NICHD U54 supported North American Mitochondrial Disease Consortium (NAMDC), the MSeqDR prototype has been built. Current major components include common data upload and reannotation using a novel HBCR based annotation tool that has also been made publicly available through the website, MSeqDR GBrowse that allows ready visualization of all public and MSeqDR specific data including labspecific aggregate data visualization tracks, MSeqDR-LSDB instance of nearly 1250 mitochondrial disease and mitochodnrial localized genes that is based on the Locus Specific Database model, exome data set mining in individuals or families using the GEM.app tool, and Account & Access Management. Within MSeqDR GBrowse it is now possible to explore data derived from MitoMap, HmtDB, ClinVar, UCSC-NumtS, ENCODE, 1000 genomes, and many other resources that bioinformaticians recruited to the project are organizing.
Cognitive Computing at University OsnabrückSteven Miller
This document discusses cognitive computing from the Institute of Cognitive Science. It describes how cognitive computing uses social media analysis, data science methods, and IBM's Watson AI to better predict disease spread, such as influenza. By fusing real-time social media data with slower but more reliable CDC data, cognitive systems can improve predictions. The institute also researches neuromorphic hardware and reservoir computing techniques inspired by the brain to enable new kinds of fault-tolerant computing.
Univ of Miami CTSI: Citizen science seminar; Oct 2014Richard Bookman
The University of Miami's Clinical & Translational Science Institute runs a seminar course for MS students.
This talk surveys 8 citizen science projects, reviews NIH's current activities, and identifies issues for attention, particularly with ethical, legal and social implications.
1) The study investigated the role of dopamine in initial memory consolidation and two distinct novelty systems in rodents.
2) It was found that non-canonical release of dopamine from locus coeruleus neurons to the hippocampus may be responsible for enhancing memory of distinct novel experiences.
3) Optogenetic activation of locus coeruleus neurons mimicked the effect of novelty on memory persistence in a dopamine D1/D5 receptor-dependent manner in the hippocampus.
The document discusses the Blue Brain project, which aims to create a virtual human brain through supercomputer-based digital reconstruction and simulation. It seeks to upload the complete information from a human brain into a computer in order to preserve knowledge and intelligence even after death. The project involves creating software to integrate building and simulating digital brain models, as well as systematically searching for basic brain principles and behaviors. A comparison is provided between natural human brains and simulated virtual brains in terms of input, interpretation, output, memory, and processing.
The Virtual brain or machine which can function like human brain, which would work even after death of the human is called the blue brain.
Under this topic the functionalities of the blue brain, its advantages and disadvantages, what actually a virtual brain is etc is being covered.
The Virtual brain or machine which can function like human brain, which would work even after death of the human is called the blue brain.
Under this topic I would basically cover the functionalities of the blue brain, its advantages and disadvantages, what actually a virtual brain is etc.
Many research work is going under this field and expected to release after 2020 approx. This is a new science technology. For this, we should have a good knowledge of the brain and its internal parts along with their functions. Basically, this is being done to upload human brain into machine so that we need to take no effort for thinking or remembering. Even after death, the virtual brain will act as a man.
The document discusses the Blue Brain project, which aims to create a virtual model of the human brain through detailed biological reconstruction and simulation of the neocortex on supercomputers. The project involves collecting data about neurons from brain tissue samples, developing computational models of neurons and networks, and running large-scale simulations involving millions of neurons on IBM's Blue Gene supercomputer. The goal is to gain a complete understanding of the brain and enable faster development of treatments for brain diseases.
The document provides information about the SPIE Medical Imaging conference to be held February 15-20, 2014 in San Diego, California. It calls for submissions of abstracts by August 12, 2013 on topics related to medical imaging technologies and their biomedical applications. The conference will cover all aspects of medical imaging including physics, image processing, computer-aided diagnosis, image-guided procedures, and various imaging modalities. Authors are encouraged to present their latest research on imaging physics, systems, applications, and image analysis.
Automated Analysis of Microscopy Images using Deep Convolutional Neural NetworkAdetayoOkunoye
This document summarizes research on using deep convolutional neural networks to automatically analyze microscopy images. The goals are to expedite the analysis of high-content microscopy data and automate tasks like cell counting and classification. The researchers trained and tested models using TensorFlow on microscopy images to classify cells, achieving over 75% accuracy. This level of automation could benefit biological research by reducing human errors and speeding up analysis of large image datasets.
In this deck from the 2014 HPC User Forum in Seattle, Jack Collins from the National Cancer Institute presents: Genomes to Structures to Function: The Role of HPC.
Watch the video presentation: http://wp.me/p3RLHQ-d28
In its first year, the HBP achieved significant progress across its subprojects. Key accomplishments included: developing the initial architecture and specifications for the HBP platforms; generating strategic mouse and human brain data through techniques like single-cell transcriptomics and electron microscopy; studying brain regions and circuits through models and experiments; applying mathematical techniques to produce simplified neuron models; and beginning construction of two neuromorphic computing systems inspired by the brain's circuitry. Reporting to the EC was completed on time and new partner institutions joined many subprojects.
How can we harness the Human Brain Project to maximize its future health a...SharpBrains
In early 2013, the European Union selected the Human Brain Project, coordinated by Lausanne’s Federal Institute of Technology (EPFL), as the recipient of over 1 billion euros/ 1.3 billion dollars over the next ten years. How can the research agenda of this major initiative, and closely related ones, be organized and augmented with partnerships with the private sector and cross-sector stakeholders? How can we start building brain heath innovation platforms and delivery systems at the intersection of neuroscience, IT, and engineering?
- Chair: Hilal Lashuel, Associate Professor at the Swiss Federal Institute of Technology-Lausanne (EPFL), YGL Class of 2012
- Sean Hill, co-Director of the Blue Brain Project and co-Director of Neuroinformatics in the Human Brain Project (HBP) at the Swiss Federal Institute of Technology-Lausanne (EPFL)
This session took place at the 2013 SharpBrains Virtual Summit: http://sharpbrains.com/summit-2013/agenda/
This document outlines a research proposal on medical image fusion. It discusses radiotherapy treatment planning which involves target volume delineation using fused images from modalities like PET, CT and MRI. The proposal discusses techniques for image decomposition, fusion and reconstruction. It reviews literature on various fusion methods like multi-resolution analysis, multi-scale geometric analysis and color based methods. It identifies research gaps in appropriate decomposition levels and contouring. The proposal discusses implementing a fusion method using soft computing techniques to differentiate between edge and non-edge regions.
Cognitive Computing by Professor Gordon Pipa
Professor Dr. Gordon Pipa of the University of Osnabrueck, Germany, gave this presentation for the Cognitive Systems Institute Speaker Series on May 26, 2016.
1. Software presentation: the CBS tools
for high-res brain processing up to 7T
Pierre-Louis Bazin
Department of Neurophysics
Max Planck Institute for Human Cognitive and Brain Sciences
Leipzig, Germany
10/22/12 Max Planck Institute for Human Cognitive and Brain Sciences
2. What are the CBS Tools?
High resolution image processing algorithms designed for 7T data:
● Handles MP2RAGE data
● Segmentation: cortical and sub-cortical structures
● Cortical surface extraction and flattening (cerebral and cerebellar)
● Normalization to MNI space, routinely at 0.4 mm
● Highly accurate cortical layering, profiling and thickness measurements
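As a toy illustration of the layering and thickness idea above (not the CBS implementation, which uses a level-set formulation), one can sample intermediate depths between matched points on the white-matter and CSF boundaries; the function name and equidistant sampling scheme here are illustrative assumptions only:

```python
import math

def cortical_profile(wm_point, csf_point, n_layers=10):
    """Toy laminar profile: equidistant sample points between a matched
    WM-boundary point and CSF-boundary point, plus Euclidean thickness.
    Illustrative sketch only, not the CBS tools' actual algorithm."""
    thickness = math.dist(wm_point, csf_point)
    samples = [
        tuple(w + (c - w) * k / (n_layers - 1) for w, c in zip(wm_point, csf_point))
        for k in range(n_layers)
    ]
    return thickness, samples

# Example: a 3 mm thick patch sampled at 10 depths along z
t, pts = cortical_profile((0.0, 0.0, 0.0), (0.0, 0.0, 3.0))
```

In practice the CBS tools compute such profiles at every cortical location at 0.4 mm resolution; this sketch only conveys the per-point geometry.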
3. What's the software?
All algorithms are built as plug-ins for the MIPAV and JIST software:
Support for most medical image formats
User-friendly interface
Intuitive pipeline system for batch processing
Integration with many additional software tools
Ongoing development and support at participating institutions
(NIH, MPI, JHU, Vanderbilt, …)
Freely available in NITRC, NeuroDebian
[McAuliffe et al., CBMS 2001]
[Lucas et al., Neuroinf. 2010]
4. What is MIPAV?
MIPAV (Medical Image Processing, Analysis and Visualization): a user-friendly, general medical image processing software developed at NIH, freely available for academics.
Advantages:
• reads/writes most medical image formats
• features many visualization tools (triplanar view, sequence view, 3D volume and surface rendering)
• many standard and advanced algorithms built in (from image smoothing to brain stripping or registration)
• full-time development team at NIH, led by Dr. McAuliffe
[McAuliffe et al., CBMS 2001]
5. What is JIST?
A user-friendly pipelining interface for large data and complex processing; open source, developed at JHU, Vanderbilt, CBS, and NIH.
Principle: LONI pipeline + MIPAV infrastructure
Advantages:
• Inherits many MIPAV properties (file support, algorithms)
• many standard and advanced algorithms built-in (including DWI
processing, cortical reconstruction, surface processing)
• Multi-processor task manager
• Hierarchically organized data output
• External tool encapsulation
• Command-line scripting
• Automated build testing
• Support from multiple labs
[Lucas et al., Neuroinf. 2010]
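The multi-processor batch idea above can be sketched as a simple driver that loops over subject folders and invokes a command-line pipeline call for each; the command name "run_jist_layout", its flags, and the folder layout are placeholders assumed for illustration, not the real JIST CLI:

```python
import subprocess
from pathlib import Path

def batch_process(data_root, layout_file, cmd="run_jist_layout"):
    """Hypothetical batch driver: run a pipeline layout once per subject
    directory, skipping subjects that already have processed output.
    (Command name and arguments are illustrative placeholders.)"""
    for subject in sorted(Path(data_root).iterdir()):
        if not subject.is_dir():
            continue
        out_dir = subject / "processed"
        if out_dir.exists():  # skip already-processed subjects
            continue
        out_dir.mkdir()
        subprocess.run([cmd, str(layout_file),
                        "--input", str(subject),
                        "--output", str(out_dir)], check=True)
```

The same loop structure applies whatever the actual invocation is; JIST's own task manager additionally parallelizes independent pipeline modules.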
8. This looks complicated! How does it work?
1. Input your data
2. Press play
10. My study is complicated! Can the software adapt?
Standard pipelines
Customized pipelines
11. Where do I get it?
CBS High-Res Brain Tools:
http://www.cbs.mpg.de/institute/software/cbs-hrt/
The latest MIPAV software available for download:
http://mipav.cit.nih.gov/download/
JIST package
http://www.nitrc.org/projects/JIST/
TOADS-CRUISE package
http://www.nitrc.org/projects/TOADS-CRUISE/
12. How do I learn to use it?
CBS High-Res Brain Tools:
http://www.cbs.mpg.de/institute/software/cbs-hrt/documentation.html
MIPAV help files:
MIPAV -> Help -> Help Topics
JIST documentation on NITRC:
http://www.nitrc.org/plugins/mwiki/index.php/jist:MainPage
17. Does anyone use it already?
In Neurophysics: PL Bazin, J Dinse, M Waehnert, C Tardiff, M Weiss, J Schulz, C Leuze, R Trampel, B Dhital, C Stueber
In the institute: A Mestres-Missé, J Kipping, D Margulies, R Cafiero, HA Jeon, S Kharabian, C Steele, J Böttger
In other institutions: M McAuliffe (NIH), J Prince (JHU), D Pham (CNRM), B Landman (VU), D Reich (NIH), S Resnick (NIH), P Calabresi (JHU), S Mostofski (JHU), S Ying (JHU), B Forstmann (UAmsterdam), P Schönknecht (ULeipzig), A Evans (MNI), S Schwarzkopf (UCL), R Bowtell (UNottingham), G Rees (UCL)
Current download activity on NITRC: CBS-Tools, 77 downloads since June 2012; JIST, 8374 downloads since June 2009
CBS support, tutorials, troubleshooting, extensions: PL Bazin
18. Has it been tested and validated?
Latest results: scan-rescan experiment at 7T
[Figure: CSF / cortex / WM boundary overlays compared between FreeSurfer and the CBS tools]
19. Has it been tested and validated?
Main scientific publications:
● Waehnert, Dinse, Weiss, Streicher, Geyer, Turner & Bazin. Realistic Modelling of Cortical Contours. In prep. for NeuroImage.
● Bazin, Weiss, Dinse, Schäfer, Trampel & Turner. A computational framework for ultra-high resolution cortical analysis at 7T. In prep. for NeuroImage.
● Landman, Bogovic, Carass, Chen, Roy, Shiee, Yang, Kishore, Pham, Bazin, Resnick & Prince. System for Integrated Neuroimaging Analysis and Processing of Structure. Neuroinformatics, 2012.
● Bazin, Ye, Bogovic, Shiee, Reich, Prince & Pham. Direct segmentation of the major white matter tracts in diffusion tensor images. NeuroImage, 2011.
● Shiee, Bazin, Ozturk, Calabresi, Reich & Pham. A Topology-Preserving Approach to the Segmentation of Brain Images with Multiple Sclerosis Lesions. NeuroImage, 2010.
● Carass, Cuzzocreo, Wheeler, Bazin, Resnick & Prince. Simple paradigm for extra-cerebral tissue removal: Algorithm and analysis. NeuroImage, 2011.
● Bazin & Pham. Homeomorphic brain image segmentation with topological and statistical atlases. Medical Image Analysis, 2008.
● Lucas, Bogovic, Carass, Bazin, Prince, Pham & Landman. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software. Neuroinformatics, 2010.
● Tosun, Rettmann, Naiman, Resnick, Kraut & Prince. Cortical Reconstruction Using Implicit Surface Evolution: Accuracy and Precision Analysis. NeuroImage, 2005.
● Han, Pham, Tosun, Rettmann, Xu & Prince. CRUISE: Cortical Reconstruction Using Implicit Surface Evolution. NeuroImage, 2004.
20. My data is at 3T, can I use it too?
● Many JIST tools developed for 3T data
(see Landman et al. Neuroinformatics 2012)
● Most 7T processing tools are 3T compatible
● Encapsulation of external software
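The encapsulation point above can be sketched as a thin wrapper that checks an external tool is installed before handing it work; the tool names passed in are whatever the user's pipeline needs, and this wrapper itself is an illustrative assumption, not JIST's actual encapsulation API:

```python
import shutil
import subprocess

def encapsulate(tool, args):
    """Run an external command-line tool if it is on the PATH; fail with a
    clear error otherwise. Sketch of the encapsulation idea only."""
    path = shutil.which(tool)
    if path is None:
        raise FileNotFoundError(f"external tool '{tool}' not found on PATH")
    # Capture output so the calling pipeline can log or parse it
    return subprocess.run([path, *args], capture_output=True, text=True)
```

A pipeline module built this way can wrap any 3T-oriented external program alongside the native 7T tools.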
21. Any other questions?