Two-Photon Microscopy Vasculature Segmentation
Petteri Teikari, PhD
PhD in Neuroscience
M.Sc Electrical Engineering
https://www.linkedin.com/in/petteriteikari/
Version August 2019
(Cleaned and simplified
in January 2024, see original)
Executive Summary #1/2
Highlighting relevant literature for:
● Automating 3D voxel-level vasculature segmentation, (mainly) for multiphoton vasculature stacks
● Focusing on semi-supervised U-Net-based architectures that can exploit both unlabeled data and costly-to-annotate labeled data
● Making sure that "tricks" for thin-structure preservation, long-range spatial correlations and uncertainty estimation are incorporated
Executive Summary #2/2
The lack of automated robust tools does not go well with large datasets and volumes
● See the Electron Microscopy segmentation community for inspiration; they have even larger stacks to analyze
● The gamified segmentation annotation tool EyeWire has, for example, led to a Nature paper and a slot at the AI: More than Human exhibition at the Barbican
About the Presentation
About the Presentation #1
A "quick intro" to vasculature segmentation using deep learning
● It is assumed that multiphoton (mainly two-photon) techniques are familiar to you, and that you want to know what you could do with your data using more robust "measuring tapes" for your vasculature, i.e. data-driven vascular segmentation
Link coloring for articles, for Github/available code, and for video demos
About the Presentation #2
The aim is to provide "seeds for all sorts of directions" so that the reader implementing this can find new avenues without having to start from scratch.
Especially targeted at people coming from outside medical image segmentation who might have something to contribute and help avoid the "groupthink" of the deep learning community.
It also helps the neuroscientist to have an idea of how to gather data and design experiments that address both neuroscientific questions and "auxiliary methodology" challenges solvable by deep learning. Domain knowledge is still valuable.
About the Presentation #3: Why so lengthy?
If you are puzzled by some slides that are not specifically about "vasculature segmentation", remember that this deck was designed to be "high school project"-friendly, or suitable for tech/computation-savvy neuroscientists who do not necessarily know all the different aspects that could benefit the development of a successful vasculature network, rather than a narrowly focused slideshow.
About the Presentation #4: Textbook definitions?
A lot of the basic concepts are "easily googled" from Stackoverflow/Medium/etc., so the focus here is on recent papers, which are published in overwhelming numbers.
Some ideas picked from these papers might or might not be helpful when thinking of your own project's tech specifications.
About the Presentation #5: "History" of Ideas
In arXiv and peer-reviewed papers, the various approaches taken by a team before their winning idea(s) {"the history of ideas, and all the possible choices you could have made"} are hardly ever discussed in detail. So an attempt at outlining the "possibility space" is made here.
Towards Effective Foraging by Data Scientists to Find Past Analysis Choices
Mary Beth Kery, Bonnie E. John, Patrick O'Flaherty, Amber Horvath, Brad A. Myers. Carnegie Mellon University / Bloomberg L.P., New York
https://doi.org/10.1101/650259 https://github.com/mkery/Verdant
Data scientists are responsible for the analysis decisions they make, but it is hard
for them to track the process by which they achieved a result. Even when data
scientists keep logs, it is onerous to make sense of the resulting large number of
history records full of overlapping variants of code, output, plots, etc. We developed
algorithmic and visualization techniques for notebook code environments to help
data scientists forage for information in their history. To test these interventions,
we conducted a think-aloud evaluation with 15 data scientists, where participants
were asked to find specific information from the history of another person's data
science project. The participants succeeded on a median of 80% of the tasks they
performed. The quantitative results suggest promising aspects of our design, while
qualitative results motivated a number of design improvements. The resulting
system, called Verdant, is released as an open-source extension for JupyterLab.
Summary: "All the stuff" you wish you knew before starting the project, with "seeds" for cross-disciplinary collaboration
The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to be More Effective at Data Analysis
Cynthia Rudin, David Carlson
Electrical and Computer Engineering, and Statistical Science, Duke University / Civil and Environmental Engineering, Biostatistics and Bioinformatics, Electrical and Computer Engineering, and Computer Science, Duke University
(Submitted on 4 Jun 2019) https://arxiv.org/abs/1906.01998
Curated Literature
If you are overwhelmed by all the slides, you could start with these articles
● Haft-Javaherian et al. (2019). Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models. https://doi.org/10.1371/journal.pone.0213539
● Kisuk Lee et al. (2019) Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy https://doi.org/10.1016/j.conb.2019.04.001
● Amy Zhao et al. (2019) Data augmentation using learned transformations for one-shot medical image segmentation https://arxiv.org/abs/1902.09383 https://github.com/xamyzhao/brainstorm (Keras)
● Dai et al. (2019) Deep Reinforcement Learning for Subpixel Neural Tracking https://openreview.net/forum?id=HJxrNvv0JN
● Simon Kohl et al. (2018) A Probabilistic U-Net for Segmentation of Ambiguous Images https://arxiv.org/abs/1806.05034 + follow-up https://arxiv.org/abs/1905.13077 https://github.com/SimonKohl/probabilistic_unet
● Hoel Kervadec et al. (2018) Boundary loss for highly unbalanced segmentation https://arxiv.org/abs/1812.07032 https://github.com/LIVIAETS/surface-loss (PyTorch)
● Jörg Sander et al. (2018) Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI https://doi.org/10.1117/12.2511699
● Hongda Wang et al. (2018) Deep learning achieves super-resolution in fluorescence microscopy http://dx.doi.org/10.1038/s41592-018-0239-0
● Yide Zhang et al. (2019) A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images https://doi.org/10.1117/12.2511699
● Trevor Standley et al. (2019) Which Tasks Should Be Learned Together in Multi-task Learning? https://arxiv.org/abs/1905.07553
What Images are we talking about now when we talk about cerebral vasculature stacks?
Imaging brain vasculature through the skull of a mouse/rat
Microscope set-up at the skull and examples of two-photon microscopy images acquired during live imaging. Both examples show neurons (green) and vasculature (red). The bottom example uses an additional amyloid-targeting dye (blue) in an Alzheimer's disease mouse model. Image credit: Elizabeth Hillman. Licensed under CC-BY-2.0.
http://www.signaltonoisemag.com/allarticles/2018/9/17/dissecting-two-photon-microscopy
Penetration depth depends on the excitation/emission wavelengths, the number of "nonlinear photons", and the animal model
DeFelipe et al. (2011)
http://dx.doi.org/10.3389/fnana.2011.00029
Tischbirek et al. (2015): Cal-590 ... improved our ability to image calcium signals ... down to layers 5 and 6 at depths of up to −900 μm below the pia.
3-PM depth = 601 μm
2-PM depth = 429 μm
Wang et al. (2015)
Better images at deeper penetration
Dye-less vasculature imaging is, in the "deep learning sense", not too different
Third-Harmonic Generation (THG) image of blood vessels in the top layer of the cerebral cortex of a live, anesthetized mouse.
Emission wavelength = 1/3 of the excitation wavelength
Witte et al. (2011)
Optoacoustic ultrasound bio-microscopy
Imaging of skull and brain vasculature (B) was performed by focusing nanosecond laser pulses with a custom-designed gradient index (GRIN) lens and detecting the generated optoacoustic responses with the same transducer used for the US reflection-mode imaging. (C) Irradiation of half of the skull resulted in inhibited angiogenesis in the calvarium microvasculature (blue) of the irradiated hemisphere, but not the non-irradiated one. - prelights.biologists.com (Mariana De Niz) - https://doi.org/10.1101/500017
Third harmonic generation microscopy of cells and tissue organization
http://doi.org/10.1242/jcs.152272
Model this as a cross-vendor or cross-modal problem? You are imaging the "same vasculature", but it looks a bit different with different techniques.
"Cross-Modal" 3D Vasculature Networks would eventually be very nice
Imaging the microarchitecture of the rodent
cerebral vasculature. (A) Wide-field epi-fluorescence
image of a C57Bl/6 mouse brain perfused with a
fluorescein-conjugated gel and extracted from the skull (
Tsai et al, 2009). Pial vessels are visible on the dorsal
surface, although some surface vessels, particularly those
that were immediately contiguous to the sagittal sinus, were
lost during the brain extraction process. (B) Three-dimensional reconstruction of a block of tissue collected
by in vivo two-photon laser scanning microscopy (TPLSM)
from the upper layers of mouse cortex. Penetrating vessels
plunge into the depth of the cortex, bridging flow from
surface vascular networks to capillary beds. (C) In
vivo image of a cortical capillary, 200 μm below the pial
surface, collected using TPLSM through a cranial window
in a rat. The blood serum (green) was labeled by
intravenous injection with fluorescein-dextran conjugate (
Table 2) and astrocytes (red) were labeled by topical
application of SR101 (Nimmerjahn et al, 2004). (D) A
plot of lateral imaging resolution vs. range of depths
accessible for common in vivo blood flow imaging
techniques. The panels to the right show a cartoon of
cortical angioarchitecture for mouse, and cortical layers for
mouse and rat in relation to imaging depth. BOLD fMRI,
blood-oxygenation level-dependent functional magnetic
resonance imaging.
The network learns to disentangle "vesselness" from image formation, i.e. how the vasculature looks when viewed with different modalities.
Compare this to "clinical networks", e.g. Jeffrey De Fauw et al. 2018, that need to handle cross-vendor differences (e.g. different OCT or MRI machines from different vendors produce slightly different images of the same anatomical structures)
Shih et al. (2012) https://dx.doi.org/10.1038%2Fjcbfm.2011.196
e.g. Functional Ultrasound Imaging is faster than typical 2-P microscopes
Alan Urban et al. (2017) Pablo Blinder's lab
https://doi.org/10.1016/j.addr.2017.07.018
Brunner et al. (2018)
https://doi.org/10.1177%2F0271678X18786359
And keep in mind, when going through the slides, the development of "cross-discipline" networks, e.g. 2-PM as "ground truth" for lower-quality modalities such as OCT (OCT angiography for retinal microvasculature) or photoacoustic imaging that are possible in clinical work with humans
Two-photon microscopic imaging of capillary red blood cell flux in mouse brain reveals vulnerability of cerebral white matter to hypoperfusion
Baoqiang Li, Ryo Ohtomo, Martin Thunemann, Stephen R Adams, Jing Yang, Buyin Fu, Mohammad A Yaseen, Chongzhao Ran, Jonathan R Polimeni, David A Boas, Anna Devor, Eng H Lo, Ken Arai, Sava Sakadžić
First Published March 4, 2019 https://doi.org/10.1177%2F0271678X19831016
This imaging system integrates photoacoustic microscopy (PAM),
optical coherence tomography (OCT), optical Doppler tomography
(ODT) and fluorescence microscopy in one platform. - DOI:
10.1117/12.2289211
Simultaneously acquired PAM, FLM, OCT and ODT images of a mouse ear. (a) PA image (average contrast-to-noise ratio 34 dB); (b) OCT B-scan at the location marked in panel (e) by the solid line (displayed dynamic range, 40 dB); (c) ODT B-scan at the location marked in panel (e) by the solid line; (d) FLM image (average contrast-to-noise ratio 14 dB); (e) OCT 2D projection images generated from the acquired 3D OCT datasets; SG: sebaceous glands; bar, 100 μm.
Vasculature Biomarkers
'Traditional' Structural Vascular Biomarkers #1
i.e. you want to analyze changes in vascular morphology in disease, in response to treatment, etc., limited only by the imagination of your in-house biologist,
e.g. artery-vein (AV) ratio, branching angles, number of bifurcations, fractal dimension, tortuosity, vascular length-to-diameter ratio and wall-to-lumen length
FEM mesh of the vasculature displaying arteries, capillaries, and veins.
Gagnon et al. (2015) doi: 10.1523/JNEUROSCI.3555-14.2015
“We created the graphs and performed
image processing using a suite of custom-
designed tools in MATLAB”
Classical vascular analysis reveals a decrease in the
number of junctions and total vessel length following
TBI. (A) An axial AngioTool image where vessels (red)
and junctions (blue) are displayed. Whole cortex and
specific concentric radial ROIs projecting outward from
the injury site (circles 1–3), were analyzed to quantify
vascular alterations. (B) Analysis of the entire cortex demonstrated a significant reduction in both the number of junctions and the total vessel length in TBI animals compared to sham animals. (C) TBI animals also exhibited a significant decline in the number of vascular junctions moving radially outward from the injury site
(ROIs 1 to 3).
Fractal analysis reveals a quantitative reduction in both vascular complexity and frequency in TBI
animals. (A) A binary image of the axial vascular
network of a representative sham animal with
radial ROIs radiating outward from the injury or
sham surgery site (ROI1–3). The right panel
illustrates the complexity changes in the
vasculature from the concentric circles as you
move radially outward from the injury site. These
fractal images are colorized based on the resultant
fractal dimension with a gradient from lower local
fractal dimension (LFD) in red (less complex
network) to higher LFD in purple (more complex
network).
Traumatic brain injury results in acute rarefication of the vascular network.
http://doi.org/10.1038/s41598-017-00161-4
Tortuous Microvessels Contribute to Wound Healing via Sprouting Angiogenesis (2017)
https://doi.org/10.1161/ATVBAHA.117.309993
Multifractal and Lacunarity Analysis of Microvascular Morphology and Remodeling
https://doi.org/10.1111/j.1549-8719.2010.00075.x
see "Fractal and multifractal analysis: a review"
'Traditional' Structural Vascular Biomarkers #2
Scheme illustrating the principle of vascular corrosion casts
Scheme depicting the definition of vascular branchpoints. Each voxel of the vessel center line (black) with more than two
neighboring voxels was defined as a vascular branchpoint. This results in branchpoint degrees (number of vessels joining in
a certain branchpoint) of minimally three. In addition, two branchpoints were considered as a single one if the distance
between them was below 2 mm. Of note, nearly all branchpoints had a degree of 3. Branchpoint degrees of four or even
higher accounted together for far less than 1% of all branchpoints
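The branchpoint definition above (centerline voxels with more than two neighbors) maps directly onto a neighbor count over a binary skeleton. Below is a minimal sketch using SciPy; 6-connectivity is chosen here for simplicity (a real pipeline might use 26-connectivity with pruning, and would also merge nearby branchpoints as described above):

```python
import numpy as np
from scipy import ndimage

def branchpoints(skeleton):
    """Flag skeleton voxels with more than two (6-connected) neighbours
    as branchpoint candidates, i.e. branchpoint degree >= 3."""
    skel = skeleton.astype(np.uint8)
    kernel = ndimage.generate_binary_structure(3, 1).astype(np.uint8)
    kernel[1, 1, 1] = 0                      # do not count the voxel itself
    neighbours = ndimage.convolve(skel, kernel, mode="constant")
    return skeleton.astype(bool) & (neighbours > 2)

# Toy skeleton: a Y-junction of three axis-aligned segments meeting at (2, 2, 2)
skel = np.zeros((5, 5, 5), dtype=bool)
skel[2, 2, :3] = True     # segment along x
skel[2, :3, 2] = True     # segment along y
skel[2, 2, 2:] = True     # segment along x, opposite direction
bp = branchpoints(skel)
print(np.argwhere(bp))    # [[2 2 2]]
```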
Scheme showing the definition of vessel diameter (a), vessel length (a), and vessel tortuosity (b). The segment diameter is defined as the average diameter of all single elements of a segment (a). The segment length is defined as the sum of the lengths of all single elements between two branchpoints. The segment tortuosity is the ratio between the effective distance l_e and the shortest distance l_s between the two branchpoints associated with this segment.
Schematic displaying the parameter extravascular distance, being defined as the shortest distance of any given voxel
in the tissue to the next vessel structure. (b) Color map indicating the extravascular distance in the cortex of a P10 WT
mouse. Each voxel outside a vessel structure is assigned a color to depict its shortest distance to the nearest vessel
structure.
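The extravascular distance described above is exactly a Euclidean distance transform of the background. A minimal sketch with SciPy; the `sampling` argument carries the anisotropic voxel size common in two-photon stacks (the toy mask and spacings are made up):

```python
import numpy as np
from scipy import ndimage

def extravascular_distance(vessel_mask, spacing=(1.0, 1.0, 1.0)):
    """Per-voxel shortest distance to the nearest vessel voxel.
    `spacing` is the (z, y, x) voxel size, e.g. in micrometres."""
    # Distance transform of the background: zero inside vessels,
    # distance-to-nearest-vessel everywhere else.
    return ndimage.distance_transform_edt(~vessel_mask, sampling=spacing)

# Toy example: a single straight vessel along the x-axis
mask = np.zeros((3, 3, 5), dtype=bool)
mask[1, 1, :] = True
evd = extravascular_distance(mask, spacing=(2.0, 1.0, 1.0))  # anisotropic z
print(evd[1, 0, 2])   # one voxel away in y -> 1.0
print(evd[0, 1, 2])   # one voxel away in z -> 2.0
```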
'Traditional' Structural Vascular Biomarkers #3:
In a clinical context, you can see that in certain diseases (vascular pathologies, or whatever pathology X you are interested in), the connectivity of the textbook case might get altered. You then want to quantify this change as a function of disease severity, pharmacological treatment, or other intervention.
Relationship between Variations in the Circle of Willis and Flow Rates in Internal Carotid and Basilar Arteries Determined by Means of Magnetic Resonance Imaging with Semiautomated Lumen Segmentation: Reference Data from 125 Healthy Volunteers
H. Tanaka, N. Fujita, T. Enoki, K. Matsumoto, Y. Watanabe, K. Murase and H. Nakamura. American Journal of Neuroradiology September 2006, 27 (8) 1770-1775;
https://www.ncbi.nlm.nih.gov/pubmed/16971634
'Traditional' Functional Vascular Biomarkers #1
Blood flow-based biomarkers → a spatiotemporal (graph) deep learning model is needed. See the fMRI literature for some inspiration, or poach someone from Uber.
C, Blood flow distribution simulated across the vascular network assuming a global perfusion value of 100 ml/min/100 g. D, Distribution of the partial pressure of oxygen (pO2) simulated across the vascular network using the finite element method model. E, TPM experimental measurements of pO2 in vivo using PtP-C343 dye. F, Quantitative comparison of simulated and experimental pO2 and SO2 distributions across the vascular network for a single animal. Traces represent arterioles and capillaries (red) and venules and capillaries (blue) as a function of the branching order from pial arterioles and venules, respectively.
doi: 10.1523/JNEUROSCI.3555-14.2015
F, Vessel type. G, Spatiotemporal evolution of simulated SO2 changes following forepaw stimulus.
'Traditional' Functional Vascular Biomarkers #2
Time-averaged velocity magnitudes of a measurement region are shown, together with the corresponding skeleton (black line), branch points (white circles), and end points (gray circles). The flow enters the measurement region from the right. Note that a non-linear color scale was used for the velocity magnitude.
Multiple parabolic fits at several locations on the vessel centerline were performed to obtain a single characteristic velocity and diameter for each vessel segment. The time-averaged flow rate is assumed constant throughout the vessel segment. The valid region is bounded by 0.5 and 1.5× the median flow rate, and the red-encircled data points were not incorporated, due to a strongly deviating flow rate. Note that the fitted diameters and flow rates for the two data points on the far right are too large to be visible in the graph.
Quantification of Blood Flow and Topology in Developing Vascular Networks
Astrid Kloosterman, Beerend Hierck, Jerry Westerweel, Christian Poelma
Published: May 13, 2014 https://doi.org/10.1371/journal.pone.0096856
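The parabolic-fit idea can be illustrated with synthetic data: a Poiseuille profile v(r) = v_max(1 − (r/R)²) is linear in r², so an ordinary least-squares fit recovers both the characteristic velocity and the vessel radius. This is a sketch of the principle, not the authors' full PIV pipeline; all values below are invented:

```python
import numpy as np

# Synthetic cross-sectional velocity samples following Poiseuille flow:
# v(r) = v_max * (1 - (r / R)^2), with v_max = 10 mm/s and R = 4 um.
r = np.linspace(-4, 4, 41)
v_true = 10.0 * (1 - (r / 4.0) ** 2)
rng = np.random.default_rng(0)
v = v_true + rng.normal(0, 0.1, r.shape)     # measurement noise

# The profile is linear in r^2: v = intercept + slope * r^2, so a
# degree-1 polynomial fit recovers v_max and the radius R.
slope, intercept = np.polyfit(r ** 2, v, 1)  # slope is negative
v_max = intercept
R = np.sqrt(intercept / -slope)
print(v_max, R)   # close to 10 and 4
```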
Vasculature imaging and video oximetry
Methods for calculating retinal blood vessel oxygen saturation (sO2) by (a) the traditional LSF, and (b) our neural network-based DSL with uncertainty quantification.
Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification
Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi
https://doi.org/10.1101/650259
Traditional approaches for quantifying sO2 often rely on analytical models that are fitted by the spectral measurements. These approaches in practice suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), for oximetry that is highly robust to experimental variations and, more importantly, provides uncertainty quantification for each sO2 prediction.
Two-photon phosphorescence lifetime microscopy of retinal capillary plexus oxygenation in mice
Ikbal Sencan; Tatiana V. Esipova; Mohammad A. Yaseen; Buyin Fu; David A. Boas; Sergei A. Vinogradov; Mahnaz Shahidi; Sava Sakadžić
https://doi.org/10.1117/1.JBO.23.12.126501
Neurovascular Disease Research: the functioning of the "neurovascular unit" (NVU) is of interest
Example of two-photon microscopy (TPM). The TPM provides high-spatial-resolution images such as an angiogram (left, scale bar: 100 μm) and multi-channel images, such as endothelial glycocalyx (green) with blood flow (red, scale bar: 10 μm)
In terms of deep learning, you might think of multimodal/multichannel models and "context-dependent" localization of dye signals
Yoon and Yong Jeong (2019) https://doi.org/10.1007/s12272-019-01128-x
What do we want out of multiphoton vasculature stacks?
- Voxel masks?
- Graph networks?
- Meshes?
Computational hemodynamic analysis requires segmentations with no gaps
Towards a glaucoma risk index based on simulated hemodynamics from fundus images
José Ignacio Orlando, João Barbosa Breda, Karel van Keer, Matthew B. Blaschko, Pablo J. Blanco, Carlos A. Bulant
https://arxiv.org/abs/1805.10273 (revised 27 Jun 2018)
https://ignaciorlando.github.io./
It has been recently observed that glaucoma induces changes in the ocular hemodynamics (
Harris et al. 2013; Abegão Pinto et al. 2016). However, its effects on the functional behavior
of the retinal arterioles have not been studied yet. In this paper we propose a first approach
for characterizing those changes using computational hemodynamics. The retinal blood flow
is simulated using a 0D model for a steady, incompressible non-Newtonian fluid in rigid
domains.
Finally, our MATLAB/C++/python code and the LES-AV database are publicly
released. To the best of our knowledge, our data set is the first in providing not only
the segmentations of the arterio-venous structures but also diagnostics and
clinical parameters at an image level.
(a) Multiscale description of neurovascular coupling in the retina. The model inputs at the Macroscale (A) are the blood pressures at the inlet and outlet of the retinal circulation, Pin and Pout. The Mesoscale (B) focuses on arterioles, whose walls comprise endothelium and smooth muscle cells. The Microscale (C) entails the biochemistry at the cellular level that governs the change in smooth muscle shape. (b)
Voxel → Mesh conversion is "trivial" with a correct segmentation/graph model
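As a baseline for the "trivial" conversion claim: classical (non-learned) marching cubes, e.g. via scikit-image, already turns a clean binary voxel mask into a triangle mesh; the learned variants below matter when the voxel prediction itself is imperfect. A minimal sketch on an invented toy volume:

```python
import numpy as np
from skimage import measure

# Toy binary "vessel": a solid sphere in a 32^3 volume
zz, yy, xx = np.mgrid[:32, :32, :32]
mask = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) < 10 ** 2

# Extract a triangle mesh at the 0.5 iso-surface; `spacing` carries the
# anisotropic voxel size (z, y, x) into world coordinates.
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(2.0, 1.0, 1.0))
print(verts.shape, faces.shape)   # (n_vertices, 3), (n_triangles, 3)
```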
Deep Marching Cubes: Learning Explicit Surface Representations
Yiyi Liao, Simon Donné, Andreas Geiger (2018)
https://avg.is.tue.mpg.de/research_projects/deep-marching-cubes
http://www.cvlibs.net/publications/Liao2018CVPR.pdf
https://www.youtube.com/watch?v=vhrvl9qOSKM
Moreover, we showed that surface-based supervision results in better predictions in case the ground truth 3D model is incomplete. In future work, we plan to adapt our method to higher resolution outputs using octree techniques [Häne et al. 2017; Riegler et al. 2017; Tatarchenko et al. 2017] and integrate our approach with other input modalities
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision
David Stutz, Andreas Geiger (2018)
http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/1708.pdf
Deep-learning-assisted Volume Visualization
Hsueh-Chien Cheng, Antonio Cardone, Somay Jain, Eric Krokos, Kedar Narayan, Sriram Subramaniam, Amitabh Varshney
IEEE Transactions on Visualization and Computer Graphics (2018)
https://doi.org/10.1109/TVCG.2018.2796085
Although modern rendering techniques and hardware can now render volumetric data interactively, we still need a suitable feature space that facilitates natural differentiation of target structures and an intuitive and interactive way of designing visualizations
Motivation
Some scriptability is available for ImageJ in many languages https://imagej.net/Scripting
Imaris had to listen to their customers, but is still closed-source with poor integration to 3rd-party code
ITK: does anyone still use it?
How about "scaling" all your and others' manual work into an automatic solution?
→ data-driven vascular segmentation
'Downstream uncertainty' is reduced with near-perfect voxel segmentation
Influence of image segmentation on one-dimensional fluid dynamics predictions in the mouse pulmonary arteries
Mitchel J. Colebank, L. Mihaela Paun, M. Umar Qureshi, Naomi Chesler, Dirk Husmeier, Mette S. Olufsen, Laura Ellwein Fix
NC State University, University of Glasgow, University of Wisconsin-Madison, Virginia Commonwealth University
(Submitted on 14 Jan 2019) https://arxiv.org/abs/1901.04116
Computational fluid dynamics (CFD) models are emerging as tools
for assisting in diagnostic assessment of cardiovascular disease.
Recent advances in image segmentation have made subject-specific
modelling of the cardiovascular system a feasible task, which is
particularly important in the case of pulmonary hypertension (PH),
which requires a combination of invasive and non-invasive
procedures for diagnosis. Uncertainty in image segmentation
can easily propagate to CFD model predictions, making
uncertainty quantification crucial for subject-specific models.
This study quantifies the variability of one-dimensional (1D) CFD
predictions by propagating the uncertainty of network
geometry and connectivity to blood pressure and flow
predictions. We analyse multiple segmentations of an image of an
excised mouse lung using different pre-segmentation parameters. A
custom algorithm extracts vessel length, vessel radii, and network
connectivity for each segmented pulmonary network. We quantify
uncertainty in geometric features by constructing probability
densities for vessel radius and length, and then sample from these
distributions and propagate uncertainties of haemodynamic
predictions using a 1D CFD model. Results show that variation in
network connectivity is a larger contributor to haemodynamic
uncertainty than vessel radius and length.
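The sensitivity result above is easy to reproduce in miniature: propagate sampled radii and lengths through a Poiseuille resistance, R = 8μL/(πr⁴), as a stand-in for the paper's 1D CFD model. The distribution parameters below are invented for illustration; the point is the r⁻⁴ sensitivity:

```python
import numpy as np

# Monte Carlo propagation of geometric uncertainty through a Poiseuille
# resistance model, R = 8 * mu * L / (pi * r^4), where mu is viscosity.
rng = np.random.default_rng(42)
mu = 3.5e-3                                # blood viscosity, Pa*s
L = rng.normal(500e-6, 10e-6, 100_000)     # vessel length, m (2% sd)
r = rng.normal(10e-6, 0.5e-6, 100_000)     # vessel radius, m (5% sd)

R = 8 * mu * L / (np.pi * r ** 4)
# Because R scales as r^-4, a 5% radius uncertainty yields roughly 20%
# resistance uncertainty, dwarfing the 2% length contribution.
print(R.std() / R.mean())
```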
'Measurement uncertainties' propagate to your deep learning models as well
Arnold et al. (2017) Uncertainty Quantification in a Patient-Specific One-
Dimensional Arterial Network Model: ensemble Kalman filter (EnKF)-Based
Inflow Estimator http://doi.org/10.1115/1.4035918
Marquis et al. (2018) Practical identifiability and uncertainty quantification of
a pulsatile cardiovascular model https://doi.org/10.1016/j.mbs.2018.07.001
Mathematical models are essential tools to study how the cardiovascular system maintains
homeostasis. The utility of such models is limited by the accuracy of their predictions,
which can be determined by uncertainty quantification (UQ). A challenge associated with
the use of UQ is that many published methods assume that the underlying model is
identifiable (e.g. that a one-to-one mapping exists from the parameter space to the model
output).
Păun et al. (2018) MCMC methods for inference in a mathematical model of
pulmonary circulation https://doi.org/10.1111/stan.12132
The Delayed Rejection Adaptive Metropolis (DRAM) algorithm, coupled with constrained non-linear optimization, is successfully used to learn the parameter values and quantify the uncertainty in the parameter estimates
Schiavazzi et al. (2017) A generalized multi-resolution expansion for uncertainty
propagation with application to cardiovascular modeling
https://dx.doi.org/10.1016%2Fj.cma.2016.09.024
A general stochastic system may be characterized by a large number of arbitrarily distributed
and correlated random inputs, and a limited support response with sharp gradients or event
discontinuities. This motivates continued research into novel adaptive algorithms for
uncertainty propagation, particularly those handling high dimensional, arbitrarily distributed
random inputs and non-smooth stochastic responses.
Sankaran and Marsden (2011) A stochastic collocation method for uncertainty
quantification and propagation in cardiovascular simulations.
http://doi.org/10.1115/1.4003259
In this work, we develop a general set of tools to evaluate the sensitivity of output parameters
to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary
conditions, geometrical parameters, or clinical data. These uncertainties result in a range of
possible outputs which are quantified using probability density functions (PDFs).
Tran et al. (2019) Uncertainty quantification of simulated biomechanical stimuli
in coronary artery bypass grafts https://doi.org/10.1016/j.cma.2018.10.024
Prior studies have primarily focused on deterministic evaluations, without reporting variability
in the model parameters due to uncertainty. This study aims to assess confidence in multi-scale predictions of wall shear stress and wall strain while accounting for uncertainty in
peripheral hemodynamics and material properties. Boundary condition distributions are
computed by assimilating uncertain clinical data, while spatial variations of vessel wall stiffness
are obtained through approximation by a random field. We developed a stochastic
submodeling approach to mitigate the computational burden of repeated multi-scale model
evaluations to focus exclusively on the bypass grafts.
Yin et al. (2019) One-dimensional modeling of fractional flow reserve in coronary
artery disease: Uncertainty quantification and Bayesian optimization
https://doi.org/10.1016/j.cma.2019.05.005
The computational cost to perform three-dimensional (3D) simulations has limited the use of
CFD in most clinical settings. This could become more restrictive if one aims to quantify the
uncertainty associated with fractional flow reserve (FFR) calculations due to the uncertainty in
anatomic and physiologic properties as a significant number of 3D simulations is required to
sample a relatively large parametric space. We have developed a predictive probabilistic
model of FFR, which quantifies the uncertainty of the predicted values with significantly lower
computational costs. Based on global sensitivity analysis, we first identify the important
physiologic and anatomic parameters that impact the predictions of FFR
Dendrograms
Used as a symbolic ("grammar") abstraction of neuronal trees
Neuronal branching graphs #1
Explicit representation of a neuron model. (left) The network can be represented as a graph structure, where nodes are end points and branch points. Each fiber is represented by a single edge. (right) The same network is shown with several common errors introduced.
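The graph representation in the caption (nodes = end/branch points, edges = fibers or vessel segments) is straightforward to encode. A toy sketch with NetworkX; all node names, coordinates, and segment attributes are invented:

```python
import networkx as nx

# Toy vascular graph: nodes are branch/end points (with 3-D coordinates),
# edges are vessel segments carrying length and mean-radius attributes.
G = nx.Graph()
G.add_node("A", pos=(0, 0, 0))      # end point
G.add_node("B", pos=(0, 0, 50))     # branch point
G.add_node("C", pos=(0, 40, 80))    # end point
G.add_node("D", pos=(0, -40, 80))   # end point
G.add_edge("A", "B", length=50.0, radius=5.0)
G.add_edge("B", "C", length=55.0, radius=3.0)
G.add_edge("B", "D", length=55.0, radius=3.0)

# Classical biomarkers fall out of the graph structure directly:
branch_points = [n for n, d in G.degree() if d >= 3]
total_length = sum(l for _, _, l in G.edges.data("length"))
print(branch_points, total_length)   # ['B'] 160.0
```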
Dendrograms
Representation of brain vasculature using
circular dendrograms
A Method for the Symbolic Representation of Neurons
Maraver et al. (2018) https://doi.org/10.3389/fnana.2018.00106
NetMets: Software for quantifying and visualizing errors in biological network
segmentation Mayerich et al. (2012) http://doi.org/10.1186/1471-2105-13-S8-S7
Neuronal branching graphs #2
Topological characterization of neuronal arbor morphology via sequence representation: I - motif analysis. Todd A Gillette and Giorgio A Ascoli. BMC Bioinformatics 2015 https://doi.org/10.1186/s12859-015-0604-2. A "grammar model" for deep learning?
Tree size and complexity. a. Complexity of trees is
limited by tree size. Here are shown the set of possible
tree shapes for trees with 1 to 6 bifurcations. Additionally,
the number of T nodes (red dots in sample trees) is
always 1 more than A nodes (green dots). Thus, size and
number or percent of C nodes (yellow dots) fully captures
node-type statistics.
Neuronal branching graphs #3
NeurphologyJ: An automatic neuronal morphology quantification method and its application in pharmacological discovery. Shinn-Ying Ho et al. BMC Bioinformatics 2011, 12:230 https://doi.org/10.1186/1471-2105-12-230
The image enhancement process of NeurphologyJ does not remove thin and dim neurites. Shown here is an example image of mouse hippocampal neurons analyzed by NeurphologyJ. Notice that both thick neurites and thin/dim neurites (arrowheads) are preserved after the image enhancement process. The scale bar represents 50 μm.
Neurite complexity can be deduced from the neurite attachment point and ending point. Examples of neurons with different levels of neurite complexity are shown.
Neuronal Circuit Tracing: Similar to our challenges #1
Flexible Learning-Free Segmentation and Reconstruction for Sparse Neuronal Circuit Tracing
Ali Shahbazi, Jeffery Kinnison, Rafael Vescovi, Ming Du, Robert Hill, Maximilian Joesch, Marc Takeno, Hongkui Zeng, Nuno Macarico da Costa, Jaime Grutzendler, Narayanan Kasthuri, Walter J. Scheirer
July 06, 2018 https://doi.org/10.1101/278515
FLoRIN reconstructions of the Standard Rodent Brain (SRB) (top) and APEX2-labeled Rodent Brain sample (ALRB) (bottom) µCT X-ray volumes. (A) Within the SRB volume, cells and vasculature are visually distinct in the raw images, with vasculature appearing darker than cells. (B) Individual structures may be extremely close (such as the cells and vasculature in this example), making reconstruction efforts prone to merge errors.
Neuronal Circuit Tracing: similar to our challenges #2
Dense neuronal reconstruction through X-ray holographic nano-tomography
Alexandra Pacureanu, Jasper Maniates-Selvin, Aaron T. Kuan, Logan A. Thomas, Chiao-Lin Chen, Peter Cloetens, Wei-Chung Allen Lee
May 30, 2019. https://doi.org/10.1101/653188
3D U-Net everywhere
How to “deploy” to the scientific workflow then?
Deep learning is cool, but does that translate to productivity gains?
Think in terms of systems
The machine learning model is just a part of all this in your labs
A ton of stacks just sitting on your hard drives
Takes a lot of work to annotate the vasculature voxel-by-voxel
“AI”
buzzword
MODEL
The following slides will showcase various ways of how this buzz has been done “in practice”
A spoiler: We would like to have a semi-supervised model.
doi: 10.1038/s41592-018-0115-y
We want to predict the vessel / non-vessel mask* for each voxel
* (i.e. foreground-background, binary segmentation)
Practical System Parts
Highlighted later on as well: Active Learning
A ton of stacks just sitting on your hard drives
Takes a lot of work to annotate the vasculature voxel-by-voxel
“AI”
buzzword
MODEL
doi: 10.1038/s41592-018-0115-y
You would like to keep researchers in the loop with the system and make it better as you do more experiments and acquire more data.
But you have so many stacks on your hard drive: how do you select the stacks/slices to label in order to improve the model the most?
Check the Active Learning slides later
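A minimal sketch of what such an acquisition step could look like: rank unlabeled stacks by the mean voxelwise predictive entropy of the current model and send the top-k to the annotator. Everything here (the function name, the dict-of-arrays interface) is illustrative, not a prescribed API:

```python
import numpy as np

def rank_stacks_by_uncertainty(prob_maps, k=2):
    """Simple active-learning acquisition: pick the k stacks whose voxelwise
    vessel probabilities are least confident (highest mean binary entropy).
    prob_maps: dict {stack_id: ndarray of foreground probabilities in [0, 1]}."""
    eps = 1e-7
    scores = {}
    for sid, p in prob_maps.items():
        p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
        entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
        scores[sid] = float(entropy.mean())
    # most uncertain stacks first
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In practice the score could also come from Monte Carlo dropout or an ensemble, which ties into the uncertainty-estimation theme of this deck.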
Practical System Parts
Highlighted later on as well: Proofreading
We want to predict the vessel / non-vessel mask* for each voxel
* (i.e. foreground-background, binary segmentation)
“AI”
buzzword
MODEL
Your segmentation model will 100% make some erroneous predictions, and you would like to “show the errors” to the system so it can learn from them and predict better next time
Proofreading
Labelling
Thinking in terms of a product
If you would like to release this all as an open-source software/toolbox or as a spin-off startup, instead of just sharing your network on Github
“AI”
buzzword
MODEL
Active Learning
The Final Mask
You could now expose APIs to the
parts needed, and get a modular
system where you can focus on
segmentation and maybe your
collaborators are really into building
good front-ends for proofreading
and labelling?
Annotated data as the bottleneck
Even with the semi-supervised approach, you will most likely never face a situation where you have too many volumes with vasculature ground truths
Thus
The faster and more intuitive your proofreader / annotator / labelling tool is, the faster you can make progress with your model performance.
→ UX Matters
UX as in User Experience, as most likely your professor has never used this word.
https://hackernoon.com/why-ux-design-must-be-the-foundation-of-your-software-product-f66e431cc7b4
‘Steal ideas’ for nice-to-use systems around you
Voxeleron Orion Workflow Advantages
https://www.voxeleron.com/orion-workflow-advantages/
Click on your
inliers/outliers
interactively and
Orion updates the
spline fittings for
you in real-time
Polygon-RNN
https://youtu.be/S1UUR4FlJ84
https://github.com/topics/annotation-tool
● wkentaro/labelme
● Labelbox/Labelbox
● microsoft/VoTT
● opencv/cvat
Make biology a game: gamification ticked off from the buzzword bingo here
https://doi.org/10.1016/j.chb.2016.12.074
Eyewire Elite Gameplay |
https://eyewire.org/explore
https://phys.org/news/2019-06-video-gamers-brand-proteins.html
The slide set to follow will allow multiple ways to solve the segmentation challenge, as well as to start building the “product” in modules (“ablation study friendly”), so no need to try to make it all at once... necessarily
Bio/neuroscientists can have a look at this classic:
Can a biologist fix a radio?—Or, what I learned while studying apoptosis
https://doi.org/10.1016/S1535-6108(02)00133-2 - Cited by 371
Integrate to something and exploit the existing open-source code
USID and Pycroscopy -- Open frameworks for storing and analyzing spectroscopic and imaging data
Suhas Somnath, Chris R. Smith, Nouamane Laanait, Rama K. Vasudevan, Anton Ievlev, Alex Belianinov, Andrew R. Lupini, Mallikarjun Shankar, Sergei V. Kalinin, Stephen Jesse (Oak Ridge National Laboratory)
(Submitted on 22 Mar 2019)
https://arxiv.org/abs/1903.09515
https://www.youtube.com/channel/UCyh-7XlL-BuymJD7vdoNOvw
pycroscopy
https://pycroscopy.github.io/pycroscopy/about.html
pycroscopy is a python package for image processing and scientific
analysis of imaging modalities such as multi-frequency scanning probe
microscopy, scanning tunneling spectroscopy, x-ray diffraction
microscopy, and transmission electron microscopy.
pycroscopy uses a data-centric model wherein the raw data collected
from the microscope, results from analysis and processing routines are
all written to standardized hierarchical data format (HDF5) files for
traceability, reproducibility, and provenance.
OME
https://www.openmicroscopy.org/
Har-Gil, H., Golgher, L., Israel, S., Kain, D., Cheshnovsky, O., Parnas, M., & Blinder, P.
(2018). PySight: plug and play photon counting for fast continuous volumetric
intravital microscopy. Optica, 5(9), 1104-1112. https://doi.org/10.1364/OPTICA.5.001104
Integrate to something and exploit the existing open-source code
VMTK
http://www.vmtk.org/ by Orobix
https://github.com/vmtk/vmtk Python3
VMTK ADD-ON FOR BLENDER (November 13, 2017). EPFL has developed an add-on for Blender that loads centerlines generated by VMTK into Blender, and writes meshes from Blender.
http://www.vmtk.org/tutorials/
You could for example improve the segmentation to be used with VMTK; let VMTK/Blender still visualize the stacks
For example, we could start by doing this the “deep learning” way outlined in this slideshow
If you feel that this does not really work well for your needs, you can work on this, or ask for improvements from the Orobix team
Blender integration with meshes?
BioBlender is a software package built on the open-source 3D modeling software Blender.
BioBlender is the result of a collaboration, driven by the SciVis group at the CNR in Pisa (Italy),
between scientists of different disciplines (biology, chemistry, physics, computer sciences) and
artists, using Blender in a rigorous but at the same time creative way. http://www.bioblender.org/
https://github.com/mcellteam/cellblender
https://github.com/NeuroMorph-EPFL/NeuroMorph/tree/master/NeuroMorph_CenterLines_CrossSections
Processes center lines generated by the Vascular Modeling Toolkit (VMTK), performs calculations in Blender using these center lines. Includes tools to clean meshes, export meshes to VMTK, and import center lines generated by VMTK. Also includes tools to generate cross-sectional surfaces, calculate surface areas of the mesh along the center line, and project spherical objects (such as vesicles) or surface areas onto the center line. Tools are also provided for detecting bouton swellings. Data can be exported for analysis.
How do our “deep learning” demands affect “The Neuroscientific Experiment” design?
Fluorescence Microscopy: networks exist for “smaller blobs”
DeepFLaSH, a deep learning pipeline for segmentation of fluorescent labels in microscopy images
Dennis Segebarth et al. November 2018
https://doi.org/10.1101/473199
Here we present and evaluate DeepFLaSH, a unique deep learning pipeline to automatize the segmentation of fluorescent labels in microscopy images. The pipeline allows training and validation of label-specific convolutional neural network (CNN) models that can be uploaded to an open-source CNN-model library. As there is no ground truth for fluorescent signal segmentation tasks, we evaluated the CNN with respect to inter-coding reliability. Similarity analysis showed that CNN-predictions highly correlated with segmentations by human experts.
DeepFLaSH runs as a guided, hassle-free open-source tool on a cloud-based virtual notebook (Google Colab, http://colab.research.google.com, in a Jupyter Notebook) with free access to high computing power and requires no machine learning expertise.
Label-specific CNN-models, validated on the basis of inter-coding approaches, may become a new benchmark for feature segmentation in neuroscience. These models will allow transferring expert performance in image feature analysis from one lab to any other. Deep segmentation can better interpret feature-to-noise borders, can work on the whole dynamic range of bit-values and exhibits consistent performance. This should increase both objectivity and reproducibility of image feature analysis. DeepFLaSH is suited to create CNN-models for high-throughput microscopy techniques and allows automatic analysis of large image datasets with expert-like performance and at super-human speed.
With a nice notebook deployment example
Vasculature Networks: multimodal, i.e. “multi-dye”; 3D CNNs if possible
HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation
Jose Dolz
https://arxiv.org/abs/1804.02967 (9 April 2018)
https://www.github.com/josedolz/HyperDenseNet
We propose HyperDenseNet, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems [MRI Modalities: MR-T1, PD MR-T2, FLAIR]. Each imaging modality has a path, and dense connections occur not only between the pairs of layers within the same path, but also between those across different paths.
A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy
Arash Dadkhah; Jun Zhou; Nusrat Yeasmin; Shuliang Jiao
https://sci-hub.tw/https://doi.org/10.1117/12.2289211 (2018)
Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional and molecular information of living tissue in micrometer scale.
An artery-specific fluorescent dye for studying neurovascular coupling
Zhiming Shen, Zhongyang Lu, Pratik Y Chhatbar, Philip O’Herron, and Prakash Kara
https://dx.doi.org/10.1038%2Fnmeth.1857 (2012)
Astrocytes are intimately linked to the function of the inner retinal vasculature. A flat-mounted retina labelled for astrocytes (green) and retinal vasculature (pink). - from Prof Erica Fletcher
Multimodal segmentation: glial cells, Aβ fibrils, etc. provide ‘context’ for vasculature and vice versa
Diffuse and vascular Aβ deposits induce astrocyte endfeet retraction and swelling in TG arcAβ mice, starting at early-stage pathology. Triple-stained for GFAP, laminin and Aβ/APP. https://doi.org/10.1007/s00401-011-0834-y
In vivo imaging of the neurovascular unit in Stroke, Multiple Sclerosis (MS) and Alzheimer’s Disease.
In vivo imaging of the neurovascular unit in CNS disease
https://www.researchgate.net/publication/265418103_In_vivo_imaging_of_the_neurovascular_unit_in_CNS_disease
Neurovascular Unit (NVU): astrocyte / neuron / vasculature interplay
(A) Immunostaining depiction of components of the neurovascular unit (NVU). The astrocytes (stained with rhodamine labeled GFAP) shown in red. The neurons are stained with fluorescein tagged NSE shown in green and the blood vessels are stained with PECAM shown in blue. Note the location of the foot processes around the vasculature.
(B) Histochemical localization of β-galactosidase expression in rat brain following lateral ventricular infusion of Ad5/CMV-β-galactosidase (magnification × 1000). Note staining of astrocytes and astrocytic foot processes surrounding blood vessel emulating the exploded section of the immunostained brain slice
Schematic representation of a neurovascular unit with astrocytes being the central processor of neuronal signals, as depicted in both panels A and B.
Harder et al. (2018) Regulation of Cerebral Blood Flow: Response to Cytochrome P450 Lipid Metabolites
http://doi.org/10.1002/cphy.c170025
NVU: examples of dyes/labels involved #1
CALCIUM: OGB-1 (neuronal Ca2+)
ASTROCYTE: SR-101 (astrocytic Ca2+)
ARTERY: Alexa Fluor 633 or FITC/Texas Red (vessel diameter)
Neuron (OGB-1) and arteriole response (Alexa Fluor 633) to drifting grating in cat visual cortex.
https://dx.doi.org/10.1038%2Fnmeth.1857
Low-intensity afferent neural activity caused vasodilation in the absence of astrocyte Ca2+ transients.
https://dx.doi.org/10.1038%2Fjcbfm.2015.141
Astrocytes trigger rapid vasodilation following photolysis of caged Ca2+.
https://dx.doi.org/10.3389%2Ffnins.2014.00103
NVU: “Physical Cheating” for artery-vein classification
https://doi.org/10.1016/j.rmed.2013.02.004
https://doi.org/10.1182/blood-2018-01-824185
https://doi.org/10.1364/BOE.9.002056
http://doi.org/10.5772/intechopen.80888
Traces of relative Hb and HbO2 concentrations for a human subject during three consecutive cycles of cuff inflation and deflation.
http://doi.org/10.1063/1.3398450
sO2 and blood flow on four orders of artery-vein pairs
http://doi.org/10.1117/1.3594786
NVU: Oxygen Probes for Multiphoton Microscopy
Examples of in vivo two-photon PLIM oxygen sensing of platinum porphyrin-coumarin-343. a Maximum intensity projection image montage of a blood vessel entering the bone marrow (BM) from the bone. Bone (blue) and blood vessels (yellow) are delineated with collagen second harmonic generation signal and Rhodamine B-dextran fluorescence, respectively. b Measurement of pO2 in cortical microvasculature. Left: measured pO2 values in microvasculature at various depths (colored dots), overlaid on the maximum intensity projection image of vasculature structure (grayscale). Right: composite image showing a projection of the imaged vasculature stack. Red arrows mark pO2 measurement locations in the capillary vessels at 240 μm depth. Orange arrows point to the consecutive branches of the vascular tree, from pial arteriole (bottom left arrow) to the capillary and then to the connection with ascending venule (top right arrow). Scale bars: 200 μm.
Chelushkin and Tunik (2019) 10.1007/978-3-030-05974-3_6
Devor et al. (2012) Frontiers in optical imaging of cerebral blood flow and metabolism
http://doi.org/10.1038/jcbfm.2011.195
Optical imaging of oxygen availability and metabolism. (A) Two-photon partial pressure of oxygen (pO2) imaging in cerebral tissue. Each plot shows baseline pO2 as a function of the radial distance from the center of the blood vessel—diving arteriole (left) or surfacing venule (right)—for a specific cortical depth range
Dye Engineering: a field of its own, and check for new exciting dyes
Bright AIEgen–Protein Hybrid Nanocomposite for Deep and High-Resolution In Vivo Two-Photon Brain Imaging
Shaowei Wang, Fang Hu, Yutong Pan, Lai Guan Ng, Bin Liu
Department of Chemical and Biomolecular Engineering, National University of Singapore
Advanced Functional Materials, 24 May 2019. https://doi.org/10.1002/adfm.201902717
NIR-II Excitable Conjugated Polymer Dots with Bright NIR-I Emission for Deep In Vivo Two-Photon Brain Imaging Through Intact Skull
Shaowei Wang, Jie Liu, Guangxue Feng, Lai Guan Ng, Bin Liu
Department of Chemical and Biomolecular Engineering, National University of Singapore
Advanced Functional Materials, 21 January 2019. https://doi.org/10.1002/adfm.201808365
When Quantum Dots get old, enter Polymer Dots
In vivo vascular imaging in mice after labelling with polymer dots (CNPPV, PFBT, PFPV), fluorescein and QD605 semiconductor quantum dots; scale bars = 100 µm. (Biomed. Opt. Express 10.1364/BOE.10.000584, University of Texas, Ahmed M. Hassan et al. (2019))
https://physicsworld.com/a/polymer-dots-image-deep-into-the-brain/
Furthermore, we justify the use of pdots over conventional fluorophores for multiphoton imaging experiments in the 800–900 nm excitation range due to their increased brightness relative to quantum dots, organic dyes, and fluorescent proteins.
An important caveat to consider, however, is that pdots were delivered intravenously in our studies, and labeling neural structures located in high-density extravascular brain tissue could pose a challenge due to the relatively large diameters of pdots (~20-30 nm). Recent efforts have produced pdot nanoparticles with sub-5 nm diameters, yet the yield from these preparations is still quite low
What if you have the ‘dye labels’ from different experiments
And you would like to combine them into the training of a single network?
Learning with Multitask Adversaries using Weakly Labelled Data for Semantic Segmentation in Retinal Images
Oindrila Saha, Rachana Sathish, Debdoot Sheet
13 Dec 2018 (modified: 15 Apr 2019)
https://openreview.net/forum?id=HJe6f0BexN
In case of retinal images, data-driven learning-based algorithms have been developed for segmenting anatomical landmarks like vessels and optic disc as well as pathologies like microaneurysms, hemorrhages, hard exudates and soft exudates.
The aspiration is to learn to segment all such classes using only a single fully
convolutional neural network (FCN), while the challenge being that there is
no single training dataset with all classes annotated. We solve this problem
by training a single network using separate weakly labelled datasets.
Essentially we use an adversarial learning approach in addition to the
classically employed objective of distortion loss minimization for semantic
segmentation using FCN, where the objectives of discriminators are to
learn to (a) predict which of the classes are actually present in the input
fundus image, and (b) distinguish between manual annotations vs.
segmented results for each of the classes.
The first discriminator works to enforce the network to segment those
classes which are present in the fundus image although may not have been
annotated i.e. all retinal images have vessels while pathology datasets may
not have annotated them in the dataset. The second discriminator
contributes to making the segmentation result as realistic as possible. We
experimentally demonstrate using weakly labelled datasets of DRIVE
containing only annotations of vessels and IDRiD containing annotations for
lesions and optic disc.
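The core trick above (training one network from datasets that each annotate only some classes) can be sketched as a per-class loss that is simply not counted for classes a given dataset does not label. This is a simplified stand-in for the paper's adversarial objectives, and all names here are hypothetical:

```python
import numpy as np

def masked_class_loss(pred, target, annotated):
    """Binary cross-entropy per class, summed only over classes that the
    source dataset actually annotates (e.g. DRIVE: vessels only; IDRiD:
    lesions and optic disc). pred/target: (C, H, W) arrays; annotated:
    length-C boolean list marking which classes this dataset labels."""
    eps = 1e-7
    p = np.clip(np.asarray(pred, dtype=float), eps, 1.0 - eps)
    t = np.asarray(target, dtype=float)
    bce = -(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))    # (C, H, W)
    per_class = bce.reshape(bce.shape[0], -1).mean(axis=1)  # mean over pixels
    # unannotated classes contribute nothing to the gradient signal
    return float(per_class[np.asarray(annotated, dtype=bool)].sum())
```

A multi-dye two-photon analogue would mask per-channel losses the same way when different stacks carry different label sets.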
2D Vasculature networks exist, benchmarked mainly on retinal microvasculature datasets
Overview of the Methods
blood vessels as a special example of curvilinear structure object segmentation
Blood vessel segmentation algorithms — Review of methods, datasets and evaluation metrics
Sara Moccia, Elena De Momi, Sara El Hadji, Leonardo S. Mattos
Computer Methods and Programs in Biomedicine, May 2018
https://doi.org/10.1016/j.cmpb.2018.02.001
No single segmentation approach is suitable for all the different anatomical regions or imaging modalities, thus the primary goal of this review was to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms so that the most suitable methods can be chosen according to the specific task.
U-Net: you will see this repeated many times
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer, Thomas Brox
(Submitted on 18 May 2015)
https://arxiv.org/abs/1505.04597
Cited by 77,660
U-Net: deep learning for cell counting, detection, and morphometry
Thorsten Falk et al. (2019)
Nature Methods 16, 67–70 (2019)
https://doi.org/10.1038/s41592-018-0261-2
Cited by 1,496
The ‘vanilla U-Net’ is typically the baseline to beat in many articles, and its modified version is being proposed as the novel state-of-the-art network
https://towardsdatascience.com/u-net-b229b32b4a71
The architecture looks like a ‘U’ which justifies its name. This architecture consists of three sections: the contraction (encoder, downsampling part), the bottleneck, and the expansion (decoder, upsampling part) section.
[Diagram labels: contraction / encoder / downsampling → BOTTLENECK → expansion / decoder / upsampling, with skip connections]
U-Net 2D Example
[Figure: image size vs. number of feature maps through the network. ENCODER: 4x downsampling “stages” with 2x2 max pooling take the 572 x 572 px input down to 32 x 32 px; DECODER: 4x upsampling “stages”. Skip connections pass the encoder-stage filter outputs (activation maps) to the decoder: 1st stage to the final 4th decoder stage, 2nd stage to the 3rd decoder stage, 3rd to the 2nd, 4th to the 1st.]
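The 572 px to 32 px contraction follows directly from the original U-Net's arithmetic: each encoder stage applies two unpadded 3x3 convolutions (each trims 2 px) and then a 2x2 max pool (halves the size). A quick sanity check, with an illustrative function name:

```python
def unet_encoder_sizes(size, stages=4):
    """Spatial size entering each U-Net encoder stage (and finally the
    bottleneck), assuming two 'valid' 3x3 convs then 2x2 max pooling per
    stage, as in Ronneberger et al. (2015)."""
    sizes = [size]
    for _ in range(stages):
        size = (size - 4) // 2   # two 3x3 valid convs: -4 px; 2x2 pool: //2
        sizes.append(size)
    return sizes

print(unet_encoder_sizes(572))  # [572, 284, 140, 68, 32]
```

This matches the 572 x 572 px input and 32 x 32 px bottleneck in the figure; padded ('same') convolutions, as used in many modern U-Net variants, would change only the -4 term.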
2D retinal vasculature: 2D U-Net as the “baseline”
Retina blood vessel segmentation with a convolutional neural network (U-net)
orobix/retina-unet (Keras)
http://vmtklab.orobix.com/
https://orobix.com/ as in the company from Italy behind the VMTKLab
Joint segmentation and vascular reconstruction
Marry CNNs with graph (non-Euclidean) CNNs, “grammar models” or something even better
Deep Vessel Segmentation By Learning Graphical Connectivity
Seung Yeon Shin, Soochahn Lee, Il Dong Yun, Kyoung Mu Lee
https://arxiv.org/abs/1806.02279 (Submitted on 6 Jun 2018)
We incorporate a graph convolutional network into a unified CNN architecture, where the final segmentation is inferred by combining the different types of features. The proposed method can be applied to expand any type of CNN-based vessel segmentation method to enhance the performance.
Learning about the strong relationship that exists between neighborhoods is not guaranteed in existing CNN-based vessel segmentation methods. The proposed vessel graph network (VGN) utilizes a GCN together with a CNN to address this issue.
Overall network architecture of VGN comprising the CNN, graph convolutional network, and inference modules.
“Grammar” as in: if you know how molecules are composed (e.g. the SMILES model), you can constrain the model to have only physically possible connections. Well, we do not exactly have that luxury, and we need to learn the graph constraints from data (but have no annotations at the moment for edge nodes)
Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
http://doi.org/10.1021/acscentsci.7b00572 (some authors from Toronto, including David Duvenaud)
“Grammar models” possible to a certain extent
Remember that healthy and pathological vasculature might be “quite different” (highly quantitative term)
Mitchell G. Newberry et al. Self-Similar Processes Follow a Power Law in Discrete Logarithmic Space, Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.122.158303
Although blood vessels also branch dichotomously, random asymmetry in branching disperses vessel diameters from any specific ratios. On a database of 1569 blood vessel radii measured from a single mouse lung, αc and αd produced statistically indistinguishable estimates (Table I), independent of the chosen λ, and are therefore both likely accurate. The mutual consistency between the estimators suggests that the distribution of blood vessel measurements is effectively scale invariant despite the underlying branching.
Quantitating the Subtleties of Microglial Morphology with Fractal Analysis
Frontiers in Cellular Neuroscience 7(3):3
http://doi.org/10.3389/fncel.2013.00003
Grammar is, as you can guess, used in language modeling
Kim Martineau | MIT Quest for Intelligence, May 29, 2019
http://news.mit.edu/2019/teaching-language-models-grammar-makes-them-smarter-0529
Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State
Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy
(Submitted on 8 Mar 2019) https://arxiv.org/abs/1903.03260
We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state. To do so, we examine model behavior on artificial sentences containing a variety of syntactically complex structures. We find evidence that the LSTMs trained on large datasets represent syntactic state over large spans of text in a way that is comparable to the Recurrent Neural Network Grammars (RNNG, Dyer et al. 2016, Cited by 157), while the LSTM trained on the small dataset does not or does so only weakly.
Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy
(Submitted on 3 Mar 2019) https://arxiv.org/abs/1903.00943
Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models versus two models that represent hierarchical structure and deploy it in left-to-right processing: Recurrent Neural Network Grammars (RNNGs) (Dyer et al. 2016, Cited by 157) and an incrementalized version of the Parsing-as-Language-Modeling configuration from Charniak et al. (2016). Structural supervision thus provides data efficiency advantages over purely string-based training of neural language models in acquiring human-like generalizations about non-local grammatical dependencies.
Vascular Tree Branching Statistics: constrain with a “Grammar model”? #1
Intraspecific scaling laws of vascular trees
Yunlong Huo and Ghassan S. Kassab
Journal of the Royal Society Interface
Published: 15 June 2011
https://doi.org/10.1098/rsif.2011.0270 - Cited by 87
A fundamental physics-based derivation of
intraspecific scaling laws of vascular trees has
not been previously realized. Here, we provide such a
theoretical derivation for the volume–diameter
and flow–length scaling laws of intraspecific vascular
trees. In conjunction with the minimum energy
hypothesis, this formulation also results in
diameter–length, flow–diameter and flow–volume
scaling laws.
The intraspecific scaling predicts the volume–
diameter power relation with a theoretical exponent
of 3, which is validated by the experimental
measurements for the three major coronary
arterial trees in swine. This scaling law as well as
others agrees very well with the measured
morphometric data of vascular trees in various other
organs and species. This study is fundamental to
the understanding of morphological and
haemodynamic features in a biological vascular
tree and has implications for vascular disease.
Relation between normalized stem diameter (Ds/(Ds)max) and normalized crown volume (Vc/(Vc)max) for vascular trees of various organs and species corresponding to those trees in table 1. The solid line represents the least-squares fit of all the experimental measurements (exponent of 2.91, r2 = 0.966).
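Such scaling exponents are typically estimated as the slope of a least-squares fit in log-log space; a minimal NumPy sketch (the function name and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def powerlaw_exponent(diameters, volumes):
    """Estimate b in V ∝ D**b as the slope of log(V) vs log(D)."""
    slope, _intercept = np.polyfit(np.log(diameters), np.log(volumes), 1)
    return slope

# synthetic check: data generated with V = 2 * D**3 recovers exponent 3
d = np.linspace(1.0, 10.0, 50)
print(powerlaw_exponent(d, 2.0 * d ** 3))  # ≈ 3.0
```

On real crown-volume vs. stem-diameter measurements this would play the role of the 2.91 fit above; real data additionally needs care about measurement noise and the bias of fitting power laws in log space.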
Vascular Tree Branching Statistics: constrain with a “Grammar model”? #2
Branching Pattern of the Cerebral Arterial Tree
Jasper H. G. Helthuis, Tristan P. C. van Doormaal, Berend Hillen, Ronald L. A. W. Bleys, Anita A. Harteveld, Jeroen Hendrikse, Annette van der Toorn, Mariana Brozici, Jaco J. M. Zwanenburg, Albert van der Zwan
The Anatomical Record (17 October 2018)
https://doi.org/10.1002/ar.23994
Quantitative data on branching patterns of the human cerebral arterial tree are lacking in the 1.0–0.1 mm radius range. We aimed to collect quantitative data in this range, and to study if the cerebral artery tree complies with the principle of minimal work (Law of Murray).
Data showed a large variation in branching pattern parameters (asymmetry ratio, area ratio, length-radius ratio, tapering). Part of the variation may be explained by the variation in measurement techniques, number of measurements and location of measurement in the vascular tree. This study confirms that the cerebral arterial tree complies with the principle of minimum work. These data are essential in the future development of more accurate mathematical blood flow models.
Relative frequencies of (A) asymmetry ratio, (B) area ratio, (C) length-to-radius ratio, (D) tapering.
Branch-based functional measures?
Changsi Cai et al. (2018) Stimulation-induced increases in cerebral blood flow and local capillary vasoconstriction depend on conducted vascular responses
https://doi.org/10.1073/pnas.1707702115
Functional vessel dilation in the mouse barrel cortex. (A) A two-photon image of the barrel cortex of a NG2-DsRed mouse at 150 µm depth. The p.a.s branch out a capillary horizontally (∼first order). Further branches are defined as second- and third-order capillaries. Pericytes are labeled with a red fluorophore (NG2-DsRed) and the vessel lumen with FITC-dextran (green). ROIs are placed across the vessel to allow measurement of the vessel diameter (colored bars). (Scale bar: 10 µm.)
Changsi Cai et al. (2018) Stimulation-induced increases in cerebral blood flow and local capillary vasoconstriction depend on conducted vascular responses
https://doi.org/10.1073/pnas.1707702115
Measurement of blood vessel diameter and red blood cell (RBC) flux in the retina. A, Confocal image of a whole-mount retina labeled for the blood vessel marker isolectin (blue), the contractile protein α-SMA (red), and the pericyte marker NG2 (green). Blood vessel order in the superficial vascular layer is indicated. First-order arterioles (1) branch from the central retinal artery. Each subsequent branch (2-5) has a higher order. Venules (V) connect with the central retinal vein. Scale bar, 100 μm.
2D retinal vasculature: datasets available
Highlights also how the availability of the freely-available databases DRIVE and STARE with a lot of annotations led to a lot of methodological papers from “non-retina” researchers
De et al. (2016) A Graph-Theoretical Approach for Tracing Filamentary Structures in Neuronal and Retinal Images https://dx.doi.org/10.1109/TMI.2015.2465962
2D Microvasculature CNNs with Graphs
Towards End-to-End Image-to-Tree for Vasculature Modeling
Manish Sharma, Matthew C.H. Lee, James Batten, Michiel Schaap, Ben Glocker
Google, Imperial College, Heartflow
MIDL 2019 Conference https://openreview.net/forum?id=ByxVpY5htN
This work explores an end-to-end image-to-tree approach for extracting accurate representations of vessel structures which may be beneficial for diagnosis of stenosis (blockages) and modeling of blood flow. Current image segmentation approaches capture only an implicit representation, while this work utilizes a subscale U-Net to extract explicit tree representations from vascular scans.
Check other modalities also for further inspiration
SS-OCT Vasculature Segmentation
Robust deep learning method for choroidal vessel segmentation on swept-source optical coherence tomography images
Xiaoxiao Liu, Lei Bi, Yupeng Xu, Dagan Feng, Jinman Kim, and Xun Xu
Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine
Biomedical Optics Express Vol. 10, Issue 4, pp. 1601-1612 (2019)
https://doi.org/10.1364/BOE.10.001601
Motivated by the leading segmentation performance in medical images from the use of deep learning methods, in this study we proposed the adoption of a deep learning method, RefineNet, to segment the choroidal vessels from SS-OCT images. We quantitatively evaluated the RefineNet on 40 SS-OCT images consisting of ~3,900 manually annotated choroidal vessel regions. We achieved a segmentation agreement (SA) of 0.840 ± 0.035 with clinician 1 (C1) and 0.823 ± 0.027 with clinician 2 (C2). These results were higher than the inter-observer variability measure in SA between C1 and C2 of 0.821 ± 0.037.
Currently, researchers have limited imaging modalities to obtain information about the choroidal vessels. Traditional indocyanine green angiography (ICGA) is the gold standard in clinical practice for detecting abnormality in the choroidal vessels. ICGA provides 2D images of the choroid vasculature, which can show the exudation or filling defects. However, ICGA does not provide 3D choroidal structure or the volume of the whole choroidal vessel networks, and the ICGA images overlap retinal vessels and choroidal vessels together, thereby making it hard to independently observe and analyze the choroidal vessels quantitatively. OCT Angiography (OCTA) can clearly show the blood flow from the superficial and deep retinal capillary network, as well as retinal pigment epithelium to superficial choroidal vascular network; however, it cannot show the blood flow in deep choroidal vessels.
https://arxiv.org/abs/1806.05034
Fundus/OCT/OCTA multimodal quality enhancement
Generating retinal flow maps from structural optical coherence tomography with artificial intelligence
Cecilia S. Lee, Ariel J. Tyring, Yue Wu, Sa Xiao, Ariel S. Rokem, Nicolaas P. DeRuyter, Qinqin Zhang, Adnan Tufail, Ruikang K. Wang & Aaron Y. Lee
Department of Ophthalmology, University of Washington, Seattle, WA, USA; eScience Institute, University of Washington, Seattle, WA, USA
Scientific Reports, Article number: 5694 (2019)
https://doi.org/10.1038/s41598-019-42042-y
Using the human generated annotations as the ground truth limits the
learning ability of the AI, given that it is problematic for AI to surpass the accuracy
of humans, by definition. In addition, expert-generated labels suffer from inherent
inter-rater variability, thereby limiting the accuracy of the AI to at most variable
human discriminative abilities. Thus, the use of more accurate, objectively-generated
annotations would be a key advance in machine learning algorithms in diverse areas
of medicine.
Given the relationship of OCT and OCTA, we sought to explore deep learning’s
ability to first infer between structure and retinal vascular function, and then
generate an OCTA-like en-face image from a structural OCT image alone. By
taking OCT as input and using the more cumbersome, expensive modality, OCTA,
as an objective training target, deep learning could overcome limitations with the
second modality and circumvent the need for generating labels.
Unlike current AI models which are primarily targeted towards classification or
segmentation of images, to our knowledge, this is the first application of artificial
neural networks in ophthalmic imaging to generate a new image based on a
different imaging modality data. In addition, this is the first example in medical
imaging, to our knowledge, where expert annotations for training deep learning
models are bypassed by using objective, functional flow measurements.
“FITC” in 2-PM Context
“QD” in 2-PM Context
Learn the mapping FITC → QD (with QD as supervision)
to improve the quality of already acquired FITC stacks.
Unsupervised conditional image-to-image translation is possible
also, but probably trickier.
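A quick sanity check for this paired-dye idea, before training any conditional image-to-image network: with co-registered FITC/QD stacks, even a per-voxel polynomial intensity transfer can be fitted by least squares. A deliberately naive sketch; the arrays below are synthetic stand-ins, and `fit_intensity_transfer` is a hypothetical helper, not code from any cited paper.

```python
import numpy as np

def fit_intensity_transfer(fitc, qd, degree=3):
    """Fit a polynomial mapping FITC voxel intensities -> QD intensities.

    Purely intensity-based baseline; a conditional image-to-image
    CNN would additionally exploit spatial context.
    """
    coeffs = np.polyfit(fitc.ravel(), qd.ravel(), degree)
    return np.poly1d(coeffs)

# Synthetic stand-ins for a pair of co-registered FITC and QD stacks.
rng = np.random.default_rng(0)
fitc = rng.uniform(0.0, 1.0, size=(16, 64, 64))
qd = np.clip(np.sqrt(fitc) + 0.01 * rng.normal(size=fitc.shape), 0.0, None)

transfer = fit_intensity_transfer(fitc, qd)
enhanced = transfer(fitc)   # "QD-like" version of the FITC stack
```

If a per-voxel polynomial already explains most of the QD channel, the remaining gain from a learned translation network lies in spatial effects (vessel breakage, depth attenuation) that no intensity-only mapping can capture.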
Electron microscopy: similar reconstruction pipeline for vasculature
High-precision automated reconstruction of neurons with flood-filling networks. Michał Januszewski,
Jörgen Kornfeld, Peter H. Li, Art Pope, Tim Blakely, Larry Lindsey, Jeremy Maitin-Shepard, Mike
Tyka, Winfried Denk & Viren Jain
Nature Methods volume 15, pages 605–610 (2018)
https://doi.org/10.1038/s41592-018-0049-4
We introduce a CNN architecture which is linearly equivariant (a
generalization of invariance defined in the next section) to 3D
rotations about patch centers. To the best of our knowledge, this
paper provides the first example of a CNN with linear
equivariance to 3D rotations and 3D translations of voxelized
data. By exploiting the symmetries of the classification task, we
are able to reduce the number of trainable parameters using
judicious weight tying. We also need less training- and test-time
data augmentation, since some aspects of 3D geometry are
already ‘hard-baked’ into the network.
As a proof of concept we try segmentation as a 3D problem,
feeding 3D image chunks into a 3D network. We use an
architecture based on Weiler et al. (2017)’s steerable version of
the FusionNet. It is a U-Net with added skip connections within
the encoder and decoder paths to encourage better gradient
flow.
Effective automated pipeline for 3D reconstruction of synapses based on deep learning
Chi Xiao, Weifu Li, Hao Deng, Xi Chen, Yang Yang, Qiwei Xie and Hua Han
https://doi.org/10.1186/s12859-018-2232-0, BMC Bioinformatics (13 July 2018) 19:263
Five basic steps implemented by the authors:
1) Image registration,
e.g. An Unsupervised Learning Model for Deformable Medical Image Registration
2) ROI detection,
e.g. Weighted Hausdorff Distance: A Loss Function For Object Localization
3) 3D CNNs,
e.g. DeepMedic for brain tumor segmentation
4a) Dijkstra shortest path,
e.g. shiluyuan/Reinforcement-Learning-in-Path-Finding
4b) Old-school algorithm refinement,
e.g. 3D CRF, Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation
5) Mesh reconstruction,
e.g. Robust Surface Reconstruction via Dictionary Learning
Deep-learning-assisted Volume Visualization
Deep Marching Cubes: Learning Explicit Surface Representations
Problems specific to multiphoton microscopy
Vasculature Imaging Artifacts: Movement artifact
In vivo MPM images of a capillary.
Because MPM images are acquired by raster scanning, images at different depths (z) are
acquired with a time lag (t). Unlabeled red blood cells moving through the lumen cause dark
spots and streaks and result in variable patterns within a single vessel.
Haft-Javaherian et al.(2019) https://doi.org/10.1371/journal.pone.0213539
Vasculature Imaging Artifacts: “Vessel Breakage” / Intensity inhomogeneity
A novel method for identifying a graph-based representation of
3-D microvascular networks from fluorescence microscopy
image stacks
S. Almasi, X. Xu, A. Ben-Zvi, B. Lacoste, C. Gu et al.
Medical Image Analysis, 20(1):208–223, February 2015.
http://dx.doi.org/10.1016/j.media.2014.11.007
Vasculature image quality: an example of false fractions in the
structure caused by imaging imperfections, and an area of more
artifacts, in a maximum-intensity projection (MIP) slice of a 3-D
fluorescence microscopy image of microvasculature.
Joint volumetric extraction and enhancement of vasculature from
low-SNR 3-D fluorescence microscopy images
Sepideh Almasi, Ayal Ben-Zvi, Baptiste Lacoste, Chenghua Gu, Eric L. Miller, Xiaoyin Xu
Pattern Recognition Volume 63, March 2017, Pages 710-718
https://doi.org/10.1016/j.patcog.2016.09.031
Highlights
* We introduce intensity-based features to directly segment artifacted images of vasculature.
* The segmentation method is shown to be robust to non-uniform illumination and noise of mixed type.
* This method is free of a priori statistical and geometrical assumptions.
For fluorescence signals, adaptive optics, quantum dots and three-photon microscopy are not always feasible.
In this maximum intensity projection of a 3-D fluorescence microscopy image of
murine cranial tissue, miscellaneous imaging artifacts are visible: uneven
illumination (upper vs. lower parts), non-homogeneous intensity distribution
inside the vessels (visible in the larger vessels located at the top right corner),
low-SNR regions (lower areas), high spatial density or closeness of vessels
(mostly in the center-upper parts), reduced contrast at edges (visible as blur
mostly for the central vessels), broken or faint vessels (lower vessels), and
low-frequency background variations caused by scattered light (at higher
density regions).
Multidye Experiments for ‘self-supervised training’
CAM vessel fluorescence followed over time for
Q705PEGa and 500 kDa FITC–dextran. 500 kDa
FITC–dextran (A) and Q705PEGa (B) were
coinjected and images were taken at the designated
times.
The use of quantum dots for analysis of chick CAM vasculature
JD Smith, GW Fisher, AS Waggoner… -
Microvascular Research, 2007 - Elsevier
Cited by 69
Intravitally injected QDs were found to
be biocompatible and were kept in circulation
over the course of 4 days without any observed
deleterious effects. QD vascular residence time
was tunable through QD surface chemistry
modification. We also found that use of QDs
with higher emission wavelengths
(>655 nm) virtually eliminated all chick-derived
autofluorescence and improved depth-of-field
imaging. QDs were compared to FITC–dextrans,
a fluorescent dye commonly used for
imaging CAM vessels. QDs were found to
image vessels as well as or better than FITC–
dextrans at 2–3 orders of magnitude lower
concentration. We also demonstrated that QDs
are fixable with low fluorescence loss and thus
can be used in conjunction with histological
processing for further sample analysis.
i.e. which would give you a nicer
mask with Otsu’s thresholding, for
example?
Easier to obtain ground truth labels from QD stacks and
use those to train on FITC stacks, or multimodal FITC+QD
networks if there is complementary information available?
Inpainting masks (‘vessel breakage’) from the
difference between the QD and FITC stacks?
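To make the “which dye thresholds more cleanly” question concrete, here is a minimal NumPy sketch of Otsu’s method (in practice `skimage.filters.threshold_otsu` does the same job); it can be run per channel and the resulting masks compared. The toy intensities below are made up.

```python
import numpy as np

def otsu_threshold(stack, bins=256):
    """Return the threshold maximizing between-class variance (Otsu, 1979)."""
    hist, edges = np.histogram(stack.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    w0 = np.cumsum(w)                 # background class weight
    w1 = 1.0 - w0                     # foreground class weight
    mu0 = np.cumsum(w * centers)      # unnormalized background mean
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# Bimodal toy "stack": dim background plus a minority of bright vessel voxels.
rng = np.random.default_rng(1)
stack = np.concatenate([rng.normal(0.2, 0.05, 9000),
                        rng.normal(0.8, 0.05, 1000)])
t = otsu_threshold(stack)
mask = stack > t
```

The dye whose intensity histogram is more cleanly bimodal (likely the QD channel, per the paper above) yields the more trustworthy mask, and hence the cheaper pseudo-labels.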
Quantum dots vs. Fluorescein Dextran (FITC)
Multidye Experiments for Optimized SNR for all vessel sizes
Todorov et al. (2019) Automated analysis of whole brain vasculature using machine learning https://doi.org/10.1101/613257
A-C: Maximum intensity projections of the automatically reconstructed tiling
scans of WGA (A) and Evans blue (B) signal in the same sample reveal all
details of the perfused vascular network in the merged view (C). D-F: Zoom-ins
from the marked region in (C) showing fine details. G-L: Confocal microscopy
confirms that WGA and EB dyes stain the vascular wall (G-I, maximum intensity
projections of 112 µm) and that the vessels retain their tubular shape (J-L,
single slice of 1 µm).
Furthermore, owing to the dual labeling, we maximized the signal-to-noise
ratio (SNR) for each dye independently to avoid saturation of
differently sized vessels when only a single channel is used. We achieved
this by independently optimizing the excitation and emission power. For
WGA, we reached a higher SNR for small capillaries; bigger vessels,
however, were barely visible (Supporting Fig. 3). For EB, the SNR for small
capillaries was substantially lower, but larger vessels reached a high SNR
(Supporting Fig. 3). Thus, integrating the information from both channels
allows homogeneous staining of the entire vasculature throughout the
whole brain, and results in a high SNR for high-quality segmentations and
analysis.
Play with your Dextran Daltons?
An eNOStag-GFP mouse was injected with two dextrans of different sizes (red = dextran 2 MDa; purple = dextran 10 kDa) and Hoechst (blue = 615
Da), and single-plane images are presented here. 10 min after the injection, presence in the blood and extravasation are seen in the same image. Hoechst
extravasates almost immediately out of the blood vessels and is taken up by the surrounding cells (CI). Dextran 10 kDa (CII) can be seen in vessels
and in the tumor interstitium. Dextran 2 MDa (CIII) can be found in the vessels. 40 min after injection (CIV), dextran 10 kDa disappears from the blood
(CV), and the fluorescent intensity of dextran 2 MDa was also diminished (CVI). Scale bar = 100 µm - https://dx.doi.org/10.3791%2F55115
(2018)
If you have extra channels, and normally you would like to use 10 kDa dextran but for some reason cannot use something with stronger
fluorescence that stays better inside the vessels, you could acquire stacks just for the vasculature segmentation, with the higher
molecular weights as the “physical labels” for vasculature.
z / Depth: crosstalk due to suboptimal optical sectioning
In vivo three-photon microscopy of subcortical
structures within an intact mouse brain
Nicholas G. Horton, Ke Wang, Demirhan Kobat, Catharine G. Clark, Frank W. Wise, Chris B. Schaffer & Chris Xu
Nature Photonics volume 7, pages 205–209 (2013)
https://doi.org/10.1038/nphoton.2012.336
The fluorescence of three-photon excitation (3PE) falls off as 1/z^4
(where z is the distance from the focal plane), whereas the fluorescence of
two-photon excitation (2PE) falls off as 1/z^2. Therefore, 3PE dramatically
reduces the out-of-focus background in regions far from the focal plane,
improving the signal-to-background ratio (SBR) by orders of magnitude
when compared to 2PE.
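The quoted falloff laws make for a quick back-of-the-envelope comparison of integrated out-of-focus background; the depth range and numbers below are illustrative only, not from the paper.

```python
import numpy as np

# Out-of-focus fluorescence falloff with distance z from the focal plane
# (as quoted above): 2PE ~ 1/z^2, 3PE ~ 1/z^4.
z = np.linspace(10.0, 500.0, 4901)   # distance off focus in micrometers (illustrative)
f2pe = 1.0 / z**2
f3pe = 1.0 / z**4

def trapezoid(y, x):
    """Trapezoidal integration (avoids version-dependent np.trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Background fluorescence integrated over the sampled out-of-focus range:
bg_2pe = trapezoid(f2pe, z)
bg_3pe = trapezoid(f3pe, z)
background_reduction = bg_2pe / bg_3pe   # how much less background 3PE collects
```

Even over this modest range the integrated 3PE background is a couple of orders of magnitude below 2PE, which is the mechanism behind the reduced z-crosstalk discussed on this slide.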
http://biomicroscopy.bu.edu/research/nonlinear-microscopy
http://parkerlab.bio.uci.edu/microscopy_construction/build_your_own_twophoton_microscope.htm
“Background vasculature” is seen in layers “in front of it”, i.e. the z-crosstalk.
Nonlinear 2-PM reduces this, and 3-PM even more.
When you get the binary mask, how do you in the end reconstruct your mesh? From 1-PM, your vessels would most likely look very thick in the z-dimension, i.e.
a way too anisotropic reconstruction?
Depth resolution: we still labeled in 2D, so some boundary ambiguity exists
Canny edge, radius = 1
Canny on the ground truth
Gamma-corrected version of the input slice. Now you see
the dimmer vessels better.
The upper part of the slice is clearly
behind (on the z axis), as it is dimmer, but
it has been annotated to be a vessel
also on this plane. This is not
necessarily a problem if some sort of
consistency exists in the labeling, which
is not necessarily the case
between different annotators.
Then you might need the label
noise solutions outlined later in this
slide set.
Volume rendering of the ground truth of course looks thicker now than the
original unsegmented volume.
Multiplying the input volume with this ground truth mask gives a nice
rendering, of course. We want to suppress the background noise, and make
the voxel → mesh conversion easier with clean segmentations.
Single-photon confocal microscope sectioning: worse than 2-PM but still quite good
Images captured by confocal microscopy, showing FITC-dextran (green) and DiI-labeled
RBCs (red) in a retinal flat mount. (A, C) Merged green/red images from the
superficial section of the retina. (B, D) Red RBC fluorescence in the deeper capillary
layers of the retina. The arrow in (A) points to an arteriole that branches down from
the superficial layer into the capillary layers shown in (B).
Comparison of the Fluorescence Microscopy Techniques (widefield, confocal, two-photon)
http://candle.am/microscopy/
Measurement of Retinal Blood Flow Rate in Diabetic Rats: Disparity Between
Techniques Due to Redistribution of Flow. Leskova et al. (2013)
http://doi.org/10.1167/iovs.13-11915
Rat Retina, SUPERFICIAL Layers
Rat Retina, CAPILLARY Layer
Kornfield and Newman (2014)
10.1523/JNEUROSCI.1971-14.2014
Vessel density in the three vascular layers. Schematic of the trilaminar
vascular network showing the first-order arteriole (1) and venule (V) and the
connectivity of the superficial (S), intermediate (I), and deep (D) vascular
layers and their locations within the retina.
GCL, ganglion cell layer; IPL, inner plexiform layer; INL, inner nuclear layer;
OPL, outer plexiform layer; ONL, outer nuclear layer; PR, photoreceptors.
z / Depth: attenuation noise as a function of depth
Effects of depth-dependent noise on line-scanning particle image velocimetry (LS-PIV) analysis. A: Three-dimensional
rendering of cortical vessels imaged with TPLSM demonstrating a depth-dependent decrease in
SNR. The blood plasma was labeled with Texas Red-dextran and an image stack over the top 1000 µm was
acquired at 1 µm spacing along the z-axis starting from the brain surface. B: 100 µm-thick projections of
regions 1–4 in panel (A). RBC velocities were measured along the central axis of vessels shown in red boxes,
with red arrows representing orientation of flow. The raw line-scan data (L/S) are depicted to the right of each
field and labeled with their respective SNR. Corresponding LS-PIV analyses are depicted to the far right.
Accuracy of LS-PIV analysis with noise and increasing speed. Top: simulated line-scan data with a
low level of normally distributed noise with SNR of 8 (A), 1 (B), 0.5 (C), and 0.33 (D). Middle: LS-PIV
analysis of the line-scan data (blue dots). The red line represents actual particle speed. Bottom:
percent error of LS-PIV compared with actual velocity.
Tyson N. Kim et al. (2012) http://doi.org/10.1371/journal.pone.0038590 - Cited by 46
‘Intersecting vessels in 2-PM’: even though the centerlines of actual vessels
in 3D do not intersect, the vessel masks might #1
Calivá et al. (2015) A new tool to connect blood vessels in fundus retinal images
https://doi.org/10.1109/EMBC.2015.7319356 - Cited by 8
In the 2D case, the vessel crossings are harder to resolve than in our 3D case.
Slice #10/26: Seems like the big and smaller vessels are going to join?
Slice #19/26: Seems like the small vessel actually was touching the bigger one?
Cross-channel spectral crosstalk
New red-fluorescent calcium indicators for
optogenetics, photoactivation and multi-color imaging
Oheim M, van ’t Hoff M, Feltz A, Zamaleeva A, Mallet J-M, Collot M. 2014
Biochimica et Biophysica Acta (BBA) - Molecular Cell Research 1843. Calcium Signaling in Health and
Disease: 2284–2306. http://dx.doi.org/10.1016/j.bbamcr.2014.03.010
https://github.com/petteriTeikari/mixedImageSeparation
https://github.com/petteriTeikari/spectralSeparability/wiki
Color Preprocessing: Spectral Unmixing for microscopy
See the “spectral crosstalk” slide above. In more general terms, you want to do (blind) source separation, “the
cocktail party problem”, for 2-PM microscopy data, i.e. you might have some astrocyte/calcium/etc. signal on your
“vasculature channel”. You could just apply ICA here and hope for perfect unmixing, or think of something more
advanced. Again, seek inspiration from elsewhere: the hyperspectral imaging field has the same
challenge to solve.
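As a concrete baseline before anything learned or blind: if the emission spectra of the dyes (the endmembers) are known, per-pixel unmixing reduces to plain non-blind least squares; ICA (e.g. `sklearn.decomposition.FastICA`) is the blind alternative when they are not. The spectra and channel counts below are invented for illustration.

```python
import numpy as np

# Hypothetical emission spectra sampled at 4 detection channels:
# rows = endmembers (vascular dye, calcium indicator), columns = channels.
E = np.array([[0.7, 0.2, 0.05, 0.05],   # "vasculature" dye
              [0.1, 0.5, 0.30, 0.10]])  # "calcium" dye

rng = np.random.default_rng(2)
true_abund = rng.uniform(0, 1, size=(1000, 2))        # per-pixel abundances
mixed = true_abund @ E + 0.001 * rng.normal(size=(1000, 4))

# Least-squares unmixing: solve  mixed ≈ abund @ E  for abund.
abund, *_ = np.linalg.lstsq(E.T, mixed.T, rcond=None)
abund = abund.T
```

When the endmember spectra themselves are uncertain, this is exactly where the hyperspectral-unmixing networks cited below come in.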
Improved Deep Spectral Convolution
Network For Hyperspectral Unmixing With
Multinomial Mixture Kernel and Endmember
Uncertainty
Savas Ozkan and Gozde Bozdagi Akar
(Submitted on 27 Mar 2019) https://arxiv.org/abs/1904.00815
https://github.com/savasozkan/dscn
We propose a novel framework for hyperspectral unmixing by using
an improved deep spectral convolution network (DSCN++) combined
with endmember uncertainty. DSCN++ is used to compute high-level
representations which are further modeled with Multinomial Mixture
Model to estimate abundance maps. In the reconstruction step, a new
trainable uncertainty term based on a nonlinear neural network
model is introduced to provide robustness to endmember uncertainty.
For the optimization of the coefficients of the multinomial model and the
uncertainty term, Wasserstein Generative Adversarial Network (WGAN)
is exploited to improve stability.
Anisotropic Volumes: z-resolution not as good as xy
3D Anisotropic Hybrid Network: Transferring Convolutional
Features from 2D Images to 3D Anisotropic Volumes
Siqi Liu, Daguang Xu, S. Kevin Zhou, Thomas Mertelmeier, Julia Wicklein, Anna Jerebko, Sasa
Grbic, Olivier Pauly, Weidong Cai, Dorin Comaniciu (Submitted on 23 Nov 2017)
https://arxiv.org/abs/1711.08580
Elastic Boundary Projection for 3D
Medical Image Segmentation
Tianwei Ni et al. (CVPR 2019)
http://victorni.me/pdf/EBP_CVPR2019/1070.pdf
In this paper, we bridge the gap between 2D and 3D using a novel approach named
Elastic Boundary Projection (EBP). The key observation is that, although the object
is a 3D volume, what we really need in segmentation is to find its boundary, which is a
2D surface. Therefore, we place a number of pivot points in the 3D space and, for each
pivot, we determine its distance to the object boundary along a dense set of directions.
This creates an elastic shell around each pivot, which is initialized as a perfect sphere.
We train a 2D deep network to determine whether each ending point falls within the
object, and gradually adjust the shell so that it converges to the actual shape of
the boundary and thus achieves the goal of segmentation.
From voxel-based tricks → NURBS-like parametrization for “subvoxel” MESH/CFD analysis?
Not a lot of papers addressing specifically
(multiphoton) microscopy (micro)vasculature,
thus most of the slides are outside
vasculature processing but relevant if
you want to work on “next
generation” vascular segmentation
networks.
Non-DL ‘classical approaches’
Segmentation of Vasculature From Fluorescently Labeled Endothelial
Cells in Multi-Photon Microscopy Images
Russell Bates; Benjamin Irving; Bostjan Markelc; Jakob Kaeppler; Graham Brown; Ruth J.
Muschel; et al. Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, U.K.
IEEE Transactions on Medical Imaging (Volume: 38, Issue: 1, Jan. 2019)
https://doi.org/10.1109/TMI.2017.2725639
Here, we present a method for the segmentation of tumor vasculature in 3D
fluorescence microscopy images using signals from the endothelial and
surrounding cells. We show that our method can provide complete and
semantically meaningful segmentations of complex vasculature using a
supervoxel-Markov random field approach.
A potential area for future improvement is the limitations imposed by our edge
potentials in the MRF, which are tuned rather than learned. The expectation of the
existence of fully annotated training sets for many applications is unrealistic. Future
work will focus on the suitability of semi-supervised methods to achieve fully
supervised levels of performance on sparse annotations. It is possible that this
may be done in the current framework using label-transduction methods.
Interesting work in transduction and interactive learning for sparsely labeled
superpixel microscopy images has also been undertaken by Su et al. (2016). A
method that can take sparse image annotations and use them to leverage
information from a large set of unlabeled parts of the image to create high-quality
segmentations would be an extremely powerful tool. This would have very broad
applications in novel imaging experiments where large training sets are not readily
available and where there is a high time-cost in producing such a training set.
Initial Effort with hybrid “2D/3D ZNN” with CPU acceleration
Deep Learning Convolutional Networks for Multiphoton Microscopy
Vasculature Segmentation
Petteri Teikari, Marc Santos, Charissa Poon, Kullervo Hynynen (Submitted on 8 Jun 2016)
https://arxiv.org/abs/1606.02382
Microvasculature CNNs #1
Microvasculature segmentation of arterioles using deep CNN
Y. M. Kassim et al. (2017)
Computational Imaging and Vis Analysis (CIVA) Lab
https://doi.org/10.1109/ICIP.2017.8296347
Accurate segmentation for separating microvasculature
structures is important in quantifying the remodeling process.
In this work, we utilize a deep convolutional neural
network (CNN) framework for obtaining robust
segmentations of microvasculature from epifluorescence
microscopy imagery of mice dura mater. Due to the
inhomogeneous staining of the microvasculature,
different binding properties of vessels under fluorescence
dye, uneven contrast and low texture content, traditional
vessel segmentation approaches obtain sub-optimal
accuracy.
We proposed an architecture of CNN which is adapted to
obtaining robust segmentation of microvasculature
structures. By considering overlapping patches along
with multiple convolutional layers, our method obtains good
vessel differentiation for accurate segmentations.
Microvasculature CNNs #2
Extracting 3D Vascular Structures from Microscopy
Images using Convolutional Recurrent Networks
Russell Bates, Benjamin Irving, Bostjan Markelc, Jakob
Kaeppler, Ruth Muschel, Vicente Grau, Julia A. Schnabel
Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom
CRUK/MRC Oxford Centre for Radiation Oncology, Department of Oncology, University of Oxford, United Kingdom
Division of Imaging Sciences and Biomedical Engineering, Kings College London, United Kingdom
Perspectum Diagnostics, Oxford, United Kingdom
(Submitted on 26 May 2017)
https://arxiv.org/abs/1705.09597
In tumors in particular, the vascular networks may be
extremely irregular and the appearance of the individual
vessels may not conform to classical descriptions of
vascular appearance. Typically, vessels are extracted by
either a segmentation-and-thinning pipeline or by direct
tracking. Neither of these methods is well suited to
microscopy images of tumor vasculature.
In order to address this we propose a method to directly
extract a medial representation of the vessels using
Convolutional Neural Networks. We then show that
these two-dimensional centerlines can be meaningfully
extended into 3D in anisotropic and complex microscopy
images using the recently popularized Convolutional Long
Short-Term Memory units (ConvLSTM). We demonstrate
the effectiveness of this hybrid convolutional-recurrent
architecture over both 2D and 3D convolutional
comparators.
Microvasculature CNNs #3
Automatic Graph-based Modeling of Brain
Microvessels Captured with Two-Photon Microscopy
Rafat Damseh; Philippe Pouliot; Louis Gagnon; Sava
Sakadzic; David Boas; Farida Cheriet et al. (2018)
Institute of Biomedical Engineering, Ecole Polytechnique de Montreal
https://doi.org/10.1109/JBHI.2018.2884678
Graph models of cerebral vasculature derived from 2-photon
microscopy have been shown to be relevant for studying
brain microphysiology. Automatic graphing of these
microvessels remains problematic due to the vascular
network complexity and 2-photon sensitivity limitations
with depth.
In this work, we propose a fully automatic processing
pipeline to address this issue. The modeling scheme
consists of a fully convolutional neural network (FCN) to
segment microvessels, a 3D surface model generator
and a geometry contraction algorithm to produce
graphical models with a single connected component.
In a quantitative assessment using NetMets metrics, at a
tolerance of 60 μm, false negative and false positive
geometric error rates are 3.8% and 4.2%, respectively,
whereas false negative and false positive topological error
rates are 6.1% and 4.5%, respectively.
One important issue that could be addressed in future
work is related to the difficulty in generating watertight
surface models. The employed contraction algorithm is
not applicable to surfaces lacking such characteristics.
Introducing a geometric contraction not restricted to such
conditions on the obtained surface model could be an
area of further investigation.
Microvasculature CNNs #4
Fully Convolutional DenseNets for Segmentation of
Microvessels in Two-photon Microscopy
Rafat Damseh et al. (2019)
https://doi.org/10.1109/EMBC.2018.8512285
Segmentation of microvessels measured using two-photon
microscopy has been studied in the literature with limited
success due to uneven intensities associated with
optical imaging and shadowing effects. In this work, we
address this problem using a customized version of a
recently developed fully convolutional neural network,
namely FC-DenseNets (see DenseNet, Cited by 3527).
To train and validate the network, manual annotations of 8
angiograms from two-photon microscopy were used.
However, this study suggests that in order to exploit the output of our deep
model in further geometrical and topological analysis, further
investigations might be needed to refine the segmentation. This could
be done by either adding extra processing blocks on the output of the
model or incorporating 3D information in its training process.
Microvasculature CNNs #5
A Deep Learning Approach to 3D Segmentation of Brain Vasculature
Waleed Tahir, Jiabei Zhu, Sreekanth Kura, Xiaojun Cheng, David Boas,
and Lei Tian (2019)
Department of Electrical and Computer Engineering, Boston University
https://www.osapublishing.org/abstract.cfm?uri=BRAIN-2019-BT2A.6
The segmentation of blood-vessels is an important preprocessing
step for the quantitative analysis of brain vasculature. We approach
the segmentation task for two-photon brain angiograms using a fully
convolutional 3D deep neural network.
We employ a DNN to learn a statistical model relating the measured
angiograms to the vessel labels. The overall structure is derived from
V-Net [Milletari et al. 2016], which consists of a 3D encoder-decoder
architecture. The input first passes through the encoder path, which
consists of four convolutional layers. Each layer comprises residual
connections, which speed up convergence, and 3D convolutions with
multi-channel convolution kernels, which retain 3D context.
Loss functions like mean squared error (MSE) and mean absolute
error (MAE) have been used widely in deep learning; however, they
cannot promote sparsity and are thus unsuitable for sparse objects. In
our case, less than 5% of the total volume in the angiogram comprises
blood vessels. Thus, the object under study is not only sparse, there
is also a large class imbalance between the number of foreground vs.
background voxels. We therefore resort to balanced cross entropy as the
loss function [HED, 2015], which not only promotes sparsity, but also
caters for the class imbalance.
Microvasculature CNNs #6: State-of-the-Art (SOTA)?
Deep convolutional neural networks for segmenting 3D in vivo
multiphoton images of vasculature in Alzheimer disease mouse
models. Mohammad Haft-Javaherian, Linjing Fang, Victorine Muse, Chris B.
Schaffer, Nozomi Nishimura, Mert R. Sabuncu
Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, United States of America
March 2019
https://doi.org/10.1371/journal.pone.0213539
https://arxiv.org/abs/1801.00880
Data: https://doi.org/10.7298/X4FJ2F1D (1.141 Gb)
Code: https://github.com/mhaft/DeepVess (TensorFlow / MATLAB)
We explored the use of convolutional neural networks to segment 3D vessels
within volumetric in vivo images acquired by multiphoton microscopy. We
evaluated different network architectures and machine learning
techniques in the context of this segmentation problem. We show that our
optimized convolutional neural network architecture with a customized loss
function, which we call DeepVess, yielded a segmentation accuracy that
was better than state-of-the-art methods, while also being orders of
magnitude faster than manual annotation.
While DeepVess offers very high accuracy in the problem we consider, there
is room for further improvement and validation, in particular in the
application to other vasiform structures and modalities. For example, other
types of (e.g., non-convolutional) architectures, such as long short-term
memory (LSTM), can be examined for this problem. Likewise, a combined
approach that treats segmentation and centerline extraction methods
together, such as the method proposed by Bates et al. (2017), in a single
complete end-to-end learning framework might achieve higher centerline
accuracy levels.
Comparison of DeepVess and the state-of-the-art methods
3D rendering of (A) the expert’s manual and (B) DeepVess segmentation results.
Comparison of DeepVess and the gold-standard human expert segmentation results
We used 50% dropout during test-time [MC Dropout] and
computed Shannon’s entropy for the segmentation
prediction at each voxel to quantify the uncertainty in
the automated segmentation.
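The uncertainty recipe quoted above, i.e. keep dropout active at test time, average T stochastic forward passes, then take per-voxel Shannon entropy, can be sketched independently of any particular network; the simulated predictions below are made up, with one voxel the passes agree on and one they disagree on.

```python
import numpy as np

def mc_dropout_uncertainty(stochastic_preds):
    """Per-voxel predictive entropy from T MC-dropout forward passes.

    stochastic_preds: array of shape (T, ...) holding foreground
    probabilities from T passes with dropout still active.
    Returns the predictive mean and binary entropy in bits (max 1).
    """
    p = stochastic_preds.mean(axis=0)
    p = np.clip(p, 1e-7, 1 - 1e-7)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return p, entropy

# Simulated passes: voxel 0 is consistently "vessel", voxel 1 is unstable.
preds = np.array([[0.95, 0.90],
                  [0.93, 0.10],
                  [0.90, 0.97],
                  [0.92, 0.20]])
p_mean, entropy = mc_dropout_uncertainty(preds)
```

Entropy near 1 bit flags voxels where the stochastic passes disagree, i.e. exactly the regions a human annotator should double-check.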
Microvasculature CNNs #7: Dual-Dye Network for vasculature
Automated analysis of whole brain
vasculature using machine learning
Mihail Ivilinov Todorov, Johannes C. Paetzold, Oliver Schoppe, Giles Tetteh, Velizar Efremov,
Katalin Völgyi, Marco Düring, Martin Dichgans, Marie Piraud, Bjoern Menze, Ali Ertürk
(Posted April 18, 2019) https://doi.org/10.1101/613257
http://discotechnologies.org/VesSAP
Tissue clearing methods enable imaging of intact biological
specimens without sectioning. However, reliable and scalable
analysis of such large imaging data in 3D remains a challenge.
Towards this goal, we developed a deep learning-based framework
to quantify and analyze the brain vasculature, named Vessel
Segmentation & Analysis Pipeline (VesSAP). Our pipeline uses a
fully convolutional network with a transfer learning approach for
segmentation.
We systematically analyzed vascular features of whole brains,
including their length, bifurcation points and radius at the micrometer
scale, by registering them to the Allen mouse brain atlas. We
reported the first evidence of secondary intracranial collateral
vascularization in CD1-Elite mice and found reduced vascularization
in the brainstem as compared to the cerebrum. VesSAP thus enables
unbiased and scalable quantifications of the
angioarchitecture of the cleared intact mouse brain and yields
new biological insights related to vascular brain function.
What next? 2019 ideas
Well, what about the NOVELTY to ADD?
Depends a bit on what the benchmarks reveal.
DeepVess does not seem out of this world in
terms of its specs, so it should be possible to beat it with
“brute force”, by trying different standard things
proposed in the literature.
Keep this in mind, and have a look at the
following slides.
INPUT / SEGMENTATION / UNCERTAINTY (MC Dropout)
While DeepVess offers very high accuracy in the problem we consider,
there is room for further improvement and validation, in particular in the
application to other vasiform structures and modalities. For example, other
types of (e.g., non-convolutional) architectures such as long short-term
memory (LSTM) [i.e. what the hGRU did] can be examined for this problem.
Likewise, a combined approach that treats segmentation and centerline
extraction methods together [multi-task learning (MTL)], such as the method
proposed by Bates et al. [25], in a single complete end-to-end learning
framework might achieve higher centerline accuracy levels.
Vasculature Networks: Future
FC-DenseNets

"However, this study suggests that in order to exploit the output of our deep model in further geometrical and topological analysis, further investigations might be needed to refine the segmentation. This could be done by either adding extra processing blocks on the output of the model or incorporating 3D information in its training process."
http://sci-hub.tw/10.1109/jbhi.2018.2884678
One important issue that could be addressed in future work is the difficulty in generating watertight surface models. The employed contraction algorithm is not applicable to surfaces lacking such characteristics.
General idea of the End-to-End network

High-level Generic Architecture

IMAGE RESTORATION → IMAGE SEGMENTATION → GRAPH RECONSTRUCTION

y1: Denoised stack for visualization
y2: Voxel masks for diameter quantification
y3: Graph network / watertight mesh for CFD analysis

End-to-end network (i.e. learn jointly all 3 tasks with deeply supervised targets y1–y3).
Composed of different blocks, trained hopefully end-to-end.
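As a 1D toy illustration of the three-block idea (restoration → segmentation → graph reconstruction, each emitting its supervised target y1–y3), with trivial stand-ins for the learned blocks — the smoothing kernel, threshold, and run-length "graph" are illustrative only, not a real implementation:

```python
import numpy as np

def restore(stack):
    """y1: denoised stack (simple moving-average smoothing as a stand-in)."""
    kernel = np.ones(3) / 3.0
    return np.convolve(stack, kernel, mode="same")

def segment(denoised, thr=0.5):
    """y2: binary voxel mask for diameter quantification."""
    return (denoised > thr).astype(float)

def to_graph(mask):
    """y3: crude 'graph' summary -- runs of foreground voxels as edges."""
    edges, start = [], None
    for i, v in enumerate(mask):
        if v and start is None:
            start = i
        elif not v and start is not None:
            edges.append((start, i - 1))
            start = None
    if start is not None:
        edges.append((start, len(mask) - 1))
    return edges

noisy = np.array([0.1, 0.9, 0.8, 0.9, 0.1, 0.0, 0.9, 0.9, 0.1])
y1 = restore(noisy)      # denoised stack
y2 = segment(y1)         # voxel mask
y3 = to_graph(y2)        # vessel-segment "edges"
```

In the actual proposal each stage would be a trainable network and all three targets would contribute to one deeply supervised loss, backpropagated through the whole chain.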
Image Restoration and Quality
Two-Photon Microscopy Vasculature Segmentation
SAC 25 Final National, Regional & Local Angel Group Investing Insights 2024 0...SAC 25 Final National, Regional & Local Angel Group Investing Insights 2024 0...
SAC 25 Final National, Regional & Local Angel Group Investing Insights 2024 0...
 
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
 
Lake Town / Independent Kolkata Call Girls Phone No 8005736733 Elite Escort S...
Lake Town / Independent Kolkata Call Girls Phone No 8005736733 Elite Escort S...Lake Town / Independent Kolkata Call Girls Phone No 8005736733 Elite Escort S...
Lake Town / Independent Kolkata Call Girls Phone No 8005736733 Elite Escort S...
 
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptxRESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
RESEARCH-FINAL-DEFENSE-PPT-TEMPLATE.pptx
 
Charbagh + Female Escorts Service in Lucknow | Starting ₹,5K To @25k with A/C...
Charbagh + Female Escorts Service in Lucknow | Starting ₹,5K To @25k with A/C...Charbagh + Female Escorts Service in Lucknow | Starting ₹,5K To @25k with A/C...
Charbagh + Female Escorts Service in Lucknow | Starting ₹,5K To @25k with A/C...
 
Top profile Call Girls In Indore [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Indore [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Indore [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Indore [ 7014168258 ] Call Me For Genuine Models We...
 

Two-Photon Microscopy Vasculature Segmentation

  • 1. Two-Photon Microscopy Vasculature Segmentation Petteri Teikari, PhD PhD in Neuroscience M.Sc Electrical Engineering https://www.linkedin.com/in/petteriteikari/ Version August 2019 (Cleaned and simplified in January 2024, see original)
  • 2. Executive Summary #1/2 Highlighting relevant literature for: ● Automating the 3D voxel-level vasculature segmentation (mainly) for multiphoton vasculature stacks ● Focus on semi-supervised U-Net based architectures that can exploit both unlabeled data and costly-to-annotate labeled data. ● Make sure that “tricks” for thin structure preservation, long-term spatial correlations and uncertainty estimation are incorporated
  • 3. Executive Summary #2/2 The lack of automated robust tools does not go well with large-size datasets and volumes ● See the Electron Microscopy segmentation community for inspiration, as they have even larger stacks to analyze ● The gamified segmentation annotation tool EyeWire has led, for example, to this Nature paper, and to a slot at the AI: More than Human exhibition at Barbican
  • 5. About the Presentation #1 A “quick intro” to vasculature segmentation using deep learning ● It is assumed that multiphoton (mainly two-photon) techniques are familiar to you, and that you want to know what you could do with your data using more robust “measuring tapes” for your vasculature, i.e. data-driven vascular segmentation. Link coloring: for articles, for GitHub/available code, and for video demos
  • 6. About the Presentation #2 The aim is to provide “seeds for all sorts of directions” so that the reader implementing this can find new avenues without having to start from scratch. Especially targeted at people coming from outside medical image segmentation who might have something to contribute and can avoid “the groupthink” of the deep learning community. It also helps the neuroscientist to have an idea of how to gather data and design experiments that address both neuroscientific questions and “auxiliary methodology” challenges solvable by deep learning. Domain knowledge is still valuable.
  • 7. About the Presentation #3: Why so lengthy? If you are puzzled by some slides that are not specifically about “vasculature segmentation”, remember that this deck was designed to be “high-school-project friendly”, or good for tech/computation-savvy neuroscientists who do not necessarily know all the different aspects that could benefit the development of a successful vasculature network, rather than a narrowly focused slideshow
  • 8. About the Presentation #4: Textbook defs? A lot of the basic concepts are easily googled from Stack Overflow/Medium/etc.; thus the focus here is on recent papers, which are published in overwhelming numbers. Some ideas picked from these papers may or may not be helpful when thinking through your own project’s tech specifications
  • 9. About the Presentation #5: “History” of Ideas In arXiv and in peer-published papers, the various approaches taken by a team before their winning idea(s) {“the history of ideas, and all the possible choices you could have made”} are hardly ever discussed in detail. So an attempt at a “possibility space” is outlined here Towards Effective Foraging by Data Scientists to Find Past Analysis Choices Mary Beth Kery, Bonnie E. John, Patrick O'Flaherty, Amber Horvath, Brad A. Myers Carnegie Mellon University / Bloomberg L.P., New York https://doi.org/10.1101/650259 https://github.com/mkery/Verdant Data scientists are responsible for the analysis decisions they make, but it is hard for them to track the process by which they achieved a result. Even when data scientists keep logs, it is onerous to make sense of the resulting large number of history records full of overlapping variants of code, output, plots, etc. We developed algorithmic and visualization techniques for notebook code environments to help data scientists forage for information in their history. To test these interventions, we conducted a think-aloud evaluation with 15 data scientists, where participants were asked to find specific information from the history of another person's data science project. The participants succeed on a median of 80% of the tasks they performed. The quantitative results suggest promising aspects of our design, while qualitative results motivated a number of design improvements. The resulting system, called Verdant, is released as an open-source extension for JupyterLab.
  • 10. Summary: “All the stuff” you wish you knew before starting the project, with “seeds” for cross-disciplinary collaboration The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to be More Effective at Data Analysis Cynthia Rudin, David Carlson Electrical and Computer Engineering, and Statistical Science, Duke University / Civil and Environmental Engineering, Biostatistics and Bioinformatics, Electrical and Computer Engineering, and Computer Science, Duke University (Submitted on 4 Jun 2019) https://arxiv.org/abs/1906.01998
  • 11. Curated Literature If you are overwhelmed by all the slides, you could start with these articles ● Haft-Javaherian et al. (2019). Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models. https://doi.org/10.1371/journal.pone.0213539 ● Kisuk Lee et al. (2019) Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy https://doi.org/10.1016/j.conb.2019.04.001 ● Amy Zhao et al. (2019) Data augmentation using learned transformations for one-shot medical image segmentation https://arxiv.org/abs/1902.09383 https://github.com/xamyzhao/brainstorm Keras ● Dai et al. (2019) Deep Reinforcement Learning for Subpixel Neural Tracking https://openreview.net/forum?id=HJxrNvv0JN ● Simon Kohl et al. (2018) A Probabilistic U-Net for Segmentation of Ambiguous Images https://arxiv.org/abs/1806.05034 + follow-up https://arxiv.org/abs/1905.13077 https://github.com/SimonKohl/probabilistic_unet ● Hoel Kervadec et al. (2018) Boundary loss for highly unbalanced segmentation https://arxiv.org/abs/1812.07032 https://github.com/LIVIAETS/surface-loss PyTorch ● Jörg Sander et al. (2018) Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI https://doi.org/10.1117/12.2511699 ● Hongda Wang et al. (2018) Deep learning achieves super-resolution in fluorescence microscopy http://dx.doi.org/10.1038/s41592-018-0239-0 ● Yide Zhang et al. (2019) A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images https://doi.org/10.1117/12.2511699 ● Trevor Standley et al. (2019) Which Tasks Should Be Learned Together in Multi-task Learning? https://arxiv.org/abs/1905.07553
  • 13. Imaging brain vasculature through the skull of a mouse/rat Microscope set-up at the skull and examples of two-photon microscopy images acquired during live imaging. Both examples show neurons (green) and vasculature (red). The bottom example uses an additional amyloid-targeting dye (blue) in an Alzheimer’s disease mouse model. Image credit: Elizabeth Hillman. Licensed under CC-BY-2.0. http://www.signaltonoisemag.com/allarticles/2018/9/17/dissecting-two-photon-microscopy
  • 14. Penetration depth depends on the excitation/emission wavelengths, the number of “nonlinear photons”, and the animal model DeFelipe et al. (2011) http://dx.doi.org/10.3389/fnana.2011.00029 Tischbirek et al. (2015): Cal-590, .. improved our ability to image calcium signals ... down to layers 5 and 6 at depths of up to −900 μm below the pia. 3-PM depth = 601 μm 2-PM depth = 429 μm Wang et al. (2015) Better image, deeper penetration
  • 15. Dyeless vasculature imaging, in the “deep learning sense”, is not too different Third-Harmonic Generation (THG) image of blood vessels in the top layer of the cerebral cortex of a live, anesthetized mouse. Emission wavelength = 1/3 of excitation wavelength Witte et al. (2011) Optoacoustic ultrasound bio-microscopy Imaging of skull and brain vasculature (B) was performed by focusing nanosecond laser pulses with a custom-designed gradient index (GRIN) lens and detecting the generated optoacoustic responses by the same transducer used for the US reflection-mode imaging. (C) Irradiation of half of the skull resulted in inhibited angiogenesis in the calvarium microvasculature (blue) of the irradiated hemisphere, but not the non-irradiated one. - prelights.biologists.com (Mariana De Niz) - https://doi.org/10.1101/500017 Third harmonic generation microscopy of cells and tissue organization http://doi.org/10.1242/jcs.152272 Model as a cross-vendor or cross-modal problem? You are imaging the “same vasculature”, but it looks a bit different with different techniques
  • 16. “Cross-Modal” 3D Vasculature Networks would eventually be very nice Imaging the microarchitecture of the rodent cerebral vasculature. (A) Wide-field epi-fluorescence image of a C57Bl/6 mouse brain perfused with a fluorescein-conjugated gel and extracted from the skull (Tsai et al, 2009). Pial vessels are visible on the dorsal surface, although some surface vessels, particularly those that were immediately contiguous to the sagittal sinus, were lost during the brain extraction process. (B) Three-dimensional reconstruction of a block of tissue collected by in vivo two-photon laser scanning microscopy (TPLSM) from the upper layers of mouse cortex. Penetrating vessels plunge into the depth of the cortex, bridging flow from surface vascular networks to capillary beds. (C) In vivo image of a cortical capillary, 200 μm below the pial surface, collected using TPLSM through a cranial window in a rat. The blood serum (green) was labeled by intravenous injection with fluorescein-dextran conjugate (Table 2) and astrocytes (red) were labeled by topical application of SR101 (Nimmerjahn et al, 2004). (D) A plot of lateral imaging resolution vs. range of depths accessible for common in vivo blood flow imaging techniques. The panels to the right show a cartoon of cortical angioarchitecture for mouse, and cortical layers for mouse and rat in relation to imaging depth. BOLD fMRI, blood-oxygenation level-dependent functional magnetic resonance imaging. The network learns to disentangle the ‘vesselness’ from the image formation, i.e. how the vasculature looks when viewed with different modalities. Compare this to ‘clinical networks’, e.g. Jeffrey De Fauw et al. 2018, that need to handle cross-vendor differences (e.g. different OCT or MRI machines from different vendors produce slightly different images of the same anatomical structures) Shih et al. (2012) https://dx.doi.org/10.1038%2Fjcbfm.2011.196
  • 17. e.g. Functional Ultrasound Imaging, faster than typical 2P microscopes Alan Urban et al. (2017), Pablo Blinder’s lab https://doi.org/10.1016/j.addr.2017.07.018 Brunner et al. (2018) https://doi.org/10.1177%2F0271678X18786359
  • 18. And keep in mind, when going through the slides, the development of “cross-discipline” networks, e.g. 2-PM as “ground truth” for lower-quality modalities such as OCT (OCT angiography for retinal microvasculature) or photoacoustic imaging that are possible in clinical work for humans Two-photon microscopic imaging of capillary red blood cell flux in mouse brain reveals vulnerability of cerebral white matter to hypoperfusion Baoqiang Li, Ryo Ohtomo, Martin Thunemann, Stephen R Adams, Jing Yang, Buyin Fu, Mohammad A Yaseen, Chongzhao Ran, Jonathan R Polimeni, David A Boas, Anna Devor, Eng H Lo, Ken Arai, Sava Sakadžić First Published March 4, 2019 https://doi.org/10.1177%2F0271678X19831016 This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT) and fluorescence microscopy in one platform. - DOI: 10.1117/12.2289211 Simultaneously acquired PAM, FLM, OCT and ODT images of a mouse ear. (a) PA image (average contrast-to-noise ratio 34 dB); (b) OCT B-scan at the location marked in panel (e) by the solid line (displayed dynamic range, 40 dB); (c) ODT B-scan at the location marked in panel (e) by the solid line; (d) FLM image (average contrast-to-noise ratio 14 dB); (e) OCT 2D projection images generated from the acquired 3D OCT datasets; SG: sebaceous glands; bar, 100 μm.
  • 20. ‘Traditional’ Structural Vascular Biomarkers #1 i.e. you want to analyze the changes in vascular morphology in disease, in response to treatment, etc., limited by the imagination of your in-house biologist, e.g. artery-vein (AV) ratio, branching angles, number of bifurcations, fractal dimension, tortuosity, vascular length-to-diameter ratio and wall-to-lumen length FEM mesh of the vasculature displaying arteries, capillaries, and veins. Gagnon et al. (2015) doi: 10.1523/JNEUROSCI.3555-14.2015 Cited by 93 “We created the graphs and performed image processing using a suite of custom-designed tools in MATLAB” Classical vascular analysis reveals a decrease in the number of junctions and total vessel length following TBI. (A) An axial AngioTool image where vessels (red) and junctions (blue) are displayed. Whole cortex and specific concentric radial ROIs projecting outward from the injury site (circles 1–3) were analyzed to quantify vascular alterations. (B) Analysis of the entire cortex demonstrated a significant reduction in both the number of junctions and the total vessel length in TBI animals compared to sham animals. (C) TBI animals also exhibited a significant decline in the number of vascular junctions moving radially outward from the injury site (ROIs 1 to 3). Fractal analysis reveals a quantitative reduction in both vascular complexity and frequency in TBI animals. (A) A binary image of the axial vascular network of a representative sham animal with radial ROIs radiating outward from the injury or sham surgery site (ROI 1–3). The right panel illustrates the complexity changes in the vasculature from the concentric circles as you move radially outward from the injury site. These fractal images are colorized based on the resultant fractal dimension with a gradient from lower local fractal dimension (LFD) in red (less complex network) to higher LFD in purple (more complex network). Traumatic brain injury results in acute rarefication of the vascular network. http://doi.org/10.1038/s41598-017-00161-4 Tortuous Microvessels Contribute to Wound Healing via Sprouting Angiogenesis (2017) https://doi.org/10.1161/ATVBAHA.117.309993 Multifractal and Lacunarity Analysis of Microvascular Morphology and Remodeling https://doi.org/10.1111/j.1549-8719.2010.00075.x see “Fractal and multifractal analysis: a review”
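The fractal dimension mentioned above is commonly estimated by box counting: count the boxes N(s) that intersect the vessel mask at several box sizes s and fit the slope of log N(s) versus log(1/s). A minimal dependency-free sketch (the function name and sanity-check sets are illustrative, not from any of the cited tools):

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Box-counting estimate of fractal dimension for a set of 2D
    pixel coordinates: slope of log N(s) against log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        # each pixel maps to the grid box that contains it
        boxes = {(x // s, y // s) for (x, y) in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # least-squares slope of the log-log fit
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Sanity checks: a straight "vessel" is ~1D, a filled patch ~2D
line = [(i, 0) for i in range(256)]
patch = [(i, j) for i in range(64) for j in range(64)]
```

A denser vascular network fills space more completely, pushing the estimate from 1 towards 2 (or 3 in volumes), which is why rarefaction after TBI shows up as a drop in local fractal dimension.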
  • 21. ‘Traditional’ Structural Vascular Biomarkers #2 Scheme illustrating the principle of vascular corrosion casts Scheme depicting the definition of vascular branch points. Each voxel of the vessel center line (black) with more than two neighboring voxels was defined as a vascular branch point. This results in branch point degrees (number of vessels joining in a certain branch point) of minimally three. In addition, two branch points were considered as a single one if the distance between them was below 2 mm. Of note, nearly all branch points had a degree of 3. Branch point degrees of four or even higher together accounted for far less than 1% of all branch points Scheme showing the definition of vessel diameter (a), vessel length (a), and vessel tortuosity (b). The segment diameter is defined as the average diameter of all single elements of a segment (a). The segment length is defined as the sum of the lengths of all single elements between two branch points. The segment tortuosity is the ratio between the effective distance le and the shortest distance ls between the two branch points associated with this segment. Schematic displaying the parameter extravascular distance, defined as the shortest distance of any given voxel in the tissue to the next vessel structure. (b) Color map indicating the extravascular distance in the cortex of a P10 WT mouse. Each voxel outside a vessel structure is assigned a color to depict its shortest distance to the nearest vessel structure.
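The tortuosity and extravascular-distance definitions above translate directly into code. A minimal sketch (function names and toy coordinates are illustrative; on real volumes the extravascular distance would come from a distance transform of the inverted vessel mask, e.g. scipy.ndimage.distance_transform_edt):

```python
import math

def segment_tortuosity(centerline):
    """Tortuosity as defined on the slide: effective distance l_e
    (arc length along the centerline) divided by the shortest
    distance l_s between the two end branch points."""
    l_e = sum(math.dist(a, b) for a, b in zip(centerline, centerline[1:]))
    l_s = math.dist(centerline[0], centerline[-1])
    return l_e / l_s

def extravascular_distance(voxel, vessel_voxels):
    """Shortest distance from a tissue voxel to any vessel voxel
    (brute force, for illustration only)."""
    return min(math.dist(voxel, v) for v in vessel_voxels)

straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]           # tortuosity 1.0
detour = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 0)]  # tortuosity > 1
```

Note how both metrics depend entirely on the segmentation: a missed voxel shortens l_e and inflates every nearby extravascular distance, which is the practical motivation for the voxel-level accuracy pursued in this deck.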
  • 22. ‘Traditional’ Structural Vascular Biomarkers #3: In a clinical context, you can see that in certain diseases (defined by vascular pathologies, or by the pathology X that you are interested in), the connectivity of the textbook case might get altered. And then you want to quantify this change as a function of disease severity, pharmacological treatment, or other intervention. Relationship between Variations in the Circle of Willis and Flow Rates in Internal Carotid and Basilar Arteries Determined by Means of Magnetic Resonance Imaging with Semiautomated Lumen Segmentation: Reference Data from 125 Healthy Volunteers H. Tanaka, N. Fujita, T. Enoki, K. Matsumoto, Y. Watanabe, K. Murase and H. Nakamura American Journal of Neuroradiology September 2006, 27 (8) 1770-1775; https://www.ncbi.nlm.nih.gov/pubmed/16971634 Cited by 124
  • 23. ‘Traditional’ Functional Vascular Biomarkers #1 Blood-flow-based biomarkers: a spatiotemporal (graph) deep learning model is needed → see the fMRI literature for some, or poach someone from Uber. C, Blood flow distribution simulated across the vascular network assuming a global perfusion value of 100 ml/min/100 g. D, Distribution of the partial pressure of oxygen (pO2) simulated across the vascular network using the finite element method model. E, TPM experimental measurements of pO2 in vivo using PtP-C343 dye. F, Quantitative comparison of simulated and experimental pO2 and SO2 distributions across the vascular network for a single animal. Traces represent arterioles and capillaries (red) and venules and capillaries (blue) as a function of the branching order from pial arterioles and venules, respectively. doi: 10.1523/JNEUROSCI.3555-14.2015 Cited by 93 F, Vessel type. G, Spatiotemporal evolution of simulated SO2 changes following forepaw stimulus.
  • 24. ‘Traditional’ Functional Vascular Biomarkers #2 Time-averaged velocity magnitudes of a measurement region are shown, together with the corresponding skeleton (black line), branch points (white circles), and end points (gray circles). The flow enters the measurement region from the right. Note that a non-linear color scale was used for the velocity magnitude. Multiple parabolic fits at several locations on the vessel centerline were performed to obtain a single characteristic velocity and diameter for each vessel segment. The time-averaged flow rate is assumed constant throughout the vessel segment. The valid region is bounded by 0.5 and 1.5× the median flow rate, and the red-encircled data points were not incorporated, due to a strongly deviating flow rate. Note that the fitted diameters and flow rates for the two data points on the far right are too large to be visible in the graph. Quantification of Blood Flow and Topology in Developing Vascular Networks Astrid Kloosterman, Beerend Hierck, Jerry Westerweel, Christian Poelma Published: May 13, 2014 https://doi.org/10.1371/journal.pone.0096856
  • 25. Vasculature imaging and video oximetry Methods for calculating retinal blood vessel oxygen saturation (sO2) by (a) the traditional LSF, and (b) our neural-network-based DSL with uncertainty quantification. Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi https://doi.org/10.1101/650259 Traditional approaches for quantifying sO2 often rely on analytical models that are fitted by the spectral measurements. These approaches in practice suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), for oximetry that is highly robust to experimental variations and, more importantly, provides uncertainty quantification for each sO2 prediction. Two-photon phosphorescence lifetime microscopy of retinal capillary plexus oxygenation in mice Ikbal Sencan; Tatiana V. Esipova; Mohammad A. Yaseen; Buyin Fu; David A. Boas; Sergei A. Vinogradov; Mahnaz Shahidi; Sava Sakadžić https://doi.org/10.1117/1.JBO.23.12.126501
  • 26. Neurovascular Disease Research: functioning of the “neurovascular unit” (NVU) is of interest Example of two-photon microscopy (TPM). The TPM provides high spatial resolution images such as an angiogram (left, scale bar: 100 μm) and multi-channel images, such as endothelial glycocalyx (green) with blood flow (red, scale bar: 10 μm) In terms of deep learning, you might think of multimodal/channel models and “context dependent” localization of dye signals Yoon and Yong Jeong (2019) https://doi.org/10.1007/s12272-019-01128-x
  • 28. Computational hemodynamic analysis requires segmentations with no gaps Towards a glaucoma risk index based on simulated hemodynamics from fundus images José Ignacio Orlando, João Barbosa Breda, Karel van Keer, Matthew B. Blaschko, Pablo J. Blanco, Carlos A. Bulant https://arxiv.org/abs/1805.10273 (revised 27 Jun 2018) https://ignaciorlando.github.io./ It has been recently observed that glaucoma induces changes in the ocular hemodynamics (Harris et al. 2013; Abegão Pinto et al. 2016). However, its effects on the functional behavior of the retinal arterioles have not been studied yet. In this paper we propose a first approach for characterizing those changes using computational hemodynamics. The retinal blood flow is simulated using a 0D model for a steady, incompressible non-Newtonian fluid in rigid domains. Finally, our MATLAB/C++/Python code and the LES-AV database are publicly released. To the best of our knowledge, our data set is the first to provide not only the segmentations of the arterio-venous structures but also diagnostics and clinical parameters at an image level. (a) Multiscale description of neurovascular coupling in the retina. The model inputs at the Macroscale (A) are the blood pressures at the inlet and outlet of the retinal circulation, Pin and Pout. The Mesoscale (B) focuses on arterioles, whose walls comprise endothelium and smooth muscle cells. The Microscale (C) entails the biochemistry at the cellular level that governs the change in smooth muscle shape. (b)
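To see why gaps are fatal, note that a 0D model treats the segmented network as an electrical-circuit analogue. A toy sketch under Newtonian Poiseuille assumptions (the cited paper uses a non-Newtonian fluid; all parameter values here are illustrative):

```python
import math

def poiseuille_resistance(radius, length, mu=3.5e-3):
    """Hydraulic resistance of a rigid cylindrical segment,
    R = 8*mu*L / (pi*r^4), in SI units (Pa*s, m)."""
    return 8.0 * mu * length / (math.pi * radius ** 4)

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# Toy network: a parent arteriole feeding two identical daughters.
# A segmentation gap disconnects this circuit, so no flow solution
# exists downstream of the gap.
r_parent = poiseuille_resistance(50e-6, 1e-3)
r_daughter = poiseuille_resistance(30e-6, 0.5e-3)
r_total = series(r_parent, parallel(r_daughter, r_daughter))
flow = 2000.0 / r_total  # Q = dP / R for an illustrative 2 kPa drop
```

The r⁴ term in the denominator is also why small segmentation errors in vessel radius dominate the downstream flow estimates.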
  • 29. Voxel → Mesh conversion is “trivial” with a correct segmentation/graph model Deep Marching Cubes: Learning Explicit Surface Representations Yiyi Liao, Simon Donné, Andreas Geiger (2018) https://avg.is.tue.mpg.de/research_projects/deep-marching-cubes http://www.cvlibs.net/publications/Liao2018CVPR.pdf https://www.youtube.com/watch?v=vhrvl9qOSKM Moreover, we showed that surface-based supervision results in better predictions in case the ground truth 3D model is incomplete. In future work, we plan to adapt our method to higher resolution outputs using octree techniques [Häne et al. 2017; Riegler et al. 2017; Tatarchenko et al. 2017] and integrate our approach with other input modalities Learning 3D Shape Completion from Laser Scan Data with Weak Supervision David Stutz, Andreas Geiger (2018) http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/1708.pdf Deep-learning-assisted Volume Visualization Hsueh-Chien Cheng, Antonio Cardone, Somay Jain, Eric Krokos, Kedar Narayan, Sriram Subramaniam, Amitabh Varshney IEEE Transactions on Visualization and Computer Graphics (2018) https://doi.org/10.1109/TVCG.2018.2796085 Although modern rendering techniques and hardware can now render volumetric data interactively, we still need a suitable feature space that facilitates natural differentiation of target structures and an intuitive and interactive way of designing visualizations
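The simplest form of this conversion emits one quad per voxel face that borders empty space; a smooth triangulated surface would instead come from marching cubes (e.g. skimage.measure.marching_cubes on the binary segmentation volume). A crude stdlib sketch of the voxel-face version (function name and toy inputs are illustrative):

```python
def exposed_faces(vessel_voxels):
    """Crude voxel-surface 'mesh': one quad per voxel face exposed to
    empty space under 6-connectivity. Serves as a mesh precursor; not
    a replacement for marching cubes."""
    occupied = set(vessel_voxels)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for (x, y, z) in occupied:
        for (dx, dy, dz) in offsets:
            if (x + dx, y + dy, z + dz) not in occupied:
                # store the voxel plus the outward normal of this face
                faces.append(((x, y, z), (dx, dy, dz)))
    return faces
```

An isolated voxel exposes all 6 faces, a two-voxel rod exposes 10; either way, every hole in the segmentation leaks spurious interior faces into the mesh, which is the “correct segmentation” caveat in the slide title.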
  • 30. Motivation Some scriptability is available for ImageJ in many languages https://imagej.net/Scripting Imaris had to listen to their customers but remains closed-source with poor integration to 3rd-party code. ITK: does someone still use it? How about ‘scaling’ all your and others’ manual work into an automatic solution? → data-driven vascular segmentation
  • 31. ‘Downstream uncertainty’ is reduced with near-perfect voxel segmentation Influence of image segmentation on one-dimensional fluid dynamics predictions in the mouse pulmonary arteries Mitchel J. Colebank, L. Mihaela Paun, M. Umar Qureshi, Naomi Chesler, Dirk Husmeier, Mette S. Olufsen, Laura Ellwein Fix NC State University, University of Glasgow, University of Wisconsin-Madison, Virginia Commonwealth University (Submitted on 14 Jan 2019) https://arxiv.org/abs/1901.04116 Computational fluid dynamics (CFD) models are emerging as tools for assisting in diagnostic assessment of cardiovascular disease. Recent advances in image segmentation have made subject-specific modelling of the cardiovascular system a feasible task, which is particularly important in the case of pulmonary hypertension (PH), which requires a combination of invasive and non-invasive procedures for diagnosis. Uncertainty in image segmentation can easily propagate to CFD model predictions, making uncertainty quantification crucial for subject-specific models. This study quantifies the variability of one-dimensional (1D) CFD predictions by propagating the uncertainty of network geometry and connectivity to blood pressure and flow predictions. We analyse multiple segmentations of an image of an excised mouse lung using different pre-segmentation parameters. A custom algorithm extracts vessel length, vessel radii, and network connectivity for each segmented pulmonary network. We quantify uncertainty in geometric features by constructing probability densities for vessel radius and length, and then sample from these distributions and propagate uncertainties of haemodynamic predictions using a 1D CFD model. Results show that variation in network connectivity is a larger contributor to haemodynamic uncertainty than vessel radius and length.
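The sampling-and-propagation idea can be illustrated with a Monte Carlo toy: draw radius and length from the segmentation-derived distributions and push each sample through the flow model. As a stand-in for the paper's 1D CFD model, this sketch uses a single-segment Poiseuille pressure drop (all distribution parameters are illustrative, not from the paper):

```python
import math
import random
import statistics

def pressure_drop(radius, length, flow=1e-12, mu=3.5e-3):
    """Poiseuille pressure drop dP = Q * 8*mu*L / (pi*r^4);
    a 0D stand-in for a full 1D haemodynamics solver."""
    return flow * 8.0 * mu * length / (math.pi * radius ** 4)

random.seed(0)
# Segmentation uncertainty, illustrative: radius 30 +/- 2 um, length 1 +/- 0.05 mm
samples = [
    pressure_drop(random.gauss(30e-6, 2e-6), random.gauss(1e-3, 50e-6))
    for _ in range(5000)
]
mean_dp = statistics.mean(samples)
cv = statistics.stdev(samples) / mean_dp  # relative output uncertainty
```

The r⁻⁴ dependence amplifies a roughly 7% radius uncertainty into a pressure-drop spread of well over 25%, making concrete why segmentation uncertainty must be propagated rather than ignored.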
  • 32. ‘Measurement uncertainties’ propagatetoyourdeeplearningmodelsaswell Arnold et al. (2017) Uncertainty Quantification in a Patient-Specific One- Dimensional Arterial Network Model: ensemble Kalman filter (EnKF)-Based Inflow Estimator http://doi.org/10.1115/1.4035918 Marquis et al. (2018) Practical identifiability and uncertainty quantification of a pulsatile cardiovascular model https://doi.org/10.1016/j.mbs.2018.07.001 Mathematical models are essential tools to study how the cardiovascular system maintains homeostasis. The utility of such models is limited by the accuracy of their predictions, which can be determined by uncertainty quantification (UQ). A challenge associated with the use of UQ is that many published methods assume that the underlying model is identifiable (e.g. that a one-to-one mapping exists from the parameter space to the model output). Păun et al. (2018) MCMC methods for inference in a mathematical model of pulmonary circulation https://doi.org/10.1111/stan.12132 The Delayed Rejection Adaptive Metropolis (DRAM) algorithm, coupled with constraint non‐ linear optimization, is successfully used to learn the parameter values and quantify the uncertaintyin the parameter estimates Schiavazzi et al. (2017) A generalized multi-resolution expansion for uncertainty propagation with application to cardiovascular modeling https://dx.doi.org/10.1016%2Fj.cma.2016.09.024 A general stochastic system may be characterized by a large number of arbitrarily distributed and correlated random inputs, and a limited support response with sharp gradients or event discontinuities. This motivates continued research into novel adaptive algorithms for uncertainty propagation, particularly those handling high dimensional, arbitrarily distributed random inputs and non-smoothstochasticresponses. Sankaran and Marsdenal. (2011) A stochastic collocation method for uncertainty quantification and propagation in cardiovascular simulations. 
http://doi.org/10.1115/1.4003259 In this work, we develop a general set of tools to evaluate the sensitivity of output parameters to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary conditions, geometrical parameters, or clinical data. These uncertainties result in a range of possible outputs which are quantified using probabilitydensity functions (PDFs). Tran et al. (2019) Uncertainty quantification of simulated biomechanical stimuli in coronary artery bypass grafts https://doi.org/10.1016/j.cma.2018.10.024 Prior studies have primarily focused on deterministic evaluations, without reporting variability in the model parameters due to uncertainty. This study aims to assess confidence in multi- scale predictions of wall shear stress and wall strain while accounting for uncertainty in peripheral hemodynamics and material properties. Boundary condition distributions are computed by assimilating uncertain clinical data, while spatial variations of vessel wall stiffness are obtained through approximation by a random field. We developed a stochastic submodeling approach to mitigate the computational burden of repeated multi-scale model evaluations to focus exclusively on the bypass grafts. Yin et al. (2019) One-dimensional modeling of fractional flow reserve in coronary artery disease: Uncertainty quantification and Bayesian optimization https://doi.org/10.1016/j.cma.2019.05.005 The computational cost to perform three-dimensional (3D) simulations has limited the use of CFD in most clinical settings. This could become more restrictive if one aims to quantify the uncertainty associated with fractional flow reserve (FFR) calculations due to the uncertainty in anatomic and physiologic properties as a significant number of 3D simulations is required to sample a relatively large parametric space. 
We have developed a predictive probabilistic model of FFR, which quantifies the uncertainty of the predicted values with significantly lower computational costs. Based on global sensitivity analysis, we first identify the important physiologic and anatomic parameters that impact the predictions of FFR.
  • 34. Neuronal branching graphs #1 Explicit representation of a neuron model. (left) The network can be represented as a graph structure, where nodes are end points and branch points. Each fiber is represented by a single edge. (right) The same network is shown with several common errors introduced. Dendrograms: representation of brain vasculature using circular dendrograms. A Method for the Symbolic Representation of Neurons, Maraver et al. (2018) https://doi.org/10.3389/fnana.2018.00106 NetMets: Software for quantifying and visualizing errors in biological network segmentation, Mayerich et al. (2012) http://doi.org/10.1186/1471-2105-13-S8-S7
  • 35. Neuronal branching graphs #2 Topological characterization of neuronal arbor morphology via sequence representation: I - motif analysis, Todd A. Gillette and Giorgio A. Ascoli, BMC Bioinformatics 2015 https://doi.org/10.1186/s12859-015-0604-2 A “grammar model” for deep learning? Tree size and complexity. a. Complexity of trees is limited by tree size. Shown here is the set of possible tree shapes for trees with 1 to 6 bifurcations. Additionally, the number of T nodes (red dots in sample trees) is always 1 more than the number of A nodes (green dots). Thus, size and the number or percent of C nodes (yellow dots) fully capture the node-type statistics.
  • 36. Neuronal branching graphs #3 NeurphologyJ: An automatic neuronal morphology quantification method and its application in pharmacological discovery, Shinn-Ying Ho et al., BMC Bioinformatics 2011, 12:230 https://doi.org/10.1186/1471-2105-12-230 The image enhancement process of NeurphologyJ does not remove thin and dim neurites. Shown here is an example image of mouse hippocampal neurons analyzed by NeurphologyJ. Notice that both thick neurites and thin/dim neurites (arrowheads) are preserved after the image enhancement process. The scale bar represents 50 μm. Neurite complexity can be deduced from neurite attachment points and ending points. Examples of neurons with different levels of neurite complexity are shown.
  • 37. Neuronal Circuit Tracing: similar to our challenges #1 Flexible Learning-Free Segmentation and Reconstruction for Sparse Neuronal Circuit Tracing, Ali Shahbazi, Jeffery Kinnison, Rafael Vescovi, Ming Du, Robert Hill, Maximilian Joesch, Marc Takeno, Hongkui Zeng, Nuno Macarico da Costa, Jaime Grutzendler, Narayanan Kasthuri, Walter J. Scheirer, July 06, 2018 https://doi.org/10.1101/278515 FLoRIN reconstructions of the Standard Rodent Brain (SRB) (top) and APEX2-labeled Rodent Brain sample (ALRB) (bottom) µCT X-ray volumes. (A) Within the SRB volume, cells and vasculature are visually distinct in the raw images, with vasculature appearing darker than cells. (B) Individual structures may be extremely close (such as the cells and vasculature in this example), making reconstruction efforts prone to merge errors.
  • 38. Neuronal Circuit Tracing: similar to our challenges #2 Dense neuronal reconstruction through X-ray holographic nano-tomography, Alexandra Pacureanu, Jasper Maniates-Selvin, Aaron T. Kuan, Logan A. Thomas, Chiao-Lin Chen, Peter Cloetens, Wei-Chung Allen Lee, May 30, 2019. https://doi.org/10.1101/653188 3D U-Net everywhere
  • 40. Think in terms of systems. The machine learning model is just a part of all this in your labs. A ton of stacks just sitting on your hard drives. It takes a lot of work to annotate the vasculature voxel-by-voxel. “AI” buzzword MODEL. The following slides will showcase various ways of how this buzz has been done “in practice”. A spoiler: we would like to have a semi-supervised model. doi: 10.1038/s41592-018-0115-y We want to predict the vessel / non-vessel mask* for each voxel. * (i.e. foreground-background, binary segmentation)
  • 41. Practical System Parts, highlighted later on as well: Active Learning. A ton of stacks just sitting on your hard drives. It takes a lot of work to annotate the vasculature voxel-by-voxel. “AI” buzzword MODEL. doi: 10.1038/s41592-018-0115-y You would like to keep researchers in the loop with the system and make it better as you do more experiments and acquire more data. But you have so many stacks on your hard drive: how do you select the stacks/slices to annotate in order to improve the model the most? Check the Active Learning slides later.
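One common way to operationalize that selection (a minimal sketch of a standard active-learning heuristic, not code from any of the cited papers; all function names here are illustrative) is to rank unlabeled stacks by the mean voxelwise entropy of the model's vessel-probability predictions and send the most uncertain ones to the annotators first:

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Voxelwise binary entropy of predicted vessel probabilities."""
    p = np.clip(probs, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def rank_stacks_for_annotation(stacks_probs, top_k=2):
    """Indices of the top_k stacks the model is least certain about."""
    scores = [prediction_entropy(p).mean() for p in stacks_probs]
    return [int(i) for i in np.argsort(scores)[::-1][:top_k]]

# Toy example: three "stacks" of predicted vessel probabilities.
rng = np.random.default_rng(0)
confident = np.full((8, 8, 8), 0.99)        # model is sure almost everywhere
uncertain = np.full((8, 8, 8), 0.5)         # model is maximally unsure
mixed = rng.uniform(0.3, 0.7, (8, 8, 8))    # somewhere in between
ranking = rank_stacks_for_annotation([confident, uncertain, mixed], top_k=2)
print(ranking)  # [1, 2] — the all-0.5 stack is ranked first
```

Entropy is only one acquisition function; ensemble disagreement or Monte Carlo dropout variance are common alternatives with the same "rank, then annotate the top" loop.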
  • 42. Practical System Parts, highlighted later on as well: Proofreading. We want to predict the vessel / non-vessel mask* for each voxel. * (i.e. foreground-background, binary segmentation) “AI” buzzword MODEL. Your segmentation model will, with 100% certainty, make some erroneous predictions, and you would like to “show the errors” to the system so it can learn from them and predict better next time.
  • 43. Proofreading. Labelling. Thinking in terms of a product. If you would like to release this all as an open-source software/toolbox or as a spin-off startup, instead of just sharing your network on Github. “AI” buzzword MODEL. Active Learning. The Final Mask. You could now expose APIs to the parts needed, and get a modular system where you can focus on segmentation, and maybe your collaborators are really into building good front-ends for proofreading and labelling?
  • 44. Annotated data as the bottleneck. Even with the semi-supervised approach, you will most likely never face a situation where you have too many volumes with vasculature ground truths. Thus, the faster and more intuitive your proofreader / annotator / labelling tool is, the faster you can make progress with your model performance. → UX Matters. UX as in User Experience, as most likely your professor has never used this word. https://hackernoon.com/why-ux-design-must-be-the-foundation-of-your-software-product-f66e431cc7b4
  • 45. ‘Steal ideas’ for nice-to-use systems around you. Voxeleron Orion Workflow Advantages https://www.voxeleron.com/orion-workflow-advantages/ Click on your inliers/outliers interactively and Orion updates the spline fittings for you in real-time. Polygon-RNN https://youtu.be/S1UUR4FlJ84 https://github.com/topics/annotation-tool ● wkentaro/labelme ● Labelbox/Labelbox ● microsoft/VoTT ● opencv/cvat
  • 46. Make biology a game: gamification ticked off from the buzzword bingo here. https://doi.org/10.1016/j.chb.2016.12.074 Eyewire Elite Gameplay | https://eyewire.org/explore https://phys.org/news/2019-06-video-gamers-brand-proteins.html
  • 47. The slide set to follow will allow multiple ways to solve the segmentation challenge, as well as to start building the “product” in modules, “ablation study friendly”, so no need to try to make it all at once... necessarily. Bio/neuroscientists can have a look at this classic: Can a biologist fix a radio?—Or, what I learned while studying apoptosis https://doi.org/10.1016/S1535-6108(02)00133-2 - Cited by 371
  • 48. Integrate to something and exploit the existing open-source code. USID and Pycroscopy -- Open frameworks for storing and analyzing spectroscopic and imaging data, Suhas Somnath, Chris R. Smith, Nouamane Laanait, Rama K. Vasudevan, Anton Ievlev, Alex Belianinov, Andrew R. Lupini, Mallikarjun Shankar, Sergei V. Kalinin, Stephen Jesse, Oak Ridge National Laboratory (submitted on 22 Mar 2019) https://arxiv.org/abs/1903.09515 https://www.youtube.com/channel/UCyh-7XlL-BuymJD7vdoNOvw pycroscopy https://pycroscopy.github.io/pycroscopy/about.html pycroscopy is a python package for image processing and scientific analysis of imaging modalities such as multi-frequency scanning probe microscopy, scanning tunneling spectroscopy, x-ray diffraction microscopy, and transmission electron microscopy. pycroscopy uses a data-centric model wherein the raw data collected from the microscope and the results from analysis and processing routines are all written to standardized hierarchical data format (HDF5) files for traceability, reproducibility, and provenance. OME https://www.openmicroscopy.org/ Har-Gil, H., Golgher, L., Israel, S., Kain, D., Cheshnovsky, O., Parnas, M., & Blinder, P. (2018). PySight: plug and play photon counting for fast continuous volumetric intravital microscopy. Optica, 5(9), 1104-1112. https://doi.org/10.1364/OPTICA.5.001104
  • 49. Integrate to something and exploit the existing open-source code. VMTK http://www.vmtk.org/ by Orobix https://github.com/vmtk/vmtk Python 3. VMTK ADD-ON FOR BLENDER, November 13, 2017: EPFL has developed an add-on for Blender that loads centerlines generated by VMTK into Blender, and writes meshes from Blender. http://www.vmtk.org/tutorials/
  • 50. You could for example improve the segmentation to be used with VMTK, and let VMTK/Blender still visualize the stacks. For example, we could start by doing this the “deep learning” way outlined in this slideshow. If you feel that this does not really work well for your needs, you can work on it yourself, or ask for improvements from the Orobix team.
  • 51. Blender integration with meshes? BioBlender is a software package built on the open-source 3D modeling software Blender. BioBlender is the result of a collaboration, driven by the SciVis group at the CNR in Pisa (Italy), between scientists of different disciplines (biology, chemistry, physics, computer sciences) and artists, using Blender in a rigorous but at the same time creative way. http://www.bioblender.org/ https://github.com/mcellteam/cellblender https://github.com/NeuroMorph-EPFL/NeuroMorph/tree/master/NeuroMorph_CenterLines_CrossSections Processes center lines generated by the Vascular Modeling Toolkit (VMTK) and performs calculations in Blender using these center lines. Includes tools to clean meshes, export meshes to VMTK, and import center lines generated by VMTK. Also includes tools to generate cross-sectional surfaces, calculate surface areas of the mesh along the center line, and project spherical objects (such as vesicles) or surface areas onto the center line. Tools are also provided for detecting bouton swellings. Data can be exported for analysis.
  • 53. Fluorescence microscopy networks exist for “smaller blobs”. DeepFLaSH, a deep learning pipeline for segmentation of fluorescent labels in microscopy images, Dennis Segebarth et al., November 2018 https://doi.org/10.1101/473199 Here we present and evaluate DeepFLaSH, a unique deep learning pipeline to automatize the segmentation of fluorescent labels in microscopy images. The pipeline allows training and validation of label-specific convolutional neural network (CNN) models that can be uploaded to an open-source CNN-model library. As there is no ground truth for fluorescent signal segmentation tasks, we evaluated the CNN with respect to inter-coding reliability. Similarity analysis showed that CNN predictions highly correlated with segmentations by human experts. DeepFLaSH runs as a guided, hassle-free open-source tool on a cloud-based virtual notebook (Google Colab http://colab.research.google.com, in a Jupyter Notebook) with free access to high computing power, and requires no machine learning expertise. Label-specific CNN models, validated on the basis of inter-coding approaches, may become a new benchmark for feature segmentation in neuroscience. These models will allow transferring expert performance in image feature analysis from one lab to any other. Deep segmentation can better interpret feature-to-noise borders, can work on the whole dynamic range of bit-values and exhibits consistent performance. This should increase both objectivity and reproducibility of image feature analysis. DeepFLaSH is suited to create CNN models for high-throughput microscopy techniques and allows automatic analysis of large image datasets with expert-like performance and at super-human speed. With a nice notebook deployment example.
  • 54. Vasculature networks: multimodal, i.e. “multi-dye”, 3D CNNs if possible. HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation, Jose Dolz, https://arxiv.org/abs/1804.02967 (9 April 2018) https://www.github.com/josedolz/HyperDenseNet We propose HyperDenseNet, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems [MRI modalities: MR-T1, PD, MR-T2, FLAIR]. Each imaging modality has a path, and dense connections occur not only between the pairs of layers within the same path, but also between those across different paths. A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy, Arash Dadkhah, Jun Zhou, Nusrat Yeasmin, Shuliang Jiao https://sci-hub.tw/https://doi.org/10.1117/12.2289211 (2018) Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional and molecular information of living tissue at micrometer scale. An artery-specific fluorescent dye for studying neurovascular coupling, Zhiming Shen, Zhongyang Lu, Pratik Y. Chhatbar, Philip O’Herron, and Prakash Kara https://dx.doi.org/10.1038%2Fnmeth.1857 (2012) Astrocytes are intimately linked to the function of the inner retinal vasculature. A flat-mounted retina labelled for astrocytes (green) and retinal vasculature (pink). - from Prof Erica Fletcher
  • 55. Multimodal segmentation: glial cells, Aβ fibrils, etc. provide ‘context’ for vasculature and vice versa. Diffuse and vascular Aβ deposits induce astrocyte endfeet retraction and swelling in TG arcAβ mice, starting at early-stage pathology. Triple-stained for GFAP, laminin and Aβ/APP. https://doi.org/10.1007/s00401-011-0834-y In vivo imaging of the neurovascular unit in Stroke, Multiple Sclerosis (MS) and Alzheimer’s Disease. In vivo imaging of the neurovascular unit in CNS disease https://www.researchgate.net/publication/265418103_In_vivo_imaging_of_the_neurovascular_unit_in_CNS_disease
  • 56. Neurovascular Unit (NVU): astrocyte / neuron / vasculature interplay. (A) Immunostaining depiction of components of the neurovascular unit (NVU). The astrocytes (stained with rhodamine-labeled GFAP) are shown in red. The neurons are stained with fluorescein-tagged NSE, shown in green, and the blood vessels are stained with PECAM, shown in blue. Note the location of the foot processes around the vasculature. (B) Histochemical localization of β-galactosidase expression in rat brain following lateral ventricular infusion of Ad5/CMV-β-galactosidase (magnification ×1000). Note staining of astrocytes and astrocytic foot processes surrounding a blood vessel, emulating the exploded section of the immunostained brain slice. Schematic representation of a neurovascular unit with astrocytes being the central processor of neuronal signals, as depicted in both panels A and B. Harder et al. (2018) Regulation of Cerebral Blood Flow: Response to Cytochrome P450 Lipid Metabolites http://doi.org/10.1002/cphy.c170025
  • 57. NVU: examples of dyes/labels involved #1 CALCIUM: OGB-1, neuronal Ca2+. ASTROCYTE: SR-101, astrocytic Ca2+. ARTERY: Alexa Fluor 633 or FITC/Texas Red, vessel diameter. Neuron (OGB-1) and arteriole response (Alexa Fluor 633) to drifting grating in cat visual cortex. https://dx.doi.org/10.1038%2Fnmeth.1857 Low-intensity afferent neural activity caused vasodilation in the absence of astrocyte Ca2+ transients. https://dx.doi.org/10.1038%2Fjcbfm.2015.141 Astrocytes trigger rapid vasodilation following photolysis of caged Ca2+. https://dx.doi.org/10.3389%2Ffnins.2014.00103
  • 58. NVU: “physical cheating” for artery-vein classification https://doi.org/10.1016/j.rmed.2013.02.004 https://doi.org/10.1182/blood-2018-01-824185 https://doi.org/10.1364/BOE.9.002056 http://doi.org/10.5772/intechopen.80888 Traces of relative Hb and HbO2 concentrations for a human subject during three consecutive cycles of cuff inflation and deflation. http://doi.org/10.1063/1.3398450 sO2 and blood flow on four orders of artery-vein pairs http://doi.org/10.1117/1.3594786
  • 59. NVU: oxygen probes for multiphoton microscopy. Examples of in vivo two-photon PLIM oxygen sensing of platinum porphyrin-coumarin-343. a Maximum intensity projection image montage of a blood vessel entering the bone marrow (BM) from the bone. Bone (blue) and blood vessels (yellow) are delineated with collagen second harmonic generation signal and Rhodamine B-dextran fluorescence, respectively. b Measurement of pO2 in cortical microvasculature. Left: measured pO2 values in microvasculature at various depths (colored dots), overlaid on the maximum intensity projection image of the vasculature structure (grayscale). Right: composite image showing a projection of the imaged vasculature stack. Red arrows mark pO2 measurement locations in the capillary vessels at 240 μm depth. Orange arrows point to the consecutive branches of the vascular tree, from pial arteriole (bottom left arrow) to the capillary and then to the connection with the ascending venule (top right arrow). Scale bars: 200 μm. Chelushkin and Tunik (2019) 10.1007/978-3-030-05974-3_6 Devor et al. (2012) Frontiers in optical imaging of cerebral blood flow and metabolism http://doi.org/10.1038/jcbfm.2011.195 Optical imaging of oxygen availability and metabolism. (A) Two-photon partial pressure of oxygen (pO2) imaging in cerebral tissue. Each plot shows baseline pO2 as a function of the radial distance from the center of the blood vessel—diving arteriole (left) or surfacing venule (right)—for a specific cortical depth range.
  • 60. Dye engineering: a field of its own, and check for new exciting dyes. Bright AIEgen–Protein Hybrid Nanocomposite for Deep and High-Resolution In Vivo Two-Photon Brain Imaging, Shaowei Wang, Fang Hu, Yutong Pan, Lai Guan Ng, Bin Liu, Department of Chemical and Biomolecular Engineering, National University of Singapore, Advanced Functional Materials, 24 May 2019 https://doi.org/10.1002/adfm.201902717 NIR-II Excitable Conjugated Polymer Dots with Bright NIR-I Emission for Deep In Vivo Two-Photon Brain Imaging Through Intact Skull, Shaowei Wang, Jie Liu, Guangxue Feng, Lai Guan Ng, Bin Liu, Department of Chemical and Biomolecular Engineering, National University of Singapore, Advanced Functional Materials, 21 January 2019 https://doi.org/10.1002/adfm.201808365
  • 61. When quantum dots get old, enter polymer dots. In vivo vascular imaging in mice after labelling with polymer dots (CNPPV, PFBT, PFPV), fluorescein and QD605 semiconductor quantum dots; scale bars = 100 µm. (Biomed. Opt. Express 10.1364/BOE.10.000584, University of Texas, Ahmed M. Hassan et al. (2019)) https://physicsworld.com/a/polymer-dots-image-deep-into-the-brain/ Furthermore, we justify the use of pdots over conventional fluorophores for multiphoton imaging experiments in the 800–900 nm excitation range due to their increased brightness relative to quantum dots, organic dyes, and fluorescent proteins. An important caveat to consider, however, is that pdots were delivered intravenously in our studies, and labeling neural structures located in high-density extravascular brain tissue could pose a challenge due to the relatively large diameters of pdots (~20-30 nm). Recent efforts have produced pdot nanoparticles with sub-5 nm diameters, yet the yield from these preparations is still quite low.
  • 62. What if you have the ‘dye labels’ from different experiments, and you would like to combine them into the training of a single network? Learning with Multitask Adversaries using Weakly Labelled Data for Semantic Segmentation in Retinal Images, Oindrila Saha, Rachana Sathish, Debdoot Sheet, 13 Dec 2018 (modified: 15 Apr 2019) https://openreview.net/forum?id=HJe6f0BexN In the case of retinal images, data-driven learning-based algorithms have been developed for segmenting anatomical landmarks like vessels and the optic disc, as well as pathologies like microaneurysms, hemorrhages, hard exudates and soft exudates. The aspiration is to learn to segment all such classes using only a single fully convolutional neural network (FCN), while the challenge is that there is no single training dataset with all classes annotated. We solve this problem by training a single network using separate weakly labelled datasets. Essentially we use an adversarial learning approach in addition to the classically employed objective of distortion loss minimization for semantic segmentation using an FCN, where the objectives of the discriminators are to learn to (a) predict which of the classes are actually present in the input fundus image, and (b) distinguish between manual annotations vs. segmented results for each of the classes. The first discriminator works to enforce the network to segment those classes which are present in the fundus image although they may not have been annotated, i.e. all retinal images have vessels while pathology datasets may not have annotated them in the dataset. The second discriminator contributes to making the segmentation result as realistic as possible. We experimentally demonstrate using the weakly labelled datasets of DRIVE, containing only annotations of vessels, and IDRiD, containing annotations for lesions and the optic disc.
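The supervised part of such a "single network, separate weakly labelled datasets" setup can be sketched with a masked per-class loss, so that each dataset only supervises the classes it actually annotates (a hedged illustration, not the authors' code; the adversarial discriminators are omitted and all names are illustrative):

```python
import numpy as np

def masked_bce(pred, target, class_mask, eps=1e-12):
    """Per-class binary cross-entropy, averaged only over the classes
    this dataset annotates (class_mask[c] == 1); unannotated classes
    contribute nothing to the gradient."""
    p = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))  # (C, H, W)
    per_class = bce.reshape(bce.shape[0], -1).mean(axis=1)      # (C,)
    m = np.asarray(class_mask, dtype=float)
    return float((per_class * m).sum() / m.sum())

# Toy example: 3 classes (vessels, lesions, optic disc), 4x4 predictions.
pred = np.random.default_rng(1).uniform(0.01, 0.99, (3, 4, 4))
target = (pred > 0.5).astype(float)  # pretend labels agree with predictions
# A DRIVE-like dataset annotates only vessels (class 0):
loss_drive = masked_bce(pred, target, [1, 0, 0])
# An IDRiD-like dataset annotates lesions and the optic disc (classes 1, 2):
loss_idrid = masked_bce(pred, target, [0, 1, 1])
```

During training, batches from each dataset would simply carry their own class mask; the shared backbone still sees all images.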
  • 64. Overview of the methods: blood vessels as a special example of curvilinear structure object segmentation. Blood vessel segmentation algorithms — Review of methods, datasets and evaluation metrics, Sara Moccia, Elena De Momi, Sara El Hadji, Leonardo S. Mattos, Computer Methods and Programs in Biomedicine, May 2018 https://doi.org/10.1016/j.cmpb.2018.02.001 No single segmentation approach is suitable for all the different anatomical regions or imaging modalities, thus the primary goal of this review was to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable methods can be chosen according to the specific task.
  • 65. U-Net: you will see this repeated many times. U-Net: Convolutional Networks for Biomedical Image Segmentation, Olaf Ronneberger, Philipp Fischer, Thomas Brox (submitted on 18 May 2015) https://arxiv.org/abs/1505.04597 Cited by 77,660. U-Net: deep learning for cell counting, detection, and morphometry, Thorsten Falk et al. (2019), Nature Methods 16, 67–70 (2019) https://doi.org/10.1038/s41592-018-0261-2 Cited by 1,496. The ‘vanilla U-Net’ is typically the baseline to beat in many articles, and its modified versions are being proposed as the novel state-of-the-art networks. https://towardsdatascience.com/u-net-b229b32b4a71 The architecture looks like a ‘U’, which justifies its name. This architecture consists of three sections: the contraction (encoder, downsampling part), the bottleneck, and the expansion (decoder, upsampling part) section. Skip connections link encoder and decoder stages.
  • 66. U-Net 2D example: image sizes and number of feature maps. Four downsampling “stages” with 2x2 max pooling take the 572 x 572 px input down to 32 x 32 px, followed by four upsampling “stages” (ENCODER → DECODER). Skip connections: first-stage encoder filter outputs (activation maps) are passed to the final, 4th decoder stage; 2nd-stage encoder outputs are passed to the 3rd decoder stage; 3rd to 2nd; 4th to 1st.
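The 572 → 32 px progression follows from the original U-Net's unpadded ("valid") 3x3 convolutions, two per stage, each trimming 2 px, followed by 2x2 max pooling that halves the size; a quick sketch to verify the arithmetic:

```python
def unet_encoder_sizes(input_px=572, stages=4):
    """Spatial size entering each encoder stage of the original
    (valid-convolution) U-Net: two unpadded 3x3 convs (-2 px each),
    then 2x2 max pooling (halves the remaining size)."""
    sizes = [input_px]
    s = input_px
    for _ in range(stages):
        s = (s - 4) // 2
        sizes.append(s)
    return sizes

print(unet_encoder_sizes())  # [572, 284, 140, 68, 32]
```

With 'same' padding, as used in many modern reimplementations, the sizes would instead simply halve per stage, which is why input sizes divisible by 2^stages (e.g. 512) are popular there.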
  • 67. 2D retinal vasculature: 2D U-Net as the “baseline”. Retina blood vessel segmentation with a convolutional neural network (U-Net), orobix/retina-unet, Keras http://vmtklab.orobix.com/ https://orobix.com/ as in the company from Italy behind the VMTKLab
  • 68. Joint segmentation and vascular reconstruction: marry CNNs with graph (non-Euclidean) CNNs, “grammar models”, or something even better. Deep Vessel Segmentation By Learning Graphical Connectivity, Seung Yeon Shin, Soochahn Lee, Il Dong Yun, Kyoung Mu Lee https://arxiv.org/abs/1806.02279 (submitted on 6 Jun 2018) We incorporate a graph convolutional network into a unified CNN architecture, where the final segmentation is inferred by combining the different types of features. The proposed method can be applied to expand any type of CNN-based vessel segmentation method to enhance the performance. Learning about the strong relationship that exists between neighborhoods is not guaranteed in existing CNN-based vessel segmentation methods. The proposed vessel graph network (VGN) utilizes a GCN together with a CNN to address this issue. Overall network architecture of the VGN, comprising the CNN, graph convolutional network, and inference modules. “Grammar” as in: if you know how molecules are composed (e.g. the SMILES model), you can constrain the model to have only physically possible connections. Well, we do not exactly have that luxury, and we need to learn the graph constraints from data (but have no annotations at the moment for edge nodes). Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules http://doi.org/10.1021/acscentsci.7b00572 some authors from Toronto, including David Duvenaud.
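The VGN code itself is not reproduced here, but the standard graph-convolution propagation rule such methods build on (Kipf & Welling style, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)) is short enough to sketch over a toy vessel graph; everything below is illustrative, not taken from the paper:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: normalize the adjacency matrix
    (with self-loops) symmetrically, propagate node features, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)            # ReLU

# Toy vessel graph: 4 nodes in a chain (e.g. sampled along a centerline).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 8))    # 8 features per node
W = np.random.default_rng(1).normal(size=(8, 16))   # learned weight matrix
H_next = gcn_layer(A, H, W)
print(H_next.shape)  # (4, 16)
```

Each node's new features mix its own CNN-derived features with those of its graph neighbours, which is exactly the inter-neighborhood relationship the VGN abstract says plain CNNs fail to guarantee.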
  • 69. “Grammar models” possible to a certain extent. Remember that healthy and pathological vasculature might be “quite different” (a highly quantitative term). Mitchell G. Newberry et al. Self-Similar Processes Follow a Power Law in Discrete Logarithmic Space, Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.122.158303 Although blood vessels also branch dichotomously, random asymmetry in branching disperses vessel diameters from any specific ratios. On a database of 1569 blood vessel radii measured from a single mouse lung, αc and αd produced statistically indistinguishable estimates (Table I), independent of the chosen λ, and are therefore both likely accurate. The mutual consistency between the estimators suggests that the distribution of blood vessel measurements is effectively scale invariant despite the underlying branching. Quantitating the Subtleties of Microglial Morphology with Fractal Analysis, Frontiers in Cellular Neuroscience 7(3):3 http://doi.org/10.3389/fncel.2013.00003
  • 70. Grammar, as you can guess, is used in language modeling. Kim Martineau | MIT Quest for Intelligence, May 29, 2019 http://news.mit.edu/2019/teaching-language-models-grammar-makes-them-smarter-0529 Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State, Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy (submitted on 8 Mar 2019) https://arxiv.org/abs/1903.03260 We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state. To do so, we examine model behavior on artificial sentences containing a variety of syntactically complex structures. We find evidence that the LSTMs trained on large datasets represent syntactic state over large spans of text in a way that is comparable to the Recurrent Neural Network Grammars (RNNG, Dyer et al. 2016, Cited by 157), while the LSTM trained on the small dataset does not, or does so only weakly. Structural Supervision Improves Learning of Non-Local Grammatical Dependencies, Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy (submitted on 3 Mar 2019) https://arxiv.org/abs/1903.00943 Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models versus two models that represent hierarchical structure and deploy it in left-to-right processing: Recurrent Neural Network Grammars (RNNGs) (Dyer et al. 2016, Cited by 157) and an incrementalized version of the Parsing-as-Language-Modeling configuration from Charniak et al. (2016). Structural supervision thus provides data efficiency advantages over purely string-based training of neural language models in acquiring human-like generalizations about non-local grammatical dependencies.
  • 71. Vascular tree branching statistics: constrain with a “grammar model”? #1 Intraspecific scaling laws of vascular trees, Yunlong Huo and Ghassan S. Kassab, Journal of the Royal Society Interface, published 15 June 2011 https://doi.org/10.1098/rsif.2011.0270 - Cited by 87 A fundamental physics-based derivation of intraspecific scaling laws of vascular trees has not been previously realized. Here, we provide such a theoretical derivation for the volume–diameter and flow–length scaling laws of intraspecific vascular trees. In conjunction with the minimum energy hypothesis, this formulation also results in diameter–length, flow–diameter and flow–volume scaling laws. The intraspecific scaling predicts the volume–diameter power relation with a theoretical exponent of 3, which is validated by the experimental measurements for the three major coronary arterial trees in swine. This scaling law as well as others agrees very well with the measured morphometric data of vascular trees in various other organs and species. This study is fundamental to the understanding of morphological and haemodynamic features in a biological vascular tree and has implications for vascular disease. Relation between normalized stem diameter (Ds/(Ds)max) and normalized crown volume (Vc/(Vc)max) for vascular trees of various organs and species, corresponding to those trees in Table 1. The solid line represents the least-squares fit of all the experimental measurements (exponent of 2.91, r2 = 0.966).
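Such scaling laws are straightforward to check on your own reconstructed trees: the exponent b in V = a * D^b is the slope of a log-log least-squares fit (a minimal sketch on synthetic data, not the authors' analysis):

```python
import numpy as np

def fit_power_exponent(x, y):
    """Least-squares slope of log(y) vs log(x): the exponent b in y = a*x^b."""
    b, _ = np.polyfit(np.log(x), np.log(y), 1)
    return b

# Synthetic stem diameters and crown volumes following V ~ D^3 with
# multiplicative noise, mimicking the volume-diameter power relation.
rng = np.random.default_rng(42)
D = rng.uniform(0.1, 1.0, 500)
V = 2.0 * D**3 * np.exp(rng.normal(0.0, 0.05, 500))
b = fit_power_exponent(D, V)
print(round(b, 2))  # close to the theoretical exponent of 3
```

On real data, segmentation bias in thin-vessel diameters feeds directly into such exponent estimates, which is one reason thin-structure preservation matters for the downstream vascular statistics.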
  • 72. Vascular tree branching statistics: constrain with a “grammar model”? #2 Branching Pattern of the Cerebral Arterial Tree, Jasper H. G. Helthuis, Tristan P. C. van Doormaal, Berend Hillen, Ronald L. A. W. Bleys, Anita A. Harteveld, Jeroen Hendrikse, Annette van der Toorn, Mariana Brozici, Jaco J. M. Zwanenburg, Albert van der Zwan, The Anatomical Record (17 October 2018) https://doi.org/10.1002/ar.23994 Quantitative data on branching patterns of the human cerebral arterial tree are lacking in the 1.0–0.1 mm radius range. We aimed to collect quantitative data in this range, and to study if the cerebral arterial tree complies with the principle of minimal work (Murray's law). Data showed a large variation in branching pattern parameters (asymmetry-ratio, area-ratio, length-radius-ratio, tapering). Part of the variation may be explained by the variation in measurement techniques, number of measurements and location of measurement in the vascular tree. This study confirms that the cerebral arterial tree complies with the principle of minimum work. These data are essential in the future development of more accurate mathematical blood flow models. Relative frequencies of (A) asymmetry-ratio, (B) area-ratio, (C) length-to-radius-ratio, (D) tapering.
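Murray's law, the minimum-work principle tested above, relates parent and daughter radii at a bifurcation through a cube law (r_parent^3 = Σ r_daughter^3), which makes a handy sanity check for reconstructed bifurcations (an illustrative sketch, not the paper's code):

```python
def murray_parent_radius(r_daughters):
    """Parent radius predicted by Murray's law: r_parent^3 = sum(r_i^3)."""
    return sum(r**3 for r in r_daughters) ** (1.0 / 3.0)

# Symmetric bifurcation: each daughter is 2^(-1/3) ≈ 0.794 of the parent,
# so two daughters of radius 0.794 should reconstruct a parent of ~1.0.
r_parent = murray_parent_radius([0.794, 0.794])
print(round(r_parent, 3))  # ≈ 1.0
```

Large deviations of measured parent radii from this prediction can flag either genuinely non-Murray (e.g. pathological) branches or segmentation errors at the bifurcation.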
  • 73. Branch-based functional measures? Changsi Cai et al. (2018) Stimulation-induced increases in cerebral blood flow and local capillary vasoconstriction depend on conducted vascular responses https://doi.org/10.1073/pnas.1707702115 Functional vessel dilation in the mouse barrel cortex. (A) A two-photon image of the barrel cortex of a NG2-DsRed mouse at 150 µm depth. The penetrating arterioles (p.a.s) branch out a capillary horizontally (∼first order). Further branches are defined as second- and third-order capillaries. Pericytes are labeled with a red fluorophore (NG2-DsRed) and the vessel lumen with FITC-dextran (green). ROIs are placed across the vessel to allow measurement of the vessel diameter (colored bars). (Scale bar: 10 µm.) Measurement of blood vessel diameter and red blood cell (RBC) flux in the retina. A, Confocal image of a whole-mount retina labeled for the blood vessel marker isolectin (blue), the contractile protein α-SMA (red), and the pericyte marker NG2 (green). Blood vessel order in the superficial vascular layer is indicated. First-order arterioles (1) branch from the central retinal artery. Each subsequent branch (2-5) has a higher order. Venules (V) connect with the central retinal vein. Scale bar, 100 μm.
  • 74. 2D retinal vasculature: datasets available. This also highlights how the availability of the freely available databases DRIVE and STARE, with a lot of annotations, led to a lot of methodological papers from “non-retina” researchers. De et al. (2016) A Graph-Theoretical Approach for Tracing Filamentary Structures in Neuronal and Retinal Images https://dx.doi.org/10.1109/TMI.2015.2465962
• 75. 2D Microvasculature CNNs with Graphs
Towards End-to-End Image-to-Tree for Vasculature Modeling
Manish Sharma, Matthew C.H. Lee, James Batten, Michiel Schaap, Ben Glocker
Google, Imperial College, Heartflow. MIDL 2019 Conference https://openreview.net/forum?id=ByxVpY5htN
This work explores an end-to-end image-to-tree approach for extracting accurate representations of vessel structures, which may be beneficial for diagnosis of stenosis (blockages) and modeling of blood flow. Current image segmentation approaches capture only an implicit representation, while this work utilizes a subscale U-Net to extract explicit tree representations from vascular scans.
• 77. SS-OCT Vasculature Segmentation
Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images
Xiaoxiao Liu, Lei Bi, Yupeng Xu, Dagan Feng, Jinman Kim, and Xun Xu
Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine
Biomedical Optics Express Vol. 10, Issue 4, pp. 1601-1612 (2019) https://doi.org/10.1364/BOE.10.001601
Motivated by the leading segmentation performance in medical images from the use of deep learning methods, in this study we proposed the adoption of a deep learning method, RefineNet, to segment the choroidal vessels from SS-OCT images. We quantitatively evaluated the RefineNet on 40 SS-OCT images consisting of ~3,900 manually annotated choroidal vessel regions. We achieved a segmentation agreement (SA) of 0.840 ± 0.035 with clinician 1 (C1) and 0.823 ± 0.027 with clinician 2 (C2). These results were higher than the inter-observer variability measured in SA between C1 and C2 of 0.821 ± 0.037.
Currently, researchers have limited imaging modalities to obtain information about the choroidal vessels. Traditional indocyanine green angiography (ICGA) is the gold standard in clinical practice for detecting abnormality in the choroidal vessels. ICGA provides 2D images of the choroid vasculature, which can show the exudation or filling defects. However, ICGA does not provide 3D choroidal structure or the volume of the whole choroidal vessel network, and ICGA images overlap retinal vessels and choroidal vessels together, thereby making it hard to independently observe and analyze the choroidal vessels quantitatively. OCT Angiography (OCTA) can clearly show the blood flow from the superficial and deep retinal capillary network, as well as from the retinal pigment epithelium to the superficial choroidal vascular network; however, it cannot show the blood flow in deep choroidal vessels.
https://arxiv.org/abs/1806.05034
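The slide quotes a "segmentation agreement (SA)" score without defining it. As a rough sketch (not necessarily the paper's exact metric), the common Dice overlap is one way to quantify agreement between two binary vessel masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# toy example: two 4x4 squares shifted by one row
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
ref  = np.zeros((8, 8), dtype=bool); ref[3:7, 2:6] = True
score = dice(pred, ref)   # 2*12 / (16+16) = 0.75
```

The same function also covers the inter-observer comparison (C1 vs. C2 masks), which is why agreement above inter-observer variability is a meaningful bar.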
• 78. Fundus/OCT/OCTA multimodal quality enhancement
Generating retinal flow maps from structural optical coherence tomography with artificial intelligence
Cecilia S. Lee, Ariel J. Tyring, Yue Wu, Sa Xiao, Ariel S. Rokem, Nicolaas P. DeRuyter, Qinqin Zhang, Adnan Tufail, Ruikang K. Wang & Aaron Y. Lee
Department of Ophthalmology, University of Washington, Seattle, WA, USA; eScience Institute, University of Washington, Seattle, WA, USA
Scientific Reports, Article number: 5694 (2019) https://doi.org/10.1038/s41598-019-42042-y
Using human-generated annotations as the ground truth limits the learning ability of the AI, given that it is problematic for AI to surpass the accuracy of humans, by definition. In addition, expert-generated labels suffer from inherent inter-rater variability, thereby limiting the accuracy of the AI to at most variable human discriminative abilities. Thus, the use of more accurate, objectively generated annotations would be a key advance in machine learning algorithms in diverse areas of medicine.
Given the relationship of OCT and OCTA, we sought to explore deep learning's ability to first infer between structure and retinal vascular function, then generate an OCTA-like en-face image from a structural OCT image alone. By taking OCT as input and using the more cumbersome, expensive modality, OCTA, as an objective training target, deep learning could overcome limitations of the second modality and circumvent the need for generating labels.
Unlike current AI models, which are primarily targeted towards classification or segmentation of images, to our knowledge this is the first application of artificial neural networks in ophthalmic imaging to generate a new image based on data from a different imaging modality. In addition, this is the first example in medical imaging, to our knowledge, where expert annotations for training deep learning models are bypassed by using objective, functional flow measurements.
"FITC" in 2-PM Context / "QD" in 2-PM Context
Learn the mapping FITC → QD (with QD as supervision) to improve the quality of already-acquired FITC stacks. Unsupervised conditional image-to-image translation is possible also, but probably trickier.
• 79. Electron microscopy: similar reconstruction pipeline for vasculature
High-precision automated reconstruction of neurons with flood-filling networks
Michał Januszewski, Jörgen Kornfeld, Peter H. Li, Art Pope, Tim Blakely, Larry Lindsey, Jeremy Maitin-Shepard, Mike Tyka, Winfried Denk & Viren Jain
Nature Methods volume 15, pages 605–610 (2018) https://doi.org/10.1038/s41592-018-0049-4
We introduce a CNN architecture which is linearly equivariant (a generalization of invariance defined in the next section) to 3D rotations about patch centers. To the best of our knowledge, this paper provides the first example of a CNN with linear equivariance to 3D rotations and 3D translations of voxelized data. By exploiting the symmetries of the classification task, we are able to reduce the number of trainable parameters using judicious weight tying. We also need less training- and test-time data augmentation, since some aspects of 3D geometry are already 'hard-baked' into the network. As a proof of concept we try segmentation as a 3D problem, feeding 3D image chunks into a 3D network. We use an architecture based on Weiler et al. (2017)'s steerable version of the FusionNet. It is a U-Net with added skip connections within the encoder and decoder paths to encourage better gradient flow.
Effective automated pipeline for 3D reconstruction of synapses based on deep learning
Chi Xiao, Weifu Li, Hao Deng, Xi Chen, Yang Yang, Qiwei Xie and Hua Han
BMC Bioinformatics (13 July 2018) 19:263 https://doi.org/10.1186/s12859-018-2232-0
Five basic steps implemented by the authors:
1) Image registration, e.g. An Unsupervised Learning Model for Deformable Medical Image Registration
2) ROI detection, e.g. Weighted Hausdorff Distance: A Loss Function For Object Localization
3) 3D CNNs, e.g. DeepMedic for brain tumor segmentation
4a) Dijkstra shortest path, e.g. shiluyuan/Reinforcement-Learning-in-Path-Finding
4b) Old-school algorithm refinement, e.g.
3D CRF, Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation
5) Mesh reconstruction, e.g. Robust Surface Reconstruction via Dictionary Learning; Deep-learning-assisted Volume Visualization; Deep Marching Cubes: Learning Explicit Surface Representations
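Step 4a above (Dijkstra shortest path, e.g. for tracing a path through a vesselness/cost map) can be sketched in a few lines with the standard-library heap; the grid and its costs are toy values, not from any of the cited pipelines:

```python
import heapq

def dijkstra_grid(cost, start, goal):
    """Shortest path on a 2D cost grid (4-connected).
    The cost of a path is the sum of cell costs it enters, including start."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk the predecessor chain back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# toy "vesselness cost" grid: low cost along the vessel, high in background
grid = [[1, 9, 9],
        [1, 1, 9],
        [9, 1, 1]]
path, total = dijkstra_grid(grid, (0, 0), (2, 2))
```

In a real centerline-tracing step, the cost per voxel would be derived from an inverted vesselness or probability map so that the shortest path hugs the vessel.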
• 81. Vasculature Imaging Artifacts: movement artifact
In vivo MPM images of a capillary. Because MPM images are acquired by raster scanning, images at different depths (z) are acquired with a time lag (t). Unlabeled red blood cells moving through the lumen cause dark spots and streaks and result in variable patterns within a single vessel.
Haft-Javaherian et al. (2019) https://doi.org/10.1371/journal.pone.0213539
• 82. Vasculature Imaging Artifacts: "Vessel Breakage" / intensity inhomogeneity
A novel method for identifying a graph-based representation of 3-D microvascular networks from fluorescence microscopy image stacks
S. Almasi, X. Xu, A. Ben-Zvi, B. Lacoste, C. Gu et al.
Medical Image Analysis, 20(1):208–223, February 2015. http://dx.doi.org/10.1016/j.media.2014.11.007
Vasculature image quality: an example of false fractions in the structure caused by imaging imperfections, and an area of more artifacts, in a maximum-intensity projection (MIP) slice of a 3-D fluorescent microscopy image of microvasculature.
Joint volumetric extraction and enhancement of vasculature from low-SNR 3-D fluorescence microscopy images
Sepideh Almasi, Ayal Ben-Zvi, Baptiste Lacoste, Chenghua Gu, Eric L. Miller, Xiaoyin Xu
Pattern Recognition, Volume 63, March 2017, Pages 710-718 https://doi.org/10.1016/j.patcog.2016.09.031
Highlights: * We introduce intensity-based features to directly segment artifacted images of vasculature. * The segmentation method is shown to be robust to non-uniform illumination and noise of mixed type. * This method is free of a priori statistical and geometrical assumptions.
For fluorescence signals, adaptive optics, quantum dots and three-photon microscopy are not always feasible.
In this maximum-intensity projection of a 3-D fluorescence microscopy image of murine cranial tissue, miscellaneous imaging artifacts are visible: uneven illumination (upper vs. lower parts), non-homogeneous intensity distribution inside the vessels (visible in the larger vessels located at the top right corner), low-SNR regions (lower areas), high spatial density or closeness of vessels (mainly in the center-upper parts), reduced contrast at edges (visible as blurs mostly for the central vessels), broken or faint vessels (lower vessels), and low-frequency background variations caused by scattered light (at higher-density regions).
• 83. Multi-dye experiments for 'self-supervised training': quantum dots vs. fluorescein dextran (FITC)
CAM vessel fluorescence followed over time for Q705PEGa and 500 kDa FITC–dextran. 500 kDa FITC–dextran (A) and Q705PEGa (B) were co-injected and images were taken at the designated times.
The use of quantum dots for analysis of chick CAM vasculature
JD Smith, GW Fisher, AS Waggoner… - Microvascular Research, 2007 - Elsevier. Cited by 69
Intravitally injected QDs were found to be biocompatible and were kept in circulation over the course of 4 days without any observed deleterious effects. QD vascular residence time was tunable through QD surface chemistry modification. We also found that use of QDs with higher emission wavelengths (>655 nm) virtually eliminated all chick-derived autofluorescence and improved depth-of-field imaging. QDs were compared to FITC–dextrans, a fluorescent dye commonly used for imaging CAM vessels. QDs were found to image vessels as well as or better than FITC–dextrans at 2–3 orders of magnitude lower concentration. We also demonstrated that QDs are fixable with low fluorescence loss and thus can be used in conjunction with histological processing for further sample analysis.
i.e. which would give you a nicer mask with Otsu's thresholding, for example? Easier to obtain ground-truth labels from QD stacks and use those to train for FITC stacks, or multimodal FITC+QD networks if there is complementary information available? Inpainting masks ('vessel breakage') from the difference between the QD and FITC stacks?
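The "nicer mask with Otsu's thresholding" idea can be sanity-checked on synthetic intensities. Below is a minimal NumPy implementation of Otsu's method (pick the histogram threshold that maximizes between-class variance), applied to a made-up QD-like intensity distribution with a dark background and bright, well-separated vessel voxels; the statistics are invented for illustration:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(np.ravel(img), bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                  # voxels at or below each bin
    w1 = w0[-1] - w0                      # voxels above each bin
    mu_cum = np.cumsum(hist * centers)
    mu0 = mu_cum / np.maximum(w0, 1e-12)  # mean of the lower class
    mu1 = (mu_cum[-1] - mu_cum) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
    return centers[np.argmax(between[:-1])]

rng = np.random.default_rng(0)
# synthetic "QD-like" stack: dark background, bright sparse vessel voxels
background = rng.normal(10.0, 2.0, 5000)
vessel = rng.normal(100.0, 5.0, 500)
stack = np.concatenate([background, vessel])
t = otsu_threshold(stack)
mask = stack > t          # recovers essentially all vessel voxels
```

With the broader, dimmer FITC distribution the two modes overlap more and the same threshold misses faint vessels, which is the intuition behind using the QD channel to supervise the FITC one.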
• 84. Multi-dye experiments for optimized SNR for all vessel sizes
Todorov et al. (2019) Automated analysis of whole brain vasculature using machine learning https://doi.org/10.1101/613257
A-C, Maximum intensity projections of the automatically reconstructed tiling scans of WGA (A) and Evans blue (B) signal in the same sample reveal all details of the perfused vascular network in the merged view (C). D-F, Zoom-ins from the marked region in (C) showing fine details. G-L, Confocal microscopy confirms that WGA and EB dyes stain the vascular wall (G-I, maximum intensity projections of 112 µm) and that the vessels retain their tubular shape (J-L, single slice of 1 µm).
Furthermore, owing to the dual labeling, we maximized the signal-to-noise ratio (SNR) for each dye independently to avoid saturation of differently sized vessels when only a single channel is used. We achieved this by independently optimizing the excitation and emission power. For WGA, we reached a higher SNR for small capillaries; bigger vessels, however, were barely visible (Supporting Fig. 3). For EB, the SNR for small capillaries was substantially lower but larger vessels reached a high SNR (Supporting Fig. 3). Thus, integrating the information from both channels allows homogeneous staining of the entire vasculature throughout the whole brain, and results in a high SNR for high-quality segmentations and analysis.
• 85. Play with your Dextran Daltons?
An eNOStag-GFP mouse was injected with two dextrans of different sizes (red = dextran 2 MDa; purple = dextran 10 kDa) and Hoechst (blue = 615 Da), and single-plane images are presented here. 10 min after the injection, presence in the blood and extravasation are seen in the same image. Hoechst extravasates almost immediately out of the blood vessels and is taken up by the surrounding cells (CI). Dextran 10 kDa (CII) can be seen in vessels and in the tumor interstitium. Dextran 2 MDa (CIII) can be found in the vessels. 40 min after injection (CIV), dextran 10 kDa disappears from the blood (CV), and the fluorescent intensity of dextran 2 MDa was also diminished (CVI). Scale bar = 100 µm. https://dx.doi.org/10.3791%2F55115 (2018)
If you have extra channels, and normally you would like to use 10 kDa dextran but for some reason cannot use something with stronger fluorescence that stays better inside the vessels, you could acquire stacks just for the vasculature segmentation, with the higher molecular weights acting as "physical labels" for vasculature.
• 86. z / Depth crosstalk due to suboptimal optical sectioning
In vivo three-photon microscopy of subcortical structures within an intact mouse brain
Nicholas G. Horton, Ke Wang, Demirhan Kobat, Catharine G. Clark, Frank W. Wise, Chris B. Schaffer & Chris Xu
Nature Photonics volume 7, pages 205–209 (2013) https://doi.org/10.1038/nphoton.2012.336
The fluorescence of three-photon excitation (3PE) falls off as 1/z^4 (where z is the distance from the focal plane), whereas the fluorescence of two-photon excitation (2PE) falls off as 1/z^2. Therefore, 3PE dramatically reduces the out-of-focus background in regions far from the focal plane, improving the signal-to-background ratio (SBR) by orders of magnitude when compared to 2PE.
http://biomicroscopy.bu.edu/research/nonlinear-microscopy
http://parkerlab.bio.uci.edu/microscopy_construction/build_your_own_twophoton_microscope.htm
"Background vasculature" is seen in the layers "in front of it", i.e. the z-crosstalk. Nonlinear 2-PM reduces this, and 3-PM even more. When you get the binary mask, how do you in the end reconstruct your mesh? From 1-PM, your vessels would most likely look very thick in the z-dimension, i.e. a way too anisotropic reconstruction?
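The quoted falloff laws make the background-suppression advantage of 3PE easy to quantify: the ratio of 2PE to 3PE out-of-focus signal is (1/z^2)/(1/z^4) = z^2, so the further a background layer is from the focal plane, the more 3PE suppresses it relative to 2PE. A trivial numeric check (z in arbitrary normalized units of focal depth, not µm):

```python
import numpy as np

# Out-of-focus fluorescence falloff away from the focal plane:
# 2PE signal ~ 1/z^2, 3PE signal ~ 1/z^4.
z = np.array([2.0, 5.0, 10.0])   # distance from the focal plane (normalized)
f2pe = 1.0 / z**2
f3pe = 1.0 / z**4
suppression = f2pe / f3pe        # extra background rejection of 3PE vs 2PE = z^2
```

So a layer ten focal depths away contributes 100x less relative background under 3PE than under 2PE, which is the "orders of magnitude" SBR improvement the paper refers to.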
• 87. Depth resolution
We still have labeled in 2D, so some boundary ambiguity exists. Canny edge (radius = 1): Canny on the ground truth. Gamma-corrected version of the input slice: now you see the dimmer vessels better. The upper part of the slice is clearly behind (on the z axis), as it is dimmer, but it has been annotated as a vessel also on this plane. This is not necessarily a problem if some sort of consistency exists in the labeling, which is not necessarily the case between different annotators; then you might need the label-noise solutions outlined later in this slide set. The volume rendering of the ground truth of course now looks thicker than the original unsegmented volume. Multiplying the input volume with this ground-truth mask gives a nice rendering, of course. We want to suppress the background noise, and make the voxel → mesh conversion easier with clean segmentations.
• 88. Single-photon confocal microscope sectioning: worse than 2-PM but still quite good
Images captured by confocal microscopy, showing FITC-dextran (green) and DiI-labeled RBCs (red) in a retinal flat mount. (A, C) Merged green/red images from the superficial section of the retina. (B, D) Red RBC fluorescence in the deeper capillary layers of the retina. The arrow in (A) points to an arteriole that branches down from the superficial layer into the capillary layers shown in (B).
Comparison of the fluorescence microscopy techniques (widefield, confocal, two-photon): http://candle.am/microscopy/
Measurement of Retinal Blood Flow Rate in Diabetic Rats: Disparity Between Techniques Due to Redistribution of Flow. Leskova et al. (2013) http://doi.org/10.1167/iovs.13-11915 (rat retina, superficial layers; rat retina, capillary layer)
Kornfield and Newman (2014) 10.1523/JNEUROSCI.1971-14.2014. Vessel density in the three vascular layers. Schematic of the trilaminar vascular network showing the first-order arteriole (1) and venule (V) and the connectivity of the superficial (S), intermediate (I), and deep (D) vascular layers and their locations within the retina. GCL, ganglion cell layer; IPL, inner plexiform layer; INL, inner nuclear layer; OPL, outer plexiform layer; ONL, outer nuclear layer; PR, photoreceptors.
• 89. z / Depth: attenuation noise as a function of depth
Effects of depth-dependent noise on line-scanning particle image velocimetry (LS-PIV) analysis. A, Three-dimensional rendering of cortical vessels imaged with TPLSM demonstrating depth-dependent decrease in SNR. The blood plasma was labeled with Texas Red-dextran and an image stack over the top 1000 µm was acquired at 1 µm spacing along the z-axis starting from the brain surface. B, 100 µm-thick projections of regions 1–4 in panel (A). RBC velocities were measured along the central axis of vessels shown in red boxes, with red arrows representing orientation of flow. The raw line-scan data (L/S) are depicted to the right of each field and labeled with their respective SNR. Corresponding LS-PIV analyses are depicted to the far right.
Accuracy of LS-PIV analysis with noise and increasing speed. Top, simulation line-scan data with a low level of normally distributed noise with SNR of 8 (A), 1 (B), 0.5 (C), and 0.33 (D). Middle, LS-PIV analysis of the line-scan data (blue dots). The red line represents actual particle speed. Bottom, percent error of LS-PIV compared with actual velocity.
Tyson N. Kim et al. (2012) http://doi.org/10.1371/journal.pone.0038590 - Cited by 46
• 90. 'Intersecting vessels in 2-PM' #1: even though the centerlines of the actual vessels in 3D do not intersect, the vessel masks might
Calivá et al. (2015) A new tool to connect blood vessels in fundus retinal images https://doi.org/10.1109/EMBC.2015.7319356 - Cited by 8
In the 2D case, the vessel crossings are harder to resolve than in our 3D case. Slice #10/26: it seems like the big and the smaller vessel are going to join? Slice #19/26: it seems the small vessel actually was touching the bigger one?
• 91. Cross-channel spectral crosstalk
New red-fluorescent calcium indicators for optogenetics, photoactivation and multi-color imaging
Oheim M, van 't Hoff M, Feltz A, Zamaleeva A, Mallet J-M, Collot M. 2014. Biochimica et Biophysica Acta (BBA) - Molecular Cell Research 1843, Calcium Signaling in Health and Disease: 2284–2306. http://dx.doi.org/10.1016/j.bbamcr.2014.03.010
https://github.com/petteriTeikari/mixedImageSeparation
https://github.com/petteriTeikari/spectralSeparability/wiki
• 92. Color Preprocessing: Spectral Unmixing for microscopy
See the "spectral crosstalk" slide above. In more general terms you want to do (blind) source separation, "the cocktail party problem", for 2-PM microscopy data, i.e. you might have some astrocyte/calcium/etc. signal on your "vasculature channel". You could just apply ICA here and hope for perfect unmixing, or think of something more advanced. Again, seek inspiration from elsewhere: the hyperspectral imaging field has the same challenge to solve.
Improved Deep Spectral Convolution Network For Hyperspectral Unmixing With Multinomial Mixture Kernel and Endmember Uncertainty
Savas Ozkan and Gozde Bozdagi Akar (submitted on 27 Mar 2019) https://arxiv.org/abs/1904.00815 https://github.com/savasozkan/dscn
We propose a novel framework for hyperspectral unmixing by using an improved deep spectral convolution network (DSCN++) combined with endmember uncertainty. DSCN++ is used to compute high-level representations which are further modeled with a Multinomial Mixture Model to estimate abundance maps. In the reconstruction step, a new trainable uncertainty term based on a nonlinear neural network model is introduced to provide robustness to endmember uncertainty. For the optimization of the coefficients of the multinomial model and the uncertainty term, a Wasserstein Generative Adversarial Network (WGAN) is exploited to improve stability.
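Before reaching for blind ICA, the common baseline when the reference emission spectra of the fluorophores are known (e.g. from single-dye control acquisitions) is plain linear least-squares unmixing. A minimal sketch; the spectra and abundances below are made-up numbers, not real fluorophore data:

```python
import numpy as np

# Hypothetical reference emission spectra (columns), sampled at 4 detector bands.
# In practice these come from single-fluorophore control measurements.
S = np.array([[0.90, 0.10],
              [0.60, 0.30],
              [0.20, 0.80],
              [0.05, 0.60]])        # shape: (bands, fluorophores)

true_abund = np.array([2.0, 1.0])   # per-pixel fluorophore abundances
measured = S @ true_abund           # mixed multi-channel pixel (no noise here)

# Non-blind unmixing: least-squares solve for abundances, clipped to >= 0
# (a crude stand-in for a proper non-negative least-squares solver).
est, *_ = np.linalg.lstsq(S, measured, rcond=None)
est = np.clip(est, 0.0, None)
```

Applied per voxel this removes, for example, calcium-indicator bleed-through from the vasculature channel, with blind methods (ICA, the DSCN++ above) reserved for the case where the mixing spectra are unknown.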
• 93. Anisotropic Volumes: z-resolution not as good as xy
3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes
Siqi Liu, Daguang Xu, S. Kevin Zhou, Thomas Mertelmeier, Julia Wicklein, Anna Jerebko, Sasa Grbic, Olivier Pauly, Weidong Cai, Dorin Comaniciu (submitted on 23 Nov 2017) https://arxiv.org/abs/1711.08580
Elastic Boundary Projection for 3D Medical Image Segmentation
Tianwei Ni et al. (CVPR 2019) http://victorni.me/pdf/EBP_CVPR2019/1070.pdf
In this paper, we bridge the gap between 2D and 3D using a novel approach named Elastic Boundary Projection (EBP). The key observation is that, although the object is a 3D volume, what we really need in segmentation is to find its boundary, which is a 2D surface. Therefore, we place a number of pivot points in the 3D space, and for each pivot, we determine its distance to the object boundary along a dense set of directions. This creates an elastic shell around each pivot which is initialized as a perfect sphere. We train a 2D deep network to determine whether each ending point falls within the object, and gradually adjust the shell so that it converges to the actual shape of the boundary and thus achieves the goal of segmentation.
From voxel-based tricks to NURBS-like parametrization for "subvoxel" mesh/CFD analysis?
• 94. Not a lot of papers address (multiphoton) microscopy (micro)vasculature specifically, thus most of the following slides are from outside vasculature processing, but relevant if you want to work on "next generation" vascular segmentation networks.
• 95. Non-DL 'classical approaches'
Segmentation of Vasculature From Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images
Russell Bates, Benjamin Irving, Bostjan Markelc, Jakob Kaeppler, Graham Brown, Ruth J. Muschel, et al.
Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, U.K.
IEEE Transactions on Medical Imaging (Volume: 38, Issue: 1, Jan. 2019) https://doi.org/10.1109/TMI.2017.2725639
Here, we present a method for the segmentation of tumor vasculature in 3D fluorescence microscopic images using signals from the endothelial and surrounding cells. We show that our method can provide complete and semantically meaningful segmentations of complex vasculature using a supervoxel-Markov random field approach.
A potential area for future improvement is the limitations imposed by our edge potentials in the MRF, which are tuned rather than learned. The expectation of the existence of fully annotated training sets for many applications is unrealistic. Future work will focus on the suitability of semi-supervised methods to achieve fully supervised levels of performance on sparse annotations. It is possible that this may be done in the current framework using label-transduction methods. Interesting work in transduction and interactive learning for sparsely labeled superpixel microscopy images has also been undertaken by Su et al. (2016). A method that can take sparse image annotations and use them to leverage information from a large set of unlabeled parts of the image to create high-quality segmentations would be an extremely powerful tool. This would have very broad applications in novel imaging experiments where large training sets are not readily available and where there is a high time-cost in producing such a training set.
• 96. Initial effort with a hybrid "2D/3D ZNN" with CPU acceleration
Deep Learning Convolutional Networks for Multiphoton Microscopy Vasculature Segmentation
Petteri Teikari, Marc Santos, Charissa Poon, Kullervo Hynynen (submitted on 8 Jun 2016) https://arxiv.org/abs/1606.02382
• 97. Microvasculature CNNs #1
Microvasculature segmentation of arterioles using deep CNN
Y. M. Kassim et al. (2017), Computational Imaging and Vis Analysis (CIVA) Lab https://doi.org/10.1109/ICIP.2017.8296347
Accurate segmentation for separating microvasculature structures is important in quantifying the remodeling process. In this work, we utilize a deep convolutional neural network (CNN) framework for obtaining robust segmentations of microvasculature from epifluorescence microscopy imagery of mice dura mater. Due to the inhomogeneous staining of the microvasculature, different binding properties of vessels under fluorescence dye, uneven contrast and low texture content, traditional vessel segmentation approaches obtain sub-optimal accuracy. We proposed a CNN architecture which is adapted to obtaining robust segmentation of microvasculature structures. By considering overlapping patches along with multiple convolutional layers, our method obtains good vessel differentiation for accurate segmentations.
• 98. Microvasculature CNNs #2
Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks
Russell Bates, Benjamin Irving, Bostjan Markelc, Jakob Kaeppler, Ruth Muschel, Vicente Grau, Julia A. Schnabel
Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom; CRUK/MRC Oxford Centre for Radiation Oncology, Department of Oncology, University of Oxford, United Kingdom; Division of Imaging Sciences and Biomedical Engineering, King's College London, United Kingdom; Perspectum Diagnostics, Oxford, United Kingdom
(Submitted on 26 May 2017) https://arxiv.org/abs/1705.09597
In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation-and-thinning pipeline, or by direct tracking. Neither of these methods is well suited to microscopy images of tumor vasculature. In order to address this we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators.
• 99. Microvasculature CNNs #3
Automatic Graph-based Modeling of Brain Microvessels Captured with Two-Photon Microscopy
Rafat Damseh, Philippe Pouliot, Louis Gagnon, Sava Sakadzic, David Boas, Farida Cheriet et al. (2018)
Institute of Biomedical Engineering, Ecole Polytechnique de Montreal https://doi.org/10.1109/JBHI.2018.2884678
Graph models of cerebral vasculature derived from 2-photon microscopy have been shown to be relevant to the study of brain microphysiology. Automatic graphing of these microvessels remains problematic due to the vascular network complexity and 2-photon sensitivity limitations with depth. In this work, we propose a fully automatic processing pipeline to address this issue. The modeling scheme consists of a fully convolutional neural network (FCN) to segment microvessels, a 3D surface model generator and a geometry contraction algorithm to produce graphical models with a single connected component. In quantitative assessment using NetMets metrics at a tolerance of 60 μm, false negative and false positive geometric error rates are 3.8% and 4.2%, respectively, whereas false negative and false positive topological error rates are 6.1% and 4.5%, respectively.
One important issue that could be addressed in future work is the difficulty in generating watertight surface models; the employed contraction algorithm is not applicable to surfaces lacking such characteristics. Introducing a geometric contraction not restricted to such conditions on the obtained surface model could be an area of further investigation.
• 100. Microvasculature CNNs #4
Fully Convolutional DenseNets for Segmentation of Microvessels in Two-photon Microscopy
Rafat Damseh et al. (2019) https://doi.org/10.1109/EMBC.2018.8512285
Segmentation of microvessels measured using two-photon microscopy has been studied in the literature with limited success due to uneven intensities associated with optical imaging and shadowing effects. In this work, we address this problem using a customized version of a recently developed fully convolutional neural network, namely FC-DenseNets (see DenseNet, Cited by 3527). To train and validate the network, manual annotations of 8 angiograms from two-photon microscopy were used.
However, this study suggests that in order to exploit the output of our deep model in further geometrical and topological analysis, further investigations might be needed to refine the segmentation. This could be done by either adding extra processing blocks on the output of the model or incorporating 3D information in its training process.
• 101. Microvasculature CNNs #5
A Deep Learning Approach to 3D Segmentation of Brain Vasculature
Waleed Tahir, Jiabei Zhu, Sreekanth Kura, Xiaojun Cheng, David Boas, and Lei Tian (2019)
Department of Electrical and Computer Engineering, Boston University https://www.osapublishing.org/abstract.cfm?uri=BRAIN-2019-BT2A.6
The segmentation of blood vessels is an important preprocessing step for the quantitative analysis of brain vasculature. We approach the segmentation task for two-photon brain angiograms using a fully convolutional 3D deep neural network. We employ a DNN to learn a statistical model relating the measured angiograms to the vessel labels. The overall structure is derived from V-net [Milletari et al. 2016], which consists of a 3D encoder-decoder architecture. The input first passes through the encoder path, which consists of four convolutional layers. Each layer comprises residual connections, which speed up convergence, and 3D convolutions with multi-channel convolution kernels, which retain 3D context.
Loss functions like mean squared error (MSE) and mean absolute error (MAE) have been used widely in deep learning; however, they cannot promote sparsity and are thus unsuitable for sparse objects. In our case, less than 5% of the total volume in the angiogram consists of blood vessels. Thus, the object under study is not only sparse, there is also a large class imbalance between the number of foreground vs. background voxels. Thus we resort to balanced cross entropy as the loss function [HED, 2015], which not only promotes sparsity, but also caters for the class imbalance.
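The class-balanced cross entropy mentioned above (HED-style: weight each class by the prevalence of the opposite class) can be sketched in NumPy. This is a simplified illustration of the weighting idea, not the authors' exact implementation:

```python
import numpy as np

def balanced_bce(p, y, eps=1e-7):
    """Class-balanced binary cross entropy (HED-style).
    beta = fraction of background voxels, so the rare foreground class
    is up-weighted and the loss is not swamped by background voxels."""
    p = np.clip(p, eps, 1.0 - eps)    # predicted foreground probabilities
    y = np.asarray(y, dtype=float)    # binary vessel labels
    beta = 1.0 - y.mean()             # background fraction (e.g. ~0.95 here)
    loss = -(beta * y * np.log(p)
             + (1.0 - beta) * (1.0 - y) * np.log(1.0 - p))
    return loss.mean()

y = np.zeros(100); y[:5] = 1          # 5% vessel voxels, as in the angiograms
perfect = balanced_bce(np.where(y > 0, 0.99, 0.01), y)
awful   = balanced_bce(np.where(y > 0, 0.01, 0.99), y)
```

With beta ≈ 0.95 the 5 foreground voxels contribute as much to the loss as the 95 background voxels, which is exactly the imbalance correction the abstract argues MSE/MAE lack.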
• 102. Microvasculature CNNs #6: State-of-the-Art (SOTA)?
Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models
Mohammad Haft-Javaherian, Linjing Fang, Victorine Muse, Chris B. Schaffer, Nozomi Nishimura, Mert R. Sabuncu
Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, United States of America
March 2019 https://doi.org/10.1371/journal.pone.0213539 https://arxiv.org/abs/1801.00880
Data: https://doi.org/10.7298/X4FJ2F1D (1.141 GB). Code: https://github.com/mhaft/DeepVess (TensorFlow / MATLAB)
We explored the use of convolutional neural networks to segment 3D vessels within volumetric in vivo images acquired by multiphoton microscopy. We evaluated different network architectures and machine learning techniques in the context of this segmentation problem. We show that our optimized convolutional neural network architecture with a customized loss function, which we call DeepVess, yielded a segmentation accuracy that was better than state-of-the-art methods, while also being orders of magnitude faster than manual annotation.
While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction together, such as the method proposed by Bates et al. (2017), in a single complete end-to-end learning framework might achieve higher centerline accuracy levels.
Comparison of DeepVess and the state-of-the-art methods. 3D rendering of (A) the expert's manual and (B) DeepVess segmentation results.
Comparison of DeepVess and the gold-standard human expert segmentation results. We used 50% dropout during test time [MC Dropout] and computed Shannon's entropy for the segmentation prediction at each voxel to quantify the uncertainty in the automated segmentation.
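The MC-Dropout uncertainty step above can be sketched as follows (an illustrative NumPy version, not the DeepVess code): keep dropout active at test time, run T stochastic forward passes, average the predicted probabilities, and compute the voxel-wise Shannon entropy of the mean binary prediction.

```python
import numpy as np

def mc_dropout_uncertainty(samples, eps=1e-7):
    """Voxel-wise predictive entropy from Monte Carlo dropout.

    samples: array of shape (T, ...) holding T stochastic forward
             passes (dropout left on at test time), each a map of
             foreground probabilities.

    Returns (mean_prob, entropy), where entropy is Shannon's entropy
    of the mean binary prediction in nats: 0 = certain,
    log(2) ~= 0.693 = maximally uncertain.
    """
    p = samples.mean(axis=0)                 # MC estimate of p(vessel)
    p = np.clip(p, eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return p, entropy
```

Thresholding this entropy map gives a cheap way to flag voxels (typically vessel boundaries and dim capillaries) that deserve manual review.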
  • 103. MicrovasculatureCNNs #7: Dual-Dye Network for vasculature. Automated analysis of whole brain vasculature using machine learning. Mihail Ivilinov Todorov, Johannes C. Paetzold, Oliver Schoppe, Giles Tetteh, Velizar Efremov, Katalin Völgyi, Marco Düring, Martin Dichgans, Marie Piraud, Bjoern Menze, Ali Ertürk (posted April 18, 2019) https://doi.org/10.1101/613257 http://discotechnologies.org/VesSAP Tissue clearing methods enable imaging of intact biological specimens without sectioning. However, reliable and scalable analysis of such large imaging data in 3D remains a challenge. Towards this goal, we developed a deep learning-based framework to quantify and analyze the brain vasculature, named Vessel Segmentation & Analysis Pipeline (VesSAP). Our pipeline uses a fully convolutional network with a transfer learning approach for segmentation. We systematically analyzed vascular features of whole brains, including their length, bifurcation points and radius at the micrometer scale, by registering them to the Allen mouse brain atlas. We report the first evidence of secondary intracranial collateral vascularization in CD1-Elite mice and found reduced vascularization in the brainstem as compared to the cerebrum. VesSAP thus enables unbiased and scalable quantification of the angioarchitecture of the cleared intact mouse brain and yields new biological insights related to vascular brain function.
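One of the vascular features VesSAP quantifies, bifurcation points, can be illustrated with a minimal NumPy sketch (this is not the VesSAP pipeline, just the underlying idea): on a binary centerline (skeleton) volume, a centerline voxel with three or more centerline neighbours in its 26-neighbourhood is a bifurcation candidate.

```python
import numpy as np

def bifurcation_points(skeleton):
    """Flag bifurcation candidates in a binary 3D centerline volume.

    skeleton: boolean array where True marks centerline voxels.
    Returns a boolean array of the same shape; True where a centerline
    voxel has >= 3 centerline neighbours in its 26-neighbourhood.
    """
    sk = np.pad(skeleton.astype(np.uint8), 1)  # zero border stops wrap-around
    neigh = np.zeros_like(sk, dtype=np.int32)
    # count 26-neighbours by summing shifted copies of the volume
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dz == dy == dx == 0:
                    continue
                neigh += np.roll(sk, (dz, dy, dx), axis=(0, 1, 2))
    core = (sk == 1) & (neigh >= 3)
    return core[1:-1, 1:-1, 1:-1].astype(bool)
```

Note that voxels diagonally adjacent to a junction can also pass this test, so in practice the candidates are usually clustered and reduced to one point per junction; segment lengths and radii then follow from tracing the skeleton between such points.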
  • 105. Well, what about the NOVELTY to ADD? Depends a bit on what the benchmarks reveal. DeepVess does not seem out of this world in terms of its specs, so it should be possible to beat it with "brute force", by trying different standard things proposed in the literature. Keep this in mind, and have a look at the following slides. INPUT | SEGMENTATION | UNCERTAINTY (MC Dropout) "While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) [i.e. what the hGRU did] can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction methods together [multi-task learning (MTL)], such as the method proposed by Bates et al. [25] in a single complete end-to-end learning framework, might achieve higher centerline accuracy levels."
  • 106. VasculatureNetworks Future "While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) [i.e. what the hGRU did] can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction methods together [multi-task learning (MTL)], such as the method proposed by Bates et al. [25] in a single complete end-to-end learning framework, might achieve higher centerline accuracy levels." FC-DenseNets: "However, this study suggests that in order to exploit the output of our deep model in further geometrical and topological analysis, further investigations might be needed to refine the segmentation. This could be done by either adding extra processing blocks on the output of the model or incorporating 3D information in its training process." http://sci-hub.tw/10.1109/jbhi.2018.2884678 "One important issue that could be addressed in a future work is related to the difficulty in generating watertight surface models. The employed contraction algorithm is not applicable to surfaces lacking such characteristics."