Two-Photon Microscopy Vasculature Segmentation
Petteri Teikari, PhD
PhD in Neuroscience, M.Sc. Electrical Engineering
https://www.linkedin.com/in/petteriteikari/
Version August 2019 (cleaned and simplified in January 2024, see original)
Executive Summary #1/2
Highlighting relevant literature for:
● Automating the 3D voxel-level vasculature segmentation (mainly) for multiphoton vasculature stacks
● Focus on semi-supervised U-Net based architectures that can exploit both unlabeled data and costly-to-annotate labeled data
● Making sure that "tricks" for thin-structure preservation, long-range spatial correlations and uncertainty estimation are incorporated
Executive Summary #2/2
The lack of automated robust tools do not go well with large-size
datasets and volumes
●
See Electron Microscopy segmentation community for inspiration
who are having even larger stacks to analyze
●
Gamified segmentation annotation tool EyeWire has led for
example to this Nature paper, and slot at the AI: More than Human
exhibition at Barbican
About the Presentation
About the Presentation #1
A "quick intro" to vasculature segmentation using deep learning
● It is assumed that multiphoton (mainly two-photon) techniques are familiar to you and that you want to know what you could do with your data using more robust "measuring tapes" for your vasculature, i.e. data-driven vascular segmentation
Link coloring: for articles, for GitHub/available code, and for video demos
About the Presentation #2
The inspiration for providing "seeds for all sorts of directions" is that the reader/person implementing this finds new avenues and does not have to start from scratch.
Especially targeted at people coming from outside medical image segmentation who might have something to contribute and avoid the "groupthink" of the deep learning community.
It also helps the neuroscientist to have an idea of how to gather the data and design experiments that address both neuroscientific questions and "auxiliary methodology" challenges solvable by deep learning. Domain knowledge is still valuable.
About the Presentation #3: Why so lengthy?
If you are puzzled by some slides not specifically on "vasculature segmentation", remember that this was designed to be "high school project"-friendly, or useful for tech/computation-savvy neuroscientists who do not necessarily know all the different aspects that could be beneficial for developing a successful vasculature network, rather than a narrowly focused slideshow.
About the Presentation #4: Textbook definitions?
A lot of the basic concepts are "easily googled" from Stack Overflow/Medium/etc., so the focus here is on recent papers, which are being published in overwhelming numbers.
Some ideas are picked from these papers that might or might not be helpful in thinking through the tech specifications of your own project.
About the Presentation #5: "History" of Ideas
In arXiv preprints and in peer-reviewed papers, the various approaches a team tried before their winning idea(s) {the "history of ideas", and all the possible choices you could have made} are hardly ever discussed in detail. So an attempt at outlining this "possibility space" is made here.
Towards Effective Foraging by Data Scientists to Find Past Analysis Choices
Mary Beth Kery, Bonnie E. John, Patrick O'Flaherty, Amber Horvath, Brad A. Myers
Carnegie Mellon University / Bloomberg L.P., New York
https://github.com/mkery/Verdant
Data scientists are responsible for the analysis decisions they make, but it is hard
for them to track the process by which they achieved a result. Even when data
scientists keep logs, it is onerous to make sense of the resulting large number of
history records full of overlapping variants of code, output, plots, etc. We developed
algorithmic and visualization techniques for notebook code environments to help
data scientists forage for information in their history. To test these interventions,
we conducted a think-aloud evaluation with 15 data scientists, where participants
were asked to find specific information from the history of another person's data
science project. The participants succeeded on a median of 80% of the tasks they
performed. The quantitative results suggest promising aspects of our design, while
qualitative results motivated a number of design improvements. The resulting
system, called Verdant, is released as an open-source extension for JupyterLab.
Summary: "All the stuff" you wish you knew before starting the project, with "seeds" for cross-disciplinary collaboration
The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to Be More Effective at Data Analysis
Cynthia Rudin, David Carlson
Electrical and Computer Engineering, and Statistical Science, Duke University / Civil and Environmental Engineering, Biostatistics and Bioinformatics, Electrical and Computer Engineering, and Computer Science, Duke University
(Submitted on 4 Jun 2019) https://arxiv.org/abs/1906.01998
Curated Literature
If you are overwhelmed by all the slides, you could start with these articles:
● Haft-Javaherian et al. (2019) Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models https://doi.org/10.1371/journal.pone.0213539
● Kisuk Lee et al. (2019) Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy https://doi.org/10.1016/j.conb.2019.04.001
● Amy Zhao et al. (2019) Data augmentation using learned transformations for one-shot medical image segmentation https://arxiv.org/abs/1902.09383 https://github.com/xamyzhao/brainstorm Keras
● Dai et al. (2019) Deep Reinforcement Learning for Subpixel Neural Tracking https://openreview.net/forum?id=HJxrNvv0JN
● Simon Kohl et al. (2018) A Probabilistic U-Net for Segmentation of Ambiguous Images https://arxiv.org/abs/1806.05034 + followup https://arxiv.org/abs/1905.13077 https://github.com/SimonKohl/probabilistic_unet
● Hoel Kervadec et al. (2018) Boundary loss for highly unbalanced segmentation https://arxiv.org/abs/1812.07032 https://github.com/LIVIAETS/surface-loss PyTorch (see the loss sketch after this list)
● Jörg Sander et al. (2018) Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI https://doi.org/10.1117/12.2511699
● Hongda Wang et al. (2018) Deep learning achieves super-resolution in fluorescence microscopy http://dx.doi.org/10.1038/s41592-018-0239-0
● Yide Zhang et al. (2019) A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images https://arxiv.org/abs/1812.10366
● Trevor Standley et al. (2019) Which Tasks Should Be Learned Together in Multi-task Learning? https://arxiv.org/abs/1905.07553
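As a concrete taste of the class-imbalance problem that the Kervadec et al. boundary-loss entry above targets, here is a minimal soft-Dice loss sketch in PyTorch (my own illustration, not code from any of the listed papers); Dice-style losses are a common starting point when the foreground (thin vessels) occupies only a tiny fraction of the voxels:

```python
# Minimal soft-Dice loss sketch (PyTorch); illustrative, not from the cited papers.
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """logits: (N, 1, D, H, W) raw network outputs; target: same shape, values in {0, 1}."""
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, probs.ndim))            # reduce over all but the batch dim
    intersection = (probs * target).sum(dim=dims)
    denominator = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return (1.0 - dice).mean()                    # 0 = perfect overlap
```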
What images are we talking about when we talk about cerebral vasculature stacks?
Imaging brain vasculature through the skull of a mouse/rat
Microscope set-up at the skull and examples of two-photon microscopy images acquired during live imaging. Both examples show neurons (green) and vasculature (red). The bottom example uses an additional amyloid-targeting dye (blue) in an Alzheimer's disease mouse model. Image credit: Elizabeth Hillman. Licensed under CC-BY-2.0.
http://www.signaltonoisemag.com/allarticles/2018/9/17/dissecting-two-photon-microscopy
Penetration depth depends on the excitation/emission wavelengths, the number of "nonlinear photons", and the animal model
DeFelipe et al. (2011) http://dx.doi.org/10.3389/fnana.2011.00029
Tischbirek et al. (2015): Cal-590, ... improved our ability to image calcium signals ... down to layers 5 and 6 at depths of up to −900 μm below the pia.
3-PM depth = 601 μm; 2-PM depth = 429 μm. Wang et al. (2015)
Better image, deeper penetration
Dyeless vasculature imaging: in the "deep learning sense" not too different
Third-Harmonic Generation (THG) image of blood vessels in the top layer of the cerebral cortex of a live, anesthetized mouse.
Emission wavelength = 1/3 of the excitation wavelength
Witte et al. (2011)
Optoacoustic ultrasound bio-microscopy
Imaging of skull and brain vasculature (B) was performed by focusing nanosecond laser pulses with a custom-designed gradient index (GRIN) lens and detecting the generated optoacoustic responses with the same transducer used for the US reflection-mode imaging. (C) Irradiation of half of the skull resulted in inhibited angiogenesis in the calvarium microvasculature (blue) of the irradiated hemisphere, but not the non-irradiated one. - prelights.biologists.com (Mariana De Niz) - https://doi.org/10.1101/500017
Third harmonic generation microscopy of cells and tissue organization
http://doi.org/10.1242/jcs.152272
Model this as a cross-vendor or cross-modal problem? You are imaging the "same vasculature", but it looks a bit different with different techniques
"Cross-Modal" 3D vasculature networks would eventually be very nice
Imaging the microarchitecture of the rodent cerebral vasculature. (A) Wide-field epi-fluorescence image of a C57Bl/6 mouse brain perfused with a fluorescein-conjugated gel and extracted from the skull (Tsai et al., 2009). Pial vessels are visible on the dorsal surface, although some surface vessels, particularly those that were immediately contiguous to the sagittal sinus, were lost during the brain extraction process. (B) Three-dimensional reconstruction of a block of tissue collected by in vivo two-photon laser scanning microscopy (TPLSM) from the upper layers of mouse cortex. Penetrating vessels plunge into the depth of the cortex, bridging flow from surface vascular networks to capillary beds. (C) In vivo image of a cortical capillary, 200 μm below the pial surface, collected using TPLSM through a cranial window in a rat. The blood serum (green) was labeled by intravenous injection with fluorescein-dextran conjugate (Table 2) and astrocytes (red) were labeled by topical application of SR101 (Nimmerjahn et al., 2004). (D) A plot of lateral imaging resolution vs. range of depths accessible for common in vivo blood flow imaging techniques. The panels to the right show a cartoon of cortical angioarchitecture for mouse, and cortical layers for mouse and rat in relation to imaging depth. BOLD fMRI, blood-oxygenation level-dependent functional magnetic resonance imaging.
The network learns to disentangle the 'vesselness' from the image formation, i.e. how the vasculature looks when viewed with different modalities
Compare this to 'clinical networks', e.g. Jeffrey De Fauw et al. 2018, that need to handle cross-vendor differences (e.g. different OCT or MRI machines from different vendors produce slightly different images of the same anatomical structures)
Shih et al. (2012) https://dx.doi.org/10.1038%2Fjcbfm.2011.196
e.g. Functional Ultrasound Imaging: faster than typical 2P microscopes
Alan Urban et al. (2017), Pablo Blinder's lab https://doi.org/10.1016/j.addr.2017.07.018
Brunner et al. (2018)
https://doi.org/10.1177%2F0271678X18786359
And keep in mind when going through the slides the development of "cross-discipline" networks, e.g. 2-PM as "ground truth" for lower-quality modalities such as OCT (OCT angiography for retinal microvasculature) or photoacoustic imaging that are possible in clinical work for humans
Two-photon microscopic imaging of capillary red blood cell flux in mouse brain reveals vulnerability of cerebral white matter to hypoperfusion
Baoqiang Li, Ryo Ohtomo, Martin Thunemann, Stephen R Adams, Jing Yang, Buyin Fu, Mohammad A Yaseen, Chongzhao Ran, Jonathan R Polimeni, David A Boas, Anna Devor, Eng H Lo, Ken Arai, Sava Sakadžić
First published March 4, 2019 https://doi.org/10.1177%2F0271678X19831016
This imaging system integrates photoacoustic microscopy (PAM),
optical coherence tomography (OCT), optical Doppler tomography
(ODT) and fluorescence microscopy in one platform. - DOI:
10.1117/12.2289211
Simultaneously acquired PAM, FLM, OCT and ODT images of a mouse ear. (a) PA image (average contrast-to-noise ratio 34 dB); (b) OCT B-scan at the location marked in panel (e) by the solid line (displayed dynamic range, 40 dB); (c) ODT B-scan at the location marked in panel (e) by the solid line; (d) FLM image (average contrast-to-noise ratio 14 dB); (e) OCT 2D projection images generated from the acquired 3D OCT datasets; SG: sebaceous glands; bar, 100 μm.
Vasculature Biomarkers
'Traditional' Structural Vascular Biomarkers #1
i.e. you want to analyze the changes in vascular morphology in disease, in response to treatment, etc., limited by the imagination of your in-house biologist;
e.g. artery-vein (AV) ratio, branching angles, number of bifurcations, fractal dimension, tortuosity, vascular length-to-diameter ratio and wall-to-lumen length
FEM mesh of the vasculature displaying arteries, capillaries, and veins.
Gagnon et al. (2015) doi: 10.1523/JNEUROSCI.3555-14.2015 Cited by 93
"We created the graphs and performed image processing using a suite of custom-designed tools in MATLAB"
Classical vascular analysis reveals a decrease in the number of junctions and total vessel length following TBI. (A) An axial AngioTool image where vessels (red) and junctions (blue) are displayed. Whole cortex and specific concentric radial ROIs projecting outward from the injury site (circles 1–3) were analyzed to quantify vascular alterations. (B) Analysis of the entire cortex demonstrated a significant reduction in both the number of junctions and the total vessel length in TBI animals compared to sham animals. (C) TBI animals also exhibited a significant decline in the number of vascular junctions moving radially outward from the injury site (ROIs 1 to 3).
Fractal analysis reveals a quantitative reduction in both vascular complexity and frequency in TBI animals. (A) A binary image of the axial vascular network of a representative sham animal with radial ROIs radiating outward from the injury or sham surgery site (ROI 1–3). The right panel illustrates the complexity changes in the vasculature from the concentric circles as you move radially outward from the injury site. These fractal images are colorized based on the resultant fractal dimension with a gradient from lower local fractal dimension (LFD) in red (less complex network) to higher LFD in purple (more complex network).
Traumatic brain injury results in acute rarefication of the vascular network.
http://doi.org/10.1038/s41598-017-00161-4
Tortuous Microvessels Contribute to Wound Healing via Sprouting Angiogenesis (2017) https://doi.org/10.1161/ATVBAHA.117.309993
Multifractal and Lacunarity Analysis of Microvascular Morphology and Remodeling https://doi.org/10.1111/j.1549-8719.2010.00075.x
see "Fractal and multifractal analysis: a review"
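The local fractal dimension (LFD) maps above are built from estimates like the following box-counting sketch on a binary vessel mask (numpy only; the cited papers use more elaborate multifractal and lacunarity pipelines):

```python
# Box-counting fractal dimension estimate for a 2D binary vessel mask (a sketch).
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then count boxes containing vessel pixels
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((tiles.sum(axis=(1, 3)) > 0).sum())
    # Slope of log(count) vs log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```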
'Traditional' Structural Vascular Biomarkers #2
Scheme illustrating the principle of vascular corrosion casts
Scheme depicting the definition of vascular branchpoints. Each voxel of the vessel center line (black) with more than two neighboring voxels was defined as a vascular branchpoint. This results in branchpoint degrees (number of vessels joining in a certain branchpoint) of minimally three. In addition, two branchpoints were considered as a single one if the distance between them was below 2 mm. Of note, nearly all branchpoints had a degree of 3. Branchpoint degrees of four or even higher together accounted for far less than 1% of all branchpoints
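The branchpoint definition above translates almost directly into code: count skeleton neighbors in the 26-neighborhood of each centerline voxel and flag voxels with more than two. A sketch assuming scipy is available:

```python
# Branchpoint detection on a 3D binary centerline skeleton (a sketch).
import numpy as np
from scipy import ndimage

def find_branchpoints(skeleton):
    kernel = np.ones((3, 3, 3))
    kernel[1, 1, 1] = 0                      # do not count the voxel itself
    n_neighbors = ndimage.convolve(skeleton.astype(np.uint8), kernel,
                                   mode='constant', cval=0)
    return skeleton.astype(bool) & (n_neighbors > 2)
```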
Scheme showing the definition of vessel diameter (a), vessel length (a), and vessel tortuosity (b). The segment diameter is defined as the average diameter of all single elements of a segment (a). The segment length is defined as the sum of the lengths of all single elements between two branchpoints. The segment tortuosity is the ratio between the effective distance l_e and the shortest distance l_s between the two branchpoints associated with this segment.
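The tortuosity definition above is a one-liner once the centerline is available; a sketch where `centerline` is an assumed ordered (N, 3) array of voxel coordinates for one segment:

```python
# Segment tortuosity = effective centerline length l_e / endpoint distance l_s.
import numpy as np

def segment_tortuosity(centerline):
    steps = np.diff(centerline, axis=0)
    l_e = np.linalg.norm(steps, axis=1).sum()             # effective path length
    l_s = np.linalg.norm(centerline[-1] - centerline[0])  # shortest distance
    return l_e / l_s                                      # >= 1; 1 = perfectly straight
```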
Schematic displaying the parameter extravascular distance, defined as the shortest distance of any given voxel in the tissue to the next vessel structure. (b) Color map indicating the extravascular distance in the cortex of a P10 WT mouse. Each voxel outside a vessel structure is assigned a color to depict its shortest distance to the nearest vessel structure.
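The extravascular distance map described above is essentially a Euclidean distance transform of the inverted vessel mask; a sketch with scipy, where `spacing` is the assumed physical voxel size:

```python
# Extravascular distance: shortest distance of each tissue voxel to a vessel voxel.
import numpy as np
from scipy import ndimage

def extravascular_distance(vessel_mask, spacing=(1.0, 1.0, 1.0)):
    # distance_transform_edt measures, for every non-zero voxel, the distance to
    # the nearest zero voxel, so the vessel mask is inverted before the call
    return ndimage.distance_transform_edt(~vessel_mask.astype(bool),
                                          sampling=spacing)
```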
'Traditional' Structural Vascular Biomarkers #3:
In a clinical context, you can see that in certain diseases (vascular pathologies, or the pathology X that you are interested in), the connectivity of the textbook case might get altered. And then you want to quantify this change as a function of disease severity, pharmacological treatment, or other intervention.
Relationship between Variations in the Circle of Willis and Flow Rates in Internal Carotid and Basilar Arteries Determined by Means of Magnetic Resonance Imaging with Semiautomated Lumen Segmentation: Reference Data from 125 Healthy Volunteers
H. Tanaka, N. Fujita, T. Enoki, K. Matsumoto, Y. Watanabe, K. Murase and H. Nakamura
American Journal of Neuroradiology September 2006, 27 (8) 1770-1775;
https://www.ncbi.nlm.nih.gov/pubmed/16971634 Cited by 124
'Traditional' Functional Vascular Biomarkers #1
Blood flow-based biomarkers → a spatiotemporal (graph) deep learning model is needed. For some, see the fMRI literature, or poach someone from Uber.
C, Blood flow distribution simulated across the vascular network assuming a global perfusion value of 100 ml/min/100 g. D, Distribution of the partial pressure of oxygen (pO2) simulated across the vascular network using the finite element method model. E, TPM experimental measurements of pO2 in vivo using PtP-C343 dye. F, Quantitative comparison of simulated and experimental pO2 and SO2 distributions across the vascular network for a single animal. Traces represent arterioles and capillaries (red) and venules and capillaries (blue) as a function of the branching order from pial arterioles and venules, respectively.
doi: 10.1523/JNEUROSCI.3555-14.2015 Cited by 93
F, Vessel type. G, Spatiotemporal evolution of simulated SO2 changes following forepaw stimulus.
'Traditional' Functional Vascular Biomarkers #2
Time-averaged velocity magnitudes of a measurement region are shown, together with the corresponding skeleton (black line), branch points (white circles), and end points (gray circles). The flow enters the measurement region from the right. Note that a non-linear color scale was used for the velocity magnitude.
Multiple parabolic fits at several locations on the vessel centerline were performed to obtain a single characteristic velocity and diameter for each vessel segment. The time-averaged flow rate is assumed constant throughout the vessel segment. The valid region is bounded by 0.5× and 1.5× the median flow rate, and the red-encircled data points were not incorporated, due to a strongly deviating flow rate. Note that the fitted diameters and flow rates for the two data points on the far right are too large to be visible in the graph.
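The parabolic-fit idea above can be sketched with numpy's polyfit: fit v(r) = ar² + br + c to a measured cross-sectional velocity profile, read the characteristic velocity off the apex and the diameter off the zero crossings (the published procedure differs in detail):

```python
# Parabolic fit of a cross-sectional velocity profile (a sketch).
import numpy as np

def fit_parabolic_profile(r, v):
    """r: radial positions across the vessel; v: measured velocities."""
    a, b, c = np.polyfit(r, v, 2)          # v(r) ~ a*r**2 + b*r + c
    r0 = -b / (2 * a)                      # apex = vessel centre
    v_max = np.polyval([a, b, c], r0)      # characteristic velocity
    roots = np.roots([a, b, c])            # v = 0 at the vessel walls
    diameter = abs(roots[1] - roots[0])
    return v_max, diameter
```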
Quantification of Blood Flow and Topology in Developing Vascular Networks
Astrid Kloosterman, Beerend Hierck, Jerry Westerweel, Christian Poelma
Published: May 13, 2014 https://doi.org/10.1371/journal.pone.0096856
Vasculature imaging and video oximetry
Methods for calculating retinal blood vessel oxygen saturation (sO2) by (a) the traditional LSF, and (b) our neural network-based DSL with uncertainty quantification.
Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification
Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi
https://doi.org/10.1101/650259
Traditional approaches for quantifying sO2 often rely on analytical models that are fitted to the spectral measurements. In practice these approaches suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), for oximetry that is highly robust to experimental variations and, more importantly, provides uncertainty quantification for each sO2 prediction.
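A generic recipe for the kind of per-prediction uncertainty the DSL paper reports (a sketch, not the authors' architecture): have the network output a mean and a log-variance for each sO2 estimate and train with the Gaussian negative log-likelihood. `net`, `spectra` and `so2_labels` are illustrative names:

```python
# Heteroscedastic regression loss for uncertainty-aware sO2 prediction (a sketch).
import torch

def gaussian_nll(mean, log_var, target):
    # The log_var term penalizes overconfidence, so the predicted variance
    # becomes a calibrated per-prediction uncertainty estimate
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# Usage with a regression head that outputs two channels per prediction:
# mean, log_var = net(spectra).chunk(2, dim=1)
# loss = gaussian_nll(mean, log_var, so2_labels)
```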
Two-photon phosphorescence lifetime microscopy of retinal capillary plexus oxygenation in mice
Ikbal Sencan; Tatiana V. Esipova; Mohammad A. Yaseen; Buyin Fu; David A. Boas; Sergei A. Vinogradov; Mahnaz Shahidi; Sava Sakadžić
https://doi.org/10.1117/1.JBO.23.12.126501
Neurovascular Disease Research: the functioning of the "neurovascular unit" (NVU) is of interest
Example of two-photon microscopy (TPM). TPM provides high-spatial-resolution images such as angiograms (left, scale bar: 100 μm) and multi-channel images, such as endothelial glycocalyx (green) with blood flow (red, scale bar: 10 μm)
In terms of deep learning, you might think of multimodal/multichannel models and "context-dependent" localization of dye signals
Yoon and Yong Jeong (2019) https://doi.org/10.1007/s12272-019-01128-x
What do we want out of multiphoton vasculature stacks?
- Voxel masks?
- Graph networks?
- Meshes?
Computational hemodynamic analysis requires segmentations with no gaps
Towards a glaucoma risk index based on simulated hemodynamics from fundus images
José Ignacio Orlando, João Barbosa Breda, Karel van Keer, Matthew B. Blaschko, Pablo J. Blanco, Carlos A. Bulant
https://arxiv.org/abs/1805.10273 (revised 27 Jun 2018)
https://ignaciorlando.github.io./
It has been recently observed that glaucoma induces changes in the ocular hemodynamics (
Harris et al. 2013; Abegão Pinto et al. 2016). However, its effects on the functional behavior
of the retinal arterioles have not been studied yet. In this paper we propose a first approach
for characterizing those changes using computational hemodynamics. The retinal blood flow
is simulated using a 0D model for a steady, incompressible, non-Newtonian fluid in rigid domains.
Finally, our MATLAB/C++/python code and the LES-AV database are publicly
released. To the best of our knowledge, our data set is the first in providing not only
the segmentations of the arterio-venous structures but also diagnostics and
clinical parameters at an image level.
(a) Multiscale description of neurovascular coupling in the retina. The model inputs at the macroscale (A) are the blood pressures at the inlet and outlet of the retinal circulation, P_in and P_out. The mesoscale (B) focuses on arterioles, whose walls comprise endothelium and smooth muscle cells. The microscale (C) entails the biochemistry at the cellular level that governs the change in smooth muscle shape. (b)
Voxel → Mesh conversion is "trivial" with a correct segmentation/graph model
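For instance with classic marching cubes from scikit-image; a sketch where `prob` is an assumed 3D array of vessel probabilities and the resulting mesh can be exported for Blender/VMTK-style tools:

```python
# Voxel mask/probability volume -> triangle mesh via marching cubes (a sketch).
from skimage import measure

def mask_to_mesh(prob, level=0.5, spacing=(1.0, 1.0, 1.0)):
    verts, faces, normals, values = measure.marching_cubes(
        prob, level=level, spacing=spacing)   # spacing = physical voxel size
    return verts, faces                       # e.g. save as .obj/.ply downstream
```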
Deep Marching Cubes: Learning Explicit Surface Representations
Yiyi Liao, Simon Donné, Andreas Geiger (2018)
https://avg.is.tue.mpg.de/research_projects/deep-marching-cubes
http://www.cvlibs.net/publications/Liao2018CVPR.pdf
https://www.youtube.com/watch?v=vhrvl9qOSKM
Moreover, we showed that surface-based supervision results in better predictions in case the ground truth 3D model is incomplete. In future work, we plan to adapt our method to higher resolution outputs using octree techniques [Häne et al. 2017; Riegler et al. 2017; Tatarchenko et al. 2017] and integrate our approach with other input modalities
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision
David Stutz, Andreas Geiger (2018)
http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/1708.pdf
Deep-learning-assisted Volume Visualization
Hsueh-Chien Cheng, Antonio Cardone, Somay Jain, Eric Krokos, Kedar Narayan, Sriram Subramaniam, Amitabh Varshney
IEEE Transactions on Visualization and Computer Graphics (2018)
https://doi.org/10.1109/TVCG.2018.2796085
Although modern rendering techniques and hardware can now render volumetric data interactively, we still need a suitable feature space that facilitates natural differentiation of target structures and an intuitive and interactive way of designing visualizations
Motivation
Some scriptability is available for ImageJ in many languages https://imagej.net/Scripting
Imaris had to listen to their customers → but still closed-source with poor integration to 3rd-party code
ITK: does someone still use it?
How about 'scaling' all your and others' manual work into an automatic solution?
→ data-driven vascular segmentation
'Downstream uncertainty' is reduced with near-perfect voxel segmentation
Influence of image segmentation on one-dimensional fluid dynamics predictions in the mouse pulmonary arteries
Mitchel J. Colebank, L. Mihaela Paun, M. Umar Qureshi, Naomi Chesler, Dirk Husmeier, Mette S. Olufsen, Laura Ellwein Fix
NC State University, University of Glasgow, University of Wisconsin-Madison, Virginia Commonwealth University
(Submitted on 14 Jan 2019) https://arxiv.org/abs/1901.04116
Computational fluid dynamics (CFD) models are emerging as tools
for assisting in diagnostic assessment of cardiovascular disease.
Recent advances in image segmentation have made subject-specific
modelling of the cardiovascular system a feasible task, which is
particularly important in the case of pulmonary hypertension (PH),
which requires a combination of invasive and non-invasive
procedures for diagnosis. Uncertainty in image segmentation
can easily propagate to CFD model predictions, making
uncertainty quantification crucial for subject-specific models.
This study quantifies the variability of one-dimensional (1D) CFD
predictions by propagating the uncertainty of network
geometry and connectivity to blood pressure and flow
predictions. We analyse multiple segmentations of an image of an
excised mouse lung using different pre-segmentation parameters. A
custom algorithm extracts vessel length, vessel radii, and network
connectivity for each segmented pulmonary network. We quantify
uncertainty in geometric features by constructing probability
densities for vessel radius and length, and then sample from these
distributions and propagate uncertainties of haemodynamic
predictions using a 1D CFD model. Results show that variation in
network connectivity is a larger contributor to haemodynamic
uncertainty than vessel radius and length.
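The core move of the paper above, sampling geometric uncertainty and pushing it through a hemodynamic model, can be illustrated with a toy Monte Carlo example; plain Poiseuille resistance stands in for the authors' full 1D CFD model, and all distribution parameters below are made up:

```python
# Toy Monte Carlo propagation of segmentation uncertainty into hemodynamics.
# Note the r**-4 dependence: small radius errors dominate the output spread.
import numpy as np

rng = np.random.default_rng(0)
mu = 3.5e-3                              # blood viscosity, Pa*s (assumed)
r = rng.normal(10e-6, 1e-6, 10_000)      # radius: 10 +/- 1 um from segmentation
L = rng.normal(200e-6, 20e-6, 10_000)    # length: 200 +/- 20 um

R = 8 * mu * L / (np.pi * r**4)          # Poiseuille resistance samples
print(f"median {np.median(R):.3g}, 90% interval "
      f"[{np.percentile(R, 5):.3g}, {np.percentile(R, 95):.3g}]")
```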
'Measurement uncertainties' propagate to your deep learning models as well
Arnold et al. (2017) Uncertainty Quantification in a Patient-Specific One-
Dimensional Arterial Network Model: ensemble Kalman filter (EnKF)-Based
Inflow Estimator http://doi.org/10.1115/1.4035918
Marquis et al. (2018) Practical identifiability and uncertainty quantification of
a pulsatile cardiovascular model https://doi.org/10.1016/j.mbs.2018.07.001
Mathematical models are essential tools to study how the cardiovascular system maintains
homeostasis. The utility of such models is limited by the accuracy of their predictions,
which can be determined by uncertainty quantification (UQ). A challenge associated with
the use of UQ is that many published methods assume that the underlying model is
identifiable (e.g. that a one-to-one mapping exists from the parameter space to the model
output).
Păun et al. (2018) MCMC methods for inference in a mathematical model of
pulmonary circulation https://doi.org/10.1111/stan.12132
The Delayed Rejection Adaptive Metropolis (DRAM) algorithm, coupled with constrained non-linear optimization, is successfully used to learn the parameter values and quantify the uncertainty in the parameter estimates
Schiavazzi et al. (2017) A generalized multi-resolution expansion for uncertainty
propagation with application to cardiovascular modeling
https://dx.doi.org/10.1016%2Fj.cma.2016.09.024
A general stochastic system may be characterized by a large number of arbitrarily distributed and correlated random inputs, and a limited support response with sharp gradients or even discontinuities. This motivates continued research into novel adaptive algorithms for uncertainty propagation, particularly those handling high-dimensional, arbitrarily distributed random inputs and non-smooth stochastic responses.
Sankaran and Marsden (2011) A stochastic collocation method for uncertainty
quantification and propagation in cardiovascular simulations.
http://doi.org/10.1115/1.4003259
In this work, we develop a general set of tools to evaluate the sensitivity of output parameters
to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary
conditions, geometrical parameters, or clinical data. These uncertainties result in a range of
possible outputs which are quantified using probabilitydensity functions (PDFs).
Tran et al. (2019) Uncertainty quantification of simulated biomechanical stimuli
in coronary artery bypass grafts https://doi.org/10.1016/j.cma.2018.10.024
Prior studies have primarily focused on deterministic evaluations, without reporting variability
in the model parameters due to uncertainty. This study aims to assess confidence in multi-
scale predictions of wall shear stress and wall strain while accounting for uncertainty in
peripheral hemodynamics and material properties. Boundary condition distributions are
computed by assimilating uncertain clinical data, while spatial variations of vessel wall stiffness
are obtained through approximation by a random field. We developed a stochastic
submodeling approach to mitigate the computational burden of repeated multi-scale model
evaluations to focus exclusively on the bypass grafts.
Yin et al. (2019) One-dimensional modeling of fractional flow reserve in coronary
artery disease: Uncertainty quantification and Bayesian optimization
https://doi.org/10.1016/j.cma.2019.05.005
The computational cost to perform three-dimensional (3D) simulations has limited the use of
CFD in most clinical settings. This could become more restrictive if one aims to quantify the
uncertainty associated with fractional flow reserve (FFR) calculations due to the uncertainty in
anatomic and physiologic properties as a significant number of 3D simulations is required to
sample a relatively large parametric space. We have developed a predictive probabilistic
model of FFR, which quantifies the uncertainty of the predicted values with significantly lower
computational costs. Based on global sensitivity analysis, we first identify the important
physiologic and anatomic parameters that impact the predictions of FFR
Dendrograms
Used as a symbolic ("grammar") abstraction of neuronal trees
Neuronal branching graphs #1
Explicit representation of a neuron model. (left) The network can be represented as a graph structure, where nodes are end points and branch points. Each fiber is represented by a single edge. (right) The same network is shown with several common errors introduced.
Dendrograms
Representation of brain vasculature using
circular dendrograms
A Method for the Symbolic Representation of Neurons
Maraver et al. (2018) https://doi.org/10.3389/fnana.2018.00106
NetMets: Software for quantifying and visualizing errors in biological network
segmentation. Mayerich et al. (2012) http://doi.org/10.1186/1471-2105-13-S8-S7
Neuronal branching graphs #2
Topological characterization of neuronal arbor morphology via sequence representation: I - motif analysis. Todd A Gillette and Giorgio A Ascoli. BMC Bioinformatics 2015 https://doi.org/10.1186/s12859-015-0604-2
"Grammar model" for deep learning?
Tree size and complexity. a. Complexity of trees is limited by tree size. Shown here is the set of possible tree shapes for trees with 1 to 6 bifurcations. Additionally, the number of T nodes (red dots in sample trees) is always 1 more than A nodes (green dots). Thus, size and number or percentage of C nodes (yellow dots) fully capture node-type statistics.
Neuronal branching graphs #3
NeurphologyJ: An automatic neuronal morphology quantification method and its application in pharmacological discovery
Shinn-Ying Ho et al. BMC Bioinformatics 2011 12:230 https://doi.org/10.1186/1471-2105-12-230
The image enhancement process of NeurphologyJ does not remove thin and dim neurites. Shown here is an example image of mouse hippocampal neurons analyzed by NeurphologyJ. Notice that both thick neurites and thin/dim neurites (arrowheads) are preserved after the image enhancement process. The scale bar represents 50 μm.
Neurite complexity can be deduced from neurite attachment points and ending points. Examples of neurons with different levels of neurite complexity are shown
Neuronal Circuit Tracing: similar to our challenges #1
Flexible Learning-Free Segmentation and Reconstruction for Sparse Neuronal Circuit Tracing
Ali Shahbazi, Jeffery Kinnison, Rafael Vescovi, Ming Du, Robert Hill, Maximilian Joesch, Marc Takeno, Hongkui Zeng, Nuno Macarico da Costa, Jaime Grutzendler, Narayanan Kasthuri, Walter J. Scheirer
July 06, 2018 https://doi.org/10.1101/278515
FLoRIN reconstructions of the Standard Rodent Brain (SRB) (top) and APEX2-labeled Rodent Brain sample (ALRB) (bottom) µCT X-ray volumes. (A) Within the SRB volume, cells and vasculature are visually distinct in the raw images, with vasculature appearing darker than cells. (B) Individual structures may be extremely close (such as the cells and vasculature in this example), making reconstruction efforts prone to merge errors.
Neuronal Circuit Tracing: similar to our challenges #2
Dense neuronal reconstruction through X-ray holographic nano-tomography
Alexandra Pacureanu, Jasper Maniates-Selvin, Aaron T. Kuan, Logan A. Thomas, Chiao-Lin Chen, Peter Cloetens, Wei-Chung Allen Lee
May 30, 2019 https://doi.org/10.1101/653188
3D U-Net everywhere
How to "deploy" to a scientific workflow then? Deep learning is cool, but does that translate to productivity gains?
Think in terms of systems
The machine learning model is just one part of all this in your lab
A ton of stacks just sitting on your hard drives
It takes a lot of work to annotate the vasculature voxel-by-voxel
"AI" buzzword MODEL
The following slides will showcase various ways of how this buzz has been done "in practice"
A spoiler: we would like to have a semi-supervised model.
doi: 10.1038/s41592-018-0115-y
We want to predict the vessel / non-vessel mask* for each voxel
* (i.e. foreground-background, binary segmentation)
Practical Systems Parts
Highlighted later on as well: Active Learning
A ton of stacks just sitting on your hard drives
It takes a lot of work to annotate the vasculature voxel-by-voxel
"AI" buzzword MODEL
doi: 10.1038/s41592-018-0115-y
You would like to keep researchers in the loop with the system and make it better as you do more experiments and acquire more data.
But you have so many stacks on your hard drive: how do you select the stacks/slices that you should pick in order to improve the model the most?
Check the Active Learning slides later
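One simple, widely used selection heuristic (a sketch, not any specific paper's method): rank the unlabeled stacks by the mean voxelwise predictive entropy of the current model and annotate the most uncertain ones first. `model.predict` is an assumed interface returning foreground probabilities:

```python
# Uncertainty-based ranking of unlabeled stacks for annotation (a sketch).
import numpy as np

def predictive_entropy(p, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def rank_stacks_for_annotation(model, stacks):
    scores = [predictive_entropy(model.predict(s)).mean() for s in stacks]
    return np.argsort(scores)[::-1]      # most uncertain stacks first
```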
Practical Systems Parts
Highlighted later on as well: Proofreading
We want to predict the vessel / non-vessel mask* for each voxel
* (i.e. foreground-background, binary segmentation)
"AI" buzzword MODEL
Your segmentation model will 100% make some erroneous predictions, and you would like to "show the errors" to the system so it can learn from them and predict better next time
Proofreading
Labelling
Thinking in terms of a product
If you would like to release this all as an open-source software/toolbox or as a spin-off startup, instead of just sharing your network on GitHub
"AI" buzzword MODEL
Active Learning
The Final Mask
You could now expose APIs for the parts needed, and get a modular system where you can focus on segmentation; maybe your collaborators are really into building good front-ends for proofreading and labelling?
Annotated data as the bottleneck
Even with the semi-supervised approach, you will most likely never face a situation where you have too many volumes with vasculature ground truths
Thus: the faster and more intuitive your proofreading / annotation / labelling tool is, the faster you can make progress with your model performance.
→ UX Matters
UX as in User Experience, as most likely your professor has never used this word.
https://hackernoon.com/why-ux-design-must-be-the-foundation-of-your-software-product-f66e431cc7b4
'Steal ideas' from nice-to-use systems around you
Voxeleron Orion Workflow Advantages https://www.voxeleron.com/orion-workflow-advantages/
Click on your inliers/outliers interactively and Orion updates the spline fittings for you in real-time
Polygon-RNN https://youtu.be/S1UUR4FlJ84
https://github.com/topics/annotation-tool
● wkentaro/labelme
● Labelbox/Labelbox
● microsoft/VoTT
● opencv/cvat
Make biology a game: gamification ticked off from the buzzword bingo here
https://doi.org/10.1016/j.chb.2016.12.074
Eyewire Elite Gameplay | https://eyewire.org/explore
https://phys.org/news/2019-06-video-gamers-brand-proteins.html
The slide set to follow will allow multiple ways to solve the segmentation challenge, as well as starting to build the "product" in modules, "ablation study friendly", so no need to try to make it all at once... necessarily
Bio/neuroscientists can have a look at this classic:
Can a biologist fix a radio?—Or, what I learned while studying apoptosis
https://doi.org/10.1016/S1535-6108(02)00133-2 - Cited by 371
Integrate to something and exploit the existing open-source code
USID and Pycroscopy - Open frameworks for storing and analyzing spectroscopic and imaging data
Suhas Somnath, Chris R. Smith, Nouamane Laanait, Rama K. Vasudevan, Anton Ievlev, Alex Belianinov, Andrew R. Lupini, Mallikarjun Shankar, Sergei V. Kalinin, Stephen Jesse
Oak Ridge National Laboratory
(Submitted on 22 Mar 2019) https://arxiv.org/abs/1903.09515
https://www.youtube.com/channel/UCyh-7XlL-BuymJD7vdoNOvw
pycroscopy
https://pycroscopy.github.io/pycroscopy/about.html
pycroscopy is a python package for image processing and scientific
analysis of imaging modalities such as multi-frequency scanning probe
microscopy, scanning tunneling spectroscopy, x-ray diffraction
microscopy, and transmission electron microscopy.
pycroscopy uses a data-centric model wherein the raw data collected
from the microscope, results from analysis and processing routines are
all written to standardized hierarchical data format (HDF5) files for
traceability, reproducibility, and provenance.
OME
https://www.openmicroscopy.org/
Har-Gil, H., Golgher, L., Israel, S., Kain, D., Cheshnovsky, O., Parnas, M., & Blinder, P.
(2018). PySight: plug and play photon counting for fast continuous volumetric
intravital microscopy. Optica, 5(9), 1104-1112. https://doi.org/10.1364/OPTICA.5.001104
Integrate to something and exploit the existing open-source code
VMTK http://www.vmtk.org/ by Orobix
https://github.com/vmtk/vmtk Python 3
VMTK Add-on for Blender (November 13, 2017): EPFL has developed an add-on for Blender that loads centerlines generated by VMTK into Blender, and writes meshes from Blender.
http://www.vmtk.org/tutorials/
You could for example improve the segmentation to be used with VMTK → let VMTK/Blender still visualize the stacks
For example, we could start by doing this the "deep learning" way outlined in this slideshow
If you feel that this does not really work well for your needs, you can work on this, or ask for improvements from the Orobix team
Blender integration with meshes?
BioBlender is a software package built on the open-source 3D modeling software Blender. BioBlender is the result of a collaboration, driven by the SciVis group at the CNR in Pisa (Italy), between scientists of different disciplines (biology, chemistry, physics, computer sciences) and artists, using Blender in a rigorous but at the same time creative way. http://www.bioblender.org/
https://github.com/mcellteam/cellblender
https://github.com/NeuroMorph-EPFL/NeuroMorph/tree/master/NeuroMorph_CenterLines_CrossSections
Processes center lines generated by the Vascular Modeling Toolkit (VMTK) and performs calculations in Blender using these center lines. Includes tools to clean meshes, export meshes to VMTK, and import center lines generated by VMTK. Also includes tools to generate cross-sectional surfaces, calculate surface areas of the mesh along the center line, and project spherical objects (such as vesicles) or surface areas onto the center line. Tools are also provided for detecting bouton swellings. Data can be exported for analysis.
How do our "deep learning" demands affect "the neuroscientific experiment" design?
Fluorescence Microscopy: networks exist for "smaller blobs"
DeepFLaSH, a deep learning pipeline for segmentation of fluorescent labels in microscopy images
Dennis Segebarth et al. November 2018 https://doi.org/10.1101/473199
Here we present and evaluate DeepFLaSH, a unique deep learning pipeline to automate the segmentation of fluorescent labels in microscopy images. The pipeline allows training and validation of label-specific convolutional neural network (CNN) models that can be uploaded to an open-source CNN-model library. As there is no ground truth for fluorescent signal segmentation tasks, we evaluated the CNN with respect to inter-coding reliability. Similarity analysis showed that CNN predictions correlated highly with segmentations by human experts.
DeepFLaSH runs as a guided, hassle-free open-source tool on a cloud-based virtual notebook (Google Colab http://colab.research.google.com, in a Jupyter Notebook) with free access to high computing power, and requires no machine learning expertise.
Label-specific CNN models, validated on the basis of inter-coding approaches, may become a new benchmark for feature segmentation in neuroscience. These models will allow transferring expert performance in image feature analysis from one lab to any other. Deep segmentation can better interpret feature-to-noise borders, can work on the whole dynamic range of bit values and exhibits consistent performance. This should increase both objectivity and reproducibility of image feature analysis. DeepFLaSH is suited to create CNN models for high-throughput microscopy techniques and allows automatic analysis of large image datasets with expert-like performance and at super-human speed.
With a nice notebook deployment example
Vasculature Networks: multimodal, i.e. "multi-dye"; 3D CNNs if possible
HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation
Jose Dolz
https://arxiv.org/abs/1804.02967 (9 April 2018)
https://www.github.com/josedolz/HyperDenseNet
We propose HyperDenseNet, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems [MRI modalities: MR-T1, PD, MR-T2, FLAIR]. Each imaging modality has a path, and dense connections occur not only between the pairs of layers within the same path, but also between those across different paths.
A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy
Arash Dadkhah; Jun Zhou; Nusrat Yeasmin; Shuliang Jiao
https://sci-hub.tw/https://doi.org/10.1117/12.2289211 (2018)
Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional and molecular information of living tissue at micrometer scale.
An artery-specific fluorescent dye for studying neurovascular coupling
Zhiming Shen, Zhongyang Lu, Pratik Y Chhatbar, Philip O'Herron, and Prakash Kara
https://dx.doi.org/10.1038%2Fnmeth.1857 (2012)
Astrocytes are intimately linked to the function of the inner retinal vasculature. A flat-mounted retina labelled for astrocytes (green) and retinal vasculature (pink). - from Prof Erica Fletcher
Multimodal segmentation: glial cells, Aβ fibrils, etc. provide 'context' for vasculature and vice versa
Diffuse and vascular Aβ deposits induce astrocyte endfeet retraction and swelling in TG arcAβ mice, starting at early-stage pathology. Triple-stained for GFAP, laminin and Aβ/APP. https://doi.org/10.1007/s00401-011-0834-y
In vivo imaging of the neurovascular unit in stroke, multiple sclerosis (MS) and Alzheimer's disease.
In vivo imaging of the neurovascular unit in CNS disease
https://www.researchgate.net/publication/265418103_In_vivo_imaging_of_the_neurovascular_unit_in_CNS_disease
Neurovascular Unit (NVU): astrocyte/neuron/vasculature interplay
(A) Immunostaining depiction of components of the neurovascular unit (NVU). The astrocytes (stained with rhodamine-labeled GFAP) are shown in red. The neurons are stained with fluorescein-tagged NSE shown in green and the blood vessels are stained with PECAM shown in blue. Note the location of the foot processes around the vasculature.
(B) Histochemical localization of β-galactosidase expression in rat brain following lateral ventricular infusion of Ad5/CMV-β-galactosidase (magnification ×1000). Note staining of astrocytes and astrocytic foot processes surrounding a blood vessel, emulating the exploded section of the immunostained brain slice
Schematic representation of a neurovascular unit with astrocytes being the central processor of neuronal signals, as depicted in both panels A and B.
Harder et al. (2018) Regulation of Cerebral Blood Flow: Response to Cytochrome P450 Lipid Metabolites
http://doi.org/10.1002/cphy.c170025
NVU: examples of dyes/labels involved #1
Calcium: OGB-1 → neuronal Ca2+
Astrocyte: SR-101 → astrocytic Ca2+
Artery: Alexa Fluor 633 or FITC/Texas Red → vessel diameter
Neuron (OGB-1) and arteriole response (Alexa Fluor 633) to drifting grating in cat visual cortex. https://dx.doi.org/10.1038%2Fnmeth.1857
Low-intensity afferent neural activity caused vasodilation in the absence of astrocyte Ca2+ transients. https://dx.doi.org/10.1038%2Fjcbfm.2015.141
Astrocytes trigger rapid vasodilation following photolysis of caged Ca2+. https://dx.doi.org/10.3389%2Ffnins.2014.00103
NVU: "Physical Cheating" for artery-vein classification
https://doi.org/10.1016/j.rmed.2013.02.004
https://doi.org/10.1182/blood-2018-01-824185
https://doi.org/10.1364/BOE.9.002056
http://doi.org/10.5772/intechopen.80888
Traces of relative Hb and HbO2 concentrations for a human subject during three consecutive cycles of cuff inflation and deflation. http://doi.org/10.1063/1.3398450
sO2 and blood flow on four orders of artery-vein pairs http://doi.org/10.1117/1.3594786
NVU: Oxygen Probes for Multiphoton Microscopy
Examples of in vivo two-photon PLIM oxygen sensing with platinum porphyrin-coumarin-343. a Maximum intensity projection image montage of a blood vessel entering the bone marrow (BM) from the bone. Bone (blue) and blood vessels (yellow) are delineated with collagen second harmonic generation signal and Rhodamine B-dextran fluorescence, respectively. b Measurement of pO2 in cortical microvasculature. Left: measured pO2 values in microvasculature at various depths (colored dots), overlaid on the maximum intensity projection image of the vasculature structure (grayscale). Right: composite image showing a projection of the imaged vasculature stack. Red arrows mark pO2 measurement locations in the capillary vessels at 240 μm depth. Orange arrows point to the consecutive branches of the vascular tree, from pial arteriole (bottom left arrow) to the capillary and then to the connection with the ascending venule (top right arrow). Scale bars: 200 μm.
Chelushkin and Tunik (2019) 10.1007/978-3-030-05974-3_6
Devor et al. (2012) Frontiers in optical imaging of cerebral blood flow and metabolism http://doi.org/10.1038/jcbfm.2011.195
Optical imaging of oxygen availability and metabolism. (A) Two-photon partial pressure of oxygen (pO2) imaging in cerebral tissue. Each plot shows baseline pO2 as a function of the radial distance from the center of the blood vessel—diving arteriole (left) or surfacing venule (right)—for a specific cortical depth range
Dye Engineering: a field of its own; check for new exciting dyes
Bright AIEgen–Protein Hybrid Nanocomposite for Deep and High-Resolution In Vivo Two-Photon Brain Imaging
Shaowei Wang, Fang Hu, Yutong Pan, Lai Guan Ng, Bin Liu
Department of Chemical and Biomolecular Engineering, National University of Singapore
Advanced Functional Materials, 24 May 2019 https://doi.org/10.1002/adfm.201902717
NIR-II Excitable Conjugated Polymer Dots with Bright NIR-I Emission for Deep In Vivo Two-Photon Brain Imaging Through Intact Skull
Shaowei Wang, Jie Liu, Guangxue Feng, Lai Guan Ng, Bin Liu
Department of Chemical and Biomolecular Engineering, National University of Singapore
Advanced Functional Materials, 21 January 2019 https://doi.org/10.1002/adfm.201808365
When quantum dots get old, enter polymer dots
In vivo vascular imaging in mice after labelling with polymer dots (CNPPV, PFBT, PFPV), fluorescein and QD605 semiconductor quantum dots; scale bars = 100 µm. (Biomed. Opt. Express 10.1364/BOE.10.000584, University of Texas, Ahmed M. Hassan et al. (2019))
https://physicsworld.com/a/polymer-dots-image-deep-into-the-brain/
Furthermore, we justify the use of pdots over conventional fluorophores for multiphoton imaging experiments in the 800–900 nm excitation range due to their increased brightness relative to quantum dots, organic dyes, and fluorescent proteins.
An important caveat to consider, however, is that pdots were delivered intravenously in our studies, and labeling neural structures located in high-density extravascular brain tissue could pose a challenge due to the relatively large diameters of pdots (~20-30 nm). Recent efforts have produced pdot nanoparticles with sub-5 nm diameters, yet the yield from these preparations is still quite low
What if you have the 'dye labels' from different experiments, and you would like to combine them into the training of a single network?
Learning with Multitask Adversaries using Weakly Labelled Data for Semantic Segmentation in Retinal Images
Oindrila Saha, Rachana Sathish, Debdoot Sheet
13 Dec 2018 (modified: 15 Apr 2019) https://openreview.net/forum?id=HJe6f0BexN
In the case of retinal images, data-driven learning-based algorithms have been
developed for segmenting anatomical landmarks like vessels and optic
disc as well as pathologies like microaneurysms, hemorrhages, hard
exudates and soft exudates.
The aspiration is to learn to segment all such classes using only a single fully
convolutional neural network (FCN), while the challenge being that there is
no single training dataset with all classes annotated. We solve this problem
by training a single network using separate weakly labelled datasets.
Essentially we use an adversarial learning approach in addition to the
classically employed objective of distortion loss minimization for semantic
segmentation using FCN, where the objectives of discriminators are to
learn to (a) predict which of the classes are actually present in the input
fundus image, and (b) distinguish between manual annotations vs.
segmented results for each of the classes.
The first discriminator works to enforce the network to segment those
classes which are present in the fundus image although may not have been
annotated i.e. all retinal images have vessels while pathology datasets may
not have annotated them in the dataset. The second discriminator
contributes to making the segmentation result as realistic as possible. We
experimentally demonstrate using weakly labelled datasets of DRIVE
containing only annotations of vessels and IDRiD containing annotations for
lesions and optic disc.
2D vasculature networks exist, benchmarked mainly on retinal microvasculature datasets
Overview of the Methods
Blood vessels as a special example of curvilinear structure object segmentation
Blood vessel segmentation algorithms—Review of methods, datasets and evaluation metrics
Sara Moccia, Elena De Momi, Sara El Hadji, Leonardo S. Mattos
Computer Methods and Programs in Biomedicine, May 2018 https://doi.org/10.1016/j.cmpb.2018.02.001
No single segmentation approach is suitable for all the different anatomical regions or imaging modalities, thus the primary goal of this review was to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable method can be chosen according to the specific task.
U-Net: you will see this repeated many times
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer, Thomas Brox
(Submitted on 18 May 2015) https://arxiv.org/abs/1505.04597 Cited by 77,660
U-Net: deep learning for cell counting, detection, and morphometry
Thorsten Falk et al. (2019) Nature Methods 16, 67–70 (2019) https://doi.org/10.1038/s41592-018-0261-2 Cited by 1,496
The 'vanilla U-Net' is typically the baseline to beat in many articles, with modified versions being proposed as the novel state-of-the-art networks
https://towardsdatascience.com/u-net-b229b32b4a71
The architecture looks like a 'U', which justifies its name. This architecture consists of three sections: the contraction (encoder, downsampling part), the bottleneck, and the expansion (decoder, upsampling part).
[Diagram labels: contraction / encoder / downsampling; bottleneck; expansion / decoder / upsampling; skip connections]
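A minimal 2D U-Net sketch in PyTorch mirroring these three sections; channel counts and depth are illustrative, not the original Ronneberger et al. configuration:

```python
# Tiny U-Net: contracting encoder, bottleneck, expanding decoder with skips.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)            # contraction / encoder
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)                    # 2x2 max pooling
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)     # expansion / decoder
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)         # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], 1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], 1))  # skip connection
        return self.head(d1)

# logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 1, 128, 128)
```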
U-Net 2D Example
[Diagram: image size 572×572 px at the input down to 32×32 px at the bottleneck; the number of feature maps grows with depth; 4 downsampling "stages" with 2×2 max pooling on the encoder side, 4 upsampling "stages" on the decoder side.]
Skip connections: first-stage encoder filter outputs (activation maps) are passed to the final (4th) decoder stage; 2nd-stage encoder outputs are passed to the 3rd decoder stage; 3rd to 2nd; 4th to 1st.
2D retinal vasculature: 2D U-Net as the "baseline"
Retina blood vessel segmentation with a convolutional neural network (U-Net)
orobix/retina-unet Keras
http://vmtklab.orobix.com/
https://orobix.com/ as in the company from Italy behind VMTKLab
Joint segmentation and vascular reconstruction
Marry CNNs with graph (non-Euclidean) CNNs, "grammar models" or something even better
Deep Vessel Segmentation By Learning Graphical Connectivity
Seung Yeon Shin, Soochahn Lee, Il Dong Yun, Kyoung Mu Lee
https://arxiv.org/abs/1806.02279 (Submitted on 6 Jun 2018)
We incorporate a graph convolutional network into a unified CNN architecture, where the final segmentation is inferred by combining the different types of features. The proposed method can be applied to expand any type of CNN-based vessel segmentation method to enhance the performance.
Learning about the strong relationship that exists between neighborhoods is not guaranteed in existing CNN-based vessel segmentation methods. The proposed vessel graph network (VGN) utilizes a GCN together with a CNN to address this issue.
Overall network architecture of VGN, comprising the CNN, graph convolutional network, and inference modules.
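The vessel graph such a GCN operates on can be bootstrapped from a skeletonized segmentation; a simplified 2D sketch with networkx (VGN itself samples its graph vertices differently):

```python
# Skeleton -> graph: nodes are skeleton pixels, edges connect 8-neighbors (a sketch).
import numpy as np
import networkx as nx

def skeleton_to_graph(skel):
    g = nx.Graph()
    coords = set(map(tuple, np.argwhere(skel)))
    for (y, x) in coords:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy, dx) != (0, 0) and (y + dy, x + dx) in coords:
                    g.add_edge((y, x), (y + dy, x + dx))
    return g   # degree > 2 nodes are branchpoints; chains are vessel segments
```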
"Grammar" as in: if you know how molecules are composed (e.g. the SMILES model), you can constrain the model to have only physically possible connections. Well, we do not exactly have that luxury, and we need to learn the graph constraints from data (but have no annotations at the moment for edge nodes)
Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules http://doi.org/10.1021/acscentsci.7b00572 some authors from Toronto, including David Duvenaud
"Grammar models" possible to a certain extent
Remember that healthy and pathological vasculature might be "quite different" (a highly quantitative term)
Mitchell G. Newberry et al. Self-Similar Processes Follow a Power Law in Discrete Logarithmic Space, Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.122.158303
Although blood vessels also branch dichotomously, random asymmetry in branching disperses vessel diameters from any specific ratios. On a database of 1569 blood vessel radii measured from a single mouse lung, αc and αd produced statistically indistinguishable estimates (Table I), independent of the chosen λ, and are therefore both likely accurate. The mutual consistency between the estimators suggests that the distribution of blood vessel measurements is effectively scale invariant despite the underlying branching.
Quantitating the Subtleties of Microglial Morphology with Fractal Analysis
Frontiers in Cellular Neuroscience 7(3):3 http://doi.org/10.3389/fncel.2013.00003
Grammar, as you can guess, is used in language modeling
Kim Martineau | MIT Quest for Intelligence, May 29, 2019
http://news.mit.edu/2019/teaching-language-models-grammar-makes-them-smarter-0529
Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State
Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy
(Submitted on 8 Mar 2019) https://arxiv.org/abs/1903.03260
We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state. To do so, we examine model behavior on artificial sentences containing a variety of syntactically complex structures. We find evidence that the LSTMs trained on large datasets represent syntactic state over large spans of text in a way that is comparable to Recurrent Neural Network Grammars (RNNG; Dyer et al. 2016, Cited by 157), while the LSTM trained on the small dataset does not, or does so only weakly.
Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy
(Submitted on 3 Mar 2019) https://arxiv.org/abs/1903.00943
Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models versus two models that represent hierarchical structure and deploy it in left-to-right processing: Recurrent Neural Network Grammars (RNNGs) (Dyer et al. 2016, Cited by 157) and an incrementalized version of the Parsing-as-Language-Modeling configuration from Charniak et al. (2016). Structural supervision thus provides data-efficiency advantages over purely string-based training of neural language models in acquiring human-like generalizations about non-local grammatical dependencies.
VascularTreeBranchingStatistics constrain with a“Grammarmodel”?#1
Intraspecific scaling laws of vascular trees
Yunlong Huo and Ghassan S. Kassab
Journal of the Royal Society Interface
Published: 15 June 2011
https://doi.org/10.1098/rsif.2011.0270 - Cited by 87
A fundamental physics-based derivation of
intraspecific scaling laws of vascular trees has
not been previously realized. Here, we provide such a
theoretical derivation for the volume–diameter
and flow–length scaling laws of intraspecific vascular
trees. In conjunction with the minimum energy
hypothesis, this formulation also results in
diameter–length, flow–diameter and flow–volume
scaling laws.
The intraspecific scaling predicts the volume–
diameter power relation with a theoretical exponent
of 3, which is validated by the experimental
measurements for the three major coronary
arterial trees in swine. This scaling law as well as
others agrees very well with the measured
morphometric data of vascular trees in various other
organs and species. This study is fundamental to the understanding of morphological and haemodynamic features in a biological vascular tree and has implications for vascular disease.
Relation between normalized stem diameter (Ds/(Ds)max) and normalized crown volume (Vc/(Vc)max) for vascular trees of various organs and species corresponding to those trees in table 1. The solid line represents the least-squares fit of all the experimental measurements (exponent of 2.91, r² = 0.966).
Vascular Tree Branching Statistics: constrain with a “Grammar model”? #2
Branching Pattern of the Cerebral Arterial Tree
Jasper H. G. Helthuis, Tristan P. C. van Doormaal, Berend Hillen, Ronald L. A. W. Bleys, Anita A. Harteveld, Jeroen Hendrikse, Annette van der Toorn, Mariana Brozici, Jaco J. M. Zwanenburg, Albert van der Zwan
The Anatomical Record (17 October 2018)
https://doi.org/10.1002/ar.23994
Quantitative data on branching patterns of the human cerebral arterial tree are lacking in the 1.0–0.1 mm radius range. We aimed to collect quantitative data in this range, and to study whether the cerebral arterial tree complies with the principle of minimal work (Murray's law).
Data showed a large variation in branching pattern parameters (asymmetry-ratio, area-ratio, length-radius-ratio, tapering). Part of the variation may be explained by the variation in measurement techniques, number of measurements and location of measurement in the vascular tree. This study confirms that the cerebral arterial tree complies with the principle of minimum work. These data are essential in the future development of more accurate mathematical blood flow models.
Relative frequencies of (A) asymmetry-ratio, (B) area-ratio, (C) length-to-radius-ratio, (D) tapering.
Branch-based functional measures?
Changsi Cai et al. (2018) Stimulation-induced increases in cerebral blood flow and local capillary vasoconstriction depend on conducted vascular responses
https://doi.org/10.1073/pnas.1707702115
Functional vessel dilation in the mouse barrel cortex. (A) A two-photon image of the barrel cortex of an NG2-DsRed mouse at 150 µm depth. The penetrating arterioles branch out a capillary horizontally (∼first order). Further branches are defined as second- and third-order capillaries. Pericytes are labeled with a red fluorophore (NG2-DsRed) and the vessel lumen with FITC-dextran (green). ROIs are placed across the vessel to allow measurement of the vessel diameter (colored bars). (Scale bar: 10 µm.)
Changsi Cai et al. (2018) Stimulation-induced increases in cerebral blood flow and local capillary vasoconstriction depend on conducted vascular responses
https://doi.org/10.1073/pnas.1707702115
Measurement of blood vessel diameter and red blood cell (RBC) flux in the retina. A, Confocal image of a whole-mount retina labeled for the blood vessel marker isolectin (blue), the contractile protein α-SMA (red), and the pericyte marker NG2 (green). Blood vessel order in the superficial vascular layer is indicated. First-order arterioles (1) branch from the central retinal artery. Each subsequent branch (2–5) has a higher order. Venules (V) connect with the central retinal vein. Scale bar, 100 μm.
2D retinal vasculature datasets available
Highlights also how the availability of the freely-available databases DRIVE and STARE, with a lot of annotations, has led to a lot of methodological papers from “non-retina” researchers
De et al. (2016) A Graph-Theoretical Approach for Tracing Filamentary Structures in Neuronal and Retinal Images https://dx.doi.org/10.1109/TMI.2015.2465962
2D Microvasculature CNNs with Graphs
Towards End-to-End Image-to-Tree for Vasculature Modeling
Manish Sharma, Matthew C.H. Lee, James Batten, Michiel Schaap, Ben Glocker
Google, Imperial College, HeartFlow
MIDL 2019 Conference https://openreview.net/forum?id=ByxVpY5htN
This work explores an end-to-end image-to-tree approach for extracting
accurate representations of vessel structures which may be beneficial for
diagnosis of stenosis (blockages) and modeling of blood flow. Current image
segmentation approaches capture only an implicit representation, while this
work utilizes a subscale U-Net to extract explicit tree representations
from vascular scans.
Check other modalities also for further inspiration
SS-OCT Vasculature Segmentation
Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images
Xiaoxiao Liu, Lei Bi, Yupeng Xu, Dagan Feng, Jinman Kim, and Xun Xu
Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine
Biomedical Optics Express Vol. 10, Issue 4, pp. 1601-1612 (2019)
https://doi.org/10.1364/BOE.10.001601
Motivated by the leading segmentation performance in medical images from the
use of deep learning methods, in this study, we proposed the adoption of a deep
learning method, RefineNet, to segment the choroidal vessels from SS-OCT
images. We quantitatively evaluated the RefineNet on 40 SS-OCT images
consisting of ~3,900 manually annotated choroidal vessel regions. We achieved a segmentation agreement (SA) of 0.840 ± 0.035 with clinician 1 (C1) and 0.823 ± 0.027 with clinician 2 (C2). These results were higher than the inter-observer variability measure in SA between C1 and C2 of 0.821 ± 0.037.
Currently, researchers have limited imaging modalities to obtain information about the choroidal vessels. Traditional indocyanine green angiography (ICGA) is the gold standard in clinical practice for detecting abnormality in the choroidal vessels. ICGA provides 2D images of the choroid vasculature, which can show exudation or filling defects. However, ICGA does not provide 3D choroidal structure or the volume of the whole choroidal vessel network, and ICGA images overlap retinal vessels and choroidal vessels together, thereby making it hard to independently observe and analyze the choroidal vessels quantitatively. OCT Angiography (OCTA) can clearly show the blood flow from the superficial and deep retinal capillary network, as well as from the retinal pigment epithelium to the superficial choroidal vascular network; however, it cannot show the blood flow in deep choroidal vessels.
https://arxiv.org/abs/1806.05034
Fundus/OCT/OCTA multimodal quality enhancement
Generating retinal flow maps from structural optical coherence tomography with artificial intelligence
Cecilia S. Lee, Ariel J. Tyring, Yue Wu, Sa Xiao, Ariel S. Rokem, Nicolaas P. DeRuyter, Qinqin Zhang, Adnan Tufail, Ruikang K. Wang & Aaron Y. Lee
Department of Ophthalmology, University of Washington, Seattle, WA, USA; eScience Institute, University of Washington, Seattle, WA, USA
Scientific Reports, Article number: 5694 (2019)
https://doi.org/10.1038/s41598-019-42042-y
Using the human generated annotations as the ground truth limits the
learning ability of the AI, given that it is problematic for AI to surpass the accuracy
of humans, by definition. In addition, expert-generated labels suffer from inherent
inter-rater variability, thereby limiting the accuracy of the AI to at most variable
human discriminative abilities. Thus, the use of more accurate, objectively-generated
annotations would be a key advance in machine learning algorithms in diverse areas
of medicine.
Given the relationship of OCT and OCTA, we sought to explore deep learning's ability to first infer the relationship between structure and retinal vascular function, then generate an OCTA-like en-face image from the structural OCT image alone. By taking OCT as input and using the more cumbersome, expensive modality, OCTA, as an objective training target, deep learning could overcome limitations of the second modality and circumvent the need for generating labels.
Unlike current AI models which are primarily targeted towards classification or
segmentation of images, to our knowledge, this is the first application of artificial
neural networks in ophthalmic imaging to generate a new image based on a
different imaging modality data. In addition, this is the first example in medical
imaging, to our knowledge, where expert annotations for training deep learning
modelsare bypassedbyusingobjective,functional flow measurements.
“FITC” in 2-PM context
“QD” in 2-PM context
Learn the mapping FITC → QD (with QD as supervision) to improve the quality of already acquired FITC stacks. Unsupervised conditional image-to-image translation is also possible, but probably trickier.
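If one wanted to prototype this FITC → QD mapping, a paired (supervised) image-to-image regression is the simplest starting point. Below is a minimal PyTorch sketch; the tiny 3D network, tensor shapes and random stand-in data are all illustrative assumptions, not the method of any paper cited here.

```python
# Minimal sketch (assumptions: paired, co-registered FITC/QD stacks as
# float32 tensors in [0, 1]; a tiny 3D conv net stands in for a full U-Net).
import torch
import torch.nn as nn

class TinyMapper(nn.Module):
    """Toy stand-in for a FITC -> QD translation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

model = TinyMapper()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # L1 tends to blur less than L2 in image translation

fitc = torch.rand(2, 1, 16, 64, 64)  # hypothetical FITC patches (N,C,D,H,W)
qd = torch.rand(2, 1, 16, 64, 64)    # matching QD patches as supervision

pred = model(fitc)
loss = loss_fn(pred, qd)
opt.zero_grad(); loss.backward(); opt.step()
```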
Electron microscopy: similar reconstruction pipeline for vasculature
High-precision automated reconstruction of neurons with flood-filling networks
Michał Januszewski, Jörgen Kornfeld, Peter H. Li, Art Pope, Tim Blakely, Larry Lindsey, Jeremy Maitin-Shepard, Mike Tyka, Winfried Denk & Viren Jain
Nature Methods volume 15, pages 605–610 (2018)
https://doi.org/10.1038/s41592-018-0049-4
We introduce a CNN architecture which is linearly equivariant (a generalization of invariance defined in the next section) to 3D rotations about patch centers. To the best of our knowledge, this paper provides the first example of a CNN with linear equivariance to 3D rotations and 3D translations of voxelized data. By exploiting the symmetries of the classification task, we are able to reduce the number of trainable parameters using judicious weight tying. We also need less training- and test-time data augmentation, since some aspects of 3D geometry are already ‘hard-baked’ into the network.
As a proof of concept we try segmentation as a 3D problem, feeding 3D image chunks into a 3D network. We use an architecture based on Weiler et al. (2017)'s steerable version of the FusionNet. It is a U-Net with added skip connections within the encoder and decoder paths to encourage better gradient flow.
Effective automated pipeline for 3D reconstruction of synapses based on deep learning
Chi Xiao, Weifu Li, Hao Deng, Xi Chen, Yang Yang, Qiwei Xie and Hua Han
https://doi.org/10.1186/s12859-018-2232-0 BMC Bioinformatics (13 July 2018) 19:263
Five basic steps implemented by the authors:
1) Image registration, e.g. An Unsupervised Learning Model for Deformable Medical Image Registration
2) ROI detection, e.g. Weighted Hausdorff Distance: A Loss Function For Object Localization
3) 3D CNNs, e.g. DeepMedic for brain tumor segmentation
4a) Dijkstra shortest path, e.g. shiluyuan/Reinforcement-Learning-in-Path-Finding (see the path-finding sketch below)
4b) Old-school algorithm refinement, e.g. 3D CRF, Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation
5) Mesh reconstruction, e.g. Robust Surface Reconstruction via Dictionary Learning
Deep-learning-assisted Volume Visualization
Deep Marching Cubes: Learning Explicit Surface Representations
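As a concrete illustration of step 4a above, the sketch below runs a Dijkstra-style minimum-cost path over a vessel-probability volume using scikit-image; the probability map and endpoints are hypothetical stand-ins for the outputs of steps 2-3.

```python
# Minimal sketch of step 4a (Dijkstra-style shortest path) on a 3D
# probability map, assuming `prob` is a vessel-probability array in [0, 1].
import numpy as np
from skimage.graph import route_through_array

prob = np.random.rand(32, 64, 64).astype(np.float32)  # hypothetical 3D map
cost = 1.0 - prob + 1e-6  # low cost where vessel probability is high

start = (5, 10, 10)   # hypothetical endpoints, e.g. from ROI detection
end = (28, 50, 55)

# Dijkstra over the voxel grid: the returned path hugs high-probability voxels
path, total_cost = route_through_array(cost, start, end, fully_connected=True)
print(len(path), total_cost)
```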
Problems specific to multiphoton microscopy
Vasculature Imaging Artifacts: Movement artifact
In vivo MPM images of a capillary. Because MPM images are acquired by raster scanning, images at different depths (z) are acquired with a time lag (t). Unlabeled red blood cells moving through the lumen cause dark spots and streaks and result in variable patterns within a single vessel.
Haft-Javaherian et al. (2019) https://doi.org/10.1371/journal.pone.0213539
Vasculature Imaging Artifacts: “Vessel breakage” / intensity inhomogeneity
A novel method for identifying a graph-based representation of 3-D microvascular networks from fluorescence microscopy image stacks
S. Almasi, X. Xu, A. Ben-Zvi, B. Lacoste, C. Gu et al.
Medical Image Analysis, 20(1):208–223, February 2015.
http://dx.doi.org/10.1016/j.media.2014.11.007
Vasculature Image Quality. An example of false fractions in the
structure caused by imaging imperfections and an area of more
artifacts in a maximum-intensity projection (MIP) slice of a 3-D
fluorescent microscopy image of microvasculature
Joint volumetric extraction and enhancement of vasculature from low-SNR 3-D fluorescence microscopy images
Sepideh Almasi, Ayal Ben-Zvi, Baptiste Lacoste, Chenghua Gu, Eric L. Miller, Xiaoyin Xu
Pattern Recognition Volume 63, March 2017, Pages 710-718
https://doi.org/10.1016/j.patcog.2016.09.031
Highlights
* We introduce intensity-based features to directly segment artifacted images of vasculature.
* The segmentation method is shown to be robust to non-uniform illumination and noise of mixed type.
* This method is free of a priori statistical and geometrical assumptions.
For fluorescence signals, adaptive optics, quantum dots and three-photon microscopy are not always feasible
In this maximum intensity projection of a 3-D fluorescence microscopy image of murine cranial tissue, miscellaneous imaging artifacts are visible: uneven illumination (upper vs. lower parts), non-homogeneous intensity distribution inside the vessels (visible in the larger vessels located at the top right corner), low-SNR regions (lower areas), high spatial density or closeness of vessels (mainly in the center-upper parts), reduced contrast at edges (visible as blurs, mostly for the central vessels), broken or faint vessels (lower vessels), and low-frequency background variations caused by scattered light (at higher density regions).
Multidye Experiments for ‘self-supervised training’
CAM vessel fluorescence followed over time for Q705PEGa and 500 kDa FITC–dextran. 500 kDa FITC–dextran (A) and Q705PEGa (B) were coinjected and images were taken at the designated times.
The use of quantum dots for analysis of chick CAM vasculature
JD Smith, GW Fisher, AS Waggoner… - Microvascular Research, 2007 - Elsevier
Cited by 69
Intravitally injected QDs were found to be biocompatible and were kept in circulation over the course of 4 days without any observed deleterious effects. QD vascular residence time was tunable through QD surface chemistry modification. We also found that use of QDs with higher emission wavelengths (>655 nm) virtually eliminated all chick-derived autofluorescence and improved depth-of-field imaging. QDs were compared to FITC–dextrans, a fluorescent dye commonly used for imaging CAM vessels. QDs were found to image vessels as well as or better than FITC–dextrans at 2–3 orders of magnitude lower concentration. We also demonstrated that QDs are fixable with low fluorescence loss and thus can be used in conjunction with histological processing for further sample analysis.
i.e. which would give you a nicer mask with Otsu's thresholding, for example? (See the sketch below.)
Easier to obtain ground truth labels from QD stacks and use those to train for FITC stacks, or multimodal FITC+QD networks if there is complementary information available?
Inpainting masks (‘vessel breakage’) from the difference between the QD and FITC stacks?
Quantum dots vs. Fluorescein Dextran (FITC)
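To make the “which dye gives the nicer mask” question concrete, here is a minimal sketch with scikit-image's Otsu threshold, assuming co-registered QD and FITC stacks (random arrays stand in for real data):

```python
# Minimal sketch: compare Otsu masks from hypothetical QD and FITC stacks.
import numpy as np
from skimage.filters import threshold_otsu

qd_stack = np.random.rand(16, 128, 128)    # placeholder QD volume
fitc_stack = np.random.rand(16, 128, 128)  # placeholder FITC volume

qd_mask = qd_stack > threshold_otsu(qd_stack)
fitc_mask = fitc_stack > threshold_otsu(fitc_stack)

# A crude 'breakage' proxy: voxels the QD mask has but the FITC mask misses
# could seed the inpainting masks suggested above.
breakage = qd_mask & ~fitc_mask
print(breakage.mean())
```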
Multidye Experiments for Optimized SNR for all vessel sizes
Todorov et al. (2019) Automated analysis of whole brain vasculature using machine learning https://doi.org/10.1101/613257
A-C, Maximum intensity projections of the automatically reconstructed tiling scans of WGA (A) and Evans blue (B) signal in the same sample reveal all details of the perfused vascular network in the merged view (C). D-F, Zoom-ins from the marked region in (C) showing fine details. G-L, Confocal microscopy confirms that WGA and EB dyes stain the vascular wall (G-I, maximum intensity projections of 112 µm) and that the vessels retain their tubular shape (J-L, single slice of 1 µm).
Furthermore, owing to the dual labeling, we maximized the signal-to-noise ratio (SNR) for each dye independently to avoid saturation of differently sized vessels when only a single channel is used. We achieved this by independently optimizing the excitation and emission power. For WGA, we reached a higher SNR for small capillaries; bigger vessels, however, were barely visible (Supporting Fig. 3). For EB, the SNR for small capillaries was substantially lower but larger vessels reached a high SNR (Supporting Fig. 3). Thus, integrating the information from both channels allows homogeneous staining of the entire vasculature throughout the whole brain, and results in a high SNR for high-quality segmentations and analysis.
Play with your Dextran Daltons?
An eNOStag-GFP mouse was injected with two dextrans of different sizes (red = dextran 2 MDa; purple = dextran 10 kDa) and Hoechst (blue = 615 Da), and single-plane images are presented here. 10 min after the injection, presence in the blood and extravasation are seen in the same image. Hoechst extravasates almost immediately out of the blood vessels and is taken up by the surrounding cells (CI). Dextran 10 kDa (CII) can be seen in vessels and in the tumor interstitium. Dextran 2 MDa (CIII) can be found in the vessels. 40 min after injection (CIV), dextran 10 kDa disappears from the blood (CV), and the fluorescent intensity of dextran 2 MDa was also diminished (CVI). Scale bar = 100 µm - https://dx.doi.org/10.3791%2F55115 (2018)
If you have extra channels, and you would normally like to use 10 kDa dextran but for some reason cannot use something with stronger fluorescence that stays better inside the vessels, you could acquire stacks just for the vasculature segmentation, with the higher molecular weights acting as the “physical labels” for vasculature.
z / Depth: crosstalk due to suboptimal optical sectioning
In vivo three-photon microscopy of subcortical structures within an intact mouse brain
Nicholas G. Horton, Ke Wang, Demirhan Kobat, Catharine G. Clark, Frank W. Wise, Chris B. Schaffer & Chris Xu
Nature Photonics volume 7, pages 205–209 (2013)
https://doi.org/10.1038/nphoton.2012.336
The fluorescence of three-photon excitation (3PE) falls off as 1/z⁴ (where z is the distance from the focal plane), whereas the fluorescence of two-photon excitation (2PE) falls off as 1/z². Therefore, 3PE dramatically reduces the out-of-focus background in regions far from the focal plane, improving the signal-to-background ratio (SBR) by orders of magnitude when compared to 2PE.
http://biomicroscopy.bu.edu/research/nonlinear-microscopy
http://parkerlab.bio.uci.edu/microscopy_construction/build_your_own_twophoton_microscope.htm
“Background vasculature” is seen in the layers “in front of it”, i.e. the z-crosstalk. Nonlinear 2-PM reduces this, and 3-PM even more.
When you get the binary mask, how do you in the end reconstruct your mesh? From 1-PM, your vessels would most likely look very thick in the z-dimension, i.e. a way too anisotropic reconstruction?
Depth resolution: we still have labeled in 2D, so some boundary ambiguity exists
Canny edge, radius = 1. Canny on the ground truth. Gamma-corrected version of the input slice: now you see the dimmer vessels better.
The upper part of the slice is clearly behind (on the z axis), as it is dimmer, but it has been annotated to be a vessel on this plane too. This is not necessarily a problem if some sort of consistency exists in the labeling, which is not necessarily the case between different annotators. Then you might need the label noise solutions outlined later in this slide set.
Volume rendering of the ground truth of course now looks thicker than the original unsegmented volume. Multiplying the input volume with this ground truth mask gives a nice rendering, of course. We want to suppress the background noise and make the voxel → mesh conversion easier with clean segmentations.
Single-photon confocal microscope: sectioning worse than 2-PM but still quite good
Images captured by confocal microscopy, showing FITC-dextran (green) and DiI-labeled RBCs (red) in a retinal flatmount. (A, C) Merged green/red images from the superficial section of the retina. (B, D) Red RBC fluorescence in the deeper capillary layers of the retina. The arrow in (A) points to an arteriole that branches down from the superficial layer into the capillary layers shown in (B).
Comparison of the fluorescence microscopy techniques (widefield, confocal, two-photon): http://candle.am/microscopy/
Measurement of Retinal Blood Flow Rate in Diabetic Rats: Disparity Between Techniques Due to Redistribution of Flow, Leskova et al. (2013) http://doi.org/10.1167/iovs.13-11915
Rat retina, SUPERFICIAL layers. Rat retina, CAPILLARY layer.
Kornfield and Newman (2014) 10.1523/JNEUROSCI.1971-14.2014
Vessel density in the three vascular layers. Schematic of the trilaminar vascular network showing the first-order arteriole (1) and venule (V) and the connectivity of the superficial (S), intermediate (I), and deep (D) vascular layers and their locations within the retina. GCL, ganglion cell layer; IPL, inner plexiform layer; INL, inner nuclear layer; OPL, outer plexiform layer; ONL, outer nuclear layer; PR, photoreceptors.
z / Depth: attenuation noise as a function of depth
Effects of depth-dependent noise on line-scanning particle image velocimetry (LS-PIV) analysis. A , Three-
dimensional rendering of cortical vessels imaged with TPLSM demonstrating depth-dependent decrease in
SNR. The blood plasma was labeled with Texas Red-dextran and an image stack over the top 1000 µm was
acquired at 1 µm spacing along the z-axis starting from the brain surface. B, 100 µm-thick projections of
regions 1–4 in panel (A). RBC velocities were measured along the central axis of vessels shown in red boxes,
with red arrows representing orientation of flow. The raw line-scan data (L/S) are depicted to the right of each field and labeled with their respective SNR. Corresponding LS-PIV analyses are depicted to the far right.
Accuracy of LS-PIV analysis with noise and increasing speed. Top, simulated line-scan data with a low level of normally distributed noise with SNR of 8 (A), 1 (B), 0.5 (C), and 0.33 (D). Middle, LS-PIV analysis of the line-scan data (blue dots). The red line represents actual particle speed. Bottom, percent error of LS-PIV compared with actual velocity.
Tyson N Kim et al. (2012) http://doi.org/10.1371/journal.pone.0038590 - Cited by 46
‘Intersecting vessels in 2-PM’: even though the centerlines of actual vessels in 3D do not intersect, the vessel masks might #1
Calivá et al. (2015) A new tool to connect blood vessels in fundus retinal images
https://doi.org/10.1109/EMBC.2015.7319356 - Cited by 8
In the 2D case, the vessel crossings are harder to resolve than in our 3D case.
Slice #10/26: Seems like the big and smaller vessel are going to join?
Slice #19/26: Seems like the small vessel actually was touching the bigger one?
Cross-channel spectral crosstalk
New red-fluorescent calcium indicators for optogenetics, photoactivation and multi-color imaging
Oheim M, van 't Hoff M, Feltz A, Zamaleeva A, Mallet J-M, Collot M. 2014
Biochimica et Biophysica Acta (BBA) - Molecular Cell Research 1843. Calcium Signaling in Health and Disease: 2284–2306. http://dx.doi.org/10.1016/j.bbamcr.2014.03.010
https://github.com/petteriTeikari/mixedImageSeparation
https://github.com/petteriTeikari/spectralSeparability/wiki
Color Preprocessing: Spectral unmixing for microscopy
See the “spectral crosstalk” slide above. Or, in more general terms, you want to do (blind) source separation, “the cocktail party problem”, for 2-PM microscopy data, i.e. you might have some astrocyte/calcium/etc. signal on your “vasculature channel”. You could just apply ICA here and hope for perfect unmixing (a minimal sketch below), or think of something more advanced. Again, seek inspiration from elsewhere: the hyperspectral imaging field has the same challenge to solve.
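A minimal linear-unmixing sketch with scikit-learn's FastICA, under the (strong) assumption that the mixing is linear and the sources are independent; the shapes and channel count are illustrative:

```python
# Minimal sketch of linear unmixing with ICA, assuming a 2-channel stack
# where calcium/astrocyte signal bleeds into the 'vasculature channel'.
import numpy as np
from sklearn.decomposition import FastICA

stack = np.random.rand(2, 16, 128, 128)  # hypothetical (channels, z, y, x)
X = stack.reshape(2, -1).T               # one observation per voxel

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                 # estimated independent sources

unmixed = S.T.reshape(stack.shape)       # back to (sources, z, y, x)
# Note: ICA leaves source order and sign/scale ambiguous, so the
# 'vasculature' component still has to be identified afterwards.
```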
Improved Deep Spectral Convolution Network For Hyperspectral Unmixing With Multinomial Mixture Kernel and Endmember Uncertainty
Savas Ozkan and Gozde Bozdagi Akar
(Submitted on 27 Mar 2019) https://arxiv.org/abs/1904.00815
https://github.com/savasozkan/dscn
We propose a novel framework for hyperspectral unmixing by using
an improved deep spectral convolution network (DSCN++) combined
with endmember uncertainty. DSCN++ is used to compute high-level
representations which are further modeled with Multinomial Mixture
Model to estimate abundance maps. In the reconstruction step, a new
trainable uncertainty term based on a nonlinear neural network
model is introduced to provide robustness to endmember uncertainty.
For the optimization of the coefficients of the multinomial model and the
uncertainty term, Wasserstein Generative Adversarial Network (WGAN)
is exploited to improve stability.
Anisotropic Volumes: z-resolution not as good as xy
3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes
Siqi Liu, Daguang Xu, S. Kevin Zhou, Thomas Mertelmeier, Julia Wicklein, Anna Jerebko, Sasa Grbic, Olivier Pauly, Weidong Cai, Dorin Comaniciu (Submitted on 23 Nov 2017)
https://arxiv.org/abs/1711.08580
Elastic Boundary Projection for 3D Medical Image Segmentation
Tianwei Ni et al. (CVPR 2019)
http://victorni.me/pdf/EBP_CVPR2019/1070.pdf
In this paper, we bridge the gap between 2D and 3D using a novel approach named
Elastic Boundary Projection (EBP). The key observation is that, although the object
is a 3D volume, what we really need in segmentation is to find its boundary which is a
2D surface. Therefore, we place a number of pivot points in the 3D space, and for each
pivot, we determine its distance to the object boundary along a dense set of directions.
This creates an elastic shell around each pivot which is initialized as a perfect sphere.
We train a 2D deep network to determine whether each ending point falls within the
object, and gradually adjust the shell so that it gradually converges to the actual shape of the boundary, thus achieving the goal of segmentation.
From voxel-based tricks → NURBS-like parametrization for “subvoxel” MESH/CFD analysis?
Not a lot of papers address (multiphoton) microscopy (micro)vasculature specifically, thus most of the slides are outside vasculature processing but relevant if you want to work on “next generation” vascular segmentation networks.
Non-DL ‘classical approaches’
Segmentation of Vasculature From Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images
Russell Bates, Benjamin Irving, Bostjan Markelc, Jakob Kaeppler, Graham Brown, Ruth J. Muschel, et al. Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, U.K.
IEEE Transactions on Medical Imaging (Volume 38, Issue 1, Jan. 2019)
https://doi.org/10.1109/TMI.2017.2725639
Here, we present a method for the segmentation of tumor vasculature in 3D
fluorescence microscopic images using signals from the endothelial and
surrounding cells. We show that our method can provide complete and
semantically meaningful segmentations of complex vasculature using a
supervoxel-Markov random field approach.
A potential area for future improvement is the limitations imposed by our edge potentials in the MRF, which are tuned rather than learned. The expectation of the existence of fully annotated training sets for many applications is unrealistic. Future work will focus on the suitability of semi-supervised methods to achieve fully supervised levels of performance on sparse annotations. It is possible that this may be done in the current framework using label-transduction methods.
Interesting work in transduction and interactive learning for sparsely labeled superpixel microscopy images has also been undertaken by Su et al. (2016). A method that can take sparse image annotations and use them to leverage information from a large set of unlabeled parts of the image to create high-quality segmentations would be an extremely powerful tool. This would have very broad applications in novel imaging experiments where large training sets are not readily available and where there is a high time-cost in producing such a training set.
Initial Effort with hybrid “2D/3D ZNN” with CPU acceleration
Deep Learning Convolutional Networks for Multiphoton Microscopy Vasculature Segmentation
Petteri Teikari, Marc Santos, Charissa Poon, Kullervo Hynynen (Submitted on 8 Jun 2016)
https://arxiv.org/abs/1606.02382
Microvasculature CNNs #1
Microvasculature segmentation of arterioles using deep CNN
Y. M. Kassim et al. (2017)
Computational Imaging and Vis Analysis (CIVA) Lab
https://doi.org/10.1109/ICIP.2017.8296347
Accurate segmentation for separating microvasculature structures is important in quantifying the remodeling process.
In this work, we utilize a deep convolutional neural
network (CNN) framework for obtaining robust
segmentations of microvasculature from epifluorescence
microscopy imagery of mice dura mater. Due to the
inhomogeneous staining of the microvasculature,
different binding properties of vessels under fluorescence
dye, uneven contrast and low texture content, traditional
vessel segmentation approaches obtain sub-optimal
accuracy.
We proposed a CNN architecture which is adapted to obtaining robust segmentation of microvasculature structures. By considering overlapping patches along with multiple convolutional layers, our method obtains good vessel differentiation for accurate segmentations.
Microvasculature CNNs #2
Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks
Russell Bates, Benjamin Irving, Bostjan Markelc, Jakob Kaeppler, Ruth Muschel, Vicente Grau, Julia A. Schnabel
Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom
CRUK/MRC Oxford Centre for Radiation Oncology, Department of Oncology, University of Oxford, United Kingdom
Division of Imaging Sciences and Biomedical Engineering, King's College London, United Kingdom
Perspectum Diagnostics, Oxford, United Kingdom
(Submitted on 26 May 2017)
https://arxiv.org/abs/1705.09597
In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods is well suited to microscopy images of tumor vasculature.
In order to address this we propose a method to directly
extract a medial representation of the vessels using
Convolutional Neural Networks. We then show that
these two-dimensional centerlines can be meaningfully
extended into 3D in anisotropic and complex microscopy
images using the recently popularized Convolutional Long
Short-Term Memory units (ConvLSTM). We demonstrate
the effectiveness of this hybrid convolutional-recurrent
architecture over both 2D and 3D convolutional
comparators.
Microvasculature CNNs #3
Automatic Graph-based Modeling of Brain Microvessels Captured with Two-Photon Microscopy
Rafat Damseh, Philippe Pouliot, Louis Gagnon, Sava Sakadzic, David Boas, Farida Cheriet et al. (2018)
Institute of Biomedical Engineering, Ecole Polytechnique de Montreal
https://doi.org/10.1109/JBHI.2018.2884678
Graph models of cerebral vasculature derived from two-photon microscopy have been shown to be relevant for studying brain microphysiology. Automatic graphing of these microvessels remains problematic due to the vascular network complexity and two-photon sensitivity limitations with depth.
In this work, we propose a fully automatic processing pipeline to address this issue. The modeling scheme consists of a fully-convolutional neural network (FCN) to segment microvessels, a 3D surface model generator and a geometry contraction algorithm to produce graphical models with a single connected component. In quantitative assessment using NetMets metrics, at a tolerance of 60 μm, false negative and false positive geometric error rates are 3.8% and 4.2%, respectively, whereas false negative and false positive topological error rates are 6.1% and 4.5%, respectively.
One important issue that could be addressed in future work is related to the difficulty in generating watertight surface models. The employed contraction algorithm is not applicable to surfaces lacking such characteristics. Introducing a geometric contraction not restricted to such conditions on the obtained surface model could be an area of further investigation.
Microvasculature CNNs #4
Fully Convolutional DenseNets for Segmentation of Microvessels in Two-Photon Microscopy
Rafat Damseh et al. (2019)
https://doi.org/10.1109/EMBC.2018.8512285
Segmentation of microvessels measured using two-photon microscopy has been studied in the literature with limited success due to uneven intensities associated with optical imaging and shadowing effects. In this work, we address this problem using a customized version of a recently developed fully convolutional neural network, namely FC-DenseNets (see DenseNet, cited by 3527). To train and validate the network, manual annotations of 8 angiograms from two-photon microscopy were used.
However, this study suggests that in order to exploit the output of our deep
model in further geometrical and topological analysis, further
investigations might be needed to refine the segmentation. This could
be done by either adding extra processing blocks on the output of the
model or incorporating 3D information in its training process.
Microvasculature CNNs #5
A Deep Learning Approach to 3D Segmentation of Brain Vasculature
Waleed Tahir, Jiabei Zhu, Sreekanth Kura, Xiaojun Cheng, David Boas, and Lei Tian (2019)
Department of Electrical and Computer Engineering, Boston University
https://www.osapublishing.org/abstract.cfm?uri=BRAIN-2019-BT2A.6
The segmentation of blood-vessels is an important preprocessing
step for the quantitative analysis of brain vasculature. We approach
the segmentation task for two-photon brain angiograms using a fully
convolutional 3D deep neural network.
We employ a DNN to learn a statistical model relating the measured angiograms to the vessel labels. The overall structure is derived from V-Net [Milletari et al. 2016], which consists of a 3D encoder-decoder architecture. The input first passes through the encoder path, which consists of four convolutional layers. Each layer comprises residual connections, which speed up convergence, and 3D convolutions with multi-channel convolution kernels, which retain 3D context.
Loss functions like mean squared error (MSE) and mean absolute error (MAE) have been used widely in deep learning; however, they cannot promote sparsity and are thus unsuitable for sparse objects. In our case, less than 5% of the total volume in the angiogram comprises blood vessels. Thus, the object under study is not only sparse, there is also a large class imbalance between the number of foreground vs. background voxels. We therefore resort to balanced cross entropy as the loss function [HED, 2015], which not only promotes sparsity but also caters for the class imbalance (a minimal sketch below).
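Here is a minimal sketch of a class-balanced binary cross entropy in the spirit of HED (not the exact implementation of the paper above); the tensor shapes and the ~5% foreground fraction are illustrative:

```python
# Minimal sketch of class-balanced cross entropy in the spirit of HED
# (Xie & Tu 2015): weight each class by the prevalence of the other.
import torch
import torch.nn.functional as F

def balanced_bce(logits, target):
    """logits, target: float tensors of identical shape; target in {0, 1}."""
    beta = 1.0 - target.mean()  # fraction of background voxels (~0.95 here)
    weights = target * beta + (1.0 - target) * (1.0 - beta)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights)

logits = torch.randn(1, 1, 16, 64, 64)                   # network output
target = (torch.rand(1, 1, 16, 64, 64) < 0.05).float()   # ~5% foreground
loss = balanced_bce(logits, target)
```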
Microvasculature CNNs #6: State-of-the-Art (SOTA)?
Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models
Mohammad Haft-Javaherian, Linjing Fang, Victorine Muse, Chris B. Schaffer, Nozomi Nishimura, Mert R. Sabuncu
Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, United States of America
March 2019
https://doi.org/10.1371/journal.pone.0213539
https://arxiv.org/abs/1801.00880
Data: https://doi.org/10.7298/X4FJ2F1D (1.141 GB)
Code: https://github.com/mhaft/DeepVess (TensorFlow / MATLAB)
We explored the use of convolutional neural networks to segment 3D vessels
within volumetric in vivo images acquired by multiphoton microscopy. We
evaluated different network architectures and machine learning
techniques in the context of this segmentation problem. We show that our
optimized convolutional neural network architecture with a customized loss
function, which we call DeepVess, yielded a segmentation accuracy that was better than state-of-the-art methods, while also being orders of magnitude faster than manual annotation.
While DeepVess offers very high accuracy in the problem we consider, there
is room for further improvement and validation, in particular in the
application to other vasiform structures and modalities. For example, other
types of (e.g., non-convolutional) architectures such as long short-term
memory (LSTM) can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction methods together, such as the method proposed by Bates et al. (2017), in a single complete end-to-end learning framework might achieve higher centerline accuracy levels.
Comparison of DeepVess and the state-of-the-art methods.
3D rendering of (A) the expert's manual and (B) DeepVess segmentation results.
Comparison of DeepVess and the gold standard human expert segmentation results.
We used 50% dropout during test time [MC Dropout] and computed Shannon's entropy for the segmentation prediction at each voxel to quantify the uncertainty in the automated segmentation.
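A minimal sketch of that MC-dropout recipe (assuming a binary segmentation model where dropout layers are the only layers affected by train mode; the toy network and T=20 passes are illustrative):

```python
# Keep dropout sampling on at test time, average T stochastic passes,
# and compute per-voxel Shannon entropy of the mean foreground probability.
import torch
import torch.nn as nn

def mc_dropout_entropy(model, x, T=20):
    model.train()  # keeps dropout active (assumes no batch-norm layers)
    with torch.no_grad():
        p = torch.stack([torch.sigmoid(model(x)) for _ in range(T)]).mean(0)
    eps = 1e-7
    h = -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps))
    return p, h  # segmentation probability and voxelwise uncertainty

# Toy usage: a hypothetical segmentation head with 50% dropout
net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout3d(0.5), nn.Conv3d(8, 1, 3, padding=1))
prob, uncertainty = mc_dropout_entropy(net, torch.rand(1, 1, 8, 32, 32))
```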
Microvasculature CNNs #7: Dual-Dye Network for vasculature
Automated analysis of whole brain vasculature using machine learning
Mihail Ivilinov Todorov, Johannes C. Paetzold, Oliver Schoppe, Giles Tetteh, Velizar Efremov, Katalin Völgyi, Marco Düring, Martin Dichgans, Marie Piraud, Bjoern Menze, Ali Ertürk
(Posted April 18, 2019) https://doi.org/10.1101/613257
http://discotechnologies.org/VesSAP
Tissue clearing methods enable imaging of intact biological
specimens without sectioning. However, reliable and scalable
analysis of such large imaging data in 3D remains a challenge.
Towards this goal, we developed a deep learning-based framework
to quantify and analyze the brain vasculature, named Vessel
Segmentation & Analysis Pipeline (VesSAP). Our pipeline uses a
fully convolutional network with a transfer learning approach for
segmentation.
We systematically analyzed vascular features of the whole brains, including their length, bifurcation points and radius at the micrometer scale, by registering them to the Allen mouse brain atlas. We reported the first evidence of secondary intracranial collateral vascularization in CD1-Elite mice and found reduced vascularization in the brainstem as compared to the cerebrum. VesSAP thus enables unbiased and scalable quantifications for the angioarchitecture of the cleared intact mouse brain and yields new biological insights related to the vascular brain function.
What next? 2019 ideas
Well, what about the NOVELTY to ADD?
Depends a bit on what the benchmarks reveal? DeepVess does not seem out of this world in terms of its specs, so it should be possible to beat it with “brute force”, by trying the different standard things proposed in the literature. Keep this in mind, and have a look at the following slides.
INPUT / SEGMENTATION / UNCERTAINTY (MC Dropout)
While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) (i.e. what the hGRU did) can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction methods together (multi-task learning, MTL), such as the method proposed by Bates et al. [25] in a single complete end-to-end learning framework, might achieve higher centerline accuracy levels.
Vasculature Networks: Future
While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) (i.e. what the hGRU did) can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction methods together (multi-task learning, MTL), such as the method proposed by Bates et al. [25] in a single complete end-to-end learning framework, might achieve higher centerline accuracy levels.
FC-DenseNets:
However, this study suggests that in order to exploit the output of our deep model in further geometrical and topological analysis, further investigations might be needed to refine the segmentation. This could be done by either adding extra processing blocks on the output of the model or incorporating 3D information in its training process.
http://sci-hub.tw/10.1109/jbhi.2018.2884678
One important issue that could be addressed in future work is related to the difficulty in generating watertight surface models. The employed contraction algorithm is not applicable to surfaces lacking such characteristics.
General idea of the End-to-End network
High-level Generic Architecture
IMAGE RESTORATION → IMAGE SEGMENTATION → GRAPH RECONSTRUCTION
y1: Denoised stack for visualization
y2: Voxel masks for diameter quantification
y3: Graph network / watertight mesh for CFD analysis
End-to-end network (i.e. learn all 3 tasks jointly with deeply supervised targets y1–y3), composed of different blocks, trained hopefully end-to-end (a minimal loss sketch below).
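A minimal sketch of what such a deeply supervised joint objective could look like; the loss choices and weights are placeholders, not a published recipe:

```python
# One weighted loss per target: y1 (restoration), y2 (voxel masks),
# y3 (graph/centerline), back-propagated jointly through shared blocks.
import torch
import torch.nn.functional as F

def joint_loss(pred_restore, y1, pred_seg, y2, pred_graph, y3,
               w=(1.0, 1.0, 1.0)):
    l_restore = F.l1_loss(pred_restore, y1)                       # denoising
    l_seg = F.binary_cross_entropy_with_logits(pred_seg, y2)      # voxel masks
    l_graph = F.binary_cross_entropy_with_logits(pred_graph, y3)  # centerlines
    return w[0] * l_restore + w[1] * l_seg + w[2] * l_graph
```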
Image Restoration and Quality
Noise Models: biomedical images typically do not follow the AWGN model
The Rician distribution is far from Gaussian for small SNR (A/σ ≤ 1). For ratios as small as A/σ = 3, however, it starts to approximate the Gaussian distribution.
Gudbjartsson and Samuel Patz (1995) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2254141/ (MRI)
Additive White Gaussian Noise (AWGN): AWGN is signal-independent, whereas with 2-PM, at low photon counts the noise depends on the signal (“Poisson noise”).
Making noise ‘Gaussian’
Use a nonlinear variance-stabilizing transformation (VST, e.g. the generalized Anscombe transformation, Mäkitalo and Foi 2013; see github/petteriTeikari/.../denoise_anscombeTransform.m) to convert the Poisson-Gaussian denoising problem into a Gaussian noise removal problem. This allows you to use denoising algorithms designed for AWGN noise.
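For reference, a minimal sketch of the plain (Poisson-only) Anscombe VST and a naive inverse; the generalized transform referenced above additionally handles the Gaussian (read noise) component:

```python
# x ~ Poisson(lambda)  ->  f(x) = 2*sqrt(x + 3/8) has ~unit variance,
# so AWGN denoisers can be applied in the transformed domain.
import numpy as np

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; Makitalo & Foi derive an exact unbiased
    # inverse that behaves better at low photon counts.
    return (y / 2.0) ** 2 - 3.0 / 8.0

counts = np.random.poisson(lam=5.0, size=100_000)
print(np.var(anscombe(counts)))  # close to 1 for moderate lambda
```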
https://doi.org/10.1117/12.2216562
Ultrasound
Existing work for image restoration mainly deals with additive Gaussian noise, impulse noise, and Poisson noise. Multiplicative noise, a different noise model, is commonly found in laser images, synthetic aperture radar (SAR), ultrasound imaging, etc. The associated mathematical model assumes that the original image u has been blurred by a blurring operator K and then corrupted by some multiplicative noise η (e.g., Gamma noise).
Results of different restoring methods on a real ultrasound image (“Kidney”). (a) Real image; (b) RLO method; (c) SRAD method; (d) JY method; (e) Algorithm 1.
Lu et al. (2018) https://doi.org/10.1016/j.apm.2018.05.007
Intro to microscopy noise and image formation models
The effects of decreasing signal-to-noise ratio in fluorescence microscopy are illustrated by the series of digital images presented in Figure 1. The specimen is an adherent culture of opossum kidney proximal tubule epithelial cells (OK cell line) stained with SYTOX Green to image the nuclei. At high signal-to-noise ratios, a pair of interphase nuclei (Figure 1(a)) is imaged with sharp contrast and good definition of fine detail on a black background. As the signal-to-noise ratio decreases (Figures 1(b) and 1(c)), the definition and contrast of the nuclei also decrease until they almost completely blend into the noisy background (Figure 1(d)) as the SNR approaches unity.
Three primary undesirable signal components (noise) are typically considered in calculating overall signal-to-noise ratios:
●
Photon noise results from the inherent statistical variation in the arrival rate of photons incident on the detector. The interval between photon arrivals is governed by Poisson statistics. In general, the term shot noise is applied to any noise component reflecting a similar statistical variation, or uncertainty.
●
Dark noise arises from statistical variation in the number of electrons thermally generated within the silicon structure of the detector, which is independent of photon-induced signal but highly dependent on device temperature. In similarity to photon noise, dark noise follows a Poisson relationship to dark current.
●
Read noise is a combination of system noise components inherent to the process of converting CCD charge carriers into a voltage signal for quantification, and the subsequent processing and analog-to-digital (A/D) conversion. The major contribution to read noise usually originates with the on-chip preamplifier, and this noise is added uniformly to every image pixel.
CCD Noise Sources and Signal-to-Noise Ratio https://micro.magnet.fsu.edu/primer/digitalimaging/concepts/ccdsnr.html
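Following the source just cited, here is a minimal sketch of the resulting SNR model; the parameter names and example numbers are illustrative assumptions:

```python
# Textbook CCD SNR from the three noise terms above: S = signal electrons,
# D = dark-current electrons, Nr = read noise in electrons RMS.
import math

def ccd_snr(photon_flux, qe, t_exp, dark_current, read_noise):
    S = photon_flux * qe * t_exp      # collected signal electrons
    D = dark_current * t_exp          # thermally generated electrons
    return S / math.sqrt(S + D + read_noise ** 2)

# e.g. 1000 photons/s, QE 0.6, 0.1 s exposure, 25 e-/s dark, 10 e- read noise
print(ccd_snr(1000, 0.6, 0.1, 25, 10))
```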
How noise makes your life harder in practice
A Tool for Alignment and Averaging of Sparse Fluorescence Signals in Rod-Shaped Bacteria
Joris M.H. Goudsmits, Antoine M. van Oijen, Andrew Robinson
Biophysical Journal (April 2016) https://doi.org/10.1016/j.bpj.2016.02.039
Analyzing the spatial distribution of low-abundance proteins
within cells is highly challenging because information obtained
from multiple cells needs to be combined to provide well-defined
maps of protein locations. We present (to our knowledge) a novel
tool for fast, automated, and user-impartial analysis of
fluorescent protein distribution across the short axis of rod-
shaped bacteria. To demonstrate the strength of our approach in
extracting spatial distributions and visualizing dynamic
intracellular processes, we analyzed sparse fluorescence
signals from single-molecule time-lapse images of individual
Escherichia coli cells.
Statistical Denoising for single molecule fluorescence microscopic images, Ji Won Yoon (Submitted on 7 Jun 2013) https://arxiv.org/abs/1306.1619
Photon Shot Noise Limits on Optical Detection of Neuronal Spikes and Estimation of Spike Timing
Brian A. Wilt, James E. Fitzgerald, Mark J. Schnitzer
Biophysical Journal (January 2013)
https://doi.org/10.1016/j.bpj.2012.07.058
Optical approaches for tracking neural dynamics
are of widespread interest, but a theoretical
framework quantifying the physical limits of
these techniques has been lacking. We
formulate such a framework by using signal
detection and estimation theory to obtain physical
bounds on the detection of neural spikes and the
estimation of their occurrence times as set by
photon counting statistics (shot noise). These
bounds are succinctly expressed via a
discriminability index that depends on the kinetics
of the optical indicator and the relative fluxes of
signal and background photons. Finally, the ideas
presented here may be applicable to the
inference of time-varying spike rates from
fluorescence data.
Denoising Two-Photon Calcium Imaging Data Malik
et al. (2011)
https://doi.org/10.1371/journal.pone.0020490
Quantifying the performance of microscopy systems
Using the NoiSee workflow to measure signal-to-noise ratios of confocal microscopes
Alexia Ferrand, Kai D. Schleicher, Nikolaus Ehrenfeuchter, Wolf Heusermann & Oliver Biehlmaier, Scientific Reports volume 9, Article number: 1165 (2019)
https://doi.org/10.1038/s41598-018-37781-3
https://imagej.net/NoiSee
By design, a large portion of the signal is discarded in confocal imaging, leading to a decreased signal-to-noise ratio (SNR), which in turn limits resolution. A well-aligned system and high-performance detectors are needed in order to generate an image of the best quality. However, a convenient method to address system status and performance on the emission side is still lacking. Here, we present a complete method to assess microscope and emission light path performance in terms of SNR, with a comprehensive protocol alongside NoiSee, an easy-to-use macro for Fiji (ImageJ).
Our method reveals differences in microscope performance and highlights the various detector types used (multialkali photomultiplier tube (PMT), gallium arsenide phosphide (GaAsP) PMT, and hybrid detector). Altogether, our method will provide useful information to research groups and facilities to diagnose their confocal microscopes.
Image quality as a function of SNR. Cells with actin fibres imaged on different confocal systems, showing decreasing SNR scores from left to right. The detectors were chosen to represent the range of SNR scores: (a,f) Zeiss LSM800 GaAsP2; (b,g) LSM700up PMT2; (c,h) SP8M PMT3; (d,i) SP5II PMT1; (e,j) SP5 MP HyD2. Scale bar 10 µm.
Influence of SNR and signal-to-background ratio (SBR) on image quality.
Nice to have some realistic noise-free 2-PM volumes/slices as benchmarks for denoising performance, even if you use the ‘statistical noise model’ approach
GRDN: Grouped Residual Dense Network for Real Image Denoising and GAN-based Real-World Noise Modeling
Dong-Wook Kim, Jae Ryun Chung, Seung-Won Jung
(Submitted on 27 May 2019) https://arxiv.org/abs/1905.11172
Toward real-world image denoising, there have been two main approaches. The first approach is to find a better statistical model of real-world noise rather than the additive white Gaussian noise [e.g. Brooks et al. 2018, Guo et al. 2018, Plötz and Roth 2018]. In particular, a combination of Gaussian and Poisson distributions was shown to closely model both signal-dependent and signal-independent noise. The networks trained using these new synthetic noisy images demonstrated superiority in denoising real-world noisy images. One clear advantage of this approach is that we can have infinitely many training image pairs by simply adding the synthetic noise to noise-free ground-truth images. The second approach is thus in the opposite direction: from real-world noisy images, nearly noise-free ground-truth images can be obtained by inverting the image acquisition procedure.
We improve the previous GAN-based real-world noise simulation technique [Chen et al. 2018] by including conditioning signals such as the noise-free image patch, ISO, and shutter speed as additional inputs to the generator. The conditioning on the noise-free image patch can help generate more realistic signal-dependent noise, and the other camera parameters can increase controllability and variety of the simulated noise signals. We also change the discriminator of the previous architecture [Chen et al. 2018] by using a recent relativistic GAN [Jolicoeur-Martineau 2018, cited by 38]. Unlike conventional GANs, the discriminator of the relativistic GAN learns to determine which is more realistic between real data and fake data.
We thus plan to apply the proposed image denoising network to other image restoration tasks. We also could not fully and quantitatively justify the effectiveness of the proposed real-world noise modeling method. A more elaborate design is clearly necessary for better real-world noise modeling. We believe that our real-world noise modeling method can be extended to other real-world degradations such as blur, aliasing, and haze, which will be demonstrated in our future work.
If the image restoration is successful with the image segmentation target, we could train another network for estimating image quality and even optimize that in real time in the 2-PM setup [*]?
[*] Assuming that one has the time to acquire the same slices multiple times if they are deemed sub-quality. Might work for histopathology, but for anesthetized rodents?
A deep neural network for image quality assessment, Bosse et al. (2016) https://doi.org/10.1109/ICIP.2016.7533065
Exploiting Unlabeled Data in CNNs by Self-supervised Learning to Rank, Xialei Liu et al. (2019) https://doi.org/10.1109/TPAMI.2019.2899857
JND-SalCAR: A Novel JND-based Saliency-Channel Attention Residual Network for Image Quality Prediction, Seo et al. (2019) https://arxiv.org/abs/1902.05316
Real-Time Quality Assessment of Pediatric MRI via Semi-Supervised Deep Nonlocal Residual Neural Networks, Siyuan Liu et al. (2019) https://arxiv.org/abs/1904.03639
GANs-NQM: A Generative Adversarial Networks based No Reference Quality Assessment Metric for RGB-D Synthesized Views, Suiyi Ling et al. (2019) https://arxiv.org/abs/1903.12088
Quality-aware Unpaired Image-to-Image Translation, Lei Chen et al. (2019) https://arxiv.org/abs/1903.06399
Fluorescent Microscopy Datasets: there could be more benchmarking data around
A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images
Yide Zhang, Yinhao Zhu, Evan Nichols, Qingfei Wang, Siyuan Zhang, Cody Smith, Scott Howard, University of Notre Dame
(Submitted on 26 Dec 2018 (v1), last revised 5 Apr 2019)
https://arxiv.org/abs/1812.10366
http://tinyurl.com/y6mwqcjs
https://github.com/bmmi/denoising-fluorescence
The dataset consists of 12,000 real
fluorescence microscopy images
obtained with commercial confocal,
two-photon, and wide-field
microscopes and representative
biological samples such as cells,
zebrafish, and mouse brain tissues.
We use image averaging to
effectively obtain ground truth
images and 60,000 noisy images
with different noise levels.
We have made our FMD dataset
publicly available as a benchmark for
Poisson/Gaussian denoising
research, which, we believe, will be
especially useful for researchers that
are interested in improving the
imaging quality of fluorescence
microscopy.
Examples of images with different noise levels and ground truth. The single-channel (gray) images are acquired with two-photon microscopy on fixed mouse brain tissues. The multichannel (color) images are obtained with two-photon microscopy on fixed BPAE cells. The ground truth images are estimated by averaging 50 noisy raw images.
Estimated noise parameters (a and b) of averaged images obtained with different numbers of raw images in the average. The estimation is performed on the second FOV of each imaging configuration.
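Under the Poisson-Gaussian model those (a, b) parameters refer to, the per-pixel variance is approximately a·(signal) + b, so with repeated frames of the same field of view they can be estimated by a simple line fit; a minimal sketch on synthetic data:

```python
# Fit var(noisy) ~= a * mean(clean) + b from 50 repeated frames.
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(10.0, 200.0, size=(128, 128))            # varying brightness
noisy = rng.poisson(lam, size=(50, 128, 128)).astype(np.float64)
noisy += rng.normal(0.0, 2.0, size=noisy.shape)            # additive read noise

pixel_mean = noisy.mean(axis=0).ravel()
pixel_var = noisy.var(axis=0, ddof=1).ravel()

# Least-squares line through (mean, variance): slope ~ a, intercept ~ b
a, b = np.polyfit(pixel_mean, pixel_var, deg=1)
print(a, b)  # expect a ~ 1, b ~ 4 for this synthetic example
```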
If you had a custom-built microscope, make it more ‘intelligent’
Reducing Uncertainty in Undersampled MRI Reconstruction with Active Acquisition, Zizhao Zhang, Adriana Romero, Matthew J. Muckley, Pascal Vincent, Lin Yang, Michal Drozdzal (Submitted on 8 Feb 2019) https://arxiv.org/abs/1902.03051
Learning Fast Magnetic Resonance Imaging, Tomer Weiss, Sanketh Vedula, Ortal Senouf, Alex Bronstein, Oleg Michailovich, Michael Zibulevsky (Submitted on 22 May 2019) https://arxiv.org/abs/1905.09324
Self-supervised learning of inverse problem solvers in medical imaging, Ortal Senouf, Sanketh Vedula, Tomer Weiss, Alex Bronstein, Oleg Michailovich, Michael Zibulevsky (Submitted on 22 May 2019) https://arxiv.org/abs/1905.09325
Compressed Sensing: From Research to Clinical Practice with Data-Driven Learning, Joseph Y. Cheng, Feiyu Chen, Christopher Sandino, Morteza Mardani, John M. Pauly, Shreyas S. Vasanawala (Submitted on 19 Mar 2019) https://arxiv.org/abs/1903.07824
Deep Learning Methods for Parallel Magnetic Resonance Image Reconstruction, Florian Knoll, Kerstin Hammernik, Chi Zhang, Steen Moeller, Thomas Pock, Daniel K. Sodickson, Mehmet Akcakaya (Submitted on 1 Apr 2019) https://arxiv.org/abs/1904.01112
●
Sample so that you get the best image quality possible even before any of the image restoration networks (and jointly optimize the whole thing)
●
Also take into account your other physiological sensors (e.g. blood pressure measurement, cardiac-gating ECG lead, EEG, etc.); design the triggering and DAQ systems to integrate with your microscope (commercial Olympus or custom-built)
https://www.slideshare.net/PetteriTeikariPhD/instrumentation-for-in-vivo-intravital-microscopy
Instrumentation for in vivo intravital microscopy
End-to-end restoration + segmentation with deep supervision?
Not a bad “side product” if you get a cleaned volume for visualization along with the segmentation? Especially for clinical use?
Raw image. Swaminathan et al. (2012) http://doi.org/10.1515/bmt-2012-0055
CLAHE. Contourlet transform method + CLAHE.
You are trying to segment the vasculature, thus having the labels for vasculature allows “weighted sharpening” within the network. The clinician should be able to make more sense of this than of the “raw image”?
Create a proof-reading tool for ‘2-PM quality’ estimation?
HistoQC: An Open-Source Quality Control Tool for Digital Pathology Slides
Andrew Janowczyk, Ren Zuo, Hannah Gilmore, Michael Feldman, and Anant Madabhushi
http://doi.org/10.1200/CCI.18.00157
Here we present HistoQC, a tool for rapidly
performing quality control to not only identify and
delineate artefacts but also discover cohort-level
outliers (eg, slides stained darker or lighter than
others in the cohort). This open-source tool
employs a combination of image metrics (eg, color
histograms, brightness, contrast), features (eg, edge
detectors), and supervised classifiers (eg, pen
detection) to identify artefact-free regions on
digitized slides.
These regions and metrics are presented to the user via an interactive graphical user interface, facilitating artefact detection through real-time visualization and filtering. These same metrics afford users the opportunity to explicitly define acceptable tolerances for their workflows.
And get multiple “ground truth” masks now in your database (MongoDB, cited by 14, or whatever your DevOps expert recommends for your HDF5s / OME-TIFFs)
HDR: More bits useful for vascular segmentation?
At least dimmer vessels should be easier to separate from “dark noise”, assuming that no additional artifacts are introduced. But if you are studying the Neurovascular Unit (NVU), with your calcium analysis from neurons and astrocytes, you get obvious benefits?
Real-time high dynamic range laser scanning microscopy
C. Vinegoni, C. Leon Swisher, P. Fumene Feruglio, R. J. Giedt, D. L. Rousso, S. Stapleton & R. Weissleder, Center for Systems Biology, Massachusetts General Hospital and Harvard Medical School, Richard B. Simches Research Center
Nature Communications (2016)
https://doi.org/10.1038/ncomms11077
Principle for HDR imaging. Only a restricted portion of the detector dynamic range can be effectively used for signal quantization (R2). The dark noise (blue area, R1) limits low-signal detection, while the high-intensity signal near the detector's maximum threshold is saturated (red) and is also disregarded (R3). By combining multiple images (LDR1, LDR2, LDR3) with different sensitivities (α0, α1, α2) the quantization range can be increased, giving rise to a high dynamic range image (HDR). This particular setup "sacrifices" three PMTs for the same "color channel" (dye) with different neutral density filters.
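A minimal fusion sketch of the idea above, assuming each LDR acquisition has been normalized to [0, 1] and the relative sensitivities (the α values) are known; saturated and dark-noise voxels are masked out and the rest are averaged on a common radiometric scale. The function name, thresholds and toy data are illustrative assumptions, not from the paper:

```python
import numpy as np

def fuse_ldr_stack(ldr_images, gains, dark_floor=0.02, sat_ceiling=0.95):
    """Naive HDR fusion of LDR acquisitions taken at different
    sensitivities (e.g. PMTs behind different ND filters).
    Voxels near the dark-noise floor or saturation are ignored;
    the remaining ones are rescaled by their gain and averaged."""
    acc = np.zeros_like(ldr_images[0], dtype=np.float64)
    weight = np.zeros_like(acc)
    for img, gain in zip(ldr_images, gains):
        valid = (img > dark_floor) & (img < sat_ceiling)
        acc[valid] += img[valid] / gain   # undo the sensitivity difference
        weight[valid] += 1.0
    return np.where(weight > 0, acc / np.maximum(weight, 1.0), 0.0)

# toy usage: three "exposures" of the same field of view
rng = np.random.default_rng(0)
scene = rng.gamma(2.0, 0.2, size=(64, 64))
ldrs = [np.clip(scene * g, 0.0, 1.0) for g in (0.25, 1.0, 4.0)]
hdr = fuse_ldr_stack(ldrs, gains=(0.25, 1.0, 4.0))
```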
In vivo intravascular dye kinetics. In vivo intravascular real-time quantification of the time–intensity variations demonstrating the vascular pharmacokinetics of a fluorescent probe across multiple regions of interest (ROIs). A bolus of 2 MDa FITC-dextran was injected intravenously through the lateral tail vein and vascular kinetics were captured by collecting a time sequence of real-time HDR images.
Automated segmentation of rHDR images allowed for the identification of vascular features that agreed with values obtained using ground-truth manual segmentation. Conversely, LDR image segmentation resulted in a high degree of vasculature fragmentation (low branch length) due to the low SNR present within the image.
"Short exposure", "Mid exposure", "Long exposure"
Volumetric vasculature HDR confocal imaging. (a–c) LDR images of a cleared DiI-stained heart, and (d) corresponding rHDR image reconstruction. (e) Projection of the three-dimensional rHDR acquisition of the vasculature where colours represent different imaging depths and brightness is related to the fluorescence (Fluo.) signal amplitude. Scale bar, 150 μm.
HDR Vascular Research illustration: more bits → better resolution for permeability
Drug delivery to the brain by focused ultrasound induced blood–brain barrier disruption: Quantitative evaluation of enhanced permeability of cerebral vasculature using two-photon microscopy. Tam Nhan, Alison Burgess, Eunice E. Cho, Bojana Stefanovic, Lothar Lilge, Kullervo Hynynen. Journal of Controlled Release, Volume 172, Issue 1 (28 November 2013) https://doi.org/10.1016/j.jconrel.2013.08.029
Data analysis of 2PFM data (FV1000MPE, Olympus) capturing fluorescent dye leakage upon BBBD.
A) Depth-projection images illustrate the transient BBBD induced by MBs & FUS at 0.6 MPa (scale bar: 100 μm). Sonication and MB injection occurred during the first 2 min while the vessels remained impermeable to dextran-conjugated Texas Red (TR10kDa). As soon as sonication ceased, disruption started at multiple vessels within the imaging FOV and the extravascular signal increased over time.
B) Quantitative measurement of averaged fluorescent signal intensities associated with the intravascular and extravascular compartments over time.
C) Permeability was evaluated accordingly.
The Olympus FV1000MPE has 12-bit PMTs (4,096 intensity levels); note that the study has not used the full dynamic range, with none of the intensities exceeding 2,048 (11-bit) arbitrary units.
Intravascular Space
Extravascular Space
8-bit → 256 levels: 0.39% smallest detectable change in intensity
12-bit → 4,096 levels: 0.0244% smallest detectable change in intensity
16-bit → 65,536 levels: 0.0015% smallest detectable change in intensity
24-bit → 16,777,216 levels: 0.00000596% smallest detectable change in intensity
Assuming that this is the "effective DR", and that you are not just sampling noise more accurately with a 24-bit ADC connected to a noisy PMT.
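The quantization steps above follow directly from the bit depth; a quick check to reproduce the table:

```python
# Smallest relative intensity step for a given ADC bit depth,
# reproducing the table above (assumes the full range is "effective"
# dynamic range, i.e. ignores detector noise).
for bits in (8, 12, 16, 24):
    levels = 2 ** bits
    print(f"{bits:2d}-bit -> {levels:>10,d} levels, "
          f"smallest step {100.0 / levels:.8f} % of full scale")
```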
How about PSF modeling in 2-PM systems? For deblurring
Optimal Multivariate Gaussian Fitting for PSF Modeling in Two-Photon Microscopy
Tim Tsz-Kit, Emilie Chouzenoux, Claire Lefort, Jean-Christophe Pesquet
http://doi.org/10.1109/ISBI.2018.8363621
The mathematical representation of the light distribution of this spread phenomenon (of an infinitesimally small point source) is well known and described by the Point Spread Function (PSF). The implementation of an efficient deblurring strategy often requires a preliminary step of experimental data acquisition, aiming at modeling the PSF, whose shape depends on the optical parameters of the microscope.
The fitting model is chosen as a trade-off between its accuracy and its simplicity. Several works in this field have been inventoried and specifically developed for fluorescence microscopy [Zhang et al. 2007; Kirshner et al. 2012, 2013]. In particular, Gaussian models often lead to both tractable and good approximations of the PSF [Anthony and Granick 2009; Zhu and Zhang 2013]. Although there exists an important amount of work regarding Gaussian shape fitting [Hagen and Dereniak 2008; Roonizi 2013], to the best of our knowledge, these techniques remain limited to the 1D or 2D cases. Moreover, only a few of them explicitly take into account the presence of noise. Finally, a zero background value is usually assumed (for instance, in the famous approach of Caruana et al. (1986), Cited by 46). All the aforementioned limitations severely reduce the applicability of existing methods for processing real 3D microscopy datasets.
In this paper, a novel optimization approach called FIGARO (Fiji plugin) has been introduced for multivariate Gaussian shape fitting, with guaranteed convergence properties. Experiments have clearly illustrated the applicative interest of FIGARO in the context of PSF identification in two-photon imaging. The versatility of FIGARO makes it applicable to a wide range of application areas, in particular other microscopy modalities.
The deblurring step is performed using the OPTIMISM toolbox for Fiji.
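For a quick baseline before reaching for FIGARO, a plain least-squares fit of an axis-aligned 3D Gaussian with a constant background (one of the limitations the paper addresses) can be sketched with SciPy; the initial guesses and the bead-stack input are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_3d(coords, amp, x0, y0, z0, sx, sy, sz, bg):
    """Axis-aligned 3D Gaussian plus a constant background offset."""
    x, y, z = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                       + (y - y0) ** 2 / (2 * sy ** 2)
                       + (z - z0) ** 2 / (2 * sz ** 2)))
    return (g + bg).ravel()

def fit_psf(bead_stack, voxel_size_xyz=(1.0, 1.0, 1.0)):
    """Fit a 3D Gaussian to a cropped sub-resolution bead stack (z, y, x)."""
    z, y, x = np.indices(bead_stack.shape).astype(float)
    p0 = (bead_stack.max() - bead_stack.min(),   # amplitude
          bead_stack.shape[2] / 2,               # x0
          bead_stack.shape[1] / 2,               # y0
          bead_stack.shape[0] / 2,               # z0
          2.0, 2.0, 4.0,                         # sigmas in voxels
          bead_stack.min())                      # background
    popt, _ = curve_fit(gaussian_3d, (x, y, z),
                        bead_stack.ravel().astype(float), p0=p0)
    fwhm = 2.3548 * np.abs(popt[4:7]) * np.asarray(voxel_size_xyz)
    return popt, fwhm   # FWHM in physical units along (x, y, z)
```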
Deblurring ground truths from Adaptive Optics (AO) systems
Adaptive optical fluorescence microscopy
Na Ji. Nature Methods volume 14, pages 374–380 (2017) https://doi.org/10.1038/nmeth.4218 - Cited by 89
Ji et al. (2012), Cited by 122: "Characterization and adaptive optical correction of aberrations during in vivo imaging in the mouse cortex". Lateral and axial images of GFP-expressing dendritic processes (mouse cortex, 2-PM, 170 μm depth).
Future directions. Though research efforts on AO microscopy have been largely focused on the most common modalities of single- or two-photon excitation fluorescence, similar approaches can improve the performance of optical microscopy in aberrating samples in general. Correcting aberrations is especially important for microscopy involving higher-order nonlinear optical processes, such as three-photon excitation fluorescence (Sinefeld et al. 2015) and third harmonic generation (Jesacher et al. 2009). Ultimately, the applications of AO to microscopy need to go beyond technical, proof-of-principle demonstrations. We need to make existing methods simple to use and robust in performance, as well as prove that AO can enable biological discoveries, which requires close collaborations between microscopists and biologists, as demonstrated recently (Sun et al. 2016). With the rapid incorporation into both diffraction-limited and super-resolution microscopy, one envisions that adaptive optics will soon be an essential element for all high-resolution imaging deep in multicellular specimens.
'Physical Deep Learning' for deriving Zernike coefficients
Deep learning wavefront sensing
Yohei Nishizaki, Matias Valdivia, Ryoichi Horisaki, Katsuhisa Kitaguchi, Mamoru Saito, Jun Tanida, and Esteban Vera
Optics Express Vol. 27, Issue 1, pp. 240-251 (2019) https://doi.org/10.1364/OE.27.000240
We present a new class of wavefront sensors by extending their
design space based on machine learning. This approach simplifies
both the optical hardware and image processing in wavefront sensing.
We experimentally demonstrated a variety of image-based wavefront
sensing architectures that can directly estimate Zernike
coefficients of aberrated wavefronts from a single intensity image by
using a convolutional neural network. We also demonstrated that the
proposed deep learning wavefront sensor can be trained to estimate
wavefront aberrations stimulated by a point source and even extended
sources.
In this paper, we experimentally demonstrated the DLWFS with three
preconditioners: overexposure, defocus, and scatter, for a point
source and extended sources. The results showed that all of them can
vastly improve the estimation accuracy obtained when performing in-
focus image-based estimation. The applicability of the DLWFS to
practical situations, e.g. cases with a large number of Zernike
coefficients, a low luminous flux, and an extended field of view, should
be investigated.
The concept of the generalized preconditioner allows the design of
innovative wavefront sensors (WFSs). In particular, the choice and
optimization of the preconditioner for the DLWFS, which can be any
optical transformation, is an open research question. Even other optical
elements that are already used in traditional WFSs could be potentially
used, such as a lenslet array or a pyramid. Nonetheless, the proposed DLWFS scheme has the advantage that it can be trained as mounted, without requiring further alignment or precision optics. Therefore, our proposed framework simplifies and rationalizes WFSs.
Schematic and experimental diagram of the
deep learning wavefront sensor. LED: light
emitting diode. P: Polarizer. SLM: Spatial light
modulator. Xception: A convolutional neural
network. DO: Dropout layer. FC: Fully
connected layer.
The Human Eye and Adaptive Optics
Fuensanta A. Vera-Díaz, Nathan Doble (2012)
Thorlabs Shack-Hartmann Wavefront Sensors
https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=5287
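The regression setup of the DLWFS is straightforward to prototype: an intensity image in, a vector of Zernike coefficients out, trained with a mean-squared-error loss. A toy PyTorch stand-in for the Xception backbone used in the paper (all layer sizes are arbitrary assumptions):

```python
import torch
import torch.nn as nn

class TinyWavefrontNet(nn.Module):
    """Toy CNN that regresses Zernike coefficients from a single
    (preconditioned) intensity image, in the spirit of the DLWFS."""
    def __init__(self, n_zernike=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.3), nn.Linear(64, n_zernike))

    def forward(self, x):                    # x: (B, 1, H, W)
        return self.head(self.features(x))   # (B, n_zernike)

model = TinyWavefrontNet()
pred_coeffs = model(torch.randn(2, 1, 128, 128))
loss = nn.functional.mse_loss(pred_coeffs, torch.zeros_like(pred_coeffs))
```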
'Physical Model' + 'Deep Learning', e.g. for OCT Angiography
OCT Monte Carlo & Deep Learning
https://www.slideshare.net/PetteriTeikariPhD/oct-monte-carlo-deep-learning
(a) Typical light propagation in a macrovessel (top) and a capillary (bottom); (b) left: en face MIP (over ∼200 μm in Z) of regular OCTA obtained after averaging 20 images; right: XZ cross-sectional image shows the "tail" artifacts in the axial direction; and (c) g1(τ) with time lags spanning 4 ms showing the decorrelation at selected positions (black, above vessel; red, inside vessel; and magenta, beneath vessel); top: g1(τ) decorrelation for the large vein marked in (b); bottom: g1(τ) decorrelation for the capillary marked in (b).
Normalized field autocorrelation function-based optical coherence tomography three-dimensional angiography. Jianbo Tang; Sefik Evren Erdener; Smrithi Sunil; David A. Boas (2019). https://doi.org/10.1117/1.JBO.24.3.036005
"Physical Ground Truths": Choice of objective and lens parameters?
Comparison of objective lenses for multiphoton microscopy in turbid samples
Avtar Singh, Jesse D. McMullen, Eli A. Doris, and Warren R. Zipfel
Biomed Opt Express. 2015 Aug 1; 6(8): 3113–3127. https://dx.doi.org/10.1364%2FBOE.6.003113
Optimization of illumination and detection optics is
pivotal for multiphoton imaging in highly scattering tissue
and the objective lens is the central component in both of
these pathways. To better understand how basic lens
parameters (NA, magnification, field number) affect
fluorescence collection and image quality, a two-
detector setup was used with a specialized sample cell to
separate measurement of total excitation from
epifluorescence collection. Our data corroborate earlier
findings that low-mag lenses can be superior at
collecting scattered photons, and we compare a set of
commonly used multiphoton objective lenses in
terms of their ability to collect scattered fluorescence,
providing guidance for the design of multiphoton
imaging systems. For example, our measurements of epi-fluorescence beam divergence in the presence of scattering reveal minimal beam broadening, indicating that often-advocated over-sized collection optics are not as advantageous as previously thought.
Experimental apparatus used to measure lens transmittance.
Experimental setup for two-channel detection of epi-
collected and transmitted fluorescence. Laser
illumination was focused through a scattering medium
into a solution of fluorescein. Emissions were
collected in both epifluorescence and transmission
channels. A confocal pinhole in the lower path was used
to reject any back-scattered light from the bead layer. An
iris in the upper channel was adjusted to controllably
vignette the beam in order to measure the emission
beam divergence.
Epi-collection objective lens characteristics in scattering media. (a) Ratios of counts in the epifluorescence channel to counts in the transmission channel for each lens at zs = 0 (water), 3 and 5, showing the decrease in epi-collection efficiencies as a function of sample scattering. (b) Normalized ratios (relative to the zs = 0 value for each lens) with data taken over a larger number of zs values. Error bars are SEM.
Only characterized up to 900 nm!
"Physical Ground Truths": Beyond 900 nm for three photons: Objectives?
Transmittance Characterization of Objective Lenses Covering all Four Near-Infrared Optical Windows and its Application to Three-Photon Microscopy Excited at 1820 nm
Ke Wang; Wenhui Wen; Hongji Liu; Yu Du; Ziwei Zhuang; Ping Qiu
IEEE Photonics Journal (Volume 10, Issue 3, June 2018) https://doi.org/10.1109/JPHOT.2018.2828435
The transmittance data of objective lenses covering all four optical windows (the 800-nm, the 1300-nm, the 1700-nm, and the 2200-nm window) are unknown, nor can they be provided by manufacturers. This poses a notable obstacle for imaging, especially in windows III and IV. Here, through experimental measurement, we establish a detailed transmittance database covering all four windows, and further demonstrate its guidance for imaging at 1820 nm. High-numerical-aperture (NA) objective lenses are needed in optical microscopy to both deliver sufficient excitation light and collect signal light efficiently enough to enable deep-tissue imaging. The transmittance performance of objective lenses is of vital importance. However, there is a lack of experimental characterization of the transmittance, especially at long wavelengths, which poses a dramatic obstacle for lens selection in imaging experiments. Here, we demonstrate detailed measurement results of the transmittance performance of the air, water-immersion, and oil-immersion objectives available to us, covering all four NIR optical windows.
UPLSAPO40X2, UPLFLN 40X, N PLAN, XLPLN25XWMP2-SP1700, XLPLN25XWMP2, XLPL25XVMP2, UPLSAPO 30X SIR, UPLSAPO 60XO
We can easily find that the customized objective lens XLPLN25XWMP2-
SP1700 has the highest transmittance at 1820 nm. Besides, it has high
transmittance at both the 3-photon fluorescence (89.6% at 645 nm) and
THG (92.9% at 607 nm) signal wavelengths, making it efficient for both
excitation and signal delivery.
.olympus-lifescience.com
"Physical Ground Truths": Beyond 900 nm for three photons: PMT?
Comparison of Signal Detection of GaAsP and GaAs PMTs for Multiphoton Microscopy at the 1700-nm Window
Yuxin Wang; Kai Wang; Wenhui Wen; Ping Qiu; Ke Wang
IEEE Photonics Journal (Volume 8, Issue 3, June 2016) https://doi.org/10.1109/JPHOT.2016.2570005
Signal depletion is currently the limiting factor for
imaging depth at the 1700-nm window in MPM.
Thus, efficient signal detection is an effective means
to further boost imaging depth. GaAsP and GaAs
PMTs are commonly used for signal detection. Our
results show that with a 1667-nm excitation, GaAsP
PMT is more efficient for signal detection of a 3-
photon fluorescence of quantum dot Qtracker
655, third-harmonic generation signal, and a 4-
photon fluorescence of fluorescein, whereas
GaAs PMT is far superior in detecting a second-
harmonic generation signal. The measured results
are in good agreement with theoretical calculations
based on wavelength-dependent cathode radiant
sensitivities. We expect that our results will offer
guidelines for PMT selection for MPM at the 1700-nm
window.
"Physical Ground Truths": Beyond 900 nm for three photons: Immersion medium?
Order-of-magnitude multiphoton signal enhancement based on characterization of absorption spectra of immersion oils at the 1700-nm window
Ke Wang, Wenhui Wen, Yuxin Wang, Kai Wang, Jiexing He, Jiaqi Wang, Peng Zhai, Yanfu Yang, and Ping Qiu
Optics Express (2017) https://doi.org/10.1364/OE.25.005909
See also https://doi.org/10.1002/jbio.201800263 for 3-PM
Based on these measured results, glycerol/D2O mixture immersion shows lower absorption than glycerol/water mixture immersion. For oil immersion within the 1700-nm window, 1600-nm excitation should be selected due to the much smaller absorption by the immersion medium. We further note that compared with 800-nm and 1300-nm excitation, 1700-nm excitation suffers from more water absorption in biological samples, which could potentially lead to more heating and temperature rise. However, due to the low excitation power used (a few mW on the sample surface and even less at the focus), based on our calculation and previous experiments on the mouse brain (22 mW on the surface), we expect the temperature rise to be only a fraction of 1 K, which will not lead to thermal damage.
Measured absorption spectra αA (cm⁻¹) of glycerol, water, D2O and mixtures of glycerol with water or D2O at different volume ratios.
3D THG (third harmonic generation) imaging stacks of the mouse ear with 1.8-mW 1600-nm excitation (a) and 2.3-mW 1700-nm excitation (b), measured after the objective lens and before the immersion oil. 2D images corresponding to (a) and (b) at different depths (60 µm, 92 µm, 154 µm, and 200 µm below the surface) are shown in (c) and (d), respectively, with THG (red) and SHG (green) signals acquired simultaneously. The arrow, arrowhead, and circles in (c) indicate corneocytes, a sebaceous gland, and the same adipocyte, respectively. Scale bars: 30 µm.
"Physical Ground Truths": Effect of the laser beam shape
Volumetric two-photon microscopy with a non-diffracting Airy beam
Xiao-Jie Tan, Cihang Kong, Yu-Xuan Ren, Cora S. W. Lai, Kevin K. Tsia, and Kenneth K. Y. Wong
Optics Letters Vol. 44, Issue 2, pp. 391-394 (2019) https://doi.org/10.1364/OL.44.000391 + PDF
We demonstrate volumetric two-photon microscopy (TPM) using a non-diffracting Airy beam as illumination. Direct mapping of the imaging trajectory shows that the Airy beam extends the axial imaging range around six times longer than a traditional Gaussian beam does along the propagation direction, while maintaining a comparable lateral width. Benefiting from its non-diffracting nature, TPM with Airy-beam illumination is able not only to capture a volumetric image within a single frame, but also to acquire image structures behind a strongly scattering medium.
Meanwhile, unlike the traditional Gaussian TPM, which is very sensitive to the
position of the sample, Airy TPM is more robust against the axial motion of the
samples, since it captures all the axial images within the effective focal length. The
skipped axial scan avoids additional noise as well as possible time decay. Moreover,
the diffraction-free nature of Airy beams assures deep penetration and less
scattering while imaging deep scattering samples. In all, these advantages make Airy
TPM a potential tool for real-time monitoring of the deep biological activities in a
large volume, e.g., the transient reactions of neurons in large and deep brain tissue. We anticipate that the current study will promote the application of Airy beams in optical imaging and contribute to biological research.
Why all this physics/optics instead of data/computer science?
https://xkcd.com/793/
The "software guy" won't necessarily be obnoxious, but you can advance your workflow more with a 'systems approach'.
Optimally, involve people from the start who understand the optics, the neuroscience and the deep learning involved? Everyone understanding a bit of each other's challenges?
Rather than, as a neuroscientist, just "pressing the button" of the microscope, throwing the stack(s) at the software guy and asking him/her to find the vessels?
"Physical Ground Truths" with some hardware/software help for your microscope
PySight: plug and play photon counting for fast intravital microscopy
Hagai Har-Gil, Lior Golgher, Shai Israel, David Kain, Ori Cheshnovsky, Moshe Parnas, Pablo Blinder
Optica (2018)
https://www.biorxiv.org/content/10.1101/316125v3.abstract
https://github.com/PBLab/python-pysight
Imaging increasingly large neuronal populations at high rates has pushed multi-photon microscopy into the photon-deprived regime. We present PySight, an add-on hardware and software solution tailored for photon-deprived imaging conditions. PySight more than triples the median amplitude of neuronal calcium transients in awake mice, and facilitates single-trial intravital voltage imaging in fruit flies. Its unique data streaming architecture allowed us to image a fruit fly's olfactory response over 234 × 600 × 330 µm³ at 73 volumes per second, outperforming top-tier imaging setups while retaining over 200 times lower data rates. PySight requires no electronics expertise nor custom synchronization boards, and its open-source software is extensible to any imaging method based on single-pixel (bucket) detectors. PySight offers an optimal data acquisition scheme for ever increasing imaging volumes of turbid living tissue.
The imaging setup of the proposed system and representative in vivo images taken from an awake mouse expressing a genetically encoded calcium indicator under a neuronal promoter (Thy1-GCaMP6f).
a) A typical multi-photon imaging setup, depicted in gray, can be easily upgraded to encompass the multiscaler and enable photon-counting acquisition (blue). The output of the PMTs, after optional amplification by fast preamplifiers, is relayed to the multiscaler's analog inputs (STOP1 and STOP2) where it is discretized, time-stamped and logged. Finally, the PySight software package, provided with this article, processes the logged data into multi-dimensional time series. Additionally, the multiscaler's SYNC port can output the discriminated signal for a specific PMT, enabling simultaneous digital acquisition and monitoring of the discriminated signal through the analog imaging setup.
b) Images produced by analog and digital acquisition schemes. Images were summed over 200 frames taken at 15 Hz. Scale bar is 50 µm. DM - dichroic mirror. PMT - photomultiplier tube. Preamp - preamplifier. ADC - analog-to-digital converter.
Electrical pulses following photon detections in each PMT are optionally amplified with a high-bandwidth preamplifier (TA1000B-100-50, Fast ComTec). The amplified pulses are then conveyed to an ultrafast multiscaler (MCS6A, Fast ComTec) where a software-controlled discriminator threshold determines the voltage amplitude that will be accepted as an event. The arrival time of each event is registered at a temporal resolution of 100 picoseconds, with no dead time between events.
How many photons in practice?
Adaptive optics in multiphoton microscopy: comparison of two, three and four photon fluorescence
David Sinefeld, Hari P. Paudel, Dimitre G. Ouzounov, Thomas G. Bifano, and Chris Xu
Optics Express Vol. 23, Issue 24, pp. 31472-31483 (2015) https://doi.org/10.1364/OE.23.031472
We showed experimentally that the effect of aberrations on the signal increases
exponentially with the order of nonlinearity in a thick fluorescent sample.
Therefore, the impact of AO on higher order nonlinear imaging is much more dramatic.
We anticipate that the signal improvement shown here will serve as a significant enhancement to current 3PM, and perhaps to future 4PM systems, allowing imaging deeper and with better resolution in biological tissues.
Phase correction for a 2-m-focal-length cylindrical lens for 2-, 3- and 4-photon excited fluorescence of Alexa Fluor 790, Sulforhodamine 101 and Fluorescein. (a) Left: 4-photon fluorescence convergence curve showing a signal improvement factor of ×320. Right: final phase applied on the SLM. (b) Left: 3-photon fluorescence convergence curve showing a signal improvement factor of ×40. Right: final phase applied on the SLM. (c) Left: 2-photon fluorescence convergence curve showing a signal improvement factor of ×2.1.
(left) The spectral response of oxygenated hemoglobin, deoxygenated hemoglobin, and water as a function of wavelength. The red highlighted area indicates the biological optical window where absorption by the body is at a minimum. Doane and Burda (2012)
(right) Wavelength-dependent attenuation length in brain tissue and measured laser characteristics. Attenuation spectrum of a tissue model based on Mie scattering and water absorption, showing the absorption length of water (la, blue dashed line), the scattering length of mouse brain cortex (ls, red dash-dotted line), and the combined effective attenuation length (le, green solid line). The red stars indicate the attenuation lengths reported for mouse cortex in vivo in previous work [Kobat et al., 2009]. The figure shows that the optimum wavelength window (for three-photon microscopy) in terms of tissue penetration is near 1,700 nm when both tissue scattering and absorption are considered. Horton et al. (2013)
Why not 4-PM if the sectioning improves with increased nonlinear order? There are spectral absorbers in the brain, hence the "near-infrared" (NIR) optical windows for optimal wavelengths.
Image Smoothing with Image Restoration?
In theory, an additional "deep intermediate target" could help the final segmentation result, as you want your network "to pop out" the vasculature, without the texture, from the background.
In practice, think about how to either produce the intermediate target in such a way that you do not throw any details away (see Xu et al. 2015), or employ a Noise2Noise-type network for edge-aware smoothing as well. And check the use of bilateral kernels in deep learning (see e.g. Barron and Poole 2015; Jampani et al. 2016; Gharbi et al. 2017; Su et al. 2019). The proposal of Su et al. 2019 seems like a good starting point if you are into making this happen?
RAW → after IMAGE RESTORATION → edge-aware IMAGE SMOOTHING
Inspiration for
Vasculature Segmentation
Vesselness (tubular) filters for "early" segmentation approaches
Detecting Irregular Curvilinear Structures in Gray Scale and Color Imagery using Multi-Directional Oriented Flux (MDOF)
Engin Türetken, Carlos Becker, Przemysław Głowacki, Fethallah Benmansour, Pascal Fua. CVLab, EPFL, Lausanne, Switzerland (2013)
https://doi.ieeecomputersociety.org/10.1109/ICCV.2013.196
Gradient-based enhancement of tubular structures in medical images (LDOG)
Rodrigo Moreno, Örjan Smedby (2015). School of Technology and Health, KTH Royal Institute of Technology; Center for Medical Image Science and Visualization (CMIV), Linköping University, Sweden
https://doi.org/10.1016/j.media.2015.07.001
Vesselness estimation through higher-order orientation tensors
Rodrigo Moreno, Örjan Smedby (2016). School of Technology and Health, KTH Royal Institute of Technology; Center for Medical Image Science and Visualization (CMIV), Linköping University, Sweden
We presented and validated a new tubularity measure that performs better than existing approaches on irregular structures whose cross-sections deviate from circularly symmetric profiles. This is important because many imaging modalities produce irregular structures as a result of noise, point-spread-function blur, and non-uniform staining, among others.
It is worthwhile to point out that the proposed generalization of the relationship between spherical harmonics and higher-order tensors is applicable to other applications where the spherical harmonics transform (Archontis Politis, Aalto) is required. For example, the vesselness method proposed in Rivest-Hénault and Cheriet, 2013, which is also based on spherical harmonics, could also benefit from the proposed method. Our current research includes a more exhaustive evaluation with more datasets.
Frangi, Optimally Oriented Flux (OOF) variants such as plain OOF, OOF-OFA, and MDOF, etc.
Vesselness visualized for 2-PM
Maximum Intensity Projection (MIP) of the raw input image
Maximum Intensity Projection (MIP) of the "OOF vesselness"
Law, Max W.K., and Albert C.S. Chung. "Three dimensional curvilinear structure detection using optimally oriented flux." European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2008. https://doi.org/10.1007/978-3-540-88693-8_27
Classical vesselness enhancement: image (Poisson) noise now affects the vesselness enhancement.
An end-to-end deep learning network learns a representation (segmentor) that is invariant "automagically" to all possible artifacts and image noise. You do not want to play with the "millions of tunable parameters" of these old-school algorithms.
But you can think about whether some of this "expert knowledge" could be incorporated into the deep learning loss functions?
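Even if you go end-to-end, a classical multi-scale vesselness map is cheap to compute as a baseline or as a weak auxiliary label; a minimal sketch with scikit-image's Frangi filter (the sigmas and the synthetic volume are illustrative assumptions):

```python
import numpy as np
from skimage.filters import frangi

def vesselness_baseline(stack, sigmas=(1.0, 2.0, 3.0)):
    """Multi-scale Frangi vesselness on a 3D (z, y, x) volume with
    bright vessels (hence black_ridges=False), normalized to [0, 1]."""
    v = frangi(stack.astype(float), sigmas=sigmas, black_ridges=False)
    v -= v.min()
    if v.max() > 0:
        v /= v.max()
    return v

# toy usage: one bright synthetic "vessel" along x plus Poisson-ish noise
rng = np.random.default_rng(1)
stack = np.zeros((32, 64, 64))
stack[16, 32, :] = 1.0
stack = stack + rng.poisson(0.1, stack.shape)
v = vesselness_baseline(stack)
```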
Long-range
Modeling
Transformers to Mamba
in 2024 already
Recurrent Models
Learning long-range spatial dependencies with horizontal gated recurrent units
https://arxiv.org/abs/1805.08315 → https://github.com/serre-lab/hgru_share
See also https://arxiv.org/abs/1811.11356
Why is it that a CNN can accurately detect contours in a natural scene like Fig. 1a but also struggle to
integrate paths in the stimuli shown in Fig. 1b? In principle, the ability of CNNs to learn such long-range
spatial dependencies is limited by their localized receptive fields (RFs) – hence the need to consider deeper
networks because they allow the buildup of larger and more complex RFs. Here, we use a large-scale analysis of
CNN performance on the Pathfinder challenge to demonstrate that simply increasing depth in feedforward
networks constitutes an inefficient solution to learning the long-range spatial dependencies needed to solve the
Pathfinder challenge.
A vasculature tree is more like this: defining 'vesselness' is easier using the whole volume rather than a small subvolume.
E.g. the flower does not require such long-range spatial analysis.
Can we design more efficient models for elongated thin structures with some recurrence, instead of relying on feed-forward models?
"Long-range spatial dependencies": what does this mean?
I.e. what is the network "looking at" when making a decision on the class of a given voxel?
https://arxiv.org/abs/1603.05959
Easier to get the "correct ground truth" with a larger receptive field.
Contrast is quite low already for such a small receptive field.
It helps segmentation quality to consider the "whole spatial range" of the 3D stack.
Inspiration for
Centerline/
Graph Network
Centerline algorithm basics
Multiscale Centerline Detection
Amos Sironi, Engin Türetken, Vincent Lepetit, and Pascal Fua. CVLab, EPFL, Lausanne, Switzerland (2016)
https://doi.org/10.1109/TPAMI.2015.2462363 Cited by 78
We have introduced an efficient regression-based approach to
centerline detection, which we showed to outperform both methods
based on hand-designed filters and classification-based approaches.
The output of our method can be used in combination with tracing
algorithms requiring a scale-space tubularity measure as input,
increasing accuracy also on this task. Our approach is very general and
applicable to other linear structure detection tasks when training data is
available. For example, we obtained an improvement over the state-of-
the-art when training it to detect boundaries on natural images.
A closer look at DeepCenterline: task formulations
DeepCenterline: a Multi-task Fully Convolutional Network for Centerline Extraction. Zhihui Guo, Junjie Bai, Yi Lu, Xin Wang, Kunlin Cao, Qi Song, Milan Sonka, Youbing Yin (Submitted on 25 Mar 2019) https://arxiv.org/abs/1903.10481
Due to the large variations in
branch radius (coronary
artery proximal radius can be
five times bigger than the distal
radius), a straightforward
Euclidean distance transform
computation generates
centerline distance map with
largely variable range of values
at different sections of the
branch. To obtain a centerline
consistently well-positioned in
the “center” from beginning to
end requires tricky balancing of
cost image contrast between
thick and thin sections. To
achieve the desired scale-
invariance property, we
propose to use an FCN to generate a locally normalized centerline distance map.
Branch endpoint detection Different from
centerline distance map which consists of
continuous values inside the whole
segmentation mask, branch endpoints are
just a few isolated points. Directly
predicting these points using a voxel-wise
classification or segmentation framework is
not feasible due to the extreme class
imbalance. To tackle the class imbalance
problem, a voxel-wise endpoint confidence
map is generated by constructing a
Gaussian distribution around each
endpoint to occupy a certain area spatially.
The FCN is then trained to predict the
endpoint confidence map, which has a more
balanced ratio between nonzero and zero
voxels.
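The endpoint-confidence-map trick is easy to reproduce for your own vessel graphs: place a unit delta at each endpoint voxel and blur it into a Gaussian blob, giving a dense regression target with a much better foreground/background balance. A sketch (the shapes and coordinates are made up):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def endpoint_confidence_map(shape, endpoints, sigma=3.0):
    """Turn a handful of isolated branch-endpoint voxels into a dense
    regression target: a delta at each endpoint, blurred into a
    Gaussian blob (cf. the DeepCenterline endpoint head)."""
    target = np.zeros(shape, dtype=np.float32)
    for z, y, x in endpoints:
        target[z, y, x] = 1.0
    target = gaussian_filter(target, sigma=sigma)
    if target.max() > 0:
        target /= target.max()    # peaks of ~1.0 at the endpoints
    return target

heatmap = endpoint_confidence_map((32, 64, 64),
                                  endpoints=[(10, 20, 30), (25, 40, 50)])
```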
Segmentation as a Regression Problem: DeepCenterline
DeepCenterline: a Multi-task Fully Convolutional Network for Centerline Extraction. Zhihui Guo, Junjie Bai, Yi Lu, Xin Wang, Kunlin Cao, Qi Song, Milan Sonka, Youbing Yin (Submitted on 25 Mar 2019) https://arxiv.org/abs/1903.10481
We will come back to the use of the distance transform in medical segmentation, so remember this!
Comparison of centerline distance map prediction with and without attention. a) Coronary artery segmentation mask. b) A cross-sectional view of the segmentation mask; note the "staircased profile" and the possibility of a subvoxel NURBS fit. c) Centerline distance map without the attention module. d) Centerline distance map with the attention module. e) Centerline distance map values at the profile line shown as a double-arrowed line in b). With attention, the centerline distance map shows a high peak around the centerline instead of the plateau produced by the model without attention.
Distance transform of a vessel mask in Fiji, with the centerline having the lowest (highest) value.
The attention seems to really help localize the centerline, instead of the broader "lumen" detection without it.
Gaussian blur for the branch endpoint.
Gaussian blur overlaid on the boundary map of the vessel mask, for comparison of your options.
Petteri toy demo: "scale invariance" for the distance map, with the hope of having the same intensity on the centerline of big and small vessels, with some intensity gradient (not that visible on large vessels anymore after the LOG transform).
Distance transform → CLAHE → LOG10 transform
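A rough Python equivalent of this toy demo, assuming a 2D binary mask and treating the exact ordering of CLAHE vs. the log compression as a free choice:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.exposure import equalize_adapthist

def normalized_centerline_map(binary_mask):
    """Distance transform of the vessel mask, locally normalized with
    CLAHE and compressed with log10, so that centerline intensity is
    roughly comparable between thick and thin vessels."""
    dist = distance_transform_edt(binary_mask)
    dist = dist / max(dist.max(), 1e-6)           # CLAHE expects [0, 1]
    clahe = equalize_adapthist(dist, clip_limit=0.03)
    return np.log10(1.0 + 9.0 * clahe)            # maps [0, 1] -> [0, 1]

mask = np.zeros((128, 128), dtype=bool)
mask[60:68, :] = True     # thick vessel
mask[20:22, :] = True     # thin vessel
out = normalized_centerline_map(mask)
```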
Topology Enforcement with deep learning
Iterative Deep Retinal Topology Extraction
Carles Ventura, Jordi Pont-Tuset, Sergi Caelles, Kevis-Kokitsi Maninis, Luc Van Gool (2018)
https://doi.org/10.1007/978-3-030-00500-9_15
https://github.com/carlesventura/iterative-deep-learning
Download graph annotations for the DRIVE dataset from the website: http://people.duke.edu/~sf59/Estrada_TMI_2015_dataset.htm
Building on top of a global model that performs a dense semantical classification of the pixels
of the image, we design a Convolutional Neural Network (CNN) that predicts the local
connectivity between the central pixel of an input patch and its border points. By iterating this
local connectivity we sweep the whole image and infer the global topology of the filamentary
network, inspired by a human delineating a complex network with the tip of their finger.
Vascular topology as a Graph
Two-Photon Imaging of Cortical Surface Microvessels Reveals a Robust Redistribution in Blood Flow after Vascular Occlusion
Chris B. Schaffer, Beth Friedman, Nozomi Nishimura, Lee F. Schroeder, Philbert S. Tsai, Ford F. Ebner, Patrick D. Lyden, David Kleinfeld
January 3, 2006 https://doi.org/10.1371/journal.pbio.0040022
In humans, damage to microvessels is a known pathological condition. In particular, occlusion of small-scale arterioles is a likely cause of clinically silent lacunar infarcts that are correlated with an increased risk of dementia and cognitive decline. It is thus interesting that the Rotterdam Scan study, which identified clinically silent lacunar infarcts through magnetic resonance imaging, found that few cortical infarcts were located near the surface, i.e. where the vasculature appears to be most redundant. Our results for surface blood flow dynamics suggest an emerging relation between vascular topology and susceptibility to stroke in different regions of the brain.
Examples of Flow Changes that Result from Localized Occlusion of a Cortical Surface Arteriole. (A–C) On the left and right are TPLSM images taken at baseline and after photothrombotic clotting of an individual vessel, respectively. Left center and right center are diagrams of the surface vasculature with RBC speeds (in mm/s) and directions indicated. The red X indicates the location of the clot, and vessels whose flow direction has reversed are indicated with red arrows and labels. In the examples of panels (A) and (B) we show maximal projections of image stacks, whereas the example in panel (C) shows single TPLSM planar images; the streaks evident in the vessels in these latter frames are due to RBC motion, and the dashed box in the diagrams represents the area shown in the images.
Vascular Mesh Modeling
Mainly for clinical patient-specific modeling from MRA, for surgical planning, 3D printing personalized vasculature for surgical training, etc.
Example of a 2-PM Mesh Analysis case
Cerebral microvascular network geometry changes in response to functional stimulation
Liis Lindvere, Rafal Janik, Adrienne Dorr, David Chartash, Bhupinder Sahota, John G. Sled, Bojana Stefanovic. Imaging Research, Sunnybrook Research Institute; Mouse Imaging Centre, The Hospital for Sick Children; Department of Medical Biophysics, University of Toronto
NeuroImage 71 (2013) 248–259 http://dx.doi.org/10.1016/j.neuroimage.2013.01.011
The anatomical data were segmented using semi-automated analysis via commercially available software (Imaris, Bitplane, Zurich). Prior to segmentation, the data were subjected to edge-preserving 3D anisotropic diffusion filtering. Thereafter, the intravascular space was identified based on a range of user-supplied signal intensity thresholds corresponding to the background and foreground signal intensity ranges. The labor-intensive semi-automated segmentation was followed by removal of hair-like terminal branches. The resulting volumes were next skeletonized, with the network sampled roughly every 1 μm, and the aforementioned graph data structure produced. The local tangents to the vessel were evaluated at each vertex following spline interpolation to the vertices' locations.
Voxel presentation of the volume.
Mesh presentation of the volume, with the mean radius quantified along the vessels.
You could try Screened Poisson Reconstruction from MeshLab if you do not have an Imaris license, and the Shape Diameter Function from CGAL for some segmentation of the mesh.
Patient-specific Mesh models
Application of Patient-Specific Computational Fluid Dynamics in Coronary and Intra-Cardiac Flow Simulations: Challenges and Opportunities
Liang Zhong, Jun-Mei Zhang, Boyang Su, Ru San Tan, John C. Allen and Ghassan S. Kassab. National Heart Centre Singapore, National Heart Research Institute of Singapore; Duke-NUS Medical School, Singapore; California Medical Innovations Institute, San Diego, CA, United States
https://doi.org/10.3389/fphys.2018.00742
The emergence of patient-specific computational fluid dynamics
(CFD) has paved the way for the new field of computer-aided
diagnostics. This article provides a review of CFD methods, challenges
and opportunities in coronary and intra-cardiac flow simulations. It
includes a review of market products and clinical trials. Key components
of patient-specific CFD are covered briefly which include image
segmentation, geometry reconstruction, mesh generation, fluid-
structure interaction, and solver techniques.
Distributions of (A) P (pressure), (B) WPG (wall pressure gradient), (C) WSS (wall shear stress), (D) OSI (oscillatory shear index), (E) RRT (relative residence time), and (F) SPA (stress phase angle) on the virtually healthy and diseased left coronary artery trees, respectively.
NURBS and Vasculature Reconstruction?
The airfoils generated with BézierGAN do not look that alien to vasculature anymore when you consider cross-sections of vessels?
Cubic curves (1D) → bicubic surfaces (2D); Bézier curve → Bézier surface
NURBS Surface Approximation Using Rational B-spline Neural Networks. Tawfik El-Midany, Mohammed Elkhateeb, et al. (July 2011) www.researchgate.net
NURBS-Python. Onur Rauf Bingol, Adarsh Krishnamurthy. Department of Mechanical Engineering, Iowa State University, United States
https://doi.org/10.1016/j.softx.2018.12.005
https://github.com/orbingol/NURBS-Python
We introduce NURBS-Python, an object-oriented, open-source, pure Python NURBS evaluation library with no external dependencies. The library is capable of evaluating single or multiple NURBS curves and surfaces, provides a customizable visualization interface, and enables importing and exporting data using popular CAD file formats.
2D surface
”3D volume”
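NURBS-Python (the geomdl package) makes it easy to play with such parametric fits; a minimal sketch evaluating one cubic B-spline through hypothetical vessel-centerline control points (the points themselves are made up):

```python
# pip install geomdl   (the NURBS-Python library referenced above)
from geomdl import BSpline, utilities

# Cubic B-spline through hypothetical centerline control points;
# a closed curve per cross-section would give a smooth subvoxel lumen.
curve = BSpline.Curve()
curve.degree = 3
curve.ctrlpts = [[0, 0, 0], [1, 2, 0], [3, 2, 1], [4, 0, 1], [6, 1, 2]]
curve.knotvector = utilities.generate_knot_vector(curve.degree,
                                                  len(curve.ctrlpts))
curve.delta = 0.02          # evaluation step along the parameter
points = curve.evalpts      # densely sampled smooth curve points
```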
Remember our problem with staircased vasculature
'Anisotropic staircasing': best priors for isotropic reconstruction? https://doi.org/10.2312/VCBM%2FVCBM10%2F083-090
You would like to have a data-driven smooth surface fitted along the staircased lego corgi.
Lego corgi: corgi Reddit
Same "as a dog"
DeepSpline for unsupervised 3D surface reconstruction
DeepSpline: Data-Driven Reconstruction of Parametric Curves and Surfaces
Jun Gao, Chengcheng Tang, Vignesh Ganapathi-Subramanian, Jiahui Huang, Hao Su, Leonidas J. Guibas. University of Toronto; Vector Institute; Tsinghua University; Stanford University; UC San Diego
(Submitted on 12 Jan 2019) https://arxiv.org/abs/1901.03781
Reconstruction of geometry based on different input modes, such as
images or point clouds, has been instrumental in the development of
computer aided design and computer graphics. Optimal
implementations of these applications have traditionally involved the use
of spline-based representations at their core. Most such methods
attempt to solve optimization problems that minimize an output-target
mismatch. However, these optimization techniques require an
initialization that is close enough, as they are local methods by nature.
We propose a deep learning architecture that adapts to perform
spline fitting tasks accordingly, providing complementary results to
the aforementioned traditional methods.
To tackle challenges with the 2D cases such as multiple splines with
intersections, we use a hierarchical Recurrent Neural Network
(RNN) Krause et al. 2017
trained with ground truth labels, to predict a variable
number of spline curves, each with an undetermined number of control
points.
In the 3D case, we reconstruct surfaces of revolution and extrusion without self-intersection through an unsupervised learning approach that circumvents the requirement for ground-truth labels. We
use the Chamfer distance to measure the distance between the
predicted point cloud and target point cloud. This architecture is
generalizable, since predicting other kinds of surfaces (like surfaces of
sweeping or NURBS), would require only a change of this individual
layer, with the rest of the model remaining the same.
Inspiration for Ensemble Models
For each "block" (restoration, segmentation, etc.) you could have multiple models, and use some sort of consensus of their outputs as the "ensembled output" for the next block.
High-level Segmentation CNN Ensemble
Ensemble of 3 independent models: Model A, Model B, Model C → average the voxelwise (continuous-value) predictions → binary mask (vessel vs. non-vessel).
Textbook ensemble methods: bagging, boosting, stacking.
Uncertainty via MC Dropout or "Bayesian" batch normalization.
Or model it as a (Bayesian) sensor-fusion problem? (See the sketch below.)
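A minimal sketch of both flavours, assuming binary segmentation models that output logits over a volume; the function names and sample counts are arbitrary:

```python
import torch

@torch.no_grad()
def ensemble_predict(models, volume, threshold=0.5):
    """Average voxelwise foreground probabilities from independently
    trained models; the per-voxel std doubles as a crude uncertainty map."""
    probs = torch.stack([torch.sigmoid(m(volume)) for m in models])
    mean, std = probs.mean(dim=0), probs.std(dim=0)
    return (mean > threshold), mean, std

@torch.no_grad()
def mc_dropout_predict(model, volume, n_samples=20):
    """MC-dropout alternative: sample one model several times with
    dropout active. (model.train() also switches BatchNorm to train
    mode; in practice you would toggle only the dropout modules.)"""
    model.train()
    probs = torch.stack([torch.sigmoid(model(volume))
                         for _ in range(n_samples)])
    model.eval()
    return probs.mean(dim=0), probs.std(dim=0)
```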
Inspiration for Multi-Task Approaches
Within each "block", we could have multiple tasks (learned simultaneously, e.g. centerline, vessel edges and segmentation mask).
High-level Segmentation CNN Multi-Task?
Multi-task: have e.g. 3 different targets.
Task A: predict the vessel mask ("image segmentation" task).
Task B: predict the edge mask ("edge detection" task).
Task C: predict the distance map ("centerline detection" task).
What features can you "detect" from the stack?
Main task: 3D semantic segmentation (vessel mask).
"STRONG" auxiliary tasks:
- Vessel edges ("1st derivative")
- Distance map
- Bifurcations
- "Curvature" ("2nd derivative", a second-order smoothness prior)
"WEAK" auxiliary tasks:
- Branch end points
- Graph constraints for watertight segmentation (for CFD analysis, where?) https://doi.org/10.3389/fninf.2011.00003
- Inpainting masks / "outlier detection" (see Photoshop Content-Aware Fill, GAN inpainting), i.e. hallucinate the missing vessels ("vessel breakage") to be continuous
Idea: we hope to get a better vessel mask with the help of auxiliary tasks, compared to just trying to obtain the vessel mask with one loss function. I.e. are these really that useful, given that some of them require manual annotation work?
No further labelling needed: get these from the binary mask.
Auto-Labelling ImageJ/Fiji Example
1st derivative of intensity: "edge" / gradient magnitude.
2nd derivative of intensity: "curvature" / gradient magnitude of the gradient magnitude.
Apply Gradient Magnitude from the Differentials plugin.
Process → Binary → Distance Map: distance map.
Process → Enhance Local Contrast (CLAHE): quick'n'dirty local normalization of the distance map (maximum slope: 1.50 vs. 3.00).
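The same auto-labelling recipe in Python, deriving all three auxiliary targets from one binary mask (the sigma and the toy mask are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, distance_transform_edt

def auxiliary_targets_from_mask(mask, sigma=1.0):
    """Derive multi-task regression targets from a binary vessel mask,
    mirroring the Fiji recipe above: edges (1st derivative), curvature
    (gradient magnitude of the gradient magnitude) and a distance map."""
    m = mask.astype(float)
    edges = gaussian_gradient_magnitude(m, sigma=sigma)
    curvature = gaussian_gradient_magnitude(edges, sigma=sigma)
    dist = distance_transform_edt(mask)
    return {"edges": edges, "curvature": curvature, "distance": dist}

mask = np.zeros((64, 64, 64), dtype=bool)
mask[:, 30:34, 30:34] = True            # toy vessel running along z
targets = auxiliary_targets_from_mask(mask)
```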
Using these "auxiliary scalar measures" in multi-task learning
Auxiliary Tasks in Multi-task Learning
Lukas Liebel, Marco Körner (Submitted on 16 May 2018) https://arxiv.org/abs/1805.06334
We extend multi-task learning by adding auxiliary tasks,
which are of minor relevance for the application, to the set
of learned tasks. As a kind of additional regularization,
they are expected to boost the performance of the
ultimately desired main tasks.
Advanced driver assistance systems (ADAS) and
autonomous vehicles need to gather information about the
surrounding of the vehicle, in order to be able to safely guide the
driver or vehicle itself through complex traffic scenes. Apart from
traffic signs and lane markings, typical components of such scenes
that need to be considered are other road users, i.e., primarily
vehicles and pedestrians. When it comes to decision making,
additional important local or global parameters will certainly
be taken into account. Those include, i.a., object distances and
positions, or the current time of day and weather conditions. The
application of vision-based methodology seems natural in the
context of RSU, since the rules of the road and signage were
designed for humans who mainly rely on visual inspection in this
regard. Figure 1 shows an overview of our network architecture
and illustrates the concept of auxiliary tasks, with SIDE and
semantic segmentation serving as main tasks and the estimation of
thetimeofdayandweatherconditionsas auxiliary tasks.
"Auxiliary Microscope Learning" in a multi-task learning setting
Could knowing the BBB permeability (i.e. the focused ultrasound acoustic pressure, MPa) help the segmentation process by providing auxiliary information about the signal-to-background ratio? Most likely yes, but how much is another question.
Nhan et al. (2013) https://doi.org/10.1016/j.jconrel.2013.08.029
Continuous blood pressure (NIBP or invasive) for monitoring anesthesia level and "vascular tone". Helpful for the segmentor network "understanding" the vessel shape? Needed in most cases if you are doing Neurovascular Unit studies, or if you want to quantify functional hyperemia in studies of the retinal trilaminar vascular network with a flickering-light protocol (Tess E. Kornfield and Eric A. Newman, 2014)?
Measuring Blood Pressure Using a Noninvasive Tail Cuff Method in Mice https://doi.org/10.1007/978-1-4939-7030-8_6
Unveiling astrocytic control of cerebral blood flow with optogenetics https://doi.org/10.1038/srep11455 (2015)
"Auxiliary Clinical Learning" in medical segmentation
Conditioning Convolutional Segmentation Architectures with Non-Imaging Data
Grzegorz Jacenków, Agisilaos Chartsias, Brian Mohr, Sotirios A. Tsaftaris. The University of Edinburgh; Canon Medical Research Europe; The Alan Turing Institute
17 Apr 2019 (modified: 11 Jun 2019) MIDL 2019 https://openreview.net/forum?id=BklGUoAEcE
We compare two conditioning mechanisms based on concatenation and feature-wise modulation (FiLM, Perez et al., 2017) to integrate non-imaging information into convolutional neural networks for segmentation of anatomical structures. We apply the concatenation-based conditioning at three levels: early fusion with spatial replication of the input-level features, middle fusion at the latent space of the encoder-decoder networks, and late fusion before the last convolutional layer. In FiLM, our work focuses on applying FiLM layers along the decoder path (decoder fusion) and before the final convolutional layer (late fusion).
As a proof of concept we provide the distribution of class labels obtained from ground truth masks to ensure strong correlation between the conditioning data and the segmentation maps. We evaluate the methods on the ACDC dataset, and show that conditioning with non-imaging data improves the performance of the segmentation networks. We observed that conditioning the U-Net architectures was challenging, where no method gave a significant improvement. However, the same architecture without skip connections outperforms the baseline with feature-wise modulation, and the relative performance increases as the training size decreases.
Perez et al., 2017
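FiLM itself is only a per-channel affine transform predicted from the non-imaging covariates; a minimal PyTorch sketch (the covariate count and channel sizes are arbitrary):

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation (Perez et al., 2017): map
    non-imaging covariates to a per-channel scale (gamma) and shift
    (beta) applied to a convolutional feature map."""
    def __init__(self, n_covariates, n_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(n_covariates, 2 * n_channels)

    def forward(self, feat, covariates):
        # feat: (B, C, ...spatial...); covariates: (B, n_covariates)
        gamma, beta = self.to_gamma_beta(covariates).chunk(2, dim=1)
        shape = (feat.shape[0], -1) + (1,) * (feat.dim() - 2)
        return feat * (1 + gamma.view(shape)) + beta.view(shape)

film = FiLM(n_covariates=3, n_channels=32)
feat = torch.randn(2, 32, 8, 16, 16)   # a 3D decoder feature map
covs = torch.randn(2, 3)               # e.g. blood pressure, MPa, depth
out = film(feat, covs)
```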
Summary so far for the "inspiration"
'End-to-End Reconstruction': the goal, with everything jointly learned.
IMAGE RESTORATION: denoise, deblur, detect 'broken vessels', inpaint broken vessels.
IMAGE SEGMENTATION: 3D "Euclidean" CNN for the input voxels.
GRAPH RECONSTRUCTION: enforce connectivity.
MESH RECONSTRUCTION: 'fit shape primitives for isotropic reconstruction'.
Error propagation: heteroscedastic uncertainty.
Partial volume and a non-ideal PSF cause problems with edge localization.
'Anisotropic staircasing': best priors for isotropic reconstruction? https://doi.org/10.2312/VCBM%2FVCBM10%2F083-090
Motion artifacts cause the uncertainty to be non-homoscedastic https://doi.org/10.1038/srep04507
"Do not let image restoration oversmooth"; "constrain segmentation by a physiologically plausible vascular tree".
3D anisotropic voxel mask → 3D isotropic voxel mask → 3D mesh for CFD.
Bayesian graph convolutional neural networks https://arxiv.org/abs/1811.11103 → https://arxiv.org/abs/1902.10042
Segmentation
Loss
Functions
Dice? A good "medical metric" for us? AVD?
Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool.
Abdel Aziz Taha and Allan Hanbury, TU Wien
BMC Medical Imaging 2015 15:29 https://doi.org/10.1186/s12880-015-0068-x
https://github.com/Visceral-Project/EvaluateSegmentation
Since metrics have different properties (biases, sensitivities),
selecting suitable metrics is not a trivial task. This paper
provides analysis of the 20 implemented metrics, in particular
of their properties, and suitabilities to evaluate segmentations,
given particular requirements and segmentations with particular
properties.
The Dice coefficient [Dice 1945] (DICE), also called the
overlap index, is the most used metric in validating medical
volume segmentations. In addition to the direct comparison
between automatic and ground truth segmentations, it is
common to use the DICE to measure reproducibility
(repeatability). Zou et al. 2004 used the DICE as a measure of
the reproducibility as a statistical validation of manual annotation
where segmenters repeatedly annotated the same MRI image.
Contour is important: Depending on the individual task, the
contour can be of interest, that is the segmentation algorithms
should provide segments with boundary delimitation as exact as
possible. Metrics that are sensitive to point positions (e.g. HD
and AVD) are more suitable to evaluate such segmentation than
others. Volumetric similarity VS is to be avoided in this case.
The Hausdorff Distance (HD) is generally sensitive to outliers. Because noise and outliers are common in medical segmentations, it is not recommended to use the HD directly [Zhang and Lu 2004].
The Average Distance, or Average Hausdorff Distance (AVD), is the HD averaged over all points. The AVD is known to be stable and less sensitive to outliers than the HD. It is defined by

AVD(A, B) = max( d(A, B), d(B, A) ),    with d(A, B) = (1/|A|) Σ_{a ∈ A} min_{b ∈ B} ||a − b||

where d(A, B) is the directed average distance from point set A to point set B.
The Hausdorff distance (HD) and the average Hausdorff distance (AVD) are based on calculating the distances between all pairs of voxels. This makes them computationally very intensive, especially with large images. Therefore, to efficiently calculate the AVD, we use a modified version of the nearest neighbor (NN) algorithm proposed by Zhao et al. 2014, in which a 3D cell grid is built on the point cloud.
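With binary masks on a voxel grid you can sidestep the all-pairs computation using Euclidean distance transforms; a sketch of Dice plus the symmetric average Hausdorff distance, assuming non-empty masks:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def average_hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """AVD via distance transforms: the EDT of one mask's background
    gives, for every voxel, the distance to that mask's foreground,
    so the directed average distances come from simple indexing."""
    a, b = a.astype(bool), b.astype(bool)
    dt_to_b = distance_transform_edt(~b, sampling=spacing)
    dt_to_a = distance_transform_edt(~a, sampling=spacing)
    d_ab = dt_to_b[a].mean()    # directed distance A -> B
    d_ba = dt_to_a[b].mean()    # directed distance B -> A
    return max(d_ab, d_ba)
```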
"Hausdorff Distance loss" via the distance transform
Reducing the Hausdorff Distance in Medical Image Segmentation with Convolutional Neural Networks
Davood Karimi, Septimiu E. Salcudean. University of British Columbia, Vancouver (Submitted on 22 Apr 2019)
https://arxiv.org/abs/1904.10030
In this paper, we present novel loss functions for training convolutional neural network (CNN)-based segmentation methods with the goal of reducing the Hausdorff Distance (HD) directly. We propose three methods to estimate HD from the segmentation probability map produced by a CNN. One method makes use of the distance transform of the segmentation boundary. Another method is based on applying morphological erosion to the difference between the true and estimated segmentation maps. The third method works by applying circular/spherical convolution kernels of different radii on the segmentation probability maps. Our results show that the proposed loss functions can lead to an approximately 18-45% reduction in HD without degrading other segmentation performance criteria such as the Dice similarity coefficient.
To the best of our knowledge, this is the first work to aim at reducing HD in medical image segmentation. The methods presented in this paper may be improved in several ways. Faster implementation of the HD-based loss functions and more accurate implementation of the loss function based on morphological erosion would be useful. Moreover, extension of the methods to other applications such as vessel segmentation could also be pursued.
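A one-sided sketch of the distance-transform variant: squared probability errors are weighted by the precomputed unsigned distance to the ground-truth boundary, so errors far from the true surface, which dominate HD, cost the most. The full method also uses the prediction's distance transform; this simplification is an assumption:

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def hd_like_loss(pred_prob, target, alpha=2.0):
    """One-sided distance-transform HD loss sketch (cf. Karimi &
    Salcudean 2019). pred_prob, target: (B, 1, D, H, W); target binary."""
    with torch.no_grad():
        t = target.detach().cpu().numpy().astype(bool)
        # unsigned distance to the GT boundary, per batch element
        dt = np.stack([distance_transform_edt(~m[0]) +
                       distance_transform_edt(m[0]) for m in t])
        dt = torch.from_numpy(dt).to(pred_prob).unsqueeze(1)
    return (((pred_prob - target) ** 2) * dt.pow(alpha)).mean()
```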
Combo Loss evaluated by Hausdorff but not optimized for it directly
Combo Loss: Handling Input and Output Imbalance in Multi-Organ Segmentation
Saeid Asgari Taghanaki, Yefeng Zheng, S. Kevin Zhou, Bogdan Georgescu, Puneet Sharma, Daguang Xu, Dorin Comaniciu, Ghassan Hamarneh. Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada; Siemens Healthineers
(Submitted on 8 May 2018) https://arxiv.org/abs/1805.02798
As expected, we note that the final segmentations are affected by the choice of the parameter β, and the best results in terms of higher Dice and lower Hausdorff distance were obtained for β = 0.7 and β = 0.6 for the ultrasound and MRI datasets, respectively. As HD is sensitive to outliers, there are sometimes relatively large values in the HD results (i.e., the second column in the figure).
The key advantage of the proposed Combo loss is that it
enforces a desired trade-off between the false positives
and negatives (which results in cutting out post-
processing) and avoids getting stuck in bad local minima as
it leverages Dice term. The Combo loss converges
considerably faster than cross entropy loss during
training. Similar to Focal loss, our Combo loss also has two
parameters that need to be set.
(Figure: results for U-Net and V-Net on both datasets.) For some reason, the Average Hausdorff Distance, which is more robust to outliers, was not used?
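Combo loss blends a β-weighted cross-entropy (shifting the penalty between false negatives and false positives) with a Dice term; a sketch for the binary case, treating the paper's exact sign conventions loosely:

```python
import torch

def combo_loss(pred_prob, target, alpha=0.5, beta=0.7, eps=1e-7):
    """Sketch of Combo loss (Taghanaki et al. 2018) for binary
    segmentation: alpha balances the two terms; beta > 0.5 penalizes
    false negatives more than false positives."""
    p = pred_prob.clamp(eps, 1 - eps)
    weighted_ce = -(beta * target * torch.log(p)
                    + (1 - beta) * (1 - target) * torch.log(1 - p)).mean()
    inter = (pred_prob * target).sum()
    dice = (2 * inter + eps) / (pred_prob.sum() + target.sum() + eps)
    return alpha * weighted_ce + (1 - alpha) * (1 - dice)
```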
Boundary loss
Boundary loss for highly unbalanced segmentation
Hoel Kervadec, Jihene Bouchtiba, Christian Desrosiers, Éric Granger, Jose Dolz, Ismail Ben Ayed. ÉTS Montréal (Submitted on 17 Dec 2018)
https://arxiv.org/abs/1812.07032
https://github.com/LIVIAETS/surface-loss (PyTorch)
Widely used loss functions for convolutional neural network (CNN) segmentation, e.g., Dice or cross-entropy, are based on integrals (summations) over the segmentation regions. Unfortunately, it is quite common in medical image analysis to have highly unbalanced segmentations, where standard losses contain regional terms with values that differ considerably (typically by several orders of magnitude) across segmentation classes, which may affect training performance and stability.
The purpose of this study is to build a boundary loss, which takes the form of a distance metric on the space of contours, not regions. We argue that a boundary loss can mitigate the difficulties of regional losses in the context of highly unbalanced segmentation problems because it uses integrals over the boundary between regions instead of unbalanced integrals over regions. Furthermore, a boundary loss provides information that is complementary to regional losses.
Unfortunately, it is not straightforward to represent the boundary points corresponding
to the regional softmax outputs of a CNN. Our boundary loss is inspired by discrete
(graph-based) optimization techniques for computing gradient flows of curve
evolution. Following an integral approach for computing boundary variations, we express
a non-symmetric L2 distance on the space of shapes as a regional integral, which avoids
completely local differential computations involving contour points. Our boundary loss
is the sum of linear functions of the regional softmax probability outputs of the
network. Therefore, it can easily be combined with standard regional losses and
implemented with any existing deep network architecture for N-D segmentation.
Our experiments on two
challenging and highly unbalanced
datasets demonstrated the
effectiveness of including the
proposed boundary loss term
during training. It consistently
improved the performance, with a
large margin on one data set, and
enhanced training stability. Even
though we limited the experiments
to 2-D segmentation problems, the
proposed framework can be
trivially extended to 3-D, which
could further improve the
performance of deep networks, as
more context is analyzed.
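As a concrete illustration of "a sum of linear functions of the regional softmax probability outputs", here is a hedged numpy/scipy sketch of the boundary loss: the ground-truth mask is turned into a signed (level-set) distance map, which then simply weights the softmax foreground probabilities. This is a forward-pass sketch only; the official PyTorch implementation is in the repo linked above.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(gt_mask):
    """Level-set representation of the GT region: negative inside the
    region, positive outside, ~zero at the boundary."""
    pos = gt_mask.astype(bool)
    if not pos.any():                       # empty mask: plain distance map
        return distance_transform_edt(~pos)
    return distance_transform_edt(~pos) - distance_transform_edt(pos)

def boundary_loss(softmax_fg, gt_mask):
    """Mean of the softmax foreground probabilities weighted by the
    precomputed signed distance map; linear in the network outputs, so
    it combines easily with Dice/cross-entropy terms."""
    return np.mean(softmax_fg * signed_distance_map(gt_mask))

In practice the distance maps are precomputed once per label volume, and the boundary term is typically ramped up gradually against the regional loss during training.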
Sørensen-Dice loss from V-Net with Distance Penalty
Distance Map Loss Penalty Term for Semantic
Segmentation
Francesco Caliva, Claudia Iriondo, Alejandro Morales Martinez, Sharmila Majumdar, Valentina
Pedoia. 17 Apr 2019 (modified: 11 Jun 2019), MIDL 2019
https://openreview.net/forum?id=B1eIcvS45V
Convolutional neural networks for semantic segmentation suffer from low
performance at object boundaries. In medical imaging, accurate
representation of tissue surfaces and volumes is important for tracking of
disease biomarkers such as tissue morphology and shape features.
In this work, we propose a novel distance map derived loss penalty term
for semantic segmentation. We propose to use distance maps, derived from
ground truth masks, to create a penalty term, guiding the network's focus
towards hard-to-segment boundary regions. We investigate the effects
of this penalizing factor against cross-entropy, Dice, and focal loss, among
others, evaluating performance on a 3D MRI bone segmentation task from
the publicly available Osteoarthritis Initiative dataset. We observe a
significant improvement in the quality of segmentation, with better shape
preservation at bone boundaries and areas affected by partial
volume. We ultimately aim to use our loss penalty term to improve the
extraction of shape biomarkers and derive metrics to quantitatively evaluate
the preservation of shape.
Performance comparison (GLOBAL and EDGE regions) of the
proposed distance map penalizing loss term against the Dice loss
function, the confident-predictions-penalizing loss, and the focal loss.
Remember our multi-task formulation from before: you could formulate the centerline and boundary detection problems as regression problems with the distance function.
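One plausible numpy reading of the distance-map penalty (the paper's exact weighting may differ): derive an unsigned distance map from the ground-truth mask, invert it so voxels near the boundary get the largest weight, and use it to scale a per-voxel cross-entropy.

import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_weight_map(gt_mask, eps=1e-7):
    """Weight map that peaks at the GT boundary: unsigned distance to the
    boundary, normalized and inverted (my reading of Caliva et al. 2019)."""
    dist = distance_transform_edt(gt_mask) + distance_transform_edt(1 - gt_mask)
    return 1.0 + (1.0 - dist / (dist.max() + eps))  # ~2 near boundary, ~1 far away

def distance_weighted_ce(y_true, y_pred, gt_mask, eps=1e-7):
    """Per-voxel cross-entropy scaled by the boundary weight map."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    ce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return np.mean(boundary_weight_map(gt_mask) * ce)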
Calibration issues: Brier score with uncertainty maps?
Towards increased trustworthiness of deep
learning segmentation methods on cardiac MRI
Jörg Sander, Bob D. de Vos, Jelmer M. Wolterink, Ivana Išgum
Medical Imaging: Image Processing 2018 DOI: 10.1117/12.2511699
One important reason is the lack of reliability caused by models that fail unnoticed and
often locally produce anatomically implausible results that medical experts would not
make. Combining segmentations and uncertainty maps and employing a human-in-
the-loop setting, we provide evidence that image areas indicated as highly uncertain
regarding the obtained segmentation almost entirely cover regions of incorrect
segmentations. The fused information can be harnessed to increase segmentation
performance. Our results reveal that we can obtain valuable spatial uncertainty maps with
low computational effort
In addition, we reveal that a valuable uncertainty measure can be obtained if the applied
model is well calibrated, i.e. if generated probabilities represent the likelihood of being
correct. The quality of e-maps and u-maps depends on the calibration of the acquired
probabilities. Previous work [6] revealed that loss functions differ regarding how well the
generated probabilities represent the likelihood of being correct. Therefore, we
trained the model with three different loss functions: soft-Dice (SD), cross-entropy (CE),
and the Brier score (BS) [10], which is equal to the average gap between softmax
probabilities and the references. This provides information about accuracy and
uncertainty of the model. Computationally the Brier score loss is equal to the squared
error between the one-hot encoding of the correct label and its associated probability.
We observe that baseline segmentation performance is highest when the model is
trained with the Brier score loss, slightly lower for the soft-Dice, and lowest when cross-
entropy is used. Except for the soft-Dice loss we note that u-maps and e-maps follow each
other quite closely, which suggests that both carry similar information.
Reliability diagrams over all tissue classes together for Brier, soft-Dice and
cross-entropy loss functions. Blue (end-diastole) and green (end-
systole) bars quantify the true positive fraction for each probability bin. Red
bars quantify the miscalibration of the model where smaller indicates
better. If the model is perfectly calibrated, the diagram should match the
dashed line.
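The Brier score itself is a one-liner, which makes it easy to try as a drop-in objective next to soft-Dice and cross-entropy; a minimal numpy sketch:

import numpy as np

def brier_score(one_hot_labels, softmax_probs):
    """Mean squared gap between the one-hot reference and the predicted
    class probabilities; lower is better, and it is directly usable as a
    training loss as in Sander et al."""
    return np.mean(np.sum((softmax_probs - one_hot_labels) ** 2, axis=-1))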
How to quantify the
uncertainty (UQ)
MC Dropout, Deep Ensembles or
Conformal Prediction are relatively easy
to use in practice
Uncertainty in Biology: a stereotypical cartoon of the mindset
Segmentation by “custom script”: RAW DATA → some FILTERING → “Ground Truth”.
And we have stats for the area differences between our study groups:
Control 150.00 ± 25.88 pixels vs. Intervention 72.00 ± 15.81 pixels.
Torture your data until you get a significant p-value here.
Standard deviations (uncertainty) come only from the differences in
area; the uncertainty in how the area was calculated is typically
not propagated to the final estimates.
In reality, the uncertainty is probably a lot larger than
you thought, but of course you are happier to operate
with the zero error from your “custom script” :(
As long as we have mean ± SD to compare, we are happy. Who cares about uncertainty propagation.
Uncertainty in Clinical Practice: very important!
https://twitter.com/EricTopol/status/1119626922827247616?fbclid=IwAR1wcEA94bBp5Z_l0ObV9oST82uVDcPaiYP04WVQ63WjUya78OVW1DZC9Qo
What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
Sana Tonekaboni, Shalmali Joshi, Melissa D. McCradden, Anna Goldenberg https://arxiv.org/abs/1905.05134
MC Dropout: a popular method
Does Your Model Know the Digit 6 Is Not a
Cat? A Less Biased Evaluation of “Outlier”
Detectors (2018)
Alireza Shafaei, Mark Schmidt, and James J. Little
https://arxiv.org/abs/1809.04729
VGG-backed and Resnet-backed methods
significantly differ in accuracy. The gap
indicates the sensitivity of the methods to the
underlying networks.
This means that the image classification accuracy
may not be the only relevant factor in performance
of these methods. ODIN is less sensitive to the
underlying network.
Despite not enforcing mutual exclusivity, training
the networks with KL loss instead of CE loss
consistently reduces the accuracy of OOD
detection methods on average.
GitHub - yaringal/ConcreteDropout (Cited by 80)
Dropout as a Bayesian approximation: Representing
model uncertainty in deep learning
(2015) Yarin Gal, Zoubin Ghahramani. Cited by 896
UQ: Aleatoric vs. Epistemic uncertainty?
https://github.com/yaringal/ConcreteDropout/blob/master/concrete-dropout-keras.ipynb:
import numpy as np  # `model` is the notebook's Keras model; its last layer outputs [mean, log-variance]

D = 1        # dimensionality of the regression target
K_test = 20  # “draw” K times, 20 stochastic inferences (dropout stays active at test time)
MC_samples = np.array([model.predict(X_val) for _ in range(K_test)])  # K x N x 2D
means = MC_samples[:, :, :D]                      # predictive means, K x N x D
epistemic_uncertainty = np.var(means, 0).mean(0)  # spread across the MC samples
logvar = np.mean(MC_samples[:, :, D:], 0)         # predicted log-variance head
aleatoric_uncertainty = np.exp(logvar).mean(0)    # data noise the model itself predicts
https://arxiv.org/abs/1705.07832:
Three types of uncertainty are often encountered in Bayesian modelling.
Epistemic uncertainty (known also as ‘model uncertainty’) captures our
ignorance about the models most suitable to explain our data; aleatoric
uncertainty (known also as ‘risk’) captures noise inherent in the
environment (remember alea jacta est); lastly, predictive uncertainty
conveys the model’s uncertainty in its output.
Epistemic uncertainty reduces as the amount of observed data increases—
hence its alternative name “reducible uncertainty”. Aleatoric uncertainty
captures noise sources such as measurement noise—noises which cannot be
explained away even if more data were available (although this uncertainty can
be reduced through the use of higher precision sensors for example). This
uncertainty is often modelled as part of the likelihood, at the top of the model,
where we place some noise corruption process on the function’s output.
Combining both types of uncertainty gives us the predictive uncertainty—the
model’s confidence in its prediction, taking into account noise it can explain
away and noise it cannot. This uncertainty is often obtained by generating
multiple functions from our model and corrupting them with noise (with
precision τ).
For some critique of this, see the discussion:
Posted by u/sschoener 1 year ago (2018)
[D] What is the current state of dropout as Bayesian approximation?
https://www.reddit.com/r/MachineLearning/comments/7bm4b2/d_what_is_the_current_state_of_dropout_as/
with Ian Osband, DeepMind @IanOsband. Alternative?
→ https://arxiv.org/abs/1806.03335
What Uncertainties Do We Need in Bayesian Deep Learning for
Computer Vision? Alex Kendall, Yarin Gal
https://arxiv.org/abs/1703.04977
In (d) our model exhibits increased aleatoric uncertainty on object boundaries and for objects far from
the camera. Epistemic uncertainty accounts for our ignorance about which model generated our
collected data. This is a notably different measure of uncertainty and in (e) our model exhibits
increased epistemic uncertainty for semantically and visually challenging pixels. The bottom row
shows a failure case of the segmentation model when the model fails to segment the footpath due to
increased epistemic uncertainty, but not aleatoric uncertainty.
Uncertainty of your Uncertainty Estimate? Can you trust it?
Can You Trust Your Model's
Uncertainty? Evaluating
Predictive Uncertainty Under
Dataset Shift
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D.
Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji
Lakshminarayanan, Jasper Snoek. Google Research, DeepMind
(Submitted on 6 Jun 2019)
https://arxiv.org/abs/1906.02530
Using Distributional Shift to Evaluate
Predictive Uncertainty While previous work has
evaluated the quality of predictive uncertainty on
OOD inputs (Lakshminarayanan et al., 2017), there
has not to our knowledge been a comprehensive
evaluation of uncertainty estimates from different
methods under dataset shift. Indeed, we suggest
that effective evaluation of predictive uncertainty
is most meaningful under conditions of
distributional shift. One reason for this is that post-
hoc calibration gives good results in independent
and identically distributed (i.i.d.) regimes, but can fail
under even a mild shift in the input data. And in real
world applications, distributional shift is widely
prevalent. Understanding questions of risk,
uncertainty, and trust in a model’s output becomes
increasingly critical as shift from the original training
data grows larger.
(SVI) Stochastic Variational Bayesian Inference, e.g.
Wu et al. 2019
(Ensembles, M = 10) Ensembles of M networks trained
independently on the entire dataset using random
initialization (Lakshminarayanan et al. 2016, Cited by 245)
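A hedged numpy sketch of the deep-ensembles recipe referenced above: train M networks independently from different random initializations, average their class probabilities, and use the entropy of the mean as the uncertainty map (the `models` list in the usage comment is a placeholder for your trained networks):

import numpy as np

def ensemble_predict(prob_stack, eps=1e-12):
    """prob_stack: (M, n_voxels, n_classes) softmax outputs of M networks.
    Returns the averaged probabilities and their predictive entropy,
    usable as a per-voxel uncertainty map."""
    mean_probs = prob_stack.mean(axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + eps), axis=-1)
    return mean_probs, entropy

# hypothetical usage with M = 10 independently trained models:
# probs = np.stack([m.predict(X_val) for m in models])
# seg_probs, uncertainty_map = ensemble_predict(probs)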
Subject-wise Calibration Issues? Ensembles the nicest
Assessing Reliability and
Challenges of Uncertainty
Estimations for Medical Image
Segmentation
Alain Jungo, Mauricio Reyes (Submitted on 7 Jul 2019)
https://arxiv.org/abs/1907.03338
https://github.com/alainjungo/reliability-challenges-uncertainty
Although many uncertainty estimation methods
have been proposed for deep learning, little is known
on their benefits and current challenges for medical
image segmentation. Therefore, we report results of
evaluating common voxel-wise uncertainty measures
with respect to their reliability, and limitations on two
medical image segmentation datasets. Results show
that current uncertainty methods perform similarly and
although they are well-calibrated at the dataset level,
they tend to be miscalibrated at subject-level.
Therefore, the reliability of uncertainty estimates is
compromised, highlighting the importance of
developing subject-wise uncertainty estimations.
Additionally, among the benchmarked methods, we
found auxiliary networks to be a valid alternative to
common uncertainty methods since they can be
applied toany previously trainedsegmentation model.
Unsurprisingly, the ensemble method yields rank-wise the most
reliable results (Tab. 1) and would typically be a good choice (if the
resources allow it). The results also revealed that methods based on MC
dropout are heavily dependent on the influence of dropout on the
segmentation performance. In contrast, auxiliary networks turned out
to be a promising alternative to existing uncertainty measures.
They perform comparable to other methods but have the benefit of being
applicable to any high-performing segmentation network not optimized
to predict reliable uncertainty estimates. No significant differences were
found between using auxiliary feat. and auxiliary segm.. Through a
sensitivity analysis performed over all studied uncertainty methods, we
could confirm our observations that different uncertainty estimation
methodsyielddifferentlevelsofprecision andrecall.
Furthermore, we observed that when using current uncertainty methods
for correcting segmentations, a maximum benefit can be attained
when preferring a combination of low-precision segmentation
models and uncertainty-based false positive removal.
Our evaluation has several limitations worth mentioning. First, although
the experiments were performed on two typical and distinctive datasets,
they feature large structures to segment. The findings reported herein
may differ for other datasets, especially if these consist of very small
structures to be segmented. Second, the assessment of the
uncertainty is influenced by the segmentation performance. Even though
we succeeded in building similarly performing models, their differences
cannot be fully decoupled and neglected when analyzing the uncertainty.
Overall, we aim with these results to point to the existing challenges for a
reliable utilization of voxel-wise uncertainties in medical image
segmentation, and foster the development of subject/patient-level
uncertainty estimation approaches under the condition of HDLSS.
We recommend that utilization of uncertainty methods ideally needs to be
coupled with an assessment of model calibration at the subject/patient
level. The proposed conditions, along with the threshold-free ECE metric,
can be adopted to test whether uncertainty estimations can be of benefit
for a given task.
Ensembles. Another way of quantifying uncertainties is by ensembling
multiple models [Lakshminarayanan et al. 2017]. We combined the class
probabilities over all K = 10 networks and used the normalized entropy as
the uncertainty measure. The individual networks share the same architecture
but were trained on different subsets (90%) of the training dataset and
different random initialization to enforce variability.
Auxiliary network. Inspired by [DeVries and Taylor 2018;
Robinson et al. 2018], where an auxiliary network is used to predict
segmentation performance at the subject-level, we apply an auxiliary
network to predict voxel-wise uncertainties of the segmentation model by
learning from the segmentation errors (i.e., false positives and false
negatives). For the experiments, we considered two opposing types of
auxiliary networks. The first one, named auxiliary feat., consists of
three consecutive 1×1 convolution layers cascaded after the last feature
maps of the segmentation network. The second auxiliary network, named
auxiliary segm., is a completely independent network (same U-Net as
described in Sec. 2.2) that uses as input the original images and the
segmentation masks produced by the segmentation model (generated by
five-fold cross-validation). We normalized the output uncertainty subject-
wise to [0, 1] for comparability purposes.
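To make the "auxiliary feat." idea concrete, here is a hedged Keras sketch: three 1×1 convolutions cascaded after the segmentation network's last feature maps, trained against the frozen model's error maps (false positives + false negatives). The layer name `last_features` is an assumption about your own model, not the paper's code.

from tensorflow.keras import layers, Model

def auxiliary_feat_head(segmentation_model, n_filters=32):
    """Sketch of the 'auxiliary feat.' variant (Jungo & Reyes 2019):
    predicts a voxel-wise error/uncertainty map from the segmentation
    network's last feature maps via three 1x1 convolutions."""
    feats = segmentation_model.get_layer("last_features").output  # assumed layer name
    x = layers.Conv2D(n_filters, 1, activation="relu")(feats)
    x = layers.Conv2D(n_filters, 1, activation="relu")(x)
    error_map = layers.Conv2D(1, 1, activation="sigmoid", name="error_map")(x)
    # train only this head, with the segmentation weights frozen,
    # using binary targets = (prediction != ground truth)
    return Model(segmentation_model.input, error_map)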
Short intro: “Uncertainty and
connection to clinical decision
making”
If you would jointly do vascular segmentation and
pathology classification, this would become relevant.
Now your model outputs have real-life costs in the health
economics sense. What if your new screening method
has a sensitivity and specificity of 0.99? Is that actually
useful for both patients and the payer (insurance
company or public healthcare provider)?
MC Dropout combined with a task-specific Utility Function
Loss-Calibrated Approximate Inference in
Bayesian Neural Networks
Adam D. Cobb, Stephen J. Roberts, Yarin Gal
(Submitted on 10 May 2018)
https://arxiv.org/abs/1805.03901 Cited by 5 - Related articles
https://github.com/AdamCobb/LCBNN
Current approaches in approximate inference for Bayesian neural
networks minimise the Kullback-Leibler divergence to approximate the
true posterior over the weights. However, this approximation is without
knowledge of the final application, and therefore cannot guarantee
optimal predictions for a given task. To make more suitable task-
specific approximations, we introduce a new loss-calibrated
evidence lower bound for Bayesian neural networks in the context of
supervised learning, informed by Bayesian decision theory. By
introducing a lower bound that depends on a utility function, we ensure
that our approximation achieves higher utility than traditional methods
for applications that have asymmetric utility functions.
Calibrating the network to take into account the utility leads to a
smoother transition from diagnosing a patient as healthy to
diagnosing them as having moderate diabetes. In comparison,
weighting the cross entropy to avoid false negatives by making errors
on the healthy class pushes it to ‘moderate’ more often. This
cautiousness leads to an undesirable transition, as shown in Figure 4a.
The weighted cross entropy model only diagnoses a patient as definitely
being disease-free for extremely obvious test results, which is not a
desirable characteristic.
Left: Standard NN model. Middle:
Weighted cross entropy model. Right:
Loss-calibrated model. Each confusion
matrix displays the resulting diagnosis when
averaging the utility function with respect to
the dropout samples of each network. We
highlight that our utility function
captures our preferences by avoiding
false negatives of the ‘Healthy’ class. In
addition, there is a clear performance gain
from the loss-calibrated model, despite the
label noise in the training. This compares to
both the standard and weighted cross
entropy models, where there is a common
failure mode of predicting a patient as being
‘Moderate’ when they are ‘Healthy’.
Brier Score better than ROC AUC for clinical utility? Yes, but...
…still sensitive to disease prevalence
The Brier score does not evaluate the clinical utility of
diagnostic tests or prediction models
Melissa Assel, Daniel D. Sjoberg and Andrew J. Vickers
Memorial Sloan Kettering Cancer Center, New York, USA
Diagnostic and Prognostic Research 2017 1:19 https://doi.org/10.1186/s41512-017-0020-3
The Brier score is an improvement over other statistical performance measures, such as
AUC, because it is influenced by both discrimination and calibration simultaneously, with
smaller values indicating superior model performance. The Brier score also estimates a well-
defined parameter in the population, the mean squared distance between the observed and
expected outcomes. The square root of the Brier score is thus the expected distance between
the observed and predicted value on the probability scale.
However, the Brier score is prevalence-dependent (i.e. sensitive to class imbalance, in
machine learning jargon) in such a way that the rank ordering of tests or models may
inappropriately vary by prevalence [Wu and Lee 2014]. For instance, if a disease was rare
(low prevalence), but very serious and easily cured by an innocuous treatment (strong
benefit to detection), the Brier score may inappropriately favor a specific test compared
to one of greater sensitivity. Indeed, this is approximately what was seen in the Zika
virus paper [Braga et al. 2017].
We advocate, as an alternative, the use of decision-analytic measures such as net benefit.
Net benefit always gave a rank ordering that was consistent with any reasonable evaluation of
the preferable test or model in a given clinical situation. For instance, a sensitive test had a
higher net benefit than a specific test where sensitivity was clinically important. It is
perhaps not surprising that a decision-analytic technique gives results that are in accord with
clinical judgment because clinical judgment is “hardwired” into the decision-analytic statistic.
That said, this measure is not without its own limitations, in particular, the assumption that the
benefit and harms of treatment do not vary importantly between patients independently of
preference.
How should we evaluate prediction tools? Comparison of
three different tools for prediction of seminal vesicle
invasion at radical prostatectomy as a test case
Giovanni Lughezzani et al. Eur Urol. 2012 Oct; 62(4): 590–596.
https://dx.doi.org/10.1016%2Fj.eururo.2012.04.022
Traditional (area-under-the-receiver-operating-characteristic-
curve (AUC), calibration plots, the Brier score, sensitivity and
specificity, positive and negative predictive value) and novel (risk
stratification tables, the net reclassification index, decision curve
analysis and predictiveness curves) statistical methods quantified
the predictive abilities of the three tested models.
Traditional statistical methods (receiver operating characteristic
(ROC) plots and Brier scores), as well as two of the novel
statistical methods (risk stratification tables and the net
reclassification index) could not provide clear distinction
between the SVI prediction tools. For example, receiver
operating characteristic (ROC) plots and Brier scores seemed
biased against the binary decision tool (ESUO criteria) and gave
discordant results for the continuous predictions of the Partin
tables and the Gallina nomogram. The results of the calibration
plots were discordant with those of the ROC plots. Conversely, the
decision curve clearly indicated that the Partin tables
(Zorn et al. 2009) represent the ideal strategy for stratifying
the risk of seminal vesicle invasion (SVI).
Decision curve analysis (DCA): an emerging utility analysis technique
A Systematic Review of the
Literature Demonstrates Some
Errors in the Use of Decision Curve
Analysis but Generally Correct
Interpretation of Findings
Paolo Capogrosso, Andrew J. Vickers
Medical Decision Making (February 28, 2019)
https://doi.org/10.1177%2F0272989X19832881
We performed a literature review
to identify common errors in
the application of DCA and
provide practical suggestions for
appropriate use of DCA. Despite
some common errors in the
application of DCA, our finding
that almost all studies correctly
interpreted the DCA results
demonstrates that it is a clear
and intuitive method to
assess clinical utility.
A common task in medical research is
to assess the value of a diagnostic
test, molecular marker, or prediction
model. The statistical methods
typically used to do so include metrics
such as sensitivity, specificity, and
area under the curve (AUC; Hanley and McNeil 1982). However, it is difficult
to translate these metrics into
clinical practice: for instance, it is not
at all clear how high AUC needs to be
to justify use of a prediction model or
whether, when comparing 2
diagnostic tests, a given increase in
sensitivity is worth a given decrease in
specificity (Greenland 2008; Vickers and Cronin 2010). It
has been generally argued that
because traditional statistical metrics
do not incorporate clinical
consequences—for instance, the
AUC weights sensitivity and
specificity as equally important—they
cannot be used to guide clinical
decisions.
In brief, DCA is a plot of net benefit against
threshold probability. Net benefit is a
weighted sum of true and false positives, the
weighting accounting for differential
consequences of each. For instance, it is
much more valuable to find a cancer (true
positive) than it is harmful to conduct an
unnecessary biopsy (false positive), and so
it is appropriate to give a higher weight to true
positives than false positives. Threshold
probability is the minimum risk at which a
patient or doctor would accept a
treatment and is considered across a range
to reflect variation in preferences.
In the case of a cancer biopsy, for example, we
might imagine that a patient would refuse a
biopsy for a cancer risk of 1%, accept a biopsy
for a risk of 99%, but somewhere in between,
such as a 10% risk, be unsure one way or the
other. The threshold probability is used to
determine positive (risk from the model under
evaluation of 10% or more) vs. negative (risk less
than 10%) and as the weighting factor in net
benefit. Net benefit for a model, test, or
marker is compared to 2 default strategies of
‘‘treat all’’ (assuming all patients are positive)
and ‘‘treat none’’ (assuming all patients are
negative).
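The net benefit statistic behind a decision curve is simple enough to compute directly; a minimal numpy sketch (the risk scores and labels are placeholders for your own model outputs):

import numpy as np

def net_benefit(y_true, risk, p_t):
    """Net benefit at threshold probability p_t:
    NB = TP/n - FP/n * p_t / (1 - p_t)."""
    positive = risk >= p_t
    n = len(y_true)
    tp = np.sum(positive & (y_true == 1))
    fp = np.sum(positive & (y_true == 0))
    return tp / n - fp / n * p_t / (1.0 - p_t)

def treat_all_net_benefit(y_true, p_t):
    """'Treat all' reference strategy; 'treat none' is simply NB = 0."""
    prev = np.mean(y_true)
    return prev - (1.0 - prev) * p_t / (1.0 - p_t)

# a decision curve is net_benefit(...) plotted over a range of thresholds,
# e.g. for p_t in np.linspace(0.05, 0.5, 10)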
We have assumed that the
ground truth is “ground truth”,
which is not actually a realistic
assumption.
Think of inter-annotator
variability as well,
i.e. your labels are probabilistic
themselves
Noisy Labels as ‘annotator confusion’
Learning From Noisy Labels By Regularized Estimation Of
Annotator Confusion
Ryutaro Tanno, Ardavan Saeedi, Swami Sankaranarayanan, Daniel C. Alexander, Nathan Silberman
University College London, UK; Butterfly Network, New York, USA
Submitted on 10 Feb 2019, https://arxiv.org/abs/1902.03680
The predictive performance of supervised learning algorithms depends on the quality of labels. In a
typical label collection process, multiple annotators provide subjective noisy estimates of the "truth"
under the influence of their varying skill-levels and biases. Blindly treating these noisy labels as the
ground truth limits the accuracy of learning algorithms in the presence of strong disagreement. This
problem is critical for applications in domains such as medical imaging where both the annotation
cost and inter-observer variability are high. In this work, we present a method for simultaneously
learning the individual annotator model and the underlying true label distribution, using only noisy
observations. Each annotator is modeled by a confusion matrix that is jointly estimated along with the
classifier predictions. We propose to add a regularization term to the loss function that encourages
convergence to the true annotator confusion matrix. We provide a theoretical argument as to how the
regularization is essential to our approach, both for the case of a single annotator and multiple annotators.
Future work shall consider imposing structures on the confusion matrices to broaden the
applicability to massively multi-class scenarios, e.g. introducing taxonomy-based sparsity
[Van Horn 2018] and low-rank approximation. We also assumed that there is only one ground truth
for each input; this no longer holds true when the input images are truly ambiguous—recent
advances in modelling multi-modality of label distributions [Saeedi et al. 2017, Kohl et al. 2018]
potentially facilitate relaxation of such an assumption. Another limiting assumption is the image
independence of the annotator’s label noise. The majority of disagreements between annotators
arise in the difficult cases. Integrating such input dependence of label noise [Raykar et al. 2009,
Xiao et al. 2015] is also a valuable next step.
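A hedged numpy sketch of the core loss in this approach, as I read it: the classifier's estimate of the true-label distribution is pushed through each annotator's (row-stochastic) confusion matrix before taking the cross-entropy against that annotator's noisy labels, and the traces of the confusion matrices are added as the regularizer. Array names and the λ value are illustrative.

import numpy as np

def noisy_annotator_loss(p_true, confusions, noisy_labels, lam=0.01, eps=1e-12):
    """p_true: (N, C) classifier estimate of the true label distribution.
    confusions: (R, C, C), confusions[r, i, j] = P(annotator r says j | true i).
    noisy_labels: (R, N) integer labels from each of the R annotators."""
    total_ce = 0.0
    for r in range(confusions.shape[0]):
        p_obs = p_true @ confusions[r]        # predicted noisy-label distribution
        ll = np.log(p_obs[np.arange(p_true.shape[0]), noisy_labels[r]] + eps)
        total_ce -= ll.mean()
    # trace regularizer encouraging convergence to the true confusion matrices
    trace_reg = np.trace(confusions, axis1=1, axis2=2).sum()
    return total_ce + lam * trace_reg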
Probabilistic U-Net: building this uncertainty into the model
A Probabilistic U-Net for Segmentation of Ambiguous
Images Simon A. A. Kohl, Bernardino Romera-Paredes, Clemens Meyer, Jeffrey De Fauw,
Joseph R. Ledsam, Klaus H. Maier-Hein, S. M. Ali Eslami, Danilo Jimenez Rezende, Olaf
Ronneberger (Submitted on 13 Jun 2018)
https://arxiv.org/abs/1806.05034 -
https://github.com/SimonKohl/probabilistic_unet
“In clinical applications for example, it might not be clear from a CT
scan alone which particular region is cancer tissue. Therefore a
group of graders typically produces a set of diverse but
plausible segmentations. We consider the task of learning a
distribution over segmentations given an input. To this end we
propose a generative segmentation model based on a
combination of a U-Net with a conditional variational
autoencoder that is capable of efficiently producing an unlimited
number of plausible hypotheses.
All in all we see a large field where our proposed Probabilistic U-
Net can replace the currently applied deterministic U-Nets.
Especially in the medical domain, with its often ambiguous
images and highly critical decisions that depend on the correct
interpretation of the image, our model’s segmentation hypotheses
and their likelihoods could 1) inform diagnosis/classification
probabilities or 2) guide steps to resolve ambiguities. Our
method could prove useful beyond explicitly multi-modal tasks, as
the inspectability of the Probabilistic U-Net’s latent space
could yield insights for many segmentation tasks that are currently
treated as a uni-modal problem.”
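Conceptually, inference in the Probabilistic U-Net reduces to drawing latent samples from a prior net and fusing each with the deterministic U-Net features. The sketch below passes the three networks in as callables, since the names (prior_net, unet_features, fcomb) paraphrase the paper rather than the repo's actual API.

import numpy as np

def sample_hypotheses(image, prior_net, unet_features, fcomb, n_samples=8, rng=None):
    """Draw n_samples plausible segmentations for one image.
    prior_net(image) -> (mu, log_sigma) of the latent Gaussian;
    unet_features(image) -> deterministic U-Net feature maps;
    fcomb(feats, z) -> segmentation map from features + latent sample z."""
    rng = rng or np.random.default_rng()
    mu, log_sigma = prior_net(image)
    feats = unet_features(image)  # computed once, reused for every sample
    return [fcomb(feats, mu + np.exp(log_sigma) * rng.standard_normal(mu.shape))
            for _ in range(n_samples)]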
Probabilistic U-Net: ‘Hierarchical Latent’ expansion
A Hierarchical Probabilistic U-Net for Modeling Multi-Scale Ambiguities
Simon A. A. Kohl, Bernardino Romera-Paredes, Klaus H. Maier-Hein, Danilo Jimenez Rezende, S. M. Ali Eslami, Pushmeet
Kohli, Andrew Zisserman, Olaf Ronneberger
(Submitted on 30 May 2019) https://arxiv.org/abs/1905.13077 - coming!
Medical imaging only indirectly measures the molecular identity of the tissue within each voxel, which often
produces only ambiguous image evidence for target measures of interest, like semantic segmentation.
This diversity and the variations of plausible interpretations are often specific to given image regions and
may thus manifest on various scales, spanning all the way from the pixel to the image level. In order to learn a
flexible distribution that can account for multiple scales of variations, we propose the Hierarchical
Probabilistic U-Net, a segmentation network with a conditional variational auto-encoder (cVAE) that
uses a hierarchical latent space decomposition.
We show that this model formulation enables sampling and reconstruction of segmentations with high
fidelity, i.e. with finely resolved detail, while providing the flexibility to learn complex structured
distributions across scales. We demonstrate these abilities on the task of segmenting ambiguous
medical scans as well as on instance segmentation of neurobiological and natural images. Our model
automatically separates independent factors across scales, an inductive bias that we deem beneficial
in structured output prediction tasks beyond segmentation. In terms of KL cost, it is more expensive to
model global aspects locally, which in combination with the hierarchical model formulation itself, is the
mechanism that puts into effect the separation of scales. Disentangled representations are regarded
highly desirable across the board and the proposed model may thus also be interesting for other down-
stream applications or image-to-image translation tasks.
In the medical domain the HPU-Net could be applied in interactive clinical scenarios where a clinician
could either pick from a set of likely segmentation hypotheses or may interact with its flexible latent space to
quickly obtain the desired results. The model’s ability to faithfully extrapolate conditioned on prior
observations could further be employed in spatio-temporal predictions, such as e.g. predicting tumor
therapy response.
Alternatives to the Probabilistic U-Net emerging
PHiSeg: Capturing Uncertainty in Medical
Image Segmentation
Christian F. Baumgartner, Kerem C. Tezcan, Krishna Chaitanya, Andreas
M. Hötker, Urs J. Muehlematter, Khoschy Schawkat, Anton S. Becker,
Olivio Donati, Ender Konukoglu. Computer Vision Lab, ETH Zürich; Memorial Sloan Kettering Cancer
Center; Beth Israel Deaconess Medical Center, Harvard Medical School
(Submitted on 7 Jun 2019) https://arxiv.org/abs/1906.04045
https://github.com/baumgach/PHiSeg-code Tensorflow
Segmentation of anatomical structures and
pathologies is inherently ambiguous. For instance,
structure borders may not be clearly visible or
different experts may have different styles of
annotating. The majority of current state-of-the-art
methods do not account for such ambiguities but
rather learn a single mapping from image to
segmentation. In this work, we propose a novel
method to model the conditional probability
distribution of the segmentations given an input
image. We derive a hierarchical probabilistic model, in
which separate latent spaces are responsible for
modelling the segmentation at different resolutions.
Inference in this model can be efficiently performed
using the variational autoencoder framework. We
show that our proposed method can be used to
generate significantly more realistic and diverse
segmentation samples compared to recent related
work, both, when trained with annotations from a
single or multiple annotators.
‘Know-it-all’ Clinician Expert in your team?
Still makes errors, and a consensus of experts is a better approach
Supervised learning from multiple experts:
whom to trust when everyone lies a bit
Vikas C. Raykar et al. (2009) Siemens Healthcare
https://doi.org/10.1145/1553374.1553488
Modelling Cognitive Bias in Crowdsourcing
Systems
Farah Saab, Imad H. Elhajj, Ayman Kayssi, Ali Chehab (December 2019)
https://doi.org/10.1016/j.cogsys.2019.04.004
The work reveals a surprising result where confidence-related approaches lack in
performance when compared to other approaches such as simple plurality voting or
approaches which consider respondent competence. This inadequacy stems from a
psychological phenomenon brought forth by David Dunning and Justin Kruger
related to people’s bias in assessing their own cognitive abilities.
Comparison of the algorithm, ophthalmologists, and retinal specialists using the adjudicated
reference standard at various DR severity thresholds. The algorithm’s performance is the blue
curve. The 3 retina specialists are represented in shades of orange/red, and the 3
ophthalmologists are in shades of blue. N = 1813 fully gradable images.
Grader variability and the importance
of reference standards for evaluating
machine learning models for diabetic
retinopathy Jonathan Krause, Varun Gulshan, Ehsan
Rahimy, Peter Karth, Kasumi Widner, Greg S. Corrado, Lily Peng, Dale
R. Webster
Google Research, Palo Alto Medical Foundation, Oregon Eye Consultants
https://arxiv.org/abs/1710.01711
https://doi.org/10.1016/j.ophtha.2018.01.034
Dealing with inter-expert
variability in retinopathy of
prematurity: A machine
learning approach
Bolón-Canedo et al. (2015)
10.1016/j.cmpb.2015.06.004
+http://dx.doi.org/10.3414/ME13-01-0081
Making things even more
difficult in medical settings:
Ambiguities in your
phenotypes
In other words, when the pathology is not
that well-defined, and you would rather
want to phenotype by clustering rather
than by using some antiquated ICD
diagnosis codes
Relevant when you use your segmentation as part of diagnostics classification
Clinically applicable deep learning for
diagnosis and referral in retinal disease
Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot,
Brendan O’Donoghue, DanielVisentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena
Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis
Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane & Olaf Ronneberger
Nature Medicine 24, 1342–1350 (2018). Cited by 147 - Related articles
https://doi.org/10.1038/s41591-018-0107-6
We demonstrate that the tissue segmentations produced by our architecture act
as a device-independent representation; referral accuracy is maintained when
using tissue segmentations from a different type of device (cross-vendor /
cross-modal). Our work removes previous barriers to wider clinical use without
prohibitive training data requirements across multiple pathologies in a
real-world setting.
The clinical
significance of
segmentation
quality
What if he had problems with diagnosis?
Or simply the phenotypes had “intrinsic uncertainty”, as is
the case with many neurodegenerative diseases (e.g.
Alzheimer’s Disease). What would you use as your gold
standard for diagnosis? Do you think that the current state-
of-the-art gold standard captures the “whole pathology” well?
Synthetic
Vascular
Data
Generative Models for synthetic vasculature data #1
VAMPIRE: Automatic Generation of Synthetic
Retinal Fundus Images: Vascular Network
https://doi.org/10.1016/j.procs.2016.07.010
Deep Semantic Instance Segmentation of Tree-like
Structures Using Synthetic Data
Kerry Halupka, Rahil Garnavi, Stephen Moore
IBM Research, Level 22/60 City Rd, Southbank, Victoria, Australia
(Submitted on 8 Nov 2018)
https://arxiv.org/abs/1811.03208
Synthetic samples also as
training material for
vascular segmentation?
Generative Models for synthetic vasculature data #2: “Physiology constraint”
Tissue metabolism driven
arterial tree generation
Matthias Schneider, Johannes Reichold, Bruno Weber, Gábor Székely,
Sven Hirsch
Medical Image Analysis, Volume 16, Issue 7, October 2012, Pages 1397-1414
https://doi.org/10.1016/j.media.2012.04.009 - Cited by 19
We present an approach to generate 3-D arterial tree models based on physiological principles while
at the same time certain morphological properties are enforced at construction time. The driving
force of the construction is a simplified angiogenesis model incorporating case-specific
information about the metabolic demand within the considered domain. The vascular tree is
constructed iteratively by successively adding new segments in chemotactic response to
angiogenic growth factors secreted by ischemic cells. Morphometrically confirmed bifurcation
statistics of vascular networks are incorporated to optimize the synthetic vasculature. The proposed
method is able to generate artificial, yet physiologically plausible, arterial tree models that match
the metabolic demand of the embedding tissue and fulfill the prescribed morphological properties at
the same time.
Generative Models for synthetic vasculature data #3: “Physiology constraint”
A new model for the emergence of
blood capillary networks
P. Aceves-Sanchez, B. Aymard, D. Peurichard, P. Kennel, A. Lorsignol, F. Plouraboue,
L. Casteilla, P. Degond (Submitted on 24 Dec 2018)
https://arxiv.org/abs/1812.09992
We propose a new model for the emergence of blood capillary networks. We
assimilate the tissue and extracellular matrix as a porous medium, using Darcy's law
for describing both blood and interstitial fluid flows. Oxygen obeys a convection-
diffusion-reaction equation describing advection by the blood, diffusion and
consumption by the tissue. The coupling between blood, oxygen flow and
capillary elements provides a positive feedback mechanism which triggers the
emergence of a network of channels of high hydraulic conductivity which we
identify as new blood capillaries. We provide two different, biologically relevant
geometrical settings and numerically analyze the influence of each of the capillary
creation mechanisms in detail. All mechanisms seem to concur towards a
harmonious network, but the most important ones are those involving oxygen
gradient and shear stress.
As summarized here, this new network formation model opens many different exciting
research avenues. It offers a new paradigm for capillary network creation by placing the
flow of blood at the central place in the process. This paper provides a proof of concept of
this approach and elaborates a road map by which the model can be gradually improved
towards a fully fledged simulator of blood capillary network formation. Such a simulator
would have huge potential for biological or clinical applications in cancer, wound healing,
tissue engineering and regeneration. Besides biological or clinical science applications,
the approach could also be adapted to plant biology (for leaf venation or root formation),
physics (lightning) or engineering (dielectric breakdown).
Generative Models for synthetic data: Vasculature
Transfer learning from synthetic
data reduces need for labels to
segment brain vasculature and
neural pathways in 3D
Johannes C. Paetzold, Oliver Schoppe, Rami Al-Maskari, Giles Tetteh, Velizar Efremov, Mihail I.
Todorov, Ruiyao Cai, Hongcheng Mai, Zhouyi Rong, Ali Ertuerk, Bjoern H. Menze
TranslaTUM and Department of Computer Science, Technical University of Munich / Institute for Stroke and Dementia Research, Ludwig Maximilian University of
Munich
10 Apr 2019 (modified: 11 Jun 2019)
MIDL 2019 Conference
https://openreview.net/forum?id=BJe02gRiY4
Novel microscopic techniques yield high-resolution
volumetric scans of complex anatomical structures
such as the blood vasculature or the nervous
system. Here, we show how transfer learning and
synthetic data generation can be used to train deep
neural networks to segment these structures
successfully in the absence of or with very limited
training data.
A) Synthetic training data was designed to resemble the vasculature of the human brain in MRI scans. B-D)
Predicted segmentations of 3 different applications: MRI scans of human brain vasculature (B), 3D
LSM of mouse brain vasculature (C), and the peripheral nervous system (D; shown here: innervated
muscle fibres).
Here, we present results from three widely different applications: human brain
vessels (MRI), mouse brain vessels and the mouse peripheral nervous
system (both 3D Light Sheet Microscopy, LSM). The same network was trained
either on a small labeled set from the respective application (”real data”), on
synthetically generated data, or on a combination of both. The synthetic data used
is identical for all three applications. We chose DeepVesselNet as our
architecture; the schedule for pre-training on synthetic data and refinement on
real data matches the methods of (Tetteh et al., 2018). The method for generating
synthetic training data is described in (Schneider et al., 2012).
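A hedged Keras sketch of the pre-train-then-refine schedule described above (the real work uses DeepVesselNet; the tiny network and random stand-in arrays below are placeholders for your own architecture and stacks):

import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import Adam

def tiny_segmenter():
    """Stand-in 3D network; in practice this would be DeepVesselNet or a 3D U-Net."""
    inp = layers.Input(shape=(None, None, None, 1))
    x = layers.Conv3D(16, 3, padding="same", activation="relu")(inp)
    x = layers.Conv3D(16, 3, padding="same", activation="relu")(x)
    out = layers.Conv3D(1, 1, activation="sigmoid")(x)
    return models.Model(inp, out)

# random stand-ins; substitute your synthetic and (few) labeled real stacks
X_syn = np.random.rand(8, 32, 32, 32, 1).astype("float32")
y_syn = (np.random.rand(8, 32, 32, 32, 1) > 0.9).astype("float32")
X_real, y_real = X_syn[:2], y_syn[:2]

model = tiny_segmenter()
model.compile(optimizer=Adam(1e-3), loss="binary_crossentropy")
model.fit(X_syn, y_syn, epochs=5)    # 1) pre-train on synthetic vascular trees
model.compile(optimizer=Adam(1e-4), loss="binary_crossentropy")
model.fit(X_real, y_real, epochs=5)  # 2) refine on the few labeled real stacks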
Semi-Supervised
training
Combine unlabeled data
with labeled data (or use a
foundation model, finetuned
or not for your small
dataset)
Semi-supervised learning in a nutshell
Sounds perfect in theory; however, in practice it has not lived up to its expectations
lab.rockefeller.edu/strickland foil.bme.utexas.edu
Stressed animals exhibited a greater BBB permeability to 40-kDa
dextran, but not to 70-kDa dextran, which is suggestive of weakened
vascular integrity following stress. doi: 10.1038/s41598-018-30875-y
Thousands of acquired 3D stacks
doi: 10.1038/s41592-018-0115-y
Maybe only 4 of them have
vasculature annotated
Semi-Supervised Model
You would like to have better performance
than using just either
- labeled data (supervised learning)
- unlabeled data (unsupervised learning)
You quickly accumulate a lot of “unstructured data”
One-Shot Medical Segmentation Example, MIT #1
Data augmentation using learned transformations for one-shot medical image segmentation. Amy Zhao, Guha Balakrishnan, Frédo Durand, John V. Guttag, Adrian V. Dalca
(Submitted on 25 Feb 2019 (v1), last revised 6 Apr 2019 (this version, v2)) https://arxiv.org/abs/1902.09383 - https://github.com/xamyzhao/brainstorm
CNN: brainstorm/src/segmenter_model.py
‘Time-dependent’
i.e. Functional
Multiphoton
Microscopy
so far we have modelled structural stacks
without the temporal dimension. Seek
inspiration from video processing papers
And if you have a fast enough microscope
with shallow stacks, you could exploit
successive frames for better structural
stacks as well (super-resolution)
Multiphoton Microscope sampling rates
Commercial ones can be a bit
sluggish if you want in vivo optical
electrophysiology to be done.
Olympus FV1000MPE:
XY (not whole FOV): 10-20 Hz;
XY (whole FOV): generally slow, ~3 Hz;
linescan: 800-900 Hz
Custom-built microscopes
better for speed
10.1073/pnas.1514209112
Frame rates:
500 Hz (single cell, 240 × 48 px);
80 Hz (population, 450 × 300 px);
40 Hz (600 × 600 px);
200 Hz (dendritic imaging; 360 × 120 px).
The 2-PM hardware tech is a bit beyond the scope of this presentation, but you can have a look at more efficient
scanning patterns (Lissajous scan), MEMS mirrors instead of bulky galvos (Duan et al. 2018; Li et al. 2017), and
miniaturization/MEMSification in general like the “MEMS-in-the-lens” (Dickensheets et al. 2019)?
When higher sampling rates are needed #1
2D planar data is acquired over time in sequential depth (100–400
μm), and then reconstructed to volumes with spatial resolution of
1.6×1.6×3 μm and effective temporal resolution of 0.786 s
(~1.27 Hz)/volume. In paradigm 2, a train of 7 electrical pulses,
played out at 3 Hz, is presented, starting at the third imaging frame
(shown) and again starting at frame 24 (not shown). -
Lindvere et al. 2013
Left Average vertex-wise time course in response to the 7 pulse stimulation, with
dilations in red and constrictions in blue, alongside modeled HRF (black) for quickly
(A), mid-latency (B), and slowly responding vertices (C). Stimulus presentation period
is highlighted in yellow. Right: Map of estimated stimulation-induced change in radius
(Δr) for the 7-pulse stimulation, dilations (red) and constrictions (blue).
Note the small amplitude!
Constriction never smaller than 1% of baseline, and dilation never bigger than 2%
When higher sampling rates are needed #2
Bouchard et al. (2006) Video-rate
two-photon microscopy of
cortical hemodynamics in-vivo
https://doi.org/10.1364/BIO.2006.MI1
Multi-scale imaging of functional
hemodynamics: Intrinsic imaging of the
hemodynamic response to forepaw
stimulus reveals a localized region of
increased hemoglobin absorption (top-
left). Two-photon microscopy was then
used to closely examine the active region.
In a single frame, we were able to
repeatedly image an artery, vein and
venule together during 5 stimulus
repetitions (center-right). From this data
we have extracted the diameters of the
vessels as a function of time (bottom).
The arteriole shows distinct dilation
during the stimulus, in contrast to the
vein and venule, which show no
measurable diameter changes.
Wide-field imaging allows us to image the
vessels in the same plane simultaneously,
eliminating the possibility that the
observed dilation is in fact out-of-plane
movement artifact. The same images can
also be analyzed to evaluate blood flow,
speed and hematocrit.
What would you like to have as your blood flow measurement speed?
Imaging single-cell blood flow in the
smallest to largest vessels in the living
retina
Aby Joseph, Andres Guevara-Torres, Jesse Schallek
Institute of Optics, University of Rochester, New York, United States; Center for Visual Science, University of Rochester, New York,
United States; Flaum Eye Institute, University of Rochester, New York, United States; Department of Neuroscience, University of
Rochester, New York, United States
https://doi.org/10.7554/eLife.45077.001
The transparency of the mammalian eye provides a
noninvasive view of the microvessels of the retina, a part
of the central nervous system. Despite its clarity,
imperfections in the optics of the eye blur
microscopic retinal capillaries, and single blood cells
flowing within. This limits early evaluation of
microvascular diseases that originate in capillaries. To
break this barrier, we use 15 kHz adaptive optics
imaging using the confocal mode of AOSLO to
noninvasively measure single-cell blood flow, in one of
the most widely used research animals: the C57BL/6J
mouse. Measured flow ranged over four orders of magnitude
(0.0002–1.55 mL min⁻¹) across the full spectrum of retinal
vessel diameters (3.2–45.8 μm), without requiring
surgery or contrast dye. Here, we describe the ultrafast
imaging, analysis pipeline and automated measurement
of millions of blood cell speeds.
Remember your multimodal / auxiliary measures to help you segment
vasculature better, and get better ‘insights’
Have your (MRI-compatible) physiological monitoring
setup data combined with your imaging data. ECG (especially) and
respiration are helpful for gating your imaging so that motion
artifacts are minimized already at the hardware level. You might
have someone saying that you should just use some algorithmic
compensation. Up to you in that situation.
https://doi.org/10.1186/2191-219X-2-44
https://www.slideshare.net/PetteriTeikariPhD/instrumentation-for-in-vivo-intravital-microscopy
When higher sampling rates are needed: your dyes will also be too slow at some point
Whole brain, single-cell resolution calcium transients captured with light-sheet
microscopy (Ahrens et al. [118]). (A) Two spherically focused beams rapidly swept out a
four-micron-thick plane orthogonal to the imaging axis. The beam and objective step
together along the imaging axis to build up three-dimensional volume image at 0.8Hz (B).
(C) Rapid light-sheet imaging of GCaMP5G calcium transients revealed a specific
hindbrain neuron population (D, green traces) traces correlated with spinal cord neuropil
activity(black trace).
Schultz et al. 2016 https://doi.org/10.1101/036632
Spatiotemporal Neuron Segmentation for 2-PM “Optical Electrophysiology”
Fast and robust active neuron segmentation in two-photon calcium
imaging using spatiotemporal deep learning
Somayyeh Soltanian-Zadeh, Kaan Sahingur, Sarah Blau, Yiyang Gong, and Sina Farsiu
Department of Biomedical Engineering, Department of Neurobiology, Duke University
PNAS April 23, 2019, 116 (17) 8554-8563
https://doi.org/10.1073/pnas.1812995116 – https://github.com/soltanianzadeh/STNeuroNet
Two-photon calcium imaging is a standard technique of neuroscience laboratories that
records neural activity from individual neurons over large populations in awake-
behaving animals. Automatic and accurate identification of behaviorally relevant
neurons from these recordings is a critical step toward complete mapping of brain
activity. To this end, we present a fast deep learning framework which significantly
outperforms previous methods and is the first to be as accurate as human experts in
segmenting active and overlapping neurons.
Here, to exploit the full spatiotemporal information in two-photon calcium imaging
movies, we propose a 3D convolutional neural network to identify and segment
active neurons. By utilizing a variety of two-photon microscopy datasets, we show that
our method outperforms state-of-the-art techniques and is on a par with manual
segmentation. Furthermore, we demonstrate that the network trained on data
recorded at a specific cortical layer can be used to accurately segment active neurons
from another layer with different neuron density. Finally, our work documents
significant tabulation flaws in one of the most cited and active online scientific
challenges in neuron segmentation. As our computationally fast method is an
invaluable tool for a large spectrum of real-time optogenetic experiments, we have
made our open-source software and carefully annotated dataset freely available
online.
‘Intelligent’
Labeling
As already established,
labeling is very laborious and
you want a system that
makes this process easier for
increasing the labeled
examples for your semi-
supervised approach
Helping the
Machine
Learning
Help, for example, the
initial deep learning
result, which improves
the algorithm itself and
the resulting
segmentation
Umbra lets you instantly view complex 3D images
on any device
DEAN TAKAHASHI @DEANTAK, OCTOBER 17, 2017 6:00 AM
https://venturebeat.com/2017/10/17/umbra-lets-you-instantly-view-complex-3d-images-on-any-device/
Voxeleron Awarded NIH SBIR Grant for Device-independent
Retinal OCT Image Analysis Software Voxeleron will
collaborate with Professor Pablo Villoslada of
UCSF/IDIBAPS and Dr. Pearse Keane of Moorfields Eye
Hospital to validate the algorithms and ensure clinical
utility. February 8, 2017
Voxeleron Orion allows
correction of layer boundaries
Interactive Medical Image Segmentation using
Deep Learning with Image-specific Fine-tuning
Guotai Wang, Wenqi Li, Maria A. Zuluaga, Rosalind Pratt, Premal A. Patel, Michael
Aertsen, Tom Doel, Anna L. David, Jan Deprest, Sebastien Ourselin, Tom Vercauteren
(Submitted on 11 Oct 2017) https://arxiv.org/abs/1710.04043
The proposed interactive segmentation framework (BIFSeg with
PC-Net). 2D images are shown as examples. In the training
stage, each instance is cropped with its bounding box, and the
CNN model is trained for binary segmentation. In the testing
stage, image-specific fine-tuning with optional scribbles and a
weighted loss function is used. Note that the object class (e.g. a
maternal kidney) in the test image may have not been present
in the training set.
Use the same “main segmentor” model as the basis network
3D U-Net GAN
Semi-Supervised
Your lab runs tons and tons of
experiments with perfectly good
vasculature stacks.
→ No time to label them all, so use
active learning to select the most
useful stacks for labeling (see the sketch after this slide).
→ Re-train your model (with the new
labeled stack and a bunch of unlabeled
ones) and you should have an incremental
improvement for your model now (if
everything goes well).
And if this becomes a standard
workflow you might check some
continuous integration (CI)
software for deep learning
models to make this more
efficient?
e.g. ease.ml/ci by
Renggli et al. (2019)
doi: 10.1038/s41598-018-30875-y
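A hedged sketch of the selection step in that loop: score each unlabeled stack by its mean MC-dropout predictive entropy and hand the k most uncertain ones to the annotators. It assumes a Keras model with a sigmoid output whose dropout can be kept active at inference via training=True.

import numpy as np

def rank_stacks_for_labeling(model, unlabeled_stacks, k=5, n_mc=10, eps=1e-12):
    """Return the indices of the k stacks with the highest mean
    MC-dropout predictive entropy (the 'hardest' stacks)."""
    scores = []
    for stack in unlabeled_stacks:
        # n_mc stochastic forward passes with dropout left on
        probs = np.stack([model(stack[None], training=True).numpy()
                          for _ in range(n_mc)]).mean(axis=0)
        entropy = -(probs * np.log(probs + eps)
                    + (1 - probs) * np.log(1 - probs + eps))
        scores.append(entropy.mean())
    return np.argsort(scores)[::-1][:k]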
Your Initial “Guess”
Average Hausdorff
Distance (AVD) very bad
You want to quickly, with a couple of clicks, show
your system where the prediction is wrong (click
outliers / inliers) and have a new prediction: “semi-
automatic”
Note! The prediction is not actually “that horrible”, but there is
a huge discrepancy between ground truth and predicted mask, as
ZNN (continuous value map) finds the faint regions from other
slices (“bad z-sectioning”), and the probabilistic model that accounts
for label noise could help!
https://arxiv.org/abs/1606.02382
And this process should become continuous
1) Do new experiments
2) Re-train the model with the new unlabeled data
3) Select new “hard examples”
4) Show the model its errors for the hard examples
5) Re-train the model with your new annotated sample (and new unlabeled data if
you did not re-train yet in step 2)
→ Get a better performing system,
and over time your manual
annotation work should converge to
some “Bayes error”
annotation time :P
Continual learning (CL) is the ability to learn continually from a stream of
experiential data, building on what was learnt previously, while being able to
reapply, adapt and generalize it to new situations. CL is a fundamental step
towards artificial intelligence, as it allows the learning agent to continually extend
its abilities and adapt them to a continuously changing environment, a hallmark
of natural intelligence.
This iterated self-improvement illustrated #1
Deep learning for cellular
image analysis
Erick Moen, Dylan Bannon, Takamasa Kudo, William Graf, Markus
Covert and David Van Valen. California Institute of Technology / Stanford University
Nature Methods (2019)
https://doi.org/10.1038/s41592-019-0403-1
Here we review the intersection between
deep learning and cellular image analysis
and provide an overview of both the
mathematical mechanics and the
programming frameworks of deep learning
that are pertinent to life scientists. We survey
the field’s progress in four key applications:
image classification, image segmentation,
object tracking, and augmented microscopy.
Our prior work has shown that it is important to match a model’s
receptive field size with the relevant feature size in order to
produce a well-performing model for biological images. The
Python package Talos is a convenient tool for Keras users that
helps to automate hyperparameter optimization through grid
searches.
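In the same spirit (whether via Talos or by hand), here is a minimal grid-search skeleton over depth/filter counts, which together set the receptive field; `train_and_score` is a stub you would replace with your own training run returning a validation Dice score.

from itertools import product

def train_and_score(depth, filters, lr):
    """Stub: build a U-Net with this depth/filter count (these set the
    receptive field), train it, and return the validation Dice score."""
    raise NotImplementedError

grid = {"depth": [3, 4, 5], "filters": [16, 32], "lr": [1e-3, 1e-4]}
best = None
for depth, filters, lr in product(*grid.values()):
    score = train_and_score(depth=depth, filters=filters, lr=lr)
    if best is None or score > best[0]:
        best = (score, {"depth": depth, "filters": filters, "lr": lr})
print("best validation Dice %.3f with %s" % best)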
We have found that modern software development practices have
substantially improved the programming experience, as well as the
stability of the underlying hardware. Our groups routinely use Git
and Docker to develop and deploy deep learning models. Git is a
version-control software, and the associated web platform GitHub
allows code to be jointly developed by team members. Docker is a
containerization tool that enables the production of reproducible
programming environments.
Deep learning is a data science, and few know
data better than those who acquire it. In our
experience, better tools and better insights arise
when bench scientists and computational
scientists work side by side—even
exchanging tasks—to drive discovery
This iterated self-improvement illustrated #2
Improving Dataset
Volumes and Model
Accuracy with Semi-
Supervised Iterative Self-
Learning
Robert Dupre, Jiri Fajtl, Vasileios Argyriou, Paolo Remagnino
IEEE Transactions on Image Processing (Early Access, May 2019)
https://doi.org/10.1109/TIP.2019.2913986
Within this work a novel semi-
supervised learning technique is
introduced based on a simple iterative
learning cycle together with learned
thresholding techniques and an
ensemble decision support system.
State-of-the-art model performance
and increased training data volume
are demonstrated, through the use of
unlabelled data when training deeply
learned classification models. The
methods presented work
independently from the model
architectures or loss functions,
making this approach applicable to a
wide range of machine learning and
classification tasks.
Alternative visualization, old skool:
Init cubes labeled by hand (n > 5) → train initial model, e.g. some semi-supervised or transfer-learnt modification, implemented in PyTorch or TensorFlow. This gives you initial guesses that you want to feed into an intelligent-correction front-end for Mechanical Turkers or students to work on. Label more stacks continuously (n > 10, n > 25, n > 100, n > 500, …): as the labeled stacks increase, so does the model performance, as well as the initial guesses for the intelligent correction part.
Deployment/
Reproducibility
issues?
Git(hub) short intro, if you are not yet managing your lab’s code with version control
https://datacarpentry.org/semester-biology/materials/git-in-30-minutes/
Common points of confusion
- Git vs. GitHub
- Private GitHub repos (https://education.github.com/)
https://books.google.fi/books?id=53OYDwAAQBAJ
https://doi.org/10.1186/1751-0473-8-7
https://www.botany.one/2017/03/sharing-scientific-code-short-introduction-git-github/
GitHub with Docker
https://techcrunch.com/2018/10/26/microsoft-closes-its-7-5b-purchase-of-code-sharing-platform-github/
You pull (“download”) the code, but you have Windows and not all the libraries the developer had installed. Put the code inside a “container” (Docker being the most commonly used container tech) and it comes with the OS and the environment.
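A minimal sketch with the docker-py SDK (pip install docker) of what “pull and run the environment” looks like in practice; the image name and entry point below are hypothetical:

```python
import docker  # docker-py SDK; assumes a local Docker daemon is running

client = docker.from_env()
client.images.pull("yourlab/vessel-seg:latest")   # hypothetical published image
logs = client.containers.run(
    "yourlab/vessel-seg:latest",
    "python segment.py --stack /data/stack.tif",  # hypothetical entry point
    volumes={"/home/me/data": {"bind": "/data", "mode": "rw"}},
    remove=True,                                  # clean up the container afterwards
)
print(logs.decode())
```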
A Reproducible R Notebook Using Docker, Carl Boettiger:
“My name is Carl Boettiger. I'm a theoretical ecologist in UC Berkeley ESPM working on problems of forecasting and decision-making in ecological systems. My work involves developing new computational and frequently data-intensive approaches to these problems.”
https://www.practicereproducibleresearch.org/case-studies/cboettig.html
The actual segmentation model is a tiny block of the whole architecture
Only a small fraction of real-world ML systems is composed of the ML code, as shown by the small black box in the middle. The required surrounding infrastructure is vast and complex.
Google (2016) at NIPS: “Hidden Technical Debt in Machine Learning Systems” Cited by 143
Very Short Intro for the MLOps side of things and running things in practice #1
Building Production Machine Learning Systems
Manu Suryavansh, May 17, 2019
Streaming analysis of two-photon calcium imaging run on a Spark cluster in the cloud, by Jeremy Freeman in collaboration with Karel Svoboda and Nicholas Sofroniew. https://www.janelia.org/lab/svoboda-lab
https://youtu.be/uUQTSPvD1mc?t=17m
Example of a large-scale deployment of a two-photon machine learning application on “professional infrastructure”, outside the individual desktops of many labs.
http://dx.doi.org/10.1016/j.conb.2015.04.002
Kubeflow
Kubeflow is an open-source platform built on top of Kubernetes that allows scalable training and serving of machine learning models. Kubeflow can run on any cloud infrastructure, and one of the key advantages of using Kubeflow is that the system can also be deployed on an on-premise infrastructure.
Very Short Intro for the MLOps side of things and running things in practice #2
Reproducing Machine Learning Research on Binder
Jessica Forde, Matthias Bussonnier, Félix-Antoine Fortin, Brian Granger, Tim Head, Chris Holdgraf, Paul Ivanov, Kyle Kelley, M Pacer, Yuvi Panda, Fernando Perez, Gladys Nalvarte, Benjamin Ragan-Kelley, Zachary Sailer, Steven Silvester, Erik Sundell, Carol Willing
29 Oct 2018, NIPS 2018 Workshop MLOSS
https://openreview.net/forum?id=BJlR6KTE3X
Binder is an open-source project that lets users share interactive, reproducible
science. Binder’s goal is to allow researchers to create interactive versions of
their code utilizing pre-existing workflows and minimal additional effort. It
uses standard configuration files in software engineering to let researchers create
interactive versions of code they have hosted on commonly-used platforms like
GitHub.
Binder’s underlying technology, BinderHub, is entirely open-source and utilizes
entirely open-source tools. By leveraging tools such as Kubernetes and Docker, it
manages the technical complexity around creating containers to capture a
repository and its dependencies, generating user sessions, and providing public
URLs to share the built images with others. BinderHub combines two open-source
projects within the Jupyter ecosystem: repo2docker and JupyterHub.
repo2docker builds the Docker image of the git repository specified by the
user, installs dependencies, and provides various front-ends to explore the image.
JupyterHub then spawns and serves instances of these built images, using Kubernetes to scale as needed. Because each of these pieces is open-source and uses popular tools in cloud orchestration, BinderHub can be deployed on a variety of cloud platforms, or even on your own hardware.
Deployment: a topic of its own, but good for you to know the basics for efficient communication with the ‘MLOps team’
https://xkcd.com/1629/
Favio Vázquez, Jun 15, 2019
https://towardsdatascience.com/https-towardsdatascience-com-the-data-fabric-containers-kubernetes-309674527d16
To simplify things… the take-home message(s)
Researchers can pull (download) the “environments” without having to worry about Windows/Mac/Linux issues, or whether all required libraries are installed.
Kubernetes: a platform-agnostic way to manage many Docker containers. If you want to switch from Amazon Cloud to Google Cloud, or to your local server, “just move” the Kubernetes “config”.
Reproducibility Issues: no shortage of articles #1
Toward A Reproducible, Scalable Framework for Processing Large Neuroimaging Datasets
Erik C. Johnson et al. (2019)
Johns Hopkins; Georgia Tech; University of Pennsylvania
https://doi.org/10.1101/615161
https://github.com/aplbrain/saber
Many neuroscience laboratories lack the computational expertise or resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods. We developed an ecosystem, Scalable Analytics for Brain Exploration Research (SABER), of neuroimaging data analysis pipelines that utilize open source algorithms to create standardized modules and end-to-end optimized approaches.
Reproducibility Issues: no shortage of articles #2
System for Quality-Assured Data Analysis: Flexible, reproducible scientific workflows
Fowler et al. (2018)
https://doi.org/10.1002/gepi.22178
Computing environments for reproducibility: Capturing the “Whole Tale”
Brinckman et al. (2019)
https://doi.org/10.1016/j.future.2017.12.029
Qresp, a tool for curating, discovering and exploring reproducible scientific papers
Marco Govoni et al. (2019)
https://doi.org/10.1038/sdata.2019.2
Knowledge and attitudes among life scientists towards reproducibility within journal articles
Samota and Davey (2019)
https://doi.org/10.1101/581033
nf-core: Community curated bioinformatics pipelines
Ewels et al. (2019)
https://doi.org/10.1101/610741
Reproducible Research is more than Publishing Research Artefacts: A Systematic Analysis of Jupyter Notebooks from Research Articles
Schröder et al. (2019)
https://arxiv.org/abs/1905.00092
Creating reproducible pharmacogenomic analysis pipelines
Mammoliti et al. (2019)
https://doi.org/10.1101/614560
Reproducible Data Analysis Pipelines for Precision Medicine
Fjukstad et al. (2019)
https://doi.org/10.1109/EMPDP.2019.8671623
Recommendations for the packaging and containerizing of bioinformatics software
Gruening et al. (2019)
https://doi.org/10.12688/f1000research.15140.2
Preparing next-generation scientists for biomedical big data: artificial intelligence approaches
Moore et al. (2019)
https://doi.org/10.2217/pme-2018-0145
From the Wet Lab to the Web Lab: A Paradigm Shift in Brain Imaging Research
Keshavan and Poline (2019)
https://dx.doi.org/10.3389%2Ffninf.2019.00003
Towards Effective Foraging by Data Scientists to Find Past Analysis Choices
Kery et al. (2019)
https://doi.org/10.1145/3290605.3300322
Enhancing and accelerating social science via automation: Challenges and opportunities
Yarkoni et al. (2019)
https://doi.org/10.31235/osf.io/vncwe
Deploying a Scalable Data Science Environment Using Docker
Martín-Santana et al. (2019)
https://doi.org/10.1007/978-3-319-95651-0_7
Bioportainer Workbench: a versatile and user-friendly system that integrates implementation, management, and use of bioinformatics resources in Docker environments
Menegidio et al. (2019)
https://doi.org/10.1093/gigascience/giz041
Towards A Methodology and Framework for Workflow-Driven Team Science
Altintas et al. (2019)
https://arxiv.org/abs/1903.01403
Ambitious Data Science Can Be Painless
Altintas et al. (2019) https://arxiv.org/abs/1903.01403
Naming the Pain in Developing Scientific Software
Wiese et al. (2019)
https://doi.org/10.1109/MS.2019.2899838
trackr: A Framework for Enhancing Discoverability and Reproducibility of Data Visualizations and Other Artifacts in R
Becker et al. (2019)
https://doi.org/10.1080/10618600.2019.1585259
Summary
Semi-Supervised Voxel Segmentation
“Multi-task” 3D U-Net, or 3D U-Net + GAN (or see e.g. Adi et al. 2019). If you are more excited about GANs in general, go here. The multi-task route has more papers around the theme and is easier to start with; the GAN route is nicer for data augmentation. Ensemble the two?
What around the “Segmentor” block
Segmentor (e.g. your ensemble), fed from a database of OME-TIFFs (retro- and prospective)
Restore: denoise, deblur & inpaint; constrain image restoration with the segmentation mask
Mesh: shape priors for an isotropic mesh from an anisotropic volume? → CFD
Vasculature tree as graph → Graph Convolutional Networks? (a sketch of the mask-to-graph step follows below)
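For the graph branch, a minimal sketch (an illustrative pipeline under stated assumptions, not a published method) of turning a binary vessel mask into a centerline graph with scikit-image and networkx:

```python
import numpy as np
import networkx as nx
from itertools import product
from skimage.morphology import skeletonize_3d

def mask_to_graph(mask: np.ndarray) -> nx.Graph:
    """Binary 3D vessel mask -> graph over 26-connected skeleton voxels."""
    skel = skeletonize_3d(mask.astype(np.uint8)) > 0
    voxels = set(map(tuple, np.argwhere(skel)))
    offsets = [o for o in product((-1, 0, 1), repeat=3) if any(o)]
    g = nx.Graph()
    for v in voxels:
        g.add_node(v)
        for o in offsets:
            n = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
            if n in voxels:
                g.add_edge(v, n, length=float(np.linalg.norm(o)))
    return g

if __name__ == "__main__":
    mask = np.zeros((5, 5, 20), dtype=bool)
    mask[2, 2, :] = True                        # toy straight "vessel"
    g = mask_to_graph(mask)
    branch = [v for v in g if g.degree[v] > 2]  # branch points of the tree
    print(g.number_of_nodes(), g.number_of_edges(), len(branch))
```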
What you want for the “Product”
MODEL + database of OME-TIFFs (retro- and prospective) + ANNOTATION UI: a web-based interface running on a local intranet cluster or on the cloud.
Active Learning: choose the unlabeled samples for human annotation.
Proofread: correct errors from the initial segmentation, then save the annotated and corrected volumes to the database.
As in, what would you like to have to support your “actual neuroscience / pre-clinical research”?
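A minimal sketch of the active-learning selection step, assuming MC-dropout probability maps; predict_mc and the stack list are hypothetical placeholders for your own model and data:

```python
import numpy as np

def predictive_entropy(prob_samples: np.ndarray) -> float:
    """prob_samples: (T, D, H, W) foreground probabilities from T
    stochastic forward passes (dropout kept on at test time)."""
    p = prob_samples.mean(axis=0)
    eps = 1e-7
    h = -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))
    return float(h.mean())

def select_for_annotation(unlabeled_stacks, predict_mc, k=5):
    """Return indices of the k most uncertain stacks for human annotation."""
    scores = [predictive_entropy(predict_mc(s)) for s in unlabeled_stacks]
    return list(np.argsort(scores)[::-1][:k])
```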
Need for GPU power
We take some baseline approach and start tweaking various things, hopefully getting an idea afterwards of where the system's bottlenecks are. Is there any point even in tweaking your activation functions, or the number of filters per layer and their sizes?
3D U-NET baseline model, ablation grid:
Baseline Model: 1) Vanilla U-Net; 2) Make it multi-task with auxiliary tasks: edge and centerline detection; 3) Hybrid CNN+GRU
Restore (Noise2Noise): 1) Jointly with U-Net; 2) As separate preprocessing step; 3) No explicit image restoration
Smooth (Smooth2Smooth): 1) Jointly with U-Net; 2) Jointly with U-Net and Restore; 3) As separate preprocessing step after restore; 4) Jointly with restore but not with U-Net; 5) No explicit image smoothing
Uncertainty Map: 1) MC Dropout; 2) Bayesian BN; 3) None
Attention: 1) Additive attention on skip connections; 2) GFF; 3) GCNet; 4) No attention
Loss Function: 1) Weighted CE; 2) Focal loss; 3) Combo loss; 4) Boundary loss
3 × 3 × 5 × 3 × 4 × 4 = 2160 training runs with this naïve setting (see the enumeration sketch below)
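Enumerating the grid makes the combinatorial explosion concrete; the option names below just mirror the columns above:

```python
from itertools import product

grid = {
    "baseline":    ["vanilla-unet", "multi-task", "cnn-gru"],
    "restore":     ["joint", "separate-preproc", "none"],
    "smooth":      ["joint-unet", "joint-both", "separate-after-restore",
                    "joint-restore-only", "none"],
    "uncertainty": ["mc-dropout", "bayesian-bn", "none"],
    "attention":   ["additive-skip", "gff", "gcnet", "none"],
    "loss":        ["weighted-ce", "focal", "combo", "boundary"],
}
configs = list(product(*grid.values()))
print(len(configs))  # 3 * 3 * 5 * 3 * 4 * 4 = 2160 training runs
```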
Make some educated guesses of what you think would work, or what you would like to work :)
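As one concrete educated guess from the Loss Function column, a minimal PyTorch sketch of binary focal loss for voxelwise foreground/background segmentation (the standard formulation, not tuned for any particular dataset):

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """logits, targets: float tensors of the same shape (targets in {0, 1})."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                             # p if y=1, else 1-p
    a_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (a_t * (1.0 - p_t) ** gamma * bce).mean()  # down-weight easy voxels
```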
Evaluation ablation study
An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets.
Homa Fashandi, Gregory Kuling, Ying Li Lu, Hongbo Wu, and Anne L. Martel
Sunnybrook, Toronto https://doi.org/10.1002/mp.13375
EXAMPLE with only 36 combinations (some “definitely not working”). Maybe some CD diagram (Nemenyi test) to get an idea of ‘statistical p-magic’?
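A sketch of that Friedman + Nemenyi comparison, assuming you have per-volume Dice scores for each configuration (uses scipy and the scikit-posthocs package; the numbers are toy values):

```python
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# rows = test volumes, columns = model variants (toy Dice scores)
dice = np.array([
    [0.81, 0.84, 0.80],
    [0.78, 0.83, 0.79],
    [0.85, 0.88, 0.84],
    [0.80, 0.86, 0.81],
])
print(friedmanchisquare(*dice.T))         # does any variant differ at all?
print(sp.posthoc_nemenyi_friedman(dice))  # pairwise p-values behind the CD diagram
```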
Simpler hypothesis: joint restoration and segmentation leads to better segmentation performance than a network without explicit restoration, or with restoration as a separate preprocessing step. Then you still have some hyperparameters to optimize.
“Only 3 combinations” to train; figure from Diamond et al. (2017)
You also realize why most of the papers have tested “simple modifications” and have not had too many confounding variables.

Two-Photon Microscopy Vasculature Segmentation

  • 1.
    Two-Photon Microscopy Vasculature Segmentation Petteri Teikari, PhD PhDin Neuroscience M.Sc Electrical Engineering https://www.linkedin.com/in/petteriteikari/ Version August 2019 (Cleaned and simplified in January 2024, see original)
  • 2.
    Executive Summary #1/2 Highlightingrelevant literature for: ● Automating the 3D voxel-level vasculature segmentation (mainly) for multiphoton vasculature stacks ● Focus on semi-supervised U-Net based architectures that can exploit both unlabeled data and costly-to-annotate labeled data. ● Make sure that “tricks” for thin structure preservation, long-term spatial correlations and uncertainty estimation are incorporated
  • 3.
    Executive Summary #2/2 Thelack of automated robust tools do not go well with large-size datasets and volumes ● See Electron Microscopy segmentation community for inspiration who are having even larger stacks to analyze ● Gamified segmentation annotation tool EyeWire has led for example to this Nature paper, and slot at the AI: More than Human exhibition at Barbican
  • 4.
  • 5.
    Aboutthe Presentation #1 “Quickintro” about vasculature segmentation using deep learning ● Assumed that multiphoton (two-photon mainly) techniques are familiar to you and you want to know what you could do with your data using more robust “measuring tapes” for your vasculature, i.e. data-drivenvascularsegmentation Link coloring for articles, for Github/available code, and for video demos
  • 6.
    Aboutthe Presentation #2 Inspirationfor providing “seeds for all sorts of directions” would be for the reader/person implementing this, finding new avenues and not having to start from scratch. Especially targeted for people coming outside medical image segmentation that might have something to contribute and avoid “the group think” of deep learning community. Also it helps for the neuroscientist to have an idea how to gather the data and design experiments to address both neuroscientific questions and “auxiliary methodology” challenges solvable by deep learning. Domainknowledgestillvaluable.
  • 7.
    Aboutthe Presentation #3:Whysolengthy? If you are puzzled by some slides on non-specifically “vasculature segmentation”, remember that this was designed to be “high school project” friendly or good for tech/computation-savvy neuroscientists not necessarily knowing all the different aspects that could be beneficial for development of successful vasculature network instead of narrowly-focused slideshow
  • 8.
    Aboutthe Presentation #4:Textbookdefs? Alot of the basic concepts are “easily googled” from Stackoverflow/Medium/etc., thus focus here is on recent papers that are published in overwhelming numbers. Some ideas picked from these papers that might or might not be helpful in thinking of your own project tech specifications
  • 9.
    Aboutthe Presentation #5:“History”ofIdeas InarXiv and in peer-published papers, the various approaches taken by team before their winning idea(s) {“history of ideas, and all the possible choices you could have made”} , are hardly ever discussed in detail. So an attempt of “possibility space” is outlined here Towards EffectiveForagingby DataScientiststoFindPast AnalysisChoices Mary Beth Kery,BonnieE. John,PatrickO'Flaherty, AmberHorvath, Brad A.Myers Carnegie MellonUniversity/ Bloomberg L.P., NewYork https://doi.org/10.1101/650259https://github.com/mkery/Verdant Data scientists are responsible for the analysis decisions they make, but it is hard for them to track the process by which they achieved a result. Even when data scientists keep logs, it is onerous to make sense of the resulting large number of history records full of overlapping variants of code, output, plots, etc. We developed algorithmic and visualization techniques for notebook code environments to help data scientists forage for information in their history. To test these interventions, we conducted a think-aloud evaluation with 15 data scientists, where participants were asked to find specific information from the history of another person's data science project. The participants succeed on a median of 80% of the tasks they performed. The quantitative results suggest promising aspects of our design, while qualitative results motivated a number of design improvements. The resulting system, called Verdant, is released as an open-source extension for JupyterLab.
  • 10.
    Summary: “All thestuff”youwishyouknewbefore startingtheprojectwith“seeds”forcross- disciplinarycollaboration TheSecretsofMachineLearning:Ten ThingsYouWishYouHadKnownEarlierto beMoreEffectiveatDataAnalysis CynthiaRudin,David Carlson Electrical and ComputerEngineering,and Statistical Science, Duke University / Civil and Environmental Engineering, Biostatistics and Bioinformatics,Electrical and Computer Engineering,and Computer Science, Duke University (Submitted on 4 Jun 2019)https://arxiv.org/abs/1906.01998
  • 11.
    Curated Literature If youare overwhelmed by all the slides, you could start with these articles ● Haft-Javaherian et al. (2019). Deepconvolutionalneuralnetworksfor segmenting 3Dinvivo multiphotonimagesofvasculatureinAlzheimerdiseasemousemodels. https://doi.org/10.1371/journal.pone.0213539 ● Kisuk Lee et al. (2019) Convolutional netsfor reconstructing neural circuits from brainimages acquired by serialsection electron microscopy https://doi.org/10.1016/j.conb.2019.04.001 ● Amy Zhao et al. (2019) Dataaugmentationusing learned transformations forone-shotmedical image segmentation https://arxiv.org/abs/1902.09383https://github.com/xamyzhao/brainstorm Keras ● Dai et al. (2019) Deep Reinforcement Learningfor SubpixelNeuralTracking https://openreview.net/forum?id=HJxrNvv0JN ● Simon Kohl et al. (2018) A ProbabilisticU-Net for SegmentationofAmbiguousImages https://arxiv.org/abs/1806.05034+ followup https://arxiv.org/abs/1905.13077 https://github.com/SimonKohl/probabilistic_unet ● Hoel Kervadec et al. (2018) Boundary lossforhighly unbalanced segmentation https://arxiv.org/abs/1812.07032 https://github.com/LIVIAETS/surface-loss PyTorch ● Jörg Sander et al. (2018) Towards increased trustworthiness of deep learning segmentation methods on cardiacMRI https://doi.org/10.1117/12.2511699 ● Hongda Wang et al. (2018) Deep learning achievessuper- resolution influorescence microscopy http://dx.doi.org/10.1038/s41592-018-0239-0 ● Yide Zhang et al. (2019) A Poisson-Gaussian Denoising DatasetwithRealFluorescence Microscopy Images https://doi.org/10.1117/12.2511699 ● Trevor Standley et al. (2019) Which TasksShould Be Learned Together inMulti-task Learning? https://arxiv.org/abs/1905.07553
  • 12.
  • 13.
    Imaging brainvasculaturethroughtheskullof amouse/rat MICROSCOPE SET-UP AT THE SKULL AND EXAMPLES OF TWO-PHOTON MICROSCOPYIMAGES ACQUIRED DURINGLIVE IMAGING.BOTH EXAMPLES SHOWNEURONS (GREEN)ANDVASCULATURE (RED).BOTTOMEXAMPLE USES AN ADDITIONAL AMYLOID-TARGETING DYE (BLUE) IN AN ALZHEIMER’S DISEASE MOUSE MODEL. IMAGE CREDIT: ELIZABETH HILLMAN. LICENSED UNDER CC-BY-2.0. http://www.signaltonoisemag.com/allarticles/2018/9/17/dissecting-two-photon-microscopy
  • 14.
    Penetrationdepth dependsonthetheexcitation/emission wavelengths,numberof “nonlinearphotons”,andtheanimal model DeFelipeetal. (2011) http://dx.doi.org/10.3389/fnana.2011.00029 Tischbireketal.(2015): Cal-590, .. improved our ability to image calcium signals ... down to layers 5 and 6 at depths of up to −900 μm below the pia. 3-PM depth = 601 μm 2-PM depth = 429 μm Wang et al. (2015) Better image deeper penetration
  • 15.
    Dyeless vasculatureimaging in“deeplearningsense” nottoo different Third-Harmony Generation (THG) image of blood vessels in the top layer of the cerebralcortex of a live, anesthetized mouse. Emission wavelength = 1/3 of excitation wavelength Witte et al. (2011) Optoacoustic ultrasound bio-microscopy Imaging of skull and brain vasculature (B) was performed by focusing nanosecond laser pulses with a custom-designed gradient index (GRIN) lens and detecting the generated optoacoustic responses by the same transducer used for the US reflection-mode imaging. (C) Irradiation of half of the skull resulted in inhibited angiogenesis in the calvarium microvasculature (blue) of the irradiated hemisphere, but not the non- irradiated one. - prelights.biologists.com (Mariana De Niz) - https://doi.org/10.1101/500017 Third harmonic generation microscopy of cells andtissue organization http://doi.org/10.1242/jcs.152272 Model as cross-vendor or cross-modal problem? As you are imaging the “same vasculature” but it looks a bit different with different techniques
  • 16.
    “Cross-Modal” 3DVasculatureNetworkseventually wouldbevery nice Imaging the microarchitecture of the rodent cerebral vasculature. (A) Wide-field epi-fluorescence image of a C57Bl/6 mouse brain perfused with a fluorescein-conjugated gel and extracted from the skull ( Tsai et al, 2009). Pial vessels are visible on the dorsal surface, although some surface vessels, particularly those that were immediately contiguous to the sagittal sinus, were lost during the brain extraction process. (B) Three- dimensional reconstruction of a block of tissue collected by in vivo two-photon laser scanning microscopy (TPLSM) from the upper layers of mouse cortex. Penetrating vessels plunge into the depth of the cortex, bridging flow from surface vascular networks to capillary beds. (C) In vivo image of a cortical capillary, 200 μm below the pial surface, collected using TPLSM through a cranial window in a rat. The blood serum (green) was labeled by intravenous injection with fluorescein-dextran conjugate ( Table 2) and astrocytes (red) were labeled by topical application of SR101 (Nimmerjahn et al, 2004). (D) A plot of lateral imaging resolution vs. range of depths accessible for common in vivo blood flow imaging techniques. The panels to the right show a cartoon of cortical angioarchitecture for mouse, and cortical layers for mouse and rat in relation to imaging depth. BOLD fMRI, blood-oxygenation level-dependent functional magnetic resonance imaging. Network learns to disentangle the ‘vesselness’ from image formation i.e. how the vascularity looks like when viewed with different modalities Compare this to ‘clinical networks’ e.g. Jeffrey De Fauw et al. 2018 that need to handle cross-vendor differences (e.g. different OCT or MRI machines from different vendors produce slightly different images of the same anatomical structures) Shih et al. (2012)https://dx.doi.org/10.1038%2Fjcbfm.2011.196
  • 17.
    e.g. FunctionalUltrasoundImaging fasterthantypical2Pmicroscopes AlanUrban etal. (2017) Pablo Blinder’s lab https://doi.org/10.1016/j.addr.2017.07.018 Alan Urban et al. (2017) Pablo Blinder’s lab https://doi.org/10.1016/j.addr.2017.07.018 Brunner et al. (2018) https://doi.org/10.1177%2F0271678X18786359
  • 18.
    And keep inmind when going through the slides, the development of “cross- discipline” networks. e.g. 2-PM as “ground truth” for lower quality modalities such as OCT (OCT angiography for retinal microvasculature) or photoacoustic imaging thatarepossibleinclinicalworkforhumans Two-photonmicroscopic imagingofcapillaryred bloodcellfluxinmouse brainreveals vulnerabilityofcerebralwhitemattertohypoperfusion Baoqiang Li, Ryo Ohtomo, Martin Thunemann,Stephen R Adams, Jing Yang,Buyin Fu, Mohammad AYaseen , Chongzhao Ran, Jonathan R Polimeni, David A Boas, Anna Devor,Eng H Lo, Ken Arai,Sava SakadžićFirst Published March 4,2019 https://doi.org/10.1177%2F0271678X19831016 This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT) and fluorescence microscopy in one platform. - DOI: 10.1117/12.2289211 SimultaneouslyacquiredPAM,FLM,OCTandODTimagesofamouse ear.(a)PA image (average contrast-to- noise ratio 34dB);(b)OCTB-scan at the location marked in panel (e) by the solid line (displayed dynamic range,40 dB); (c)ODT B-scanatthe locationmarked in panel (e)bythe solid line; (d)FLMimage (average contrast-to-noise ratio 14dB);(e)OCT2Dprojection images generated from the acquired 3D OCT datasets; SG: Sebaceous glands; bar, 100μm.
  • 19.
  • 20.
    ‘Traditional’StructuralVascularBiomarkers #1 i.e. Youwant to analyze the changes in vascular morphology in disease, in response totreatment, etc limited by the imagination of your in-house biologist, e.g. Artery-vein (AV) ratio, branching angles, number of bifurcation, fractal dimension, tortuosity, vascular length-to-diameter ratioand wall-to-lumen length FEMmesh ofthevasculaturedisplaying arteries, capillaries,and veins. Gagnon etal. (2015)doi: 10.1523/JNEUROSCI.3555-14.2015 Cited by 93 “We created the graphs and performed image processing using a suite of custom- designed tools in MATLAB” Classical vascular analysis reveals a decrease in the number of junctions and total vessel length following TBI. (A) An axial AngioTool image where vessels (red) and junctions (blue) are displayed. Whole cortex and specific concentric radial ROIs projecting outward from the injury site (circles 1–3), were analyzed to quantify vascular alterations. (B) Analysis of the entire whole cortex demonstrated a significant reduction in the both number of junctions and in the total vessel length in TBI animals compared to sham animals. (C) TBIanimals also exhibited a significant decline in the number vascular junctions moving radially outward from the injury site (ROIs 1 to 3). Fractal analysis reveals a quantitative reduction in both vascular complexityand frequency in TBI animals. (A) A binary image of the axial vascular network of a representative sham animal with radial ROIs radiating outward from the injury or sham surgery site (ROI1–3). The right panel illustrates the complexity changes in the vasculature from the concentric circles as you move radially outward from the injury site. These fractal images are colorized based on the resultant fractal dimension with a gradient from lower local fractal dimension (LFD) in red (less complex network) to higher LFD in purple (more complex network). Traumaticbraininjuryresultsinacuterareficationof thevascularnetwork. http://doi.org/10.1038/s41598-017-00161-4 Tortuous Microvessels Contribute to Wound Healing via SproutingAngiogenesis (2017) https://doi.org/10.1161/ATVBAHA.117.309993 Multifractal and Lacunarity Analysis of Microvascular Morphology and Remodeling https://doi.org/10.1111/j.1549-8719.2010.00075.x see “Fractal and multifractal analysis: a review”
  • 21.
    ‘Traditional’StructuralVascularBiomarkers #2 Schemeillustratingtheprincipleofvascularcorrosion casts Scheme depictingthe definition of vascularbranchpoints. Each voxel of the vessel center line (black) with more than two neighboring voxels was defined as a vascular branchpoint. This results in branchpoint degrees (number of vessels joining in a certain branchpoint) of minimally three. In addition, two branchpoints were considered as a single one if the distance between them was below 2 mm. Of note, nearly all branchpoints had a degree of 3. Branchpoint degrees of four or even higher accounted together for far less than 1% of all branchpoints Scheme showing the definition of vessel diameter (a), vessel length (a), and vessel tortuosity (b). The segment diameter is defined as the average diameter of all single elements of a segment (a). The segment length is defined as the sum of the length of all single elements between two branchpoints. The segment tortuosity is the ratio between the effective distance le and the shortest distance ls between the two branchpoints associated to this segment. Schematic displaying the parameter extravascular distance, being defined as the shortest distance of any given voxel in the tissue to the next vessel structure. (b) Color map indicating the extravascular distance in the cortex of a P10 WT mouse. Each voxel outside a vessel structure is assigned a color to depict its shortest distance to the nearest vessel structure.
  • 22.
    ‘Traditional’StructuralVascularBiomarkers #3: InClinical context,youcansee that incertaindisease (by vascularpathologies, or by yourpathology Xthatyou are interestedin), the connectivity oftextbook case mightget altered.Andthenyouwantto quantify thischange asa function ofdisease severity,pharmacological treatment,otherintervention. RelationshipbetweenVariations in theCircleofWillis andFlowRates inInternalCarotidandBasilar Arteries DeterminedbyMeans of MagneticResonanceImagingwith SemiautomatedLumen Segmentation:ReferenceData from125 Healthy Volunteers H. Tanaka, N. Fujita, T. Enoki, K. Matsumoto, Y. Watanabe, K. Murase and H. NakamuraAmerican Journal of Neuroradiology September 2006, 27 (8) 1770-1775; https://www.ncbi.nlm.nih.g ov/pubmed/16971634 Cited by 124 - Related articles
  • 23.
    ‘Traditional’FunctionalVascularBiomarkers #1 Blood flow-based biomarkers spatiotemporal (graph) deep learning model needed.See forsome → fMRI literatureorpoach someone from Über. C, Blood flow distribution simulated across the vascular network assuming a global perfusion value of 100 ml/min/100 g. D, Distribution of the partial pressure of oxygen (pO2 ) simulated across the vascular network using the finite element method model. E, TPM experimental measurements of pO2 in vivo using PtP-C343 dye. F, Quantitative comparison of simulated and experimental pO2 and SO2 distributions across the vascular network for a single animal. Traces represent arterioles and capillaries (red) and venules and capillaries (blue) as a function of the branching order from pial arterioles and venules, respectively. doi: 10.1523/JNEUROSCI.3555-14.2015 Cited by93 F, Vessel type. G, Spatiotemporal evolution of simulated SO2 changes following forepaw stimulus.
  • 24.
    ‘Traditional’FunctionalVascularBiomarkers #2 Time-averaged velocitymagnitudes of a measurement region are shown, together with with the corresponding skeleton (black line), branch points (white circles), and end points (gray circles). The flow enters the measurement region from theright. Notethat anon-linearcolor scalewas used forthevelocity magnitude. Multiple parabolic fits at several locations on the vessel centerline were performed to obtain a single characteristic velocity and diameter for each vessel segment. The time-averaged flow rate is assumed constant throughout the vessel segment. The valid region is bounded by 0.5 and 1.5×the median flow rate, and the red-encircled data points were not incorporated, due to a strongly deviating flow rate. Note that the fitted diameters and flow rates for the two data points on the far rightare too large to be visible in the graph. QuantificationofBloodFlowandTopologyinDevelopingVascularNetworks Astrid Kloosterman, Beerend Hierck, Jerry Westerweel, Christian Poelma Published: May 13, 2014 https://doi.org/10.1371/journal.pone.0096856
  • 25.
    Vasculatureimagingandvideooximetry Methods forcalculating retinalbloodvesseloxygen saturation (sO2) by(a)thetraditional LSF,and (b) ourneuralnetwork-based DSLwith uncertainty quantification. Deep spectrallearningfor label-freeopticalimaging oximetrywithuncertaintyquantification RongrongLiu,ShiyiCheng,Lei Tian,Ji Yi https://doi.org/10.1101/650259 Traditional approaches for quantifying sO2 often rely on analytical models that are fitted by the spectral measurements. These approaches in practice suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL) for oximetry to be highly robust to experimental variations, and more importantly to provide uncertainty quantification for each sO2prediction. Two-photon phosphorescence lifetime microscopyofretinalcapillaryplexus oxygenation in mice IkbalSencan; Tatiana V. Esipova;MohammadA. Yaseen;Buyin Fu;DavidA. Boas; Sergei A. Vinogradov; MahnazShahidi; Sava Sakadžic https://doi.org/10.1117/1.JBO.23.12.126501
  • 26.
    NeurovascularDiseaseResearch functioningofthe“neurovascularunit”(NVU) isofinterest Example of two-photon microscopy (TPM). The TPM provides high spatial resolution images such as angiogram (left, scale bar: 100 lm) and multi-channel images, such as endothelial glycocalyx (green) with bloodflow(red,scalebar: 10lm) Intermsofdeep learning, you might think of multimodal/channel models and “context dependent” localization of dye signals Yoon and Yong Jeong (2019) https://doi.org/10.1007/s12272-019-01128-x
  • 27.
  • 28.
    Computationalhemodynamicanalysis requiresegmentationswithnogaps Towardsaglaucomariskindexbasedonsimulatedhemodynamics fromfundusimages José IgnacioOrlando,JoãoBarbosa Breda, Karelvan Keer, Matthew B. Blaschko, PabloJ. Blanco, CarlosA. Bulant https://arxiv.org/abs/1805.10273 (revised27 Jun 2018) https://ignaciorlando.github.io./ It has been recently observed that glaucoma induces changes in the ocular hemodynamics ( Harris et al. 2013; Abegão Pinto et al. 2016). However, its effects on the functional behavior of the retinal arterioles have not been studied yet. In this paper we propose a first approach for characterizing those changes using computational hemodynamics. The retinal blood flow is simulated using a 0D model for a steady, incompressible non Newtonian fluid in rigid domains. Finally, our MATLAB/C++/python code and the LES-AV database are publicly released. To the best of our knowledge, our data set is the first in providing not only the segmentations of the arterio-venous structures but also diagnostics and clinical parameters at an image level. (a)Multiscaledescriptionofneurovascular coupling in theretina. The modelinputsatthe Macroscale (A) are the bloodpressuresatthe inletand outletof the retinalcirculation, Pin andPout. The Mesoscale (B) focuses on arterioles, whosewalls comprise endotheliumandsmooth muscle cells.The Microscale (C) entails the biochemistryatthe cellular levelthatgoverns the change in smooth muscle shape.(b)
  • 29.
    Voxel Mesh → conversion“trivial”withcorrectsegmentation/graphmodel DeepMarchingCubes:LearningExplicitSurface Representations Yiyi Liao, Simon Donńe, Andreas Geiger (2018) https://avg.is.tue.mpg.de/research_projects/deep-marching-cubes http://www.cvlibs.net/publications/Liao2018CVPR.pdf https://www.youtube.com/watch?v=vhrvl9qOSKM Moreover, we showed that surface-based supervision results in better predictions in case the ground truth 3D model is incomplete. In future work, we plan to adapt our method to higher resolution outputs using octrees techniques [Häne et al. 2017; Riegler et al. 2017; Tatarchenko et al. 2017] and integrate our approach with other input modalities Learning3DShapeCompletionfromLaserScanDatawithWeakSupervision David Stutz, Andreas Geiger (2018) http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/1708.pdf Deep-learning-assistedVolumeVisualization Hsueh-Chien Cheng, Antonio Cardone, Somay Jain, Eric Krokos, Kedar Narayan, Sriram Subramaniam, Amitabh Varshney IEEE Transactions on Visualization and Computer Graphics ( 2018) https://doi.org/10.1109/TVCG.2018.2796085 Although modern rendering techniques and hardware can now render volumetric data interactively, we still need a suitablefeaturespace that facilitates naturaldifferentiationof target structures andan intuitive and interactive way of designing visualizations
  • 30.
    Motivation Some scriptability availablefor ImageJ in many languages https://imagej.net/Scripti ng Imaris had to listen to their customers but still closed-source with poor → integration to 3rd party code ITK does someone still use? Howabout‘scaling’allyourandothers’ manualworkforanautomaticsolution? → data-driven vascularsegmentation
  • 31.
    ‘Downstream uncertainty’ reducedwith near-perfectvoxelsegmentation Influenceofimagesegmentationonone-dimensional fluiddynamicspredictionsinthemousepulmonary arteries Mitchel J. Colebank, L. Mihaela Paun, M. Umar Qureshi, Naomi Chesler, Dirk Husmeier, Mette S. Olufsen, Laura Ellwein Fix NC State University, UniversityofGlasgow, University of Wisconsin-Madison, Virginia Commonwealth University, (Submitted on 14 Jan 2019 https://arxiv.org/abs/1901.04116 Computational fluid dynamics (CFD) models are emerging as tools for assisting in diagnostic assessment of cardiovascular disease. Recent advances in image segmentation has made subject-specific modelling of the cardiovascular system a feasible task, which is particularly important in the case of pulmonary hypertension (PH), which requires a combination of invasive and non-invasive procedures for diagnosis. Uncertainty in image segmentation can easily propagate to CFD model predictions, making uncertainty quantification crucial for subject-specific models. This study quantifies the variability of one-dimensional (1D) CFD predictions by propagating the uncertainty of network geometry and connectivity to blood pressure and flow predictions. We analyse multiple segmentations of an image of an excised mouse lung using different pre-segmentation parameters. A custom algorithm extracts vessel length, vessel radii, and network connectivity for each segmented pulmonary network. We quantify uncertainty in geometric features by constructing probability densities for vessel radius and length, and then sample from these distributions and propagate uncertainties of haemodynamic predictions using a 1D CFD model. Results show that variation in network connectivity is a larger contributor to haemodynamic uncertainty than vessel radius and length.
  • 32.
    ‘Measurement uncertainties’ propagatetoyourdeeplearningmodelsaswell Arnoldet al. (2017) Uncertainty Quantification in a Patient-Specific One- Dimensional Arterial Network Model: ensemble Kalman filter (EnKF)-Based Inflow Estimator http://doi.org/10.1115/1.4035918 Marquis et al. (2018) Practical identifiability and uncertainty quantification of a pulsatile cardiovascular model https://doi.org/10.1016/j.mbs.2018.07.001 Mathematical models are essential tools to study how the cardiovascular system maintains homeostasis. The utility of such models is limited by the accuracy of their predictions, which can be determined by uncertainty quantification (UQ). A challenge associated with the use of UQ is that many published methods assume that the underlying model is identifiable (e.g. that a one-to-one mapping exists from the parameter space to the model output). Păun et al. (2018) MCMC methods for inference in a mathematical model of pulmonary circulation https://doi.org/10.1111/stan.12132 The Delayed Rejection Adaptive Metropolis (DRAM) algorithm, coupled with constraint non‐ linear optimization, is successfully used to learn the parameter values and quantify the uncertaintyin the parameter estimates Schiavazzi et al. (2017) A generalized multi-resolution expansion for uncertainty propagation with application to cardiovascular modeling https://dx.doi.org/10.1016%2Fj.cma.2016.09.024 A general stochastic system may be characterized by a large number of arbitrarily distributed and correlated random inputs, and a limited support response with sharp gradients or event discontinuities. This motivates continued research into novel adaptive algorithms for uncertainty propagation, particularly those handling high dimensional, arbitrarily distributed random inputs and non-smoothstochasticresponses. Sankaran and Marsdenal. (2011) A stochastic collocation method for uncertainty quantification and propagation in cardiovascular simulations. http://doi.org/10.1115/1.4003259 In this work, we develop a general set of tools to evaluate the sensitivity of output parameters to input uncertainties in cardiovascular simulations. Uncertainties can arise from boundary conditions, geometrical parameters, or clinical data. These uncertainties result in a range of possible outputs which are quantified using probabilitydensity functions (PDFs). Tran et al. (2019) Uncertainty quantification of simulated biomechanical stimuli in coronary artery bypass grafts https://doi.org/10.1016/j.cma.2018.10.024 Prior studies have primarily focused on deterministic evaluations, without reporting variability in the model parameters due to uncertainty. This study aims to assess confidence in multi- scale predictions of wall shear stress and wall strain while accounting for uncertainty in peripheral hemodynamics and material properties. Boundary condition distributions are computed by assimilating uncertain clinical data, while spatial variations of vessel wall stiffness are obtained through approximation by a random field. We developed a stochastic submodeling approach to mitigate the computational burden of repeated multi-scale model evaluations to focus exclusively on the bypass grafts. Yin et al. (2019) One-dimensional modeling of fractional flow reserve in coronary artery disease: Uncertainty quantification and Bayesian optimization https://doi.org/10.1016/j.cma.2019.05.005 The computational cost to perform three-dimensional (3D) simulations has limited the use of CFD in most clinical settings. 
This could become more restrictive if one aims to quantify the uncertainty associated with fractional flow reserve (FFR) calculations due to the uncertainty in anatomic and physiologic properties as a significant number of 3D simulations is required to sample a relatively large parametric space. We have developed a predictive probabilistic model of FFR, which quantifies the uncertainty of the predicted values with significantly lower computational costs. Based on global sensitivity analysis, we first identify the important physiologic and anatomic parameters thatimpact the predictions of FFR
  • 33.
  • 34.
    Neuronalbranching graphs #1 Explicitrepresentation of a neuron model. (left) The network can be represented as a graph structure, where nodes are end points and branch points. Each fiber is represented by a single edge. (right) The same networkisshown withseveral commonerrorsintroduced. Dendrograms Representation of brain vasculature using circular dendrograms A Method for the Symbolic Representation of Neurons Maraver et al. (2018) https://doi.org/10.3389/fnana.2018.00106 NetMets: Software for quantifying and visualizing errors in biological network segmentation Mayerich et al. (2012) http://doi.org/10.1186/1471-2105-13-S8-S7
  • 35.
    Neuronalbranching graphs #2 Topologicalcharacterization of neuronal arbor morphology via sequence representation: I - motif analysis Todd A Gillette and Giorgio A Ascoli BMC Bioinformatics 2015 https://doi.org/10.1186/s12859-015-0604-2 “Grammar model” for deep learning? Tree size and complexity. a. Complexity of trees is limited by tree size. Here are shown the set of possible tree shapes for trees with 1 to 6 bifurcations. Additionally, the number of T nodes (red dots in sample trees) is always 1 more than A nodes (green dots). Thus, size and number or percent of C nodes (yellow dots) fully captures node-type statistics.
  • 36.
    Neuronalbranching graphs #3 NeurphologyJ:An automatic neuronal morphology quantification method and its application in pharmacological discovery Shinn-Ying Ho et al. BMC Bioinformatics201112:230 https://doi.org/10.1186/1471-2105-12-230 Image enhancement process of NeurphologyJ does not remove thin and dim neurites. Shown here is an example image of mouse hippocampal neurons analyzed by NeurphologyJ. Notice that both thick neurites and thin/dim neurites (arrowheads) are preserved after the image enhancement process.The scale bar represents 50 μm. Neuritecomplexity can bededucedfrom neurite attachment pointandending point.Examples of neuronswithdifferent levelsofneurite complexityare shown
  • 37.
    NeuronalCircuitTracing Similartoourchallenges#1 FlexibleLearning-FreeSegmentationand ReconstructionforSparseNeuronalCircuitTracing AliShahbazi, Jeffery Kinnison, Rafael Vescovi, Ming Du, Robert Hill, Maximilian Joesch, Marc Takeno, Hongkui Zeng, Nuno Macarico da Costa, Jaime Grutzendler, Narayanan Kasthuri, Walter J. Scheirer July 06, 2018 https://doi.org/10.1101/278515 FLoRIN reconstructions of the Standard Rodent Brain (SRB) (top) and APEX2-labeled Rodent Brain sample (ALRB) (bottom) µCT X-ray volumes. (A) Within the SRB volume, cells and vasculature are visually distinct in the raw images, with vasculature appearing darker than cells. (B) Individualstructuresmay be extremely close(such as the cells and vasculature in this example), making reconstruction efforts prone to mergeerrors.
  • 38.
    NeuronalCircuitTracing Similartoourchallenges#2 DenseneuronalreconstructionthroughX-rayholographicnano-tomography Alexandra Pacureanu,Jasper Maniates-Selvin, Aaron T. Kuan, Logan A. Thomas, Chiao-Lin Chen, Peter Cloetens, Wei-Chung Allen Lee May 30, 2019. https://doi.org/10.1101/653188 3D U-NET everywhere
  • 39.
  • 40.
    Thinkintermsofsystems The machine learningmodel is just a part of all this in your labs Atonof stacksjust sitting on your hard drives Takesalot of workto annotate the vasculature voxel-by-voxel “AI” buzzword MODEL The following slides will showcase variouswaysof how thisbuzzhas been done “in practice” Aspoiler:We would like to have a semi- supervised model. doi: 10.1038/s41592-018-0115-y doi: 10.1038/s41592-018-0115-y We want to predict the vessel / non- vessel mask* for each voxel * (i.e. foreground- background, binary segmentation)
  • 41.
    PracticalSystemsParts Highlighted later onas well: Active Learning Atonof stacksjust sitting on your hard drives Takesalot of workto annotate the vasculature voxel-by-voxel “AI” buzzword MODEL doi: 10.1038/s41592-018-0115-y doi: 10.1038/s41592-018-0115-y Youwould liketo keep researchersintheloopwith thesystem and make It better as you do more experiments and acquire more data. But you have so many stacks on your hard drive that howdo youselectthestacks/slices thatyoushouldselectin orderto improvethemodel themost? Check the ActiveLearning slides later
  • 42.
    PracticalSystemsParts Highlighted later onas well: Proofreading We want to predict the vessel / non- vessel mask* for each voxel * (i.e. foreground- background, binary segmentation) “AI” buzzword MODEL Yoursegmentationmodel willmake100%some erroneouspredictions and you would like to “show the errors” to the system so it can learn from them and predict better next time
  • 43.
    Proof- reading Labelling Thinkingintermsof a product Ifyou would like to release this all as an open-source software/toolbox or a as a spin-off startup, instead of just sharing your network on Github “AI” buzzword MODEL Active Learning TheFinalMask You could now expose APIs to the parts needed, and get a modular system where you can focus on segmentation and maybe your collaborators are really into building good front-ends for proofreading and labelling?
  • 44.
    Annotateddataasthebottleneck Even with thesemi-supervised approach, you won’t most likely face a situation where you have too many volumes with vasculature ground truths Thus The faster and more intuitive your proofreader / annotator / labelling tool is, The faster you can make progress with your model performance. →UX Matters UX as in User Experience, as most likely your professor has never used this word. https://hackernoon.com/why-ux-design-must-be-the-foundation-of-your-software-product-f66e431cc7b4
  • 45.
    ‘Stealideas’fornicetousesystemsaroundyou Voxeleron OrionWorkflow Advantages https://www.voxeleron.com/orion-workflow-advantages/ Clickon your inliers/outliers interactively and Orion updates the spline fittings for you in real-time Polygon-RNN https://youtu.be/S1UUR4FlJ84 https://github.com/topics /annotation-tool ● wkentaro/ labelme ● Labelbox / Labelbox ● microsoft /VoTT ● opencv /cvat
  • 46.
    Makebiologyagamegamification tickedofffromthebuzzwordbingohere https://doi.org/10.1016/j.chb.2016.12.0 74 Eyewire EliteGameplay | https://eyewire.org/explore https://phys.org/news/2019-06-video-gamers-brand-proteins.html
  • 47.
    Theslidesetto follow willallowmultiplewaystosolvethe segmentationchallenge,andaswellto startbuilding the “product”inmodules “ablation study friendly” ,sononeedto tryto makeitallatonce... necessarily Bio/neuroscientists, can have a look of this classic Can a biologist fix a radio?—Or, what I learned while studying apoptosis https://doi.org/10.1016/S1535-6108(02)00133-2- Citedby 371
  • 48.
    Integratetosomethingand exploittheexistingopen-sourcecode USIDandPycroscopy--Openframeworksforstoringand analyzingspectroscopicandimagingdataSuhas Somnath,ChrisR. Smith, Nouamane Laanait, Rama K. Vasudevan,Anton Ievlev,Alex Belianinov,AndrewR. Lupini, Mallikarjun Shankar, Sergei V.Kalinin, Stephen JesseOak Ridge National Laboratory (Submitted on 22 Mar 2019) https://arxiv.org/abs/1903.09515 https://www.youtube.com/channel/UCyh-7XlL-BuymJD7vdoNOvw pycroscopy https://pycroscopy.github.io/pycroscopy/about.html pycroscopy is a python package for image processing and scientific analysis of imaging modalities such as multi-frequency scanning probe microscopy, scanning tunneling spectroscopy, x-ray diffraction microscopy, and transmission electron microscopy. pycroscopy uses a data-centric model wherein the raw data collected from the microscope, results from analysis and processing routines are all written to standardized hierarchical data format (HDF5) files for traceability, reproducibility,and provenance. OME https://www.openmicroscopy.org/ Har-Gil, H., Golgher, L., Israel, S., Kain, D., Cheshnovsky, O., Parnas, M., & Blinder, P. (2018). PySight: plug and play photon counting for fast continuous volumetric intravital microscopy. Optica, 5(9), 1104-1112. https://doi.org/10.1364/OPTICA.5.001104
  • 49.
    Integratetosomethingand exploittheexistingopen-sourcecode VMTK http://www.vmtk.org/ byOrobix https://github.com/vmtk/vmtk Python3 VMTKADD-ON FORBLENDER November 13, 2017 EPFL has developed an add-on for Blender that loads centerlines generated by VMTKinto Blender,and writes meshes from Blender. http://www.vmtk.org/tutorials/
  • 50.
    Youcould for exampleimprovethesegmentation tobeused with VMTKletVMTK/Blenderstillvisualizethestacks For example, we could start by doing this the “deep learning” way outlined on this slideshow If you feel that this do not really work well for your needs, you can work on this, or ask for improvements from Orobix team
  • 51.
    Blender integrationwithmeshes? BioBlender isa software package built on the open-source 3D modeling software Blender. BioBlender is the result of a collaboration, driven by the SciVis group at the CNR in Pisa (Italy), between scientists of different disciplines (biology, chemistry, physics, computer sciences) and artists, using Blender in a rigorous but at the same time creative way. http://www.bioblender.org/ https://github.com/mcellteam/cellblen der https://github.com/NeuroMorph-EPFL/NeuroM orph/tree/master/NeuroMorph_CenterLines_Cr ossSections Processes center lines generated by the Vascular Modeling Toolkit (VMTK), perform calculations in Blender using these center lines. Includes tools to clean meshes, export meshes to VMTK, and import center lines generated by VMTK. Also includes tools to generate cross-sectional surfaces, calculate surface areas of the mesh along the center line, and project spherical objects (such as vesicles) or surface areas onto the center line. Tools are also provided for detectingbouton swellings. Data can be exportedfor analysis.
  • 52.
  • 53.
    FluorescenceMicroscopy networksexistfor“smallerblobs” DeepFLaSH,adeeplearningpipelineforsegmentationof fluorescentlabelsinmicroscopyimages Dennis Segebarthet al. November 2018 https://doi.org/10.1101/473199 Here we present and evaluate DeepFLaSH, a unique deep learning pipeline to automatize the segmentation of fluorescent labels in microscopy images. The pipeline allows training and validation of label- specific convolutional neural network (CNN) models that can be uploaded to an open-source CNN-model library. As there is no ground truth for fluorescent signal segmentation tasks, we evaluated the CNN with respect to inter-coding reliability. Similarity analysis showed that CNN-predictions highly correlated with segmentations by human experts. DeepFLaSH runs as a guided, hassle-free open-source tool on a cloud-based virtual notebook (Google Colab http://colab.research.google.com, in a Jupyter Notebook) with free access to high computing power and requires no machinelearningexpertise. Label-specific CNN-models, validated on base of inter-coding approaches may become a new benchmark for feature segmentation in neuroscience. These models will allow transferring expert performance in image feature analysis from one lab to any other. Deep segmentation can better interpret feature-to-noise borders, can work on the whole dynamic range of bit-values and exhibits consistent performance. This should increase both, objectivity and reproducibility of image feature analysis. DeepFLaSH is suited to create CNN- models for high-throughput microscopy techniques and allows automatic analysis of large image datasets with expert-like performance and at super- human speed. With a nice notebook deployment example
  • 54.
    VasculatureNetworks Multimodali.e.“multidye” 3DCNNsifpossible HyperDense-Net:A hyper-denselyconnected CNN formulti-modal image segmentation Jose Dolz https://arxiv.org/abs/1804.02967(9 April 2018) https://www.github.com/josedolz/HyperDenseNe t We propose HyperDenseNet, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems [MRI Modalities: MR-T1, PD MR-T2, FLAIR]. Each imaging modality has a path, and dense connections occur not only between the pairs of layers within the same path, but also between those across different paths. A multimodal imaging platform with integrated simultaneousphotoacousticmicroscopy, optical coherencetomography,optical Doppler tomography and fluorescence microscopy Arash Dadkhah; Jun Zhou; Nusrat Yeasmin; Shuliang Jiao https://sci-hub.tw/https://doi.org/10.1117/12.2289211 (2018) Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional and molecular information of living tissue in micrometer scale. An artery-specificfluorescent dye for studying neurovascularcoupling Zhiming Shen, Zhongyang Lu, Pratik Y Chhatbar, Philip O’Herron, and Prakash Kara https://dx.doi.org/10.1038%2Fnmeth.1857(2012) Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional and molecular information of living tissue in micrometer scale. Astrocytes are intimatelylinked to the function of the inner retinalvasculature. A flat-mounted retina labelled for astrocytes (green) and retinal vasculature (pink). - from Prof Erica Fletcher
  • 55.
    Multimodalsegmentation glialcells,A fibrils,etc.provide β ‘context’for vasculatureandviceversa Diffuse and vascular A deposits induce astrocyte endfeet retraction and swelling in β TG arcA mice, starting at early-stage pathology. β Triple-stained for GFAP, laminin and A /APP. β https://doi.org/10.1007/s00401-011-0834-y In vivo imagingof theneurovascular unit in Stroke,Multiple Sclerosis (MS) and Alzheimer’s Disease. InvivoimagingoftheneurovascularunitinCNS disease https://www.researchgate.net/publication/265418103_In_vivo_i maging_of_the_neurovascular_unit_in_CNS_disease
  • 56.
    NeurovascularUnit(NVU) astrocyte /neuron/vasculatureinterplay (A)Immunostainingdepiction of components of the neurovascularunit (NVU). The astrocytes (stained with rhodamine labeled GFAP)shown in red. The neurons are stained withfluorescein tagged NSE shown in green and the blood vessels are stained with PECAM shown in blue.Note the location of the foot processes around the vasculature. (B)Histochemical localization of - β galactosidase expression in rat brain following lateral ventricular infusion of Ad5/CMV- -galactosidase (magnification β × 1000).Note staining of astrocytes and astrocytic footprocesses surrounding blood vessel emulating the exploded section of the immunostained brain slice A B Schematicrepresentation ofaneurovascular unit with astrocytes being thecentral processorof neuronalsignals as depicted in both panels A and panel B. Harder et al. (2018) Regulationof Cerebral BloodFlow:ResponsetoCytochrome P450LipidMetabolites http://doi.org/10.1002/cphy.c170025
  • 57.
    NVU examplesof dyes/labelsinvolved#1 CALCIUMOGB-1 Neuron CA2+ ASTROCYTE SR-101 Astrocytic CA2+ ARTERY AlexaFluor 633 or FITC/TexasRed Vessel diameter Neuron (OGB-1) and arteriole response (Alexa Fluor 633) to drifting grating in cat visual cortex. https://dx.doi.org/10 .1038%2Fnmeth.185 7 Low-intensity afferent neural activity caused vasodilation in the absence of astrocyte Ca2+ transients. https://dx.doi.org/10.1038%2Fjcbfm.2015.141 Astrocytes trigger rapid vasodilation following photolysis of caged Ca+. https:/ /dx.doi.org/10.3389%2Ffnins .2014.00103
  • 58.
    NVU“PhysicalCheating”forartery-veinclassification https://doi.org/10.1016/j.rmed.2013.02.004 https://doi.org/10.1182/blood-2018-01-824185 https://doi.org/10.1364/BOE.9.002056 http://doi.org/10.5772/intechopen.80888 Traces of relativeHb and HbO2 concentrations for a human subject during three consecutive cycles of cuff inflation and deflation. http://doi.org/10.1063/1.3398450 sO2 and blood flow on four orders of artery-vein pairs http://doi.org/10.1117/1.3594786
NVU: Oxygen Probes for Multiphoton Microscopy
Examples of in vivo two-photon PLIM oxygen sensing with platinum porphyrin-coumarin-343. (a) Maximum intensity projection image montage of a blood vessel entering the bone marrow (BM) from the bone. Bone (blue) and blood vessels (yellow) are delineated with collagen second harmonic generation signal and Rhodamine B-dextran fluorescence, respectively. (b) Measurement of pO2 in cortical microvasculature. Left: measured pO2 values in microvasculature at various depths (colored dots), overlaid on the maximum intensity projection image of the vasculature structure (grayscale). Right: composite image showing a projection of the imaged vasculature stack. Red arrows mark pO2 measurement locations in the capillary vessels at 240 μm depth. Orange arrows point to the consecutive branches of the vascular tree, from pial arteriole (bottom left arrow) to the capillary and then to the connection with the ascending venule (top right arrow). Scale bars: 200 μm. Chelushkin and Tunik (2019) 10.1007/978-3-030-05974-3_6
Devor et al. (2012) Frontiers in optical imaging of cerebral blood flow and metabolism http://doi.org/10.1038/jcbfm.2011.195
Optical imaging of oxygen availability and metabolism. (A) Two-photon partial pressure of oxygen (pO2) imaging in cerebral tissue. Each plot shows baseline pO2 as a function of the radial distance from the center of the blood vessel—diving arteriole (left) or surfacing venule (right)—for a specific cortical depth range.
Dye Engineering: a field of its own, so check for new exciting dyes
Bright AIEgen–Protein Hybrid Nanocomposite for Deep and High-Resolution In Vivo Two-Photon Brain Imaging
Shaowei Wang, Fang Hu, Yutong Pan, Lai Guan Ng, Bin Liu. Department of Chemical and Biomolecular Engineering, National University of Singapore. Advanced Functional Materials, 24 May 2019 https://doi.org/10.1002/adfm.201902717
NIR-II Excitable Conjugated Polymer Dots with Bright NIR-I Emission for Deep In Vivo Two-Photon Brain Imaging Through Intact Skull
Shaowei Wang, Jie Liu, Guangxue Feng, Lai Guan Ng, Bin Liu. Department of Chemical and Biomolecular Engineering, National University of Singapore. Advanced Functional Materials, 21 January 2019 https://doi.org/10.1002/adfm.201808365
When Quantum Dots get old, enter Polymer Dots
In vivo vascular imaging in mice after labelling with polymer dots (CNPPV, PFBT, PFPV), fluorescein and QD605 semiconductor quantum dots; scale bars = 100 µm. (Biomed. Opt. Express 10.1364/BOE.10.000584, University of Texas, Ahmed M. Hassan et al. (2019)) https://physicsworld.com/a/polymer-dots-image-deep-into-the-brain/
Furthermore, we justify the use of pdots over conventional fluorophores for multiphoton imaging experiments in the 800–900 nm excitation range due to their increased brightness relative to quantum dots, organic dyes, and fluorescent proteins. An important caveat to consider, however, is that pdots were delivered intravenously in our studies, and labeling neural structures located in high-density extravascular brain tissue could pose a challenge due to the relatively large diameters of pdots (~20-30 nm). Recent efforts have produced pdot nanoparticles with sub-5 nm diameters, yet the yield from these preparations is still quite low.
What if you have the ‘dye labels’ from different experiments, and you would like to combine them into the training of a single network?
Learning with Multitask Adversaries using Weakly Labelled Data for Semantic Segmentation in Retinal Images
Oindrila Saha, Rachana Sathish, Debdoot Sheet. 13 Dec 2018 (modified: 15 Apr 2019) https://openreview.net/forum?id=HJe6f0BexN
In the case of retinal images, data-driven learning-based algorithms have been developed for segmenting anatomical landmarks like vessels and the optic disc as well as pathologies like microaneurysms, hemorrhages, hard exudates and soft exudates. The aspiration is to learn to segment all such classes using only a single fully convolutional neural network (FCN), while the challenge is that there is no single training dataset with all classes annotated. We solve this problem by training a single network using separate weakly labelled datasets. Essentially we use an adversarial learning approach in addition to the classically employed objective of distortion loss minimization for semantic segmentation using an FCN, where the objectives of the discriminators are to learn to (a) predict which of the classes are actually present in the input fundus image, and (b) distinguish between manual annotations vs. segmented results for each of the classes. The first discriminator enforces the network to segment those classes which are present in the fundus image although they may not have been annotated, i.e. all retinal images have vessels while pathology datasets may not have annotated them. The second discriminator contributes to making the segmentation result as realistic as possible. We experimentally demonstrate this using the weakly labelled datasets DRIVE, containing only annotations of vessels, and IDRiD, containing annotations for lesions and the optic disc.
Overview of the Methods: blood vessels as a special example of curvilinear structure segmentation
Blood vessel segmentation algorithms — Review of methods, datasets and evaluation metrics
Sara Moccia, Elena De Momi, Sara El Hadji, Leonardo S. Mattos. Computer Methods and Programs in Biomedicine, May 2018 https://doi.org/10.1016/j.cmpb.2018.02.001
No single segmentation approach is suitable for all the different anatomical regions or imaging modalities; thus the primary goal of this review was to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable method can be chosen according to the specific task.
U-Net: you will see this repeated many times
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer, Thomas Brox (Submitted on 18 May 2015) https://arxiv.org/abs/1505.04597 Cited by 77,660
U-Net: deep learning for cell counting, detection, and morphometry
Thorsten Falk et al. (2019) Nature Methods 16, 67–70 (2019) https://doi.org/10.1038/s41592-018-0261-2 Cited by 1,496
The ‘vanilla U-Net’ is typically the baseline to beat in many articles, and its modified versions keep being proposed as the novel state-of-the-art network. https://towardsdatascience.com/u-net-b229b32b4a71
The architecture looks like a ‘U’, which justifies its name. It consists of three sections: the contraction (encoder, downsampling path), the bottleneck, and the expansion (decoder, upsampling path), with skip connections linking encoder and decoder stages.
U-Net 2D example: a 572×572 px input is reduced through four downsampling “stages” in the encoder (each with 2×2 max pooling) to 32×32 px feature maps at the bottleneck, then restored through four upsampling “stages” in the decoder. Skip connections pass the activation maps of each encoder stage to the corresponding decoder stage: 1st encoder stage to the final (4th) decoder stage, 2nd to the 3rd, 3rd to the 2nd, and 4th to the 1st.
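As a minimal sketch of this encoder/bottleneck/decoder pattern in PyTorch: two stages per side instead of four to keep it short, and padded convolutions so the skip connections align without the cropping the original valid-convolution U-Net needs. The class name TinyUNet and channel widths are our own illustrative choices.

import torch
import torch.nn as nn

def block(cin, cout):
    # two 3x3 conv + ReLU, the basic U-Net stage
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1, w=16):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, w), block(w, 2 * w)
        self.pool = nn.MaxPool2d(2)
        self.bott = block(2 * w, 4 * w)
        self.up2 = nn.ConvTranspose2d(4 * w, 2 * w, 2, stride=2)
        self.dec2 = block(4 * w, 2 * w)
        self.up1 = nn.ConvTranspose2d(2 * w, w, 2, stride=2)
        self.dec1 = block(2 * w, w)
        self.head = nn.Conv2d(w, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                # encoder stage 1
        e2 = self.enc2(self.pool(e1))                    # encoder stage 2
        b = self.bott(self.pool(e2))                     # bottleneck
        d2 = self.dec2(torch.cat([self.up2(b), e2], 1))  # skip: e2 -> dec2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], 1)) # skip: e1 -> dec1
        return self.head(d1)                             # per-pixel logits

print(TinyUNet()(torch.rand(1, 1, 64, 64)).shape)        # torch.Size([1, 1, 64, 64])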
2D retinal vasculature: 2D U-Net as the “baseline”
Retina blood vessel segmentation with a convolutional neural network (U-Net): orobix/retina-unet (Keras) http://vmtklab.orobix.com/ https://orobix.com/ — as in the company from Italy behind VMTKLab
Joint segmentation and vascular reconstruction: marry CNNs with graph (non-Euclidean) CNNs, “grammar models”, or something even better
Deep Vessel Segmentation By Learning Graphical Connectivity
Seung Yeon Shin, Soochahn Lee, Il Dong Yun, Kyoung Mu Lee https://arxiv.org/abs/1806.02279 (Submitted on 6 Jun 2018)
We incorporate a graph convolutional network into a unified CNN architecture, where the final segmentation is inferred by combining the different types of features. The proposed method can be applied to expand any type of CNN-based vessel segmentation method to enhance the performance. Learning about the strong relationship that exists between neighborhoods is not guaranteed in existing CNN-based vessel segmentation methods. The proposed vessel graph network (VGN) utilizes a GCN together with a CNN to address this issue. The overall network architecture of VGN comprises the CNN, graph convolutional network, and inference modules.
“Grammar” as in: if you know how molecules are composed (e.g. the SMILES model), you can constrain the model to have only physically possible connections. Well, we do not exactly have that luxury, and we need to learn the graph constraints from data (but have no annotations at the moment for edge nodes). Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules http://doi.org/10.1021/acscentsci.7b00572 — some authors from Toronto, including David Duvenaud
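To make the CNN + GCN marriage concrete, a minimal Kipf & Welling-style graph convolution over vessel-graph nodes might look like the following. This is a sketch only; the actual VGN wiring differs, and the function name and toy chain graph are our own.

import torch

def gcn_layer(X, A, W):
    # One graph convolution: X (N,F) node features (e.g. CNN features
    # sampled at candidate vessel points), A (N,N) binary adjacency
    # along the tentative vessel graph, W (F,F_out) trainable weights.
    A_hat = A + torch.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return torch.relu(A_norm @ X @ W)                  # propagate + mix

# toy usage: 5 nodes in a chain (a short vessel segment), 8-dim features
N, F_in, F_out = 5, 8, 4
A = torch.diag(torch.ones(N - 1), 1)
A = A + A.T                                            # symmetric chain adjacency
X, W = torch.rand(N, F_in), torch.rand(F_in, F_out)
print(gcn_layer(X, A, W).shape)                        # torch.Size([5, 4])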
“Grammar models” possible to a certain extent: remember that healthy and pathological vasculature might be “quite different” (a highly quantitative term)
Mitchell G. Newberry et al. Self-Similar Processes Follow a Power Law in Discrete Logarithmic Space, Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.122.158303
Although blood vessels also branch dichotomously, random asymmetry in branching disperses vessel diameters from any specific ratios. On a database of 1569 blood vessel radii measured from a single mouse lung, αc and αd produced statistically indistinguishable estimates (Table I), independent of the chosen λ, and are therefore both likely accurate. The mutual consistency between the estimators suggests that the distribution of blood vessel measurements is effectively scale invariant despite the underlying branching.
Quantitating the Subtleties of Microglial Morphology with Fractal Analysis. Frontiers in Cellular Neuroscience 7(3):3 http://doi.org/10.3389/fncel.2013.00003
Grammar, as you can guess, is used in language modeling
Kim Martineau | MIT Quest for Intelligence, May 29, 2019 http://news.mit.edu/2019/teaching-language-models-grammar-makes-them-smarter-0529
Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State
Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy (Submitted on 8 Mar 2019) https://arxiv.org/abs/1903.03260
We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state. To do so, we examine model behavior on artificial sentences containing a variety of syntactically complex structures. We find evidence that LSTMs trained on large datasets represent syntactic state over large spans of text in a way that is comparable to Recurrent Neural Network Grammars (RNNG, Dyer et al. 2016, Cited by 157), while the LSTM trained on the small dataset does not, or does so only weakly.
Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy (Submitted on 3 Mar 2019) https://arxiv.org/abs/1903.00943
Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models versus two models that represent hierarchical structure and deploy it in left-to-right processing: Recurrent Neural Network Grammars (RNNGs) (Dyer et al. 2016, Cited by 157) and an incrementalized version of the Parsing-as-Language-Modeling configuration from Charniak et al. (2016). Structural supervision thus provides data-efficiency advantages over purely string-based training of neural language models in acquiring human-like generalizations about non-local grammatical dependencies.
Vascular Tree Branching Statistics: constrain with a “Grammar model”? #1
Intraspecific scaling laws of vascular trees
Yunlong Huo and Ghassan S. Kassab. Journal of the Royal Society Interface. Published: 15 June 2011 https://doi.org/10.1098/rsif.2011.0270 - Cited by 87
A fundamental physics-based derivation of intraspecific scaling laws of vascular trees has not been previously realized. Here, we provide such a theoretical derivation for the volume–diameter and flow–length scaling laws of intraspecific vascular trees. In conjunction with the minimum energy hypothesis, this formulation also results in diameter–length, flow–diameter and flow–volume scaling laws. The intraspecific scaling predicts the volume–diameter power relation with a theoretical exponent of 3, which is validated by the experimental measurements for the three major coronary arterial trees in swine. This scaling law, as well as the others, agrees very well with the measured morphometric data of vascular trees in various other organs and species. This study is fundamental to the understanding of morphological and haemodynamic features in a biological vascular tree and has implications for vascular disease.
Relation between normalized stem diameter (Ds/(Ds)max) and normalized crown volume (Vc/(Vc)max) for vascular trees of various organs and species corresponding to those trees in Table 1. The solid line represents the least-squares fit of all the experimental measurements (exponent of 2.91, r² = 0.966).
Vascular Tree Branching Statistics: constrain with a “Grammar model”? #2
Branching Pattern of the Cerebral Arterial Tree
Jasper H. G. Helthuis, Tristan P. C. van Doormaal, Berend Hillen, Ronald L. A. W. Bleys, Anita A. Harteveld, Jeroen Hendrikse, Annette van der Toorn, Mariana Brozici, Jaco J. M. Zwanenburg, Albert van der Zwan. The Anatomical Record (17 October 2018) https://doi.org/10.1002/ar.23994
Quantitative data on branching patterns of the human cerebral arterial tree are lacking in the 1.0–0.1 mm radius range. We aimed to collect quantitative data in this range, and to study whether the cerebral arterial tree complies with the principle of minimal work (Law of Murray). Data showed a large variation in branching pattern parameters (asymmetry ratio, area ratio, length-radius ratio, tapering). Part of the variation may be explained by the variation in measurement techniques, number of measurements and location of measurement in the vascular tree. This study confirms that the cerebral arterial tree complies with the principle of minimum work. These data are essential for the future development of more accurate mathematical blood flow models.
Relative frequencies of (A) asymmetry ratio, (B) area ratio, (C) length-to-radius ratio, (D) tapering.
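A tiny NumPy sketch of how such branching statistics could be turned into a soft “grammar” check on a reconstructed tree: Murray's law (minimum-work principle) with exponent 3. The function name and the idea of flagging junctions are our own illustration; a real pipeline would fold this into a loss or post-hoc quality score.

import numpy as np

def murray_ratio(r_parent, r_daughters, k=3.0):
    # Murray's law: r_p^k ~= sum(r_d^k) with k = 3 at a bifurcation.
    # Returns ~1.0 for a junction that obeys the law; large deviations
    # could be down-weighted or flagged in a reconstructed vessel tree.
    return np.sum(np.asarray(r_daughters) ** k) / r_parent ** k

print(murray_ratio(10.0, [8.0, 7.94]))   # ~1.01, consistent with Murray's law
print(murray_ratio(10.0, [9.5, 9.5]))    # ~1.71, suspicious junction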
Branch-based functional measures?
Changsi Cai et al. (2018) Stimulation-induced increases in cerebral blood flow and local capillary vasoconstriction depend on conducted vascular responses https://doi.org/10.1073/pnas.1707702115
Functional vessel dilation in the mouse barrel cortex. (A) A two-photon image of the barrel cortex of a NG2-DsRed mouse at 150 µm depth. The penetrating arterioles (p.a.) branch out a capillary horizontally (∼first order). Further branches are defined as second- and third-order capillaries. Pericytes are labeled with a red fluorophore (NG2-DsRed) and the vessel lumen with FITC-dextran (green). ROIs are placed across the vessel to allow measurement of the vessel diameter (colored bars). (Scale bar: 10 µm.)
Measurement of blood vessel diameter and red blood cell (RBC) flux in the retina. A, Confocal image of a whole-mount retina labeled for the blood vessel marker isolectin (blue), the contractile protein α-SMA (red), and the pericyte marker NG2 (green). Blood vessel order in the superficial vascular layer is indicated. First-order arterioles (1) branch from the central retinal artery. Each subsequent branch (2–5) has a higher order. Venules (V) connect with the central retinal vein. Scale bar, 100 μm.
2D retinal vasculature: datasets available
Highlights also how the availability of the freely available databases DRIVE and STARE, with a lot of annotations, led to a lot of methodological papers from “non-retina” researchers.
De et al. (2016) A Graph-Theoretical Approach for Tracing Filamentary Structures in Neuronal and Retinal Images https://dx.doi.org/10.1109/TMI.2015.2465962
2D Microvasculature CNNs with Graphs
Towards End-to-End Image-to-Tree for Vasculature Modeling
Manish Sharma, Matthew C. H. Lee, James Batten, Michiel Schaap, Ben Glocker. Google, Imperial College, HeartFlow. MIDL 2019 Conference https://openreview.net/forum?id=ByxVpY5htN
This work explores an end-to-end image-to-tree approach for extracting accurate representations of vessel structures which may be beneficial for diagnosis of stenosis (blockages) and modeling of blood flow. Current image segmentation approaches capture only an implicit representation, while this work utilizes a subscale U-Net to extract explicit tree representations from vascular scans.
SS-OCT Vasculature Segmentation
Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images
Xiaoxiao Liu, Lei Bi, Yupeng Xu, Dagan Feng, Jinman Kim, and Xun Xu. Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine. Biomedical Optics Express Vol. 10, Issue 4, pp. 1601-1612 (2019) https://doi.org/10.1364/BOE.10.001601
Motivated by the leading segmentation performance in medical images from the use of deep learning methods, in this study we proposed the adoption of a deep learning method, RefineNet, to segment the choroidal vessels from SS-OCT images. We quantitatively evaluated the RefineNet on 40 SS-OCT images consisting of ~3,900 manually annotated choroidal vessel regions. We achieved a segmentation agreement (SA) of 0.840 ± 0.035 with clinician 1 (C1) and 0.823 ± 0.027 with clinician 2 (C2). These results were higher than the inter-observer variability measured in SA between C1 and C2 of 0.821 ± 0.037.
Currently, researchers have limited imaging modalities to obtain information about the choroidal vessels. Traditional indocyanine green angiography (ICGA) is the gold standard in clinical practice for detecting abnormality in the choroidal vessels. ICGA provides 2D images of the choroid vasculature, which can show the exudation or filling defects. However, ICGA does not provide the 3D choroidal structure or the volume of the whole choroidal vessel network, and ICGA images overlap retinal vessels and choroidal vessels together, thereby making it hard to independently observe and analyze the choroidal vessels quantitatively. OCT Angiography (OCTA) can clearly show the blood flow from the superficial and deep retinal capillary networks, as well as from the retinal pigment epithelium to the superficial choroidal vascular network; however, it cannot show the blood flow in deep choroidal vessels. https://arxiv.org/abs/1806.05034
Fundus/OCT/OCTA: multimodal quality enhancement
Generating retinal flow maps from structural optical coherence tomography with artificial intelligence
Cecilia S. Lee, Ariel J. Tyring, Yue Wu, Sa Xiao, Ariel S. Rokem, Nicolaas P. DeRuyter, Qinqin Zhang, Adnan Tufail, Ruikang K. Wang & Aaron Y. Lee. Department of Ophthalmology, University of Washington, Seattle, WA, USA; eScience Institute, University of Washington, Seattle, WA, USA. Scientific Reports, Article number: 5694 (2019) https://doi.org/10.1038/s41598-019-42042-y
Using human-generated annotations as the ground truth limits the learning ability of the AI, given that it is problematic for AI to surpass the accuracy of humans, by definition. In addition, expert-generated labels suffer from inherent inter-rater variability, thereby limiting the accuracy of the AI to at most variable human discriminative abilities. Thus, the use of more accurate, objectively generated annotations would be a key advance in machine learning algorithms in diverse areas of medicine.
Given the relationship of OCT and OCTA, we sought to explore deep learning’s ability to first infer between structure and retinal vascular function, then generate an OCTA-like en-face image from a structural OCT image alone. By taking OCT as input and using the more cumbersome, expensive modality, OCTA, as an objective training target, deep learning could overcome limitations of the second modality and circumvent the need for generating labels. Unlike current AI models, which are primarily targeted towards classification or segmentation of images, to our knowledge this is the first application of artificial neural networks in ophthalmic imaging to generate a new image based on a different imaging modality. In addition, this is the first example in medical imaging, to our knowledge, where expert annotations for training deep learning models are bypassed by using objective, functional flow measurements.
“FITC” in the 2-PM context ↔ “QD” in the 2-PM context: learn the mapping FITC → QD (with QD as supervision) to improve the quality of already acquired FITC stacks. Unsupervised conditional image-to-image translation is possible also, but probably trickier.
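A minimal sketch of the FITC → QD idea above as paired image-to-image regression in PyTorch. The stand-in convolutional net, L1 loss choice and dummy loader are our own assumptions; a 3D U-Net would be the more realistic regressor, and the loader would yield paired, co-registered FITC/QD sub-volumes.

import torch
import torch.nn as nn

# Stand-in regressor; swap in a 3D U-Net for real use.
net = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
l1 = nn.L1Loss()

# Hypothetical paired FITC/QD sub-volumes (dummy tensors stand in here).
loader = [(torch.rand(2, 1, 16, 32, 32), torch.rand(2, 1, 16, 32, 32))]

for fitc, qd in loader:
    opt.zero_grad()
    loss = l1(net(fitc), qd)    # the QD channel acts as the 'objective' target
    loss.backward()
    opt.step()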
Electron microscopy: similar reconstruction pipeline for vasculature
High-precision automated reconstruction of neurons with flood-filling networks
Michał Januszewski, Jörgen Kornfeld, Peter H. Li, Art Pope, Tim Blakely, Larry Lindsey, Jeremy Maitin-Shepard, Mike Tyka, Winfried Denk & Viren Jain. Nature Methods volume 15, pages 605–610 (2018) https://doi.org/10.1038/s41592-018-0049-4
We introduce a CNN architecture which is linearly equivariant (a generalization of invariance defined in the next section) to 3D rotations about patch centers. To the best of our knowledge, this paper provides the first example of a CNN with linear equivariance to 3D rotations and 3D translations of voxelized data. By exploiting the symmetries of the classification task, we are able to reduce the number of trainable parameters using judicious weight tying. We also need less training- and test-time data augmentation, since some aspects of 3D geometry are already ‘hard-baked’ into the network. As a proof of concept we try segmentation as a 3D problem, feeding 3D image chunks into a 3D network. We use an architecture based on Weiler et al. (2017)’s steerable version of the FusionNet. It is a U-Net with added skip connections within the encoder and decoder paths to encourage better gradient flow.
Effective automated pipeline for 3D reconstruction of synapses based on deep learning
Chi Xiao, Weifu Li, Hao Deng, Xi Chen, Yang Yang, Qiwei Xie and Hua Han. BMC Bioinformatics (13 July 2018) 19:263 https://doi.org/10.1186/s12859-018-2232-0
Five basic steps implemented by the authors:
1) Image registration, e.g. An Unsupervised Learning Model for Deformable Medical Image Registration
2) ROI detection, e.g. Weighted Hausdorff Distance: A Loss Function For Object Localization
3) 3D CNNs, e.g. DeepMedic for brain tumor segmentation
4a) Dijkstra shortest path, e.g. shiluyuan/Reinforcement-Learning-in-Path-Finding
4b) Old-school algorithm refinement, e.g. 3D CRF, Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation
5) Mesh reconstruction, e.g. Robust Surface Reconstruction via Dictionary Learning; Deep-learning-assisted Volume Visualization; Deep Marching Cubes: Learning Explicit Surface Representations
Vasculature Imaging Artifacts: movement artifacts
In vivo MPM images of a capillary. Because MPM images are acquired by raster scanning, images at different depths (z) are acquired with a time lag (t). Unlabeled red blood cells moving through the lumen cause dark spots and streaks and result in variable patterns within a single vessel. Haft-Javaherian et al. (2019) https://doi.org/10.1371/journal.pone.0213539
Vasculature Imaging Artifacts: “vessel breakage” / intensity inhomogeneity
A novel method for identifying a graph-based representation of 3-D microvascular networks from fluorescence microscopy image stacks
S. Almasi, X. Xu, A. Ben-Zvi, B. Lacoste, C. Gu et al. Medical Image Analysis, 20(1):208–223, February 2015. http://dx.doi.org/10.1016/j.media.2014.11.007
Vasculature image quality: an example of false fractions in the structure caused by imaging imperfections, and an area of more artifacts, in a maximum-intensity projection (MIP) slice of a 3-D fluorescence microscopy image of microvasculature.
Joint volumetric extraction and enhancement of vasculature from low-SNR 3-D fluorescence microscopy images
Sepideh Almasi, Ayal Ben-Zvi, Baptiste Lacoste, Chenghua Gu, Eric L. Miller, Xiaoyin Xu. Pattern Recognition, Volume 63, March 2017, Pages 710-718 https://doi.org/10.1016/j.patcog.2016.09.031
Highlights: We introduce intensity-based features to directly segment artifacted images of vasculature. The segmentation method is shown to be robust to non-uniform illumination and noise of mixed type. This method is free of a priori statistical and geometrical assumptions.
For fluorescence signals, adaptive optics, quantum dots and three-photon microscopy are not always feasible.
In this maximum intensity projection of a 3-D fluorescence microscopy image of murine cranial tissue, miscellaneous imaging artifacts are visible: uneven illumination (upper vs. lower parts), non-homogeneous intensity distribution inside the vessels (visible in the larger vessels located at the top right corner), low-SNR regions (lower areas), high spatial density or closeness of vessels (mainly in the center-upper parts), reduced contrast at edges (visible as blurs mostly for the central vessels), broken or faint vessels (lower vessels), and low-frequency background variations caused by scattered light (at higher density regions).
Multi-dye Experiments for ‘self-supervised training’: Quantum dots vs. Fluorescein Dextran (FITC)
CAM vessel fluorescence followed over time for Q705PEGa and 500 kDa FITC–dextran. 500 kDa FITC–dextran (A) and Q705PEGa (B) were coinjected and images were taken at the designated times.
The use of quantum dots for analysis of chick CAM vasculature
JD Smith, GW Fisher, AS Waggoner… - Microvascular Research, 2007 - Elsevier. Cited by 69
Intravitally injected QDs were found to be biocompatible and were kept in circulation over the course of 4 days without any observed deleterious effects. QD vascular residence time was tunable through QD surface chemistry modification. We also found that use of QDs with higher emission wavelengths (>655 nm) virtually eliminated all chick-derived autofluorescence and improved depth-of-field imaging. QDs were compared to FITC–dextrans, a fluorescent dye commonly used for imaging CAM vessels. QDs were found to image vessels as well as or better than FITC–dextrans at 2–3 orders of magnitude lower concentration. We also demonstrated that QDs are fixable with low fluorescence loss and thus can be used in conjunction with histological processing for further sample analysis.
i.e. which would give you a nicer mask with Otsu’s thresholding, for example? It should be easier to obtain ground-truth labels from QD stacks and use those to train on FITC stacks, or to train multimodal FITC+QD networks if complementary information is available. Inpainting masks (‘vessel breakage’) could come from the difference between the QD and FITC stacks.
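A sketch of the Otsu pseudo-label idea with scikit-image/SciPy: threshold the cleaner QD channel globally, drop small speckle components, and use the result as a (noisy) pseudo-label for training a segmenter on the co-registered FITC channel. The function name, component-size cleanup and dummy volume are our own illustrative choices.

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def qd_pseudolabels(qd_stack, min_voxels=64):
    # Global Otsu on the high-contrast QD channel, then remove tiny
    # connected components that are likely noise rather than vessels.
    mask = qd_stack > threshold_otsu(qd_stack)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_voxels) + 1
    return np.isin(labels, keep_ids)

demo = np.random.rand(16, 64, 64) ** 3     # dummy skewed-intensity volume
print(qd_pseudolabels(demo).shape)         # (16, 64, 64) boolean mask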
Multi-dye Experiments for optimized SNR across all vessel sizes
Todorov et al. (2019) Automated analysis of whole brain vasculature using machine learning https://doi.org/10.1101/613257
A-C, Maximum intensity projections of the automatically reconstructed tiling scans of WGA (A) and Evans blue (B) signal in the same sample reveal all details of the perfused vascular network in the merged view (C). D-F, Zoom-ins from the marked region in (C) showing fine details. G-L, Confocal microscopy confirms that WGA and EB dyes stain the vascular wall (G-I, maximum intensity projections of 112 µm) and that the vessels retain their tubular shape (J-L, single slice of 1 µm).
Furthermore, owing to the dual labeling, we maximized the signal-to-noise ratio (SNR) for each dye independently to avoid saturation of differently sized vessels when only a single channel is used. We achieved this by independently optimizing the excitation and emission power. For WGA, we reached a higher SNR for small capillaries; bigger vessels, however, were barely visible (Supporting Fig. 3). For EB, the SNR for small capillaries was substantially lower, but larger vessels reached a high SNR (Supporting Fig. 3). Thus, integrating the information from both channels allows homogeneous staining of the entire vasculature throughout the whole brain, and results in a high SNR for high-quality segmentations and analysis.
Play with your dextran Daltons?
An eNOStag-GFP mouse was injected with two dextrans of different sizes (red = dextran 2 MDa; purple = dextran 10 kDa) and Hoechst (blue = 615 Da), and single-plane images are presented here. 10 min after the injection, presence in the blood and extravasation are seen in the same image. Hoechst extravasates almost immediately out of the blood vessels and is taken up by the surrounding cells (CI). Dextran 10 kDa (CII) can be seen in vessels and in the tumor interstitium. Dextran 2 MDa (CIII) can be found in the vessels. 40 min after injection (CIV), dextran 10 kDa disappears from the blood (CV), and the fluorescent intensity of dextran 2 MDa was also diminished (CVI). Scale bar = 100 µm. https://dx.doi.org/10.3791%2F55115 (2018)
If you have extra channels, and you would normally like to use 10 kDa dextran but for some reason cannot use something with stronger fluorescence that stays better inside the vessels, you could acquire stacks just for the vasculature segmentation, with the higher molecular weights acting as the “physical labels” for vasculature.
z / Depth crosstalk due to suboptimal optical sectioning
In vivo three-photon microscopy of subcortical structures within an intact mouse brain
Nicholas G. Horton, Ke Wang, Demirhan Kobat, Catharine G. Clark, Frank W. Wise, Chris B. Schaffer & Chris Xu. Nature Photonics volume 7, pages 205–209 (2013) https://doi.org/10.1038/nphoton.2012.336
The fluorescence of three-photon excitation (3PE) falls off as 1/z⁴ (where z is the distance from the focal plane), whereas the fluorescence of two-photon excitation (2PE) falls off as 1/z². Therefore, 3PE dramatically reduces the out-of-focus background in regions far from the focal plane, improving the signal-to-background ratio (SBR) by orders of magnitude when compared to 2PE.
http://biomicroscopy.bu.edu/research/nonlinear-microscopy http://parkerlab.bio.uci.edu/microscopy_construction/build_your_own_twophoton_microscope.htm
“Background vasculature” is seen in layers “in front of it”, i.e. the z-crosstalk. Nonlinear 2-PM reduces this, and 3-PM even more. When you get the binary mask, how do you in the end reconstruct your mesh? From 1-PM, your vessels would most likely look very thick in the z-dimension, i.e. give a way too anisotropic reconstruction.
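The quoted falloff laws translate directly into a background-suppression factor; a two-line NumPy check (arbitrary units, illustrative distances only):

import numpy as np

z = np.array([2.0, 5.0, 10.0, 50.0])   # distance from the focal plane (a.u.)
bg_2pe = 1.0 / z**2                     # two-photon out-of-focus excitation
bg_3pe = 1.0 / z**4                     # three-photon out-of-focus excitation
print(bg_2pe / bg_3pe)                  # = z**2: 3PE suppresses far background
                                        #   by another factor of z^2 over 2PE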
Depth resolution: we still have labels drawn in 2D, so some boundary ambiguity exists
Canny edge (radius = 1) applied to the ground truth; a gamma-corrected version of the input slice makes the dimmer vessels better visible. The upper part of the slice is clearly behind (on the z axis), as it is dimmer, but it has been annotated as vessel also on this plane. This is not necessarily a problem if some sort of consistency exists in the labeling, which is not necessarily the case between different annotators. Then you might need the label-noise solutions outlined later in this slideset.
The volume rendering of the ground truth of course now looks thicker than the original unsegmented volume. Multiplying the input volume with this ground-truth mask gives a nice rendering, of course. We want to suppress the background noise, and make the voxel → mesh conversion easier with clean segmentations.
Single-photon confocal microscope sectioning: worse than 2-PM, but still quite good
Images captured by confocal microscopy, showing FITC-dextran (green) and DiI-labeled RBCs (red) in a retinal flat mount. (A, C) Merged green/red images from the superficial section of the retina. (B, D) Red RBC fluorescence in the deeper capillary layers of the retina. The arrow in (A) points to an arteriole that branches down from the superficial layer into the capillary layers shown in (B).
Comparison of the fluorescence microscopy techniques (widefield, confocal, two-photon) http://candle.am/microscopy/
Measurement of Retinal Blood Flow Rate in Diabetic Rats: Disparity Between Techniques Due to Redistribution of Flow. Leskova et al. (2013) http://doi.org/10.1167/iovs.13-11915 [Panels: rat retina, superficial layers; rat retina, capillary layer]
Kornfield and Newman (2014) 10.1523/JNEUROSCI.1971-14.2014. Vessel density in the three vascular layers. Schematic of the trilaminar vascular network showing the first-order arteriole (1) and venule (V) and the connectivity of the superficial (S), intermediate (I), and deep (D) vascular layers and their locations within the retina. GCL, ganglion cell layer; IPL, inner plexiform layer; INL, inner nuclear layer; OPL, outer plexiform layer; ONL, outer nuclear layer; PR, photoreceptors.
z / Depth: attenuation noise as a function of depth
Effects of depth-dependent noise on line-scanning particle image velocimetry (LS-PIV) analysis. A, Three-dimensional rendering of cortical vessels imaged with TPLSM demonstrating the depth-dependent decrease in SNR. The blood plasma was labeled with Texas Red-dextran and an image stack over the top 1000 µm was acquired at 1 µm spacing along the z-axis starting from the brain surface. B, 100 µm-thick projections of regions 1–4 in panel (A). RBC velocities were measured along the central axis of vessels shown in red boxes, with red arrows representing the orientation of flow. The raw line-scan data (L/S) are depicted to the right of each field and labeled with their respective SNR. Corresponding LS-PIV analyses are depicted to the far right.
Accuracy of LS-PIV analysis with noise and increasing speed. Top, simulated line-scan data with a low level of normally distributed noise with SNR of 8 (A), 1 (B), 0.5 (C), and 0.33 (D). Middle, LS-PIV analysis of the line-scan data (blue dots). The red line represents the actual particle speed. Bottom, percent error of LS-PIV compared with the actual velocity.
Tyson N. Kim et al. (2012) http://doi.org/10.1371/journal.pone.0038590 - Cited by 46
‘Intersecting vessels in 2-PM’: even though the centerlines of actual vessels in 3D do not intersect, the vessel masks might #1
Calivá et al. (2015) A new tool to connect blood vessels in fundus retinal images https://doi.org/10.1109/EMBC.2015.7319356 - Cited by 8
In the 2D case, the vessel crossings are harder to resolve than in our 3D case. Slice #10/26: it seems like the big and the smaller vessel are going to join? Slice #19/26: it seems like the small vessel actually was touching the bigger one?
Cross-channel spectral crosstalk
New red-fluorescent calcium indicators for optogenetics, photoactivation and multi-color imaging
Oheim M, van ’t Hoff M, Feltz A, Zamaleeva A, Mallet J-M, Collot M. 2014. Biochimica et Biophysica Acta (BBA) - Molecular Cell Research 1843, Calcium Signaling in Health and Disease: 2284–2306. http://dx.doi.org/10.1016/j.bbamcr.2014.03.010
https://github.com/petteriTeikari/mixedImageSeparation https://github.com/petteriTeikari/spectralSeparability/wiki
Color Preprocessing: Spectral Unmixing for microscopy
See the “spectral crosstalk” slide above. Or, in more general terms, you want to do (blind) source separation, “the cocktail party problem”, for 2-PM microscopy data, i.e. you might have some astrocyte/calcium/etc. signal on your “vasculature channel”. You could just apply ICA here and hope for perfect unmixing, or think of something more advanced. Again, seek inspiration from elsewhere: the hyperspectral imaging field has the same challenge to solve.
Improved Deep Spectral Convolution Network For Hyperspectral Unmixing With Multinomial Mixture Kernel and Endmember Uncertainty
Savas Ozkan and Gozde Bozdagi Akar (Submitted on 27 Mar 2019) https://arxiv.org/abs/1904.00815 https://github.com/savasozkan/dscn
We propose a novel framework for hyperspectral unmixing by using an improved deep spectral convolution network (DSCN++) combined with endmember uncertainty. DSCN++ is used to compute high-level representations which are further modeled with a Multinomial Mixture Model to estimate abundance maps. In the reconstruction step, a new trainable uncertainty term based on a nonlinear neural network model is introduced to provide robustness to endmember uncertainty. For the optimization of the coefficients of the multinomial model and the uncertainty term, a Wasserstein Generative Adversarial Network (WGAN) is exploited to improve stability.
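A minimal ICA baseline for the “just apply ICA” route, assuming a linear bleed-through mixing model: scikit-learn FastICA with voxels as samples and channels as features. The toy mixing matrix and dummy volume are our own assumptions.

import numpy as np
from sklearn.decomposition import FastICA

# stack: (channels, Z, Y, X); here a dummy two-channel volume with crosstalk
rng = np.random.default_rng(0)
sources = rng.random((2, 8, 64, 64))
mixing = np.array([[1.0, 0.3],
                   [0.2, 1.0]])                       # bleed-through matrix
stack = np.tensordot(mixing, sources, axes=1)

X = stack.reshape(stack.shape[0], -1).T               # voxels x channels
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                              # estimated sources per voxel
unmixed = S.T.reshape(stack.shape)                    # back to (C, Z, Y, X)
# Caveat: ICA leaves the sign, scale and ordering of the components
# ambiguous, so the 'vasculature' component must still be identified.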
Anisotropic Volumes: z-resolution not as good as xy
3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes
Siqi Liu, Daguang Xu, S. Kevin Zhou, Thomas Mertelmeier, Julia Wicklein, Anna Jerebko, Sasa Grbic, Olivier Pauly, Weidong Cai, Dorin Comaniciu (Submitted on 23 Nov 2017) https://arxiv.org/abs/1711.08580
Elastic Boundary Projection for 3D Medical Image Segmentation
Tianwei Ni et al. (CVPR 2019) http://victorni.me/pdf/EBP_CVPR2019/1070.pdf
In this paper, we bridge the gap between 2D and 3D using a novel approach named Elastic Boundary Projection (EBP). The key observation is that, although the object is a 3D volume, what we really need in segmentation is to find its boundary, which is a 2D surface. Therefore, we place a number of pivot points in the 3D space, and for each pivot, we determine its distance to the object boundary along a dense set of directions. This creates an elastic shell around each pivot which is initialized as a perfect sphere. We train a 2D deep network to determine whether each ending point falls within the object, and gradually adjust the shell so that it converges to the actual shape of the boundary and thus achieves the goal of segmentation.
From voxel-based tricks → NURBS-like parametrization for “subvoxel” MESH/CFD analysis?
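A sketch of the anisotropic-convolution flavor of this idea in PyTorch, inspired by (but not identical to) the AH-Net design: learn in-plane (xy) features with (1,3,3) kernels and fuse across the poorly resolved z-axis with cheap (3,1,1) kernels, instead of isotropic 3×3×3 convolutions. Layer widths are our own illustrative choices.

import torch
import torch.nn as nn

aniso = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=(1, 3, 3), padding=(0, 1, 1)), nn.ReLU(),  # xy features
    nn.Conv3d(16, 16, kernel_size=(3, 1, 1), padding=(1, 0, 0)), nn.ReLU(), # z fusion
    nn.Conv3d(16, 1, kernel_size=1))                                        # voxel logits

x = torch.rand(1, 1, 8, 64, 64)          # (N, C, Z, Y, X), z much coarser than xy
print(aniso(x).shape)                    # torch.Size([1, 1, 8, 64, 64])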
Not a lot of papers address specifically (multiphoton) microscopy (micro)vasculature, thus most of the slides are outside vasculature processing, but relevant if you want to work on “next generation” vascular segmentation networks.
Non-DL ‘classical approaches’
Segmentation of Vasculature From Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images
Russell Bates; Benjamin Irving; Bostjan Markelc; Jakob Kaeppler; Graham Brown; Ruth J. Muschel; et al. Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, U.K. IEEE Transactions on Medical Imaging (Volume: 38, Issue: 1, Jan. 2019) https://doi.org/10.1109/TMI.2017.2725639
Here, we present a method for the segmentation of tumor vasculature in 3D fluorescence microscopic images using signals from the endothelial and surrounding cells. We show that our method can provide complete and semantically meaningful segmentations of complex vasculature using a supervoxel-Markov random field approach.
A potential area for future improvement is the limitation imposed by our edge potentials in the MRF, which are tuned rather than learned. The expectation of the existence of fully annotated training sets for many applications is unrealistic. Future work will focus on the suitability of semi-supervised methods to achieve fully supervised levels of performance on sparse annotations. It is possible that this may be done in the current framework using label-transduction methods. Interesting work in transduction and interactive learning for sparsely labeled superpixel microscopy images has also been undertaken by Su et al. (2016). A method that can take sparse image annotations and use them to leverage information from a large set of unlabeled parts of the image to create high-quality segmentations would be an extremely powerful tool. This would have very broad applications in novel imaging experiments where large training sets are not readily available and where there is a high time-cost in producing such a training set.
Initial Effort with a hybrid “2D/3D ZNN” with CPU acceleration
Deep Learning Convolutional Networks for Multiphoton Microscopy Vasculature Segmentation
Petteri Teikari, Marc Santos, Charissa Poon, Kullervo Hynynen (Submitted on 8 Jun 2016) https://arxiv.org/abs/1606.02382
Microvasculature CNNs #1
Microvasculature segmentation of arterioles using deep CNN
Y. M. Kassim et al. (2017). Computational Imaging and Vis Analysis (CIVA) Lab https://doi.org/10.1109/ICIP.2017.8296347
Accurate segmentation for separating microvasculature structures is important in quantifying the remodeling process. In this work, we utilize a deep convolutional neural network (CNN) framework for obtaining robust segmentations of microvasculature from epifluorescence microscopy imagery of mouse dura mater. Due to the inhomogeneous staining of the microvasculature, different binding properties of vessels under fluorescent dye, uneven contrast and low texture content, traditional vessel segmentation approaches obtain sub-optimal accuracy. We propose a CNN architecture adapted to obtaining robust segmentation of microvasculature structures. By considering overlapping patches along with multiple convolutional layers, our method obtains good vessel differentiation for accurate segmentations.
Microvasculature CNNs #2
Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks
Russell Bates, Benjamin Irving, Bostjan Markelc, Jakob Kaeppler, Ruth Muschel, Vicente Grau, Julia A. Schnabel. Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom; CRUK/MRC Oxford Centre for Radiation Oncology, Department of Oncology, University of Oxford, United Kingdom; Division of Imaging Sciences and Biomedical Engineering, King's College London, United Kingdom; Perspectum Diagnostics, Oxford, United Kingdom (Submitted on 26 May 2017) https://arxiv.org/abs/1705.09597
In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods is well suited to microscopy images of tumor vasculature. In order to address this, we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators.
Microvasculature CNNs #3
Automatic Graph-based Modeling of Brain Microvessels Captured with Two-Photon Microscopy
Rafat Damseh; Philippe Pouliot; Louis Gagnon; Sava Sakadzic; David Boas; Farida Cheriet et al. (2018). Institute of Biomedical Engineering, Ecole Polytechnique de Montreal https://doi.org/10.1109/JBHI.2018.2884678
Graph models of cerebral vasculature derived from two-photon microscopy have been shown to be relevant to the study of brain microphysiology. Automatic graphing of these microvessels remains problematic due to the vascular network complexity and two-photon sensitivity limitations with depth. In this work, we propose a fully automatic processing pipeline to address this issue. The modeling scheme consists of a fully convolutional neural network (FCN) to segment microvessels, a 3D surface model generator and a geometry contraction algorithm to produce graphical models with a single connected component. In a quantitative assessment using NetMets metrics, at a tolerance of 60 μm, false negative and false positive geometric error rates are 3.8% and 4.2%, respectively, whereas false negative and false positive topological error rates are 6.1% and 4.5%, respectively.
One important issue that could be addressed in future work relates to the difficulty of generating watertight surface models. The employed contraction algorithm is not applicable to surfaces lacking such characteristics. Introducing a geometric contraction not restricted to such conditions on the obtained surface model could be an area of further investigation.
Microvasculature CNNs #4
Fully Convolutional DenseNets for Segmentation of Microvessels in Two-photon Microscopy
Rafat Damseh et al. (2019) https://doi.org/10.1109/EMBC.2018.8512285
Segmentation of microvessels measured using two-photon microscopy has been studied in the literature with limited success due to uneven intensities associated with optical imaging and shadowing effects. In this work, we address this problem using a customized version of a recently developed fully convolutional neural network, namely FC-DenseNets (see DenseNet, Cited by 3527). To train and validate the network, manual annotations of 8 angiograms from two-photon microscopy were used. However, this study suggests that in order to exploit the output of our deep model in further geometrical and topological analysis, further investigations might be needed to refine the segmentation. This could be done by either adding extra processing blocks on the output of the model or incorporating 3D information in its training process.
Microvasculature CNNs #5
A Deep Learning Approach to 3D Segmentation of Brain Vasculature
Waleed Tahir, Jiabei Zhu, Sreekanth Kura, Xiaojun Cheng, David Boas, and Lei Tian (2019). Department of Electrical and Computer Engineering, Boston University https://www.osapublishing.org/abstract.cfm?uri=BRAIN-2019-BT2A.6
The segmentation of blood vessels is an important preprocessing step for the quantitative analysis of brain vasculature. We approach the segmentation task for two-photon brain angiograms using a fully convolutional 3D deep neural network. We employ a DNN to learn a statistical model relating the measured angiograms to the vessel labels. The overall structure is derived from V-Net [Milletari et al. 2016], which consists of a 3D encoder-decoder architecture. The input first passes through the encoder path, which consists of four convolutional layers. Each layer comprises residual connections, which speed up convergence, and 3D convolutions with multi-channel convolution kernels, which retain 3D context.
Loss functions like mean squared error (MSE) and mean absolute error (MAE) have been used widely in deep learning; however, they cannot promote sparsity and are thus unsuitable for sparse objects. In our case, less than 5% of the total volume in the angiogram comprises blood vessels. Thus, the object under study is not only sparse; there is also a large class imbalance between the number of foreground vs. background voxels. Thus we resort to balanced cross-entropy as the loss function [HED, 2015], which not only promotes sparsity, but also caters for the class imbalance.
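The balanced cross-entropy they cite [HED, 2015] is straightforward to reproduce; a PyTorch sketch (the function name is ours; foreground terms are weighted by the background fraction, so with <5% vessel voxels the rare class dominates the loss):

import torch
import torch.nn.functional as F

def balanced_bce(logits, target):
    # HED-style class-balanced binary cross-entropy for sparse foreground.
    target = target.float()
    beta = 1.0 - target.mean()                    # fraction of background voxels
    weights = torch.where(target > 0, beta, 1.0 - beta)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights)

logits = torch.randn(2, 1, 8, 32, 32)
target = (torch.rand(2, 1, 8, 32, 32) > 0.95).float()   # ~5% foreground
print(balanced_bce(logits, target).item())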
Microvasculature CNNs #6: State-of-the-Art (SOTA)?
Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models
Mohammad Haft-Javaherian, Linjing Fang, Victorine Muse, Chris B. Schaffer, Nozomi Nishimura, Mert R. Sabuncu. Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, United States of America. March 2019 https://doi.org/10.1371/journal.pone.0213539 https://arxiv.org/abs/1801.00880
Data: https://doi.org/10.7298/X4FJ2F1D (1.141 Gb) Code: https://github.com/mhaft/DeepVess (TensorFlow / MATLAB)
We explored the use of convolutional neural networks to segment 3D vessels within volumetric in vivo images acquired by multiphoton microscopy. We evaluated different network architectures and machine learning techniques in the context of this segmentation problem. We show that our optimized convolutional neural network architecture with a customized loss function, which we call DeepVess, yielded a segmentation accuracy that was better than state-of-the-art methods, while also being orders of magnitude faster than manual annotation.
While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction together, such as the method proposed by Bates et al. (2017), in a single complete end-to-end learning framework might achieve higher centerline accuracy levels.
Comparison of DeepVess and the state-of-the-art methods: 3D rendering of (A) the expert’s manual and (B) DeepVess segmentation results. Comparison of DeepVess and the gold-standard human expert segmentation results.
We used 50% dropout during test time [MC Dropout] and computed Shannon’s entropy for the segmentation prediction at each voxel to quantify the uncertainty in the automated segmentation.
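The test-time dropout + Shannon entropy recipe quoted above can be sketched in PyTorch as follows. This is a generic MC-dropout loop, not the DeepVess code; the helper names and toy model are ours. Note that only the dropout layers are kept stochastic, so batch-norm statistics (if any) stay frozen.

import torch
import torch.nn as nn

def enable_mc_dropout(net):
    # keep only dropout stochastic at test time
    for m in net.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

@torch.no_grad()
def mc_dropout_predict(net, x, T=20, eps=1e-7):
    net.eval()
    enable_mc_dropout(net)
    # average T stochastic forward passes, then per-voxel binary entropy
    p = torch.stack([torch.sigmoid(net(x)) for _ in range(T)]).mean(dim=0)
    entropy = -(p * (p + eps).log() + (1 - p) * (1 - p + eps).log())
    return p, entropy        # mean vessel probability + uncertainty map

# toy model with 50% dropout, echoing the DeepVess description
net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout3d(0.5), nn.Conv3d(8, 1, 3, padding=1))
p, H = mc_dropout_predict(net, torch.rand(1, 1, 8, 32, 32))
print(p.shape, H.shape)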
Microvasculature CNNs #7: Dual-Dye Network for vasculature
Automated analysis of whole brain vasculature using machine learning
Mihail Ivilinov Todorov, Johannes C. Paetzold, Oliver Schoppe, Giles Tetteh, Velizar Efremov, Katalin Völgyi, Marco Düring, Martin Dichgans, Marie Piraud, Bjoern Menze, Ali Ertürk (Posted April 18, 2019) https://doi.org/10.1101/613257 http://discotechnologies.org/VesSAP
Tissue clearing methods enable imaging of intact biological specimens without sectioning. However, reliable and scalable analysis of such large imaging data in 3D remains a challenge. Towards this goal, we developed a deep learning-based framework to quantify and analyze the brain vasculature, named Vessel Segmentation & Analysis Pipeline (VesSAP). Our pipeline uses a fully convolutional network with a transfer learning approach for segmentation. We systematically analyzed vascular features of whole brains, including their length, bifurcation points and radius at the micrometer scale, by registering them to the Allen mouse brain atlas. We report the first evidence of secondary intracranial collateral vascularization in CD1-Elite mice and found reduced vascularization in the brainstem as compared to the cerebrum. VesSAP thus enables unbiased and scalable quantifications of the angioarchitecture of the cleared intact mouse brain and yields new biological insights related to vascular brain function.
Well, what about the NOVELTY to ADD? Depends a bit on what the benchmarks reveal.
DeepVess does not seem out of this world in terms of its specs, so it should be possible to beat it with “brute force”, by trying different standard things proposed in the literature. Keep this in mind, and have a look at the following slides. [Pipeline: INPUT → SEGMENTATION → UNCERTAINTY (MC Dropout)]
While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) (i.e. what the hGRU did) can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction together (multi-task learning, MTL), such as the method proposed by Bates et al. [25], in a single complete end-to-end learning framework might achieve higher centerline accuracy levels.
Vasculature Networks: Future
DeepVess: “While DeepVess offers very high accuracy in the problem we consider, there is room for further improvement and validation, in particular in the application to other vasiform structures and modalities. For example, other types of (e.g., non-convolutional) architectures such as long short-term memory (LSTM) (i.e. what the hGRU did) can be examined for this problem. Likewise, a combined approach that treats segmentation and centerline extraction together (multi-task learning, MTL), such as the method proposed by Bates et al. [25], in a single complete end-to-end learning framework might achieve higher centerline accuracy levels.”
FC-DenseNets: “However, this study suggests that in order to exploit the output of our deep model in further geometrical and topological analysis, further investigations might be needed to refine the segmentation. This could be done by either adding extra processing blocks on the output of the model or incorporating 3D information in its training process.” http://sci-hub.tw/10.1109/jbhi.2018.2884678
“One important issue that could be addressed in a future work is related to the difficulty in generating watertight surface models. The employed contraction algorithm is not applicable to surfaces lacking such characteristics.”
Noise Models: biomedical images typically do not follow the AWGN model
MRI: The Rician distribution is far from Gaussian for small SNR (A/σ ≤ 1). For ratios as small as A/σ = 3, however, it starts to approximate the Gaussian distribution. Gudbjartsson and Patz (1995) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2254141/
Additive White Gaussian Noise (AWGN) is signal-independent, whereas with 2-PM, at low photon counts the noise depends on the signal (“Poisson noise”).
Making noise ‘Gaussian’: use a nonlinear variance-stabilizing transformation (VST, e.g. the generalized Anscombe transformation, Mäkitalo and Foi 2013; see github/petteriTeikari/.../denoise_anscombeTransform.m) to convert the Poisson-Gaussian denoising problem into a Gaussian noise removal problem. This allows you to use denoising algorithms designed for AWGN. https://doi.org/10.1117/12.2216562
Ultrasound: Existing work on image restoration mainly deals with additive Gaussian noise, impulse noise, and Poisson noise. Multiplicative noise, a different noise model, is commonly found in laser images, synthetic aperture radar (SAR), ultrasound imaging, etc. The associated mathematical model assumes that the original image u has been blurred by a blurring operator K and then corrupted by some multiplicative noise η (e.g., Gamma noise).
Results of different restoration methods on a real ultrasound image (“Kidney”): (a) real image; (b) RLO method; (c) SRAD method; (d) JY method; (e) Algorithm 1. Lu et al. (2018) https://doi.org/10.1016/j.apm.2018.05.007
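A NumPy sketch of the variance-stabilization route mentioned above: a simplified generalized Anscombe transform with the usual gain / read-noise / offset parameterization (these would come from a camera or PMT calibration). With gain = 1, sigma = 0 and offset = 0 it reduces to the classic Anscombe transform 2·sqrt(z + 3/8); the exact unbiased inverse is left to the cited Mäkitalo & Foi paper rather than the naive algebraic inverse.

import numpy as np

def generalized_anscombe(z, gain=1.0, sigma=0.0, offset=0.0):
    # Poisson-Gaussian data -> approximately unit-variance Gaussian noise,
    # so off-the-shelf AWGN denoisers can be applied afterwards.
    v = gain * z + 0.375 * gain**2 + sigma**2 - gain * offset
    return (2.0 / gain) * np.sqrt(np.maximum(v, 0.0))

noisy = np.random.poisson(lam=5.0, size=(8, 64, 64)).astype(float)
stabilized = generalized_anscombe(noisy)   # noise std should now be ~1
print(stabilized.std())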
Intro to microscopy noise and image formation models
The effect of decreasing signal-to-noise ratio in fluorescence microscopy is illustrated by the series of digital images presented in Figure 1. The specimen is an adherent culture of opossum kidney proximal tubule epithelial cells (OK cell line) stained with SYTOX Green to image the nuclei. At high signal-to-noise ratios, a pair of interphase nuclei (Figure 1(a)) is imaged with sharp contrast and good definition of fine detail on a black background. As the signal-to-noise ratio decreases (Figures 1(b) and 1(c)), the definition and contrast of the nuclei also decrease until they almost completely blend into the noisy background (Figure 1(d)) as the SNR approaches unity.
Three primary undesirable signal components (noise) are typically considered in calculating overall signal-to-noise ratios (combined in the sketch below):
● Photon noise results from the inherent statistical variation in the arrival rate of photons incident on the detector. The interval between photon arrivals is governed by Poisson statistics. In general, the term shot noise is applied to any noise component reflecting a similar statistical variation, or uncertainty.
● Dark noise arises from statistical variation in the number of electrons thermally generated within the silicon structure of the detector, which is independent of the photon-induced signal, but highly dependent on device temperature. In similarity to photon noise, dark noise follows a Poisson relationship to dark current.
● Read noise is a combination of system noise components inherent to the process of converting CCD charge carriers into a voltage signal for quantification, and the subsequent processing and analog-to-digital (A/D) conversion. The major contribution to read noise usually originates with the on-chip preamplifier, and this noise is added uniformly to every image pixel.
CCD Noise Sources and Signal-to-Noise Ratio https://micro.magnet.fsu.edu/primer/digitalimaging/concepts/ccdsnr.html
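A back-of-envelope Python version of the standard CCD SNR expression built from the three terms above (the numbers in the example are purely illustrative):

def ccd_snr(photon_flux, quantum_efficiency, exposure_s, dark_current, read_noise):
    # photon shot noise + dark shot noise + read noise, all in electrons:
    # SNR = signal / sqrt(signal + dark*t + read_noise^2)
    signal = photon_flux * quantum_efficiency * exposure_s
    noise_var = signal + dark_current * exposure_s + read_noise**2
    return signal / noise_var**0.5

# e.g. 1000 photons/px/s, QE 0.6, 0.1 s exposure, 1 e-/px/s dark, 10 e- read
print(ccd_snr(1000, 0.6, 0.1, 1.0, 10.0))   # ~4.7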
How noise makes your life harder in practice
A Tool for Alignment and Averaging of Sparse Fluorescence Signals in Rod-Shaped Bacteria
Joris M. H. Goudsmits, Antoine M. van Oijen, Andrew Robinson. Biophysical Journal (April 2016) https://doi.org/10.1016/j.bpj.2016.02.039
Analyzing the spatial distribution of low-abundance proteins within cells is highly challenging because information obtained from multiple cells needs to be combined to provide well-defined maps of protein locations. We present (to our knowledge) a novel tool for fast, automated, and user-impartial analysis of fluorescent protein distribution across the short axis of rod-shaped bacteria. To demonstrate the strength of our approach in extracting spatial distributions and visualizing dynamic intracellular processes, we analyzed sparse fluorescence signals from single-molecule time-lapse images of individual Escherichia coli cells.
Statistical Denoising for single molecule fluorescence microscopic images. Ji Won Yoon (Submitted on 7 Jun 2013) https://arxiv.org/abs/1306.1619
Photon Shot Noise Limits on Optical Detection of Neuronal Spikes and Estimation of Spike Timing
Brian A. Wilt, James E. Fitzgerald, Mark J. Schnitzer. Biophysical Journal (January 2013) https://doi.org/10.1016/j.bpj.2012.07.058
Optical approaches for tracking neural dynamics are of widespread interest, but a theoretical framework quantifying the physical limits of these techniques has been lacking. We formulate such a framework by using signal detection and estimation theory to obtain physical bounds on the detection of neural spikes and the estimation of their occurrence times as set by photon counting statistics (shot noise). These bounds are succinctly expressed via a discriminability index that depends on the kinetics of the optical indicator and the relative fluxes of signal and background photons. Finally, the ideas presented here may be applicable to the inference of time-varying spike rates from fluorescence data.
Denoising Two-Photon Calcium Imaging Data. Malik et al. (2011) https://doi.org/10.1371/journal.pone.0020490
  • 113.
Quantifying the performance of microscopy systems
Using the NoiSee workflow to measure signal-to-noise ratios of confocal microscopes
Alexia Ferrand, Kai D. Schleicher, Nikolaus Ehrenfeuchter, Wolf Heusermann & Oliver Biehlmaier
Scientific Reports volume 9, Article number: 1165 (2019) https://doi.org/10.1038/s41598-018-37781-3 https://imagej.net/NoiSee
By design, a large portion of the signal is discarded in confocal imaging, leading to a decreased signal-to-noise ratio (SNR), which in turn limits resolution. A well-aligned system and high-performance detectors are needed in order to generate an image of best quality. However, a convenient method to address system status and performance on the emission side is still lacking. Here, we present a complete method to assess microscope and emission light path performance in terms of SNR, with a comprehensive protocol alongside NoiSee, an easy-to-use macro for Fiji (ImageJ). Our method reveals differences in microscope performance and highlights the various detector types used (multialkali photomultiplier tube (PMT), gallium arsenide phosphide (GaAsP) PMT, and hybrid detector). Altogether, our method will provide useful information to research groups and facilities to diagnose their confocal microscopes.
Image quality as a function of SNR. Cells with actin fibres imaged on different confocal systems, showing decreasing SNR scores from left to right. The detectors were chosen to represent the range of SNR scores: (a,f) Zeiss LSM800 GaAsP2; (b,g) LSM700up PMT2; (c,h) SP8M PMT3; (d,i) SP5II PMT1; (e,j) SP5 MP HyD2. Scale bar 10 µm. Influence of SNR and signal-to-background ratio (SBR) on image quality.
  • 114.
Nice to have some realistic noise-free 2-PM volumes/slices as benchmarks for denoising performance, even if you use the 'statistical noise model' approach
GRDN: Grouped Residual Dense Network for Real Image Denoising and GAN-based Real-world Noise Modeling
Dong-Wook Kim, Jae Ryun Chung, Seung-Won Jung (Submitted on 27 May 2019) https://arxiv.org/abs/1905.11172
Toward real-world image denoising, there have been two main approaches. The first approach is to find a better statistical model of real-world noise than the additive white Gaussian noise [e.g. Brooks et al. 2018, Guo et al. 2018, Plötz and Roth 2018]. In particular, a combination of Gaussian and Poisson distributions was shown to closely model both signal-dependent and signal-independent noise. The networks trained using these new synthetic noisy images demonstrated superiority in denoising real-world noisy images. One clear advantage of this approach is that we can have infinitely many training image pairs by simply adding the synthetic noise to noise-free ground-truth images. The second approach goes in the opposite direction: from real-world noisy images, nearly noise-free ground-truth images can be obtained by inverting the image acquisition procedure.
We improve the previous GAN-based real-world noise simulation technique [Chen et al. 2018] by including conditioning signals such as the noise-free image patch, ISO, and shutter speed as additional inputs to the generator. The conditioning on the noise-free image patch can help generate more realistic signal-dependent noise, and the other camera parameters can increase the controllability and variety of simulated noise signals. We also change the discriminator of the previous architecture [Chen et al. 2018] by using a recent relativistic GAN [Jolicoeur-Martineau 2018]. Unlike conventional GANs, the discriminator of the relativistic GAN learns to determine which is more realistic between real data and fake data.
We thus plan to apply the proposed image denoising network to other image restoration tasks. We also could not fully and quantitatively justify the effectiveness of the proposed real-world noise modeling method. A more elaborate design is clearly necessary for better real-world noise modeling. We believe that our real-world noise modeling method can be extended to other real-world degradations such as blur, aliasing, and haze, which will be demonstrated in our future work.
  • 115.
If the image restoration is successful with the image segmentation target, we could train another network for estimating image quality and even optimize that in real time in the 2-PM setup [*]?
[*] Assuming that one has the time to acquire the same slices multiple times if they are deemed sub-quality. Might work for histopathology, but for anesthetized rodents?
A deep neural network for image quality assessment
Bosse et al. (2016) https://doi.org/10.1109/ICIP.2016.7533065
Exploiting Unlabeled Data in CNNs by Self-supervised Learning to Rank
Xialei Liu et al. (2019) https://doi.org/10.1109/TPAMI.2019.2899857
JND-SalCAR: A Novel JND-based Saliency-Channel Attention Residual Network for Image Quality Prediction
Seo et al. (2019) https://arxiv.org/abs/1902.05316
Real-Time Quality Assessment of Pediatric MRI via Semi-Supervised Deep Nonlocal Residual Neural Networks
Siyuan Liu et al. (2019) https://arxiv.org/abs/1904.03639
GANs-NQM: A Generative Adversarial Networks based No Reference Quality Assessment Metric for RGB-D Synthesized Views
Suiyi Ling et al. (2019) https://arxiv.org/abs/1903.12088
Quality-aware Unpaired Image-to-Image Translation
Lei Chen et al. (2019) https://arxiv.org/abs/1903.06399
  • 116.
Fluorescence Microscopy Datasets: there could be more benchmarking data around
A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images
Yide Zhang, Yinhao Zhu, Evan Nichols, Qingfei Wang, Siyuan Zhang, Cody Smith, Scott Howard
University of Notre Dame (Submitted on 26 Dec 2018 (v1), last revised 5 Apr 2019) https://arxiv.org/abs/1812.10366 http://tinyurl.com/y6mwqcjs https://github.com/bmmi/denoising-fluorescence
The dataset consists of 12,000 real fluorescence microscopy images obtained with commercial confocal, two-photon, and wide-field microscopes and representative biological samples such as cells, zebrafish, and mouse brain tissues. We use image averaging to effectively obtain ground truth images and 60,000 noisy images with different noise levels. We have made our FMD dataset publicly available as a benchmark for Poisson/Gaussian denoising research, which, we believe, will be especially useful for researchers interested in improving the imaging quality of fluorescence microscopy.
Examples of images with different noise levels and ground truth. The single-channel (gray) images are acquired with two-photon microscopy on fixed mouse brain tissues. The multichannel (color) images are obtained with two-photon microscopy on fixed BPAE cells. The ground truth images are estimated by averaging 50 noisy raw images. Estimated noise parameters (a and b) of averaged images obtained with different raw image numbers in the average. The estimation is performed on the second FOV of each imaging configuration.
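If you have repeated raw frames of a static scene (as in the FMD dataset, 50 raw frames per field of view), the Poisson-Gaussian parameters can be estimated with a simple variance-versus-mean line fit. A minimal sketch, assuming the model var(y) ≈ a·mean(y) + b; the function name and synthetic values are illustrative, and the FMD paper itself uses a more careful estimator.

```python
import numpy as np

def estimate_poisson_gaussian(frames):
    """Estimate (a, b) of the Poisson-Gaussian model var(y) = a*mean(y) + b
    from repeated noisy frames of a static scene.
    frames: array of shape (n_frames, H, W)."""
    mean_img = frames.mean(axis=0).ravel()
    var_img = frames.var(axis=0, ddof=1).ravel()
    a, b = np.polyfit(mean_img, var_img, deg=1)  # least-squares line fit
    return a, b

# Synthetic sanity check: gain a=0.5, read-noise variance b=4.0
rng = np.random.default_rng(1)
clean = rng.uniform(20, 200, size=(64, 64))
frames = 0.5 * rng.poisson(clean / 0.5, size=(50, 64, 64)) \
         + rng.normal(0, 2.0, size=(50, 64, 64))
print(estimate_poisson_gaussian(frames))  # approximately (0.5, 4.0)
```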
  • 117.
If you had a custom-built microscope, make it more 'intelligent'
Reducing Uncertainty in Undersampled MRI Reconstruction with Active Acquisition
Zizhao Zhang, Adriana Romero, Matthew J. Muckley, Pascal Vincent, Lin Yang, Michal Drozdzal (Submitted on 8 Feb 2019) https://arxiv.org/abs/1902.03051
Learning Fast Magnetic Resonance Imaging
Tomer Weiss, Sanketh Vedula, Ortal Senouf, Alex Bronstein, Oleg Michailovich, Michael Zibulevsky (Submitted on 22 May 2019) https://arxiv.org/abs/1905.09324
Self-supervised learning of inverse problem solvers in medical imaging
Ortal Senouf, Sanketh Vedula, Tomer Weiss, Alex Bronstein, Oleg Michailovich, Michael Zibulevsky (Submitted on 22 May 2019) https://arxiv.org/abs/1905.09325
Compressed Sensing: From Research to Clinical Practice with Data-Driven Learning
Joseph Y. Cheng, Feiyu Chen, Christopher Sandino, Morteza Mardani, John M. Pauly, Shreyas S. Vasanawala (Submitted on 19 Mar 2019) https://arxiv.org/abs/1903.07824
Deep Learning Methods for Parallel Magnetic Resonance Image Reconstruction
Florian Knoll, Kerstin Hammernik, Chi Zhang, Steen Moeller, Thomas Pock, Daniel K. Sodickson, Mehmet Akcakaya (Submitted on 1 Apr 2019) https://arxiv.org/abs/1904.01112
●
Sample so that you get the best image quality possible even before any of the image restoration networks (and jointly optimize the whole thing)
●
Also take into account your other physiological sensors (e.g. blood pressure measurement, cardiac gating ECG lead, EEG, etc.), and design the triggering and DAQ systems to integrate with your microscope (commercial Olympus or custom-built)
Instrumentation for in vivo intravital microscopy https://www.slideshare.net/PetteriTeikariPhD/instrumentation-for-in-vivo-intravital-microscopy
  • 118.
End-to-end restoration + segmentation with deep supervision?
Not a bad 'side product' if you get a cleaned volume for visualization along with the segmentation? Especially for clinical use?
Raw Image | CLAHE | Contourlet transform method + CLAHE (Swaminathan et al. (2012) http://doi.org/10.1515/bmt-2012-0055)
You are trying to segment the vasculature, thus having the labels for vasculature allows 'weighted sharpening' within the network. A clinician should be able to make more sense of this than of the 'raw image'?
  • 119.
Create a proof-reading tool for '2-PM quality' estimation?
HistoQC: An Open-Source Quality Control Tool for Digital Pathology Slides
Andrew Janowczyk; Ren Zuo; Hannah Gilmore; Michael Feldman; and Anant Madabhushi http://doi.org/10.1200/CCI.18.00157
Here we present HistoQC, a tool for rapidly performing quality control to not only identify and delineate artefacts but also discover cohort-level outliers (e.g., slides stained darker or lighter than others in the cohort). This open-source tool employs a combination of image metrics (e.g., color histograms, brightness, contrast), features (e.g., edge detectors), and supervised classifiers (e.g., pen detection) to identify artefact-free regions on digitized slides. These regions and metrics are presented to the user via an interactive graphical user interface, facilitating artefact detection through real-time visualization and filtering. These same metrics afford users the opportunity to explicitly define acceptable tolerances for their workflows.
And you get multiple 'ground truth' masks now in your database (MongoDB, or whatever your DevOps expert recommends for your HDF5s / OME-TIFFs)
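A minimal sketch of the HistoQC idea transplanted to 2-PM slices: compute a handful of cheap per-image metrics and flag cohort-level outliers. The metric set, thresholds and `full_scale` are assumptions for illustration, not HistoQC's actual feature pipeline.

```python
import numpy as np
from skimage import filters

def qc_metrics(img, full_scale=4095):
    """Cheap per-slice QC metrics (illustrative, 12-bit data assumed):
    brightness, RMS contrast, edge density and saturation fraction.
    Outliers in any metric across the cohort get flagged for review."""
    f = img.astype(float)
    edges = filters.sobel(f / full_scale)
    return {
        "brightness": float(f.mean() / full_scale),
        "contrast_rms": float(f.std() / full_scale),
        "edge_density": float((edges > 0.05).mean()),
        "saturated_frac": float((f >= 0.98 * full_scale).mean()),
    }
```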
  • 120.
HDR: More bits useful for vascular segmentation?
At least dimmer vessels should be easier to separate from 'dark noise', assuming that no additional artifacts are introduced. And if you are studying the Neurovascular Unit (NVU), your calcium analysis from neurons and astrocytes gets obvious benefits?
Real-time high dynamic range laser scanning microscopy
C. Vinegoni, C. Leon Swisher, P. Fumene Feruglio, R. J. Giedt, D. L. Rousso, S. Stapleton & R. Weissleder
Center for Systems Biology, Massachusetts General Hospital and Harvard Medical School, Richard B. Simches Research Center
Nature Communications (2016) https://doi.org/10.1038/ncomms11077
Principle for HDR imaging. Only a restricted portion of the detector dynamic range can be effectively used for signal quantization (R2). The dark noise (blue area, R1) limits low-signal detection, while the high-intensity signal near the detector's maximum threshold is saturated (red) and is also disregarded (R3). By combining multiple images (LDR1, LDR2, LDR3) with different sensitivities (α0, α1, α2) the quantization range can be increased, giving rise to a high dynamic range image (HDR). This particular setup 'sacrifices' three PMTs ('short', 'mid' and 'long exposure') for the same 'color channel' (dye) with different neutral density filters.
In vivo intravascular dye kinetics. In vivo intravascular real-time quantification of the time–intensity variations demonstrating the vascular pharmacokinetics of a fluorescent probe across multiple regions of interest (ROIs). A bolus of 2 MDa FITC-Dextran was injected intravenously through the lateral tail vein and vascular kinetics were captured by collecting a time sequence of real-time HDR images. Automated segmentation of rHDR images allowed for the identification of vascular features that agreed with values obtained using ground-truth manual segmentation. Conversely, LDR image segmentation resulted in a high degree of vasculature fragmentation (low branch length) due to the low SNR present within the image.
Volumetric vasculature HDR confocal imaging. (a–c) LDR images of a cleared DiI-stained heart, and (d) corresponding rHDR image reconstruction. (e) Projection of the three-dimensional rHDR acquisition of the vasculature where colours represent different imaging depths and brightness is related to the fluorescence (Fluo.) signal amplitude. Scale bar, 150 μm.
  • 121.
HDR Vascular Research illustration: more bits, better resolution for permeability
Drug delivery to the brain by focused ultrasound induced blood–brain barrier disruption: Quantitative evaluation of enhanced permeability of cerebral vasculature using two-photon microscopy
Tam Nhan, Alison Burgess, Eunice E. Cho, Bojana Stefanovic, Lothar Lilge, Kullervo Hynynen
Journal of Controlled Release, Volume 172, Issue 1 (28 November 2013) https://doi.org/10.1016/j.jconrel.2013.08.029
Data analysis of 2PFM data (FV1000MPE, Olympus) capturing fluorescent dye leakage upon BBBD. A) Depth projection images illustrate the transient BBBD induced by MBs & FUS at 0.6 MPa (scale bar: 100 μm). Sonication and MB injection occurred during the first 2 min while the vessels remained impermeable to dextran-conjugated Texas Red TR10kDa. As soon as sonication ceased, disruption started at multiple vessels within the imaging FOV and the extravascular signal increased over time. B) Quantitative measurement of averaged fluorescent signal intensities associated with intravascular and extravascular compartments over time. C) Permeability was evaluated accordingly.
The Olympus FV1000MPE has 12-bit PMTs (4,096 intensity levels), and note that the study did not use the full dynamic range, with none of the intensities exceeding 2,048 (11-bit) arbitrary units.
Smallest detectable relative change in intensity per bit depth:
●
8-bit → 256 levels → 0.39%
●
12-bit → 4,096 levels → 0.0244%
●
16-bit → 65,536 levels → 0.0015%
●
24-bit → 16,777,216 levels → 0.00000596%
Assuming that this is 'effective DR', and you are not just sampling noise more accurately with a 24-bit ADC connected to a noisy PMT.
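A naive sketch of the LDR-to-HDR combination principle (not Vinegoni et al.'s actual reconstruction): mask out pixels that sit in the dark-noise floor or near saturation in each LDR frame, rescale the remaining pixels by each channel's sensitivity, and average. The function name, validity thresholds and 12-bit `full_scale` are illustrative assumptions.

```python
import numpy as np

def fuse_hdr(ldr_stack, gains, full_scale=4095):
    """Naive HDR fusion from LDR frames acquired with different
    sensitivities: discard dark-noise-limited and saturated pixels,
    scale the rest back to a common radiometric axis and average.
    ldr_stack: (n, H, W) array; gains: per-frame sensitivity alpha_i."""
    ldr = ldr_stack.astype(float)
    valid = (ldr > 0.02 * full_scale) & (ldr < 0.95 * full_scale)
    scaled = ldr / np.asarray(gains, dtype=float)[:, None, None]
    weights = valid.astype(float)
    return (scaled * weights).sum(0) / np.clip(weights.sum(0), 1.0, None)
```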
  • 122.
How about PSF modeling in 2-PM systems? For deblurring
Optimal Multivariate Gaussian Fitting for PSF Modeling in Two-Photon Microscopy
Tim Tsz-Kit Lau, Emilie Chouzenoux, Claire Lefort, Jean-Christophe Pesquet http://doi.org/10.1109/ISBI.2018.8363621
The mathematical representation of the light distribution of this spread phenomenon (of an infinitesimally small point source) is well known and described by the Point Spread Function (PSF). The implementation of an efficient deblurring strategy often requires a preliminary step of experimental data acquisition, aiming at modeling the PSF, whose shape depends on the optical parameters of the microscope. The fitting model is chosen as a trade-off between its accuracy and its simplicity. Several works in this field have been inventoried and specifically developed for fluorescence microscopy [Zhang et al. 2007; Kirshner et al. 2012, 2013]. In particular, Gaussian models often lead to both tractable and good approximations of the PSF [Anthony and Granick 2009; Zhu and Zhang 2013]. Although there exists an important amount of work regarding Gaussian shape fitting [Hagen and Dereniak 2008; Roonizi 2013], to the best of our knowledge, these techniques remain limited to the 1D or 2D cases. Moreover, only few of them explicitly take into account the presence of noise. Finally, a zero background value is usually assumed (for instance, in the well-known Caruana et al. (1986) approach). All the aforementioned limitations severely reduce the applicability of existing methods for processing real 3D microscopy datasets.
In this paper, a novel optimization approach called FIGARO (a Fiji plugin) has been introduced for multivariate Gaussian shape fitting, with guaranteed convergence properties. Experiments have clearly illustrated the applicative interest of FIGARO in the context of PSF identification in two-photon imaging. The versatility of FIGARO makes it applicable to a wide range of application areas, in particular other microscopy modalities. The deblurring step is performed using the OPTIMISM toolbox from Fiji.
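For orientation, here is a plain least-squares baseline for 3D Gaussian PSF fitting with a constant background term (one of the limitations FIGARO addresses with a proper noise model and convergence guarantees; this sketch is not the FIGARO algorithm). Initial guesses are illustrative, with the axial sigma started larger than the lateral ones, as is typical for 2-PM PSFs.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss3d(coords, amp, x0, y0, z0, sx, sy, sz, bg):
    """Separable 3D Gaussian plus a constant background term."""
    x, y, z = coords
    g = amp * np.exp(-((x - x0)**2 / (2 * sx**2)
                       + (y - y0)**2 / (2 * sy**2)
                       + (z - z0)**2 / (2 * sz**2))) + bg
    return g.ravel()

def fit_psf(stack):
    """Least-squares fit of a bead stack (Z, Y, X) with a 3D Gaussian."""
    z, y, x = np.indices(stack.shape)
    p0 = [stack.max() - stack.min(), x.mean(), y.mean(), z.mean(),
          2.0, 2.0, 4.0, stack.min()]   # axial sigma > lateral sigma
    popt, _ = curve_fit(gauss3d, (x, y, z), stack.ravel(), p0=p0)
    return dict(zip(["amp", "x0", "y0", "z0", "sx", "sy", "sz", "bg"], popt))
```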
  • 123.
Deblurring ground truths from Adaptive Optics (AO) systems
Adaptive optical fluorescence microscopy
Na Ji
Nature Methods volume 14, pages 374–380 (2017) https://doi.org/10.1038/nmeth.4218
Ji et al. (2012) "Characterization and adaptive optical correction of aberrations during in vivo imaging in the mouse cortex"
Lateral and axial images of GFP-expressing dendritic processes (mouse cortex, 2-PM, 170 μm)
Future directions: Though research efforts on AO microscopy have been largely focused on the most common modalities of single- or two-photon excitation fluorescence, similar approaches can improve the performance of optical microscopy in aberrating samples in general. Correcting aberrations is especially important for microscopy involving higher-order nonlinear optical processes, such as three-photon excitation fluorescence (Sinefeld et al. 2015) and third harmonic generation (Jesacher et al. 2009). Ultimately, the applications of AO to microscopy need to go beyond technical, proof-of-principle demonstrations. We need to make existing methods simple to use and robust in performance, as well as prove that AO can enable biological discoveries, which requires close collaborations between microscopists and biologists, as demonstrated recently (Sun et al. 2016). With the rapid incorporation into both diffraction-limited and super-resolution microscopy, one envisions that adaptive optics will soon be an essential element for all high-resolution imaging deep in multicellular specimens.
  • 124.
'Physical Deep Learning' for deriving Zernike coefficients
Deep learning wavefront sensing
Yohei Nishizaki, Matias Valdivia, Ryoichi Horisaki, Katsuhisa Kitaguchi, Mamoru Saito, Jun Tanida, and Esteban Vera
Optics Express Vol. 27, Issue 1, pp. 240-251 (2019) https://doi.org/10.1364/OE.27.000240
We present a new class of wavefront sensors by extending their design space based on machine learning. This approach simplifies both the optical hardware and image processing in wavefront sensing. We experimentally demonstrated a variety of image-based wavefront sensing architectures that can directly estimate Zernike coefficients of aberrated wavefronts from a single intensity image by using a convolutional neural network. We also demonstrated that the proposed deep learning wavefront sensor can be trained to estimate wavefront aberrations stimulated by a point source and even extended sources.
In this paper, we experimentally demonstrated the DLWFS with three preconditioners: overexposure, defocus, and scatter, for a point source and extended sources. The results showed that all of them can vastly improve the estimation accuracy obtained when performing in-focus image-based estimation. The applicability of the DLWFS to practical situations, e.g. cases with a large number of Zernike coefficients, a low luminous flux, and an extended field of view, should be investigated. The concept of the generalized preconditioner allows the design of innovative wavefront sensors (WFSs). In particular, the choice and optimization of the preconditioner for the DLWFS, which can be any optical transformation, is an open research question. Even other optical elements that are already used in traditional WFSs could potentially be used, such as a lenslet array or a pyramid. Nonetheless, the proposed DLWFS scheme has the advantage that it can be trained as mounted, without requiring further alignment or precision optics. Therefore, our proposed framework simplifies and rationalizes WFSs.
Schematic and experimental diagram of the deep learning wavefront sensor. LED: light emitting diode. P: polarizer. SLM: spatial light modulator. Xception: a convolutional neural network. DO: dropout layer. FC: fully connected layer.
The Human Eye and Adaptive Optics, Fuensanta A. Vera-Díaz, Nathan Doble (2012)
Thorlabs Shack-Hartmann Wavefront Sensors https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=5287
  • 125.
'Physical Model' + 'Deep Learning', e.g. for OCT Angiography
OCT Monte Carlo & Deep Learning https://www.slideshare.net/PetteriTeikariPhD/oct-monte-carlo-deep-learning
Normalized field autocorrelation function-based optical coherence tomography three-dimensional angiography
Jianbo Tang; Sefik Evren Erdener; Smrithi Sunil; David A. Boas (2019) https://doi.org/10.1117/1.JBO.24.3.036005
(a) Typical light propagation in a macrovessel (top) and a capillary (bottom); (b) left: en face MIP (over ∼200 μm in Z) of regular OCTA obtained after averaging 20 images; right: XZ cross-sectional image shows the "tail" artifacts in the axial direction; and (c) g1(τ) with time lags spanning 4 ms showing the decorrelation at selected positions (black, above vessel; red, inside vessel; and magenta, beneath vessel); top: g1(τ) decorrelation for the large vein marked in (b); bottom: g1(τ) decorrelation for the capillary marked in (b).
  • 126.
"Physical Ground Truths": choice of objective and lens parameters?
Comparison of objective lenses for multiphoton microscopy in turbid samples
Avtar Singh, Jesse D. McMullen, Eli A. Doris, and Warren R. Zipfel
Biomed Opt Express. 2015 Aug 1; 6(8): 3113–3127. https://dx.doi.org/10.1364%2FBOE.6.003113
Optimization of illumination and detection optics is pivotal for multiphoton imaging in highly scattering tissue, and the objective lens is the central component in both of these pathways. To better understand how basic lens parameters (NA, magnification, field number) affect fluorescence collection and image quality, a two-detector setup was used with a specialized sample cell to separate measurement of total excitation from epifluorescence collection. Our data corroborate earlier findings that low-mag lenses can be superior at collecting scattered photons, and we compare a set of commonly used multiphoton objective lenses in terms of their ability to collect scattered fluorescence, providing guidance for the design of multiphoton imaging systems. For example, our measurements of epi-fluorescence beam divergence in the presence of scattering reveal minimal beam broadening, indicating that often-advocated over-sized collection optics are not as advantageous as previously thought.
Experimental apparatus used to measure lens transmittance. Experimental setup for two-channel detection of epi-collected and transmitted fluorescence. Laser illumination was focused through a scattering medium into a solution of fluorescein. Emissions were collected in both epifluorescence and transmission channels. A confocal pinhole in the lower path was used to reject any back-scattered light from the bead layer. An iris in the upper channel was adjusted to controllably vignette the beam in order to measure the emission beam divergence.
Epi-collection objective lens characteristics in scattering media. (a) Ratios of counts in the epifluorescence channel to counts in the transmission channel for each lens at zs = 0 (water), 3 and 5, showing the decrease in epi-collection efficiencies as a function of sample scattering. (b) Normalized ratios (relative to the zs = 0 value for each lens) with data taken over a larger number of zs values. Error bars are SEM.
Only to 900 nm!
  • 127.
"Physical Ground Truths" beyond 900 nm for three photons: objectives?
Transmittance Characterization of Objective Lenses Covering all Four Near-Infrared Optical Windows and its Application to Three-Photon Microscopy Excited at 1820 nm
Ke Wang; Wenhui Wen; Hongji Liu; Yu Du; Ziwei Zhuang; Ping Qiu
IEEE Photonics Journal (Volume: 10, Issue: 3, June 2018) https://doi.org/10.1109/JPHOT.2018.2828435
The transmittance data of objective lenses covering all four optical windows (the 800-nm, the 1300-nm, the 1700-nm, and the 2200-nm window) are unknown, nor could they be provided by manufacturers. This poses a notable obstacle for imaging, especially at the III and IV windows. Here, through experimental measurement, we establish a detailed transmittance database covering all four windows, and further demonstrate its guidance to imaging at 1820 nm. High-numerical-aperture (NA) objective lenses are needed in optical microscopy to both deliver sufficient excitation light and collect the signal light efficiently, to enable deep-tissue imaging. The transmittance performances of objective lenses are of vital importance. However, there is a lack of experimental characterization of the transmittance, especially at long wavelengths, which poses a dramatic obstacle for lens selection in imaging experiments. Here, we demonstrate detailed measurement results of the transmittance performance of air, water-immersion, and oil-immersion objectives available to us, covering all four NIR optical windows.
Measured objectives: UPLSAPO40X2, UPLFLN 40X, N PLAN, XLPLN25XWMP2-SP1700, XLPLN25XWMP2, XLPL25XVMP2, UPLSAPO 30X SIR, UPLSAPO 60XO (olympus-lifescience.com)
We can easily find that the customized objective lens XLPLN25XWMP2-SP1700 has the highest transmittance at 1820 nm. Besides, it has high transmittance at both the 3-photon fluorescence (89.6% at 645 nm) and THG (92.9% at 607 nm) signal wavelengths, making it efficient for both excitation and signal delivery.
  • 128.
"Physical Ground Truths" beyond 900 nm for three photons: PMT?
Comparison of Signal Detection of GaAsP and GaAs PMTs for Multiphoton Microscopy at the 1700-nm Window
Yuxin Wang; Kai Wang; Wenhui Wen; Ping Qiu; Ke Wang
IEEE Photonics Journal (Volume: 8, Issue: 3, June 2016) https://doi.org/10.1109/JPHOT.2016.2570005
Signal depletion is currently the limiting factor for imaging depth at the 1700-nm window in MPM. Thus, efficient signal detection is an effective means to further boost imaging depth. GaAsP and GaAs PMTs are commonly used for signal detection. Our results show that with 1667-nm excitation, the GaAsP PMT is more efficient for signal detection of the 3-photon fluorescence of the quantum dot Qtracker 655, the third-harmonic generation signal, and the 4-photon fluorescence of fluorescein, whereas the GaAs PMT is far superior in detecting the second-harmonic generation signal. The measured results are in good agreement with theoretical calculations based on wavelength-dependent cathode radiant sensitivities. We expect that our results will offer guidelines for PMT selection for MPM at the 1700-nm window.
  • 129.
"Physical Ground Truths" beyond 900 nm for three photons: immersion medium?
Order-of-magnitude multiphoton signal enhancement based on characterization of absorption spectra of immersion oils at the 1700-nm window
Ke Wang, Wenhui Wen, Yuxin Wang, Kai Wang, Jiexing He, Jiaqi Wang, Peng Zhai, Yanfu Yang, and Ping Qiu
Optics Express (2017) https://doi.org/10.1364/OE.25.005909
See also https://doi.org/10.1002/jbio.201800263 for 3-PM
Based on these measured results, glycerol/D2O mixture immersion shows lower absorption than glycerol/water mixture immersion. For oil immersion, within the 1700-nm window, 1600-nm excitation should be selected due to the much smaller absorption by the immersion medium. We further note that compared with 800-nm and 1300-nm excitation, 1700-nm excitation suffers from more water absorption in biological samples, which could potentially lead to more heating and temperature rise. However, due to the low excitation power used (a few mW on the sample surface and even less at the focus), based on our calculation and previous experiments on the mouse brain (22 mW on the surface), we expect the temperature rise would be only a fraction of 1 K and will not lead to thermal damage.
Measured absorption spectra αA (cm⁻¹) of glycerol, water, D2O and mixtures of glycerol and water or D2O with different volume ratios.
3D THG (third harmonic generation) imaging stacks of the mouse ear with 1.8-mW 1600-nm excitation (a) and 2.3-mW 1700-nm excitation (b) after the objective lens and before the immersion oil. 2D images corresponding to (a) and (b) at different depths (60 µm, 92 µm, 154 µm, and 200 µm below the surface) are shown in (c) and (d), respectively, with THG (red) and SHG (green) signals acquired simultaneously. The arrow, arrowhead, and circles in (c) indicate corneocytes, a sebaceous gland, and the same adipocyte, respectively. Scale bars: 30 µm.
  • 130.
"Physical Ground Truths": effect of the laser beam shape
Volumetric two-photon microscopy with a non-diffracting Airy beam
Xiao-Jie Tan, Cihang Kong, Yu-Xuan Ren, Cora S. W. Lai, Kevin K. Tsia, and Kenneth K. Y. Wong
Optics Letters Vol. 44, Issue 2, pp. 391-394 (2019) https://doi.org/10.1364/OL.44.000391 + PDF
We demonstrate volumetric two-photon microscopy (TPM) using the non-diffracting Airy beam as illumination. Direct mapping of the imaging trajectory shows that the Airy beam extends the axial imaging range around six times longer than a traditional Gaussian beam does along the propagation direction, while maintaining a comparable lateral width. Benefiting from its non-diffracting nature, the TPM with Airy beam illumination is able not only to capture a volumetric image within a single frame, but also to acquire image structures behind a strongly scattering medium.
Meanwhile, unlike the traditional Gaussian TPM, which is very sensitive to the position of the sample, Airy TPM is more robust against the axial motion of the samples, since it captures all the axial images within the effective focal length. The skipped axial scan avoids additional noise as well as possible time decay. Moreover, the diffraction-free nature of Airy beams assures deep penetration and less scattering while imaging deep scattering samples. In all, these advantages make Airy TPM a potential tool for real-time monitoring of deep biological activities in a large volume, e.g., the transient reactions of neurons in large and deep brain tissue. We anticipate that the current study will promote the application of Airy beams in optical imaging and contribute to biological research.
  • 131.
Why all this physics/optics instead of data/computer science?
https://xkcd.com/793/
The "software guy" won't necessarily be obnoxious, but you can advance your workflow more through a "systems approach": optimally involving people from the start who understand the optics, the neuroscience and the deep learning involved, with everyone understanding a bit of each other's challenges. Rather than, as a neuroscientist, just "pressing the button" of the microscope and throwing the stack(s) to the software guy and asking him/her to find the vessels?
  • 132.
"Physical Ground Truths" with some hardware/software help for your microscope
PySight: plug and play photon counting for fast intravital microscopy
Hagai Har-Gil, Lior Golgher, Shai Israel, David Kain, Ori Cheshnovsky, Moshe Parnas, Pablo Blinder
Optica (2018) https://www.biorxiv.org/content/10.1101/316125v3.abstract https://github.com/PBLab/python-pysight
Imaging increasingly large neuronal populations at high rates has pushed multi-photon microscopy into the photon-deprived regime. We present PySight, an add-on hardware and software solution tailored for photon-deprived imaging conditions. PySight more than triples the median amplitude of neuronal calcium transients in awake mice, and facilitates single-trial intravital voltage imaging in fruit flies. Its unique data streaming architecture allowed us to image a fruit fly's olfactory response over 234 × 600 × 330 µm³ at 73 volumes per second, outperforming top-tier imaging setups while retaining over 200 times lower data rates. PySight requires no electronics expertise nor custom synchronization boards, and its open-source software is extensible to any imaging method based on single-pixel (bucket) detectors. PySight offers an optimal data acquisition scheme for ever-increasing imaging volumes of turbid living tissue.
The imaging setup of the proposed system and representative in vivo images taken from an awake mouse expressing a genetically encoded calcium indicator under a neuronal promoter (Thy1-GCaMP6f). a) A typical multi-photon imaging setup, depicted in gray, can be easily upgraded to encompass the multiscaler and enable photon-counting acquisition (blue). The output of the PMTs, after optional amplification by fast preamplifiers, is relayed to the multiscaler's analog inputs (STOP1 and STOP2), where it is discretized, time-stamped and logged. Finally, the PySight software package, provided with this article, processes the logged data into multi-dimensional time series. Additionally, the multiscaler's SYNC port can output the discriminated signal for a specific PMT, enabling simultaneous digital acquisition and monitoring of the discriminated signal through the analog imaging setup. b) Images produced by analog and digital acquisition schemes. Images were summed over 200 frames taken at 15 Hz. Scale bar is 50 µm. DM - dichroic mirror. PMT - photomultiplier tube. Preamp - preamplifier. ADC - analog-to-digital converter.
Electrical pulses following photon detections in each PMT are optionally amplified with a high-bandwidth preamplifier (TA1000B-100-50, Fast ComTec). The amplified pulses are then conveyed to an ultrafast multiscaler (MCS6A, Fast ComTec), where a software-controlled discriminator threshold determines the voltage amplitude that will be accepted as an event. The arrival time of each event is registered at a temporal resolution of 100 picoseconds, with no dead time between events.
  • 133.
How many photons in practice?
Adaptive optics in multiphoton microscopy: comparison of two, three and four photon fluorescence
David Sinefeld, Hari P. Paudel, Dimitre G. Ouzounov, Thomas G. Bifano, and Chris Xu
Optics Express Vol. 23, Issue 24, pp. 31472-31483 (2015) https://doi.org/10.1364/OE.23.031472
We showed experimentally that the effect of aberrations on the signal increases exponentially with the order of nonlinearity in a thick fluorescent sample. Therefore, the impact of AO on higher-order nonlinear imaging is much more dramatic. We anticipate that the signal improvement shown here will serve as a significant enhancement to current 3PM, and perhaps for future 4PM systems, allowing imaging deeper and with better resolution in biological tissues.
Phase correction for a 2-m-focal-length cylindrical lens for 2-, 3- and 4-photon excited fluorescence of Alexa Fluor 790, Sulforhodamine 101 and Fluorescein. (a) Left: 4-photon fluorescence convergence curve showing a signal improvement factor of ×320. Right: final phase applied on the SLM. (b) Left: 3-photon fluorescence convergence curve showing a signal improvement factor of ×40. Right: final phase applied on the SLM. (c) Left: 2-photon fluorescence convergence curve showing a signal improvement factor of ×2.1.
Why not 4-PM, if the sectioning improves with increased nonlinear order? There are spectral absorbers in the brain, so "near-infrared" (NIR) optical windows set the optimal wavelengths.
(left) The spectral response of oxygenated hemoglobin, deoxygenated hemoglobin, and water as a function of wavelength. The red highlighted area indicates the biological optical window where absorption by the body is at a minimum (Doane and Burda, 2012). (right) Wavelength-dependent attenuation length in brain tissue and measured laser characteristics. Attenuation spectrum of a tissue model based on Mie scattering and water absorption, showing the absorption length of water (la, blue dashed line), the scattering length of mouse brain cortex (ls, red dash-dotted line), and the combined effective attenuation length (le, green solid line). The red stars indicate the attenuation lengths reported for mouse cortex in vivo from previous work [Kobat et al., 2009]. The figure shows that the optimum wavelength window (for three-photon microscopy) in terms of tissue penetration is near 1,700 nm when both tissue scattering and absorption are considered. Horton et al. (2013)
  • 134.
Image Smoothing with Image Restoration?
In theory, an additional "deep intermediate target" could help the final segmentation result, as you want your network "to pop out" the vasculature, without the texture, from the background. In practice, think of how to either obtain the intermediate target in such a way that you do not throw any details away (see Xu et al. 2015), or employ a Noise2Noise-type network for edge-aware smoothing as well. And check the use of bilateral kernels in deep learning (see e.g. Barron and Poole 2015; Jampani et al. 2016; Gharbi et al. 2017; Su et al. 2019). The proposal of Su et al. 2019 seems like a good starting point if you are into making this happen?
RAW → after IMAGE RESTORATION → edge-aware IMAGE SMOOTHING
  • 135.
  • 136.
Vesselness (tubular) filters for "early" segmentation approaches
Detecting Irregular Curvilinear Structures in Gray Scale and Color Imagery using Multi-Directional Oriented Flux (MDOF)
Engin Türetken, Carlos Becker, Przemysław Głowacki, Fethallah Benmansour, Pascal Fua
CVLab, EPFL, Lausanne, Switzerland (2013) https://doi.ieeecomputersociety.org/10.1109/ICCV.2013.196
We presented and validated a new tubularity measure that performs better than existing approaches on irregular structures whose cross sections deviate from circularly symmetric profiles. This is important because many imaging modalities produce irregular structures as a result of noise, point spread function blur, and non-uniform staining, among others.
Gradient-based enhancement of tubular structures in medical images (LDOG)
Rodrigo Moreno, Örjan Smedby
School of Technology and Health, KTH Royal Institute of Technology; Center for Medical Image Science and Visualization (CMIV), Linköping University, Sweden (2015) https://doi.org/10.1016/j.media.2015.07.001
Vesselness estimation through higher-order orientation tensors
Rodrigo Moreno, Örjan Smedby
School of Technology and Health, KTH Royal Institute of Technology; Center for Medical Image Science and Visualization (CMIV), Linköping University, Sweden (2016)
It is worthwhile to point out that the proposed generalization of the relationship between spherical harmonics and higher-order tensors is applicable to other applications where the spherical harmonics transform is required. For example, the vesselness method proposed in Rivest-Hénault and Cheriet, 2013, which is also based on spherical harmonics, could also benefit from the proposed method. Our current research includes a more exhaustive evaluation with more datasets.
Frangi, Optimally Oriented Flux (OOF) variants such as plain OOF, OOF-OFA, and MDOF, etc.
  • 137.
Vesselness visualized for 2-PM
Maximum Intensity Projection (MIP) of the raw input image vs. Maximum Intensity Projection (MIP) of the "OOF vesselness":
Law, Max W.K., and Albert C.S. Chung. "Three dimensional curvilinear structure detection using optimally oriented flux." European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2008. https://doi.org/10.1007/978-3-540-88693-8_27
Classical vesselness enhancement: image (Poisson) noise now affects the vesselness enhancement. An end-to-end deep learning network instead learns a representation (segmentor) that is invariant "automagically" to all possible artifacts and image noise. You do not want to play with the "millions of tunable parameters" of these old-school algorithms. But you can think about whether some of this "expert knowledge" could be incorporated into the deep learning loss functions?
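There is no OOF implementation in scikit-image, but the classical multiscale Hessian-based Frangi filter, whose noise sensitivity the slide illustrates, is one call away and makes a reasonable "early segmentation" baseline (the sigma range is an assumption to tune to your vessel radii):

```python
import numpy as np
from skimage.filters import frangi

def vesselness_3d(volume, sigmas=(1, 2, 3, 4)):
    """Multiscale Hessian-based Frangi vesselness on a 3D two-photon
    stack; bright tubes on a dark background. Sigmas (in voxels)
    should roughly span the expected vessel radii."""
    v = (volume - volume.min()) / (np.ptp(volume) + 1e-8)  # scale to [0, 1]
    return frangi(v, sigmas=sigmas, black_ridges=False)
```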
  • 138.
  • 139.
Recurrent Models
Learning long-range spatial dependencies with horizontal gated recurrent units
https://arxiv.org/abs/1805.08315 -> https://github.com/serre-lab/hgru_share
See also https://arxiv.org/abs/1811.11356
Why is it that a CNN can accurately detect contours in a natural scene like Fig. 1a but also struggle to integrate paths in the stimuli shown in Fig. 1b? In principle, the ability of CNNs to learn such long-range spatial dependencies is limited by their localized receptive fields (RFs); hence the need to consider deeper networks, because they allow the buildup of larger and more complex RFs. Here, we use a large-scale analysis of CNN performance on the Pathfinder challenge to demonstrate that simply increasing depth in feedforward networks constitutes an inefficient solution to learning the long-range spatial dependencies needed to solve the Pathfinder challenge.
A vasculature tree is more like this: defining "vesselness" is easier using the whole volume rather than a small subvolume, whereas e.g. a flower does not require such long-range spatial analysis. Can we design more efficient models for elongated thin structures with some recurrence, instead of relying on feed-forward models?
  • 140.
"Long-range spatial dependencies": what does this mean? I.e., what is the network "looking at" when making a decision on the class of a given voxel? https://arxiv.org/abs/1603.05959
Contrast is quite low already for such a small receptive field, and it is easier to get the "correct ground truth" with a larger receptive field. It helps the segmentation quality to consider the "whole spatial range" of the 3D stack.
  • 141.
  • 142.
Centerline algorithm basics
Multiscale Centerline Detection
Amos Sironi, Engin Türetken, Vincent Lepetit, and Pascal Fua
CVLab, EPFL, Lausanne, Switzerland (2016) https://doi.org/10.1109/TPAMI.2015.2462363
We have introduced an efficient regression-based approach to centerline detection, which we showed to outperform both methods based on hand-designed filters and classification-based approaches. The output of our method can be used in combination with tracing algorithms requiring a scale-space tubularity measure as input, increasing accuracy also on this task. Our approach is very general and applicable to other linear structure detection tasks when training data is available. For example, we obtained an improvement over the state of the art when training it to detect boundaries in natural images.
  • 143.
A closer look at DeepCenterline: task formulations
DeepCenterline: a Multi-task Fully Convolutional Network for Centerline Extraction
Zhihui Guo, Junjie Bai, Yi Lu, Xin Wang, Kunlin Cao, Qi Song, Milan Sonka, Youbing Yin (Submitted on 25 Mar 2019) https://arxiv.org/abs/1903.10481
Due to the large variations in branch radius (the coronary artery proximal radius can be five times bigger than the distal radius), a straightforward Euclidean distance transform computation generates a centerline distance map with a largely variable range of values at different sections of the branch. Obtaining a centerline consistently well positioned in the "center" from beginning to end requires tricky balancing of cost image contrast between thick and thin sections. To achieve the desired scale-invariance property, we propose to use an FCN to generate a locally normalized centerline distance map.
Branch endpoint detection: different from the centerline distance map, which consists of continuous values inside the whole segmentation mask, branch endpoints are just a few isolated points. Directly predicting these points using a voxel-wise classification or segmentation framework is not feasible due to the extreme class imbalance. To tackle the class imbalance problem, a voxel-wise endpoint confidence map is generated by constructing a Gaussian distribution around each endpoint to occupy a certain area spatially. The FCN is then trained to predict the endpoint confidence map, which has a more balanced ratio between nonzero and zero voxels.
  • 144.
Segmentation as a Regression Problem: DeepCenterline
DeepCenterline: a Multi-task Fully Convolutional Network for Centerline Extraction
Zhihui Guo, Junjie Bai, Yi Lu, Xin Wang, Kunlin Cao, Qi Song, Milan Sonka, Youbing Yin (Submitted on 25 Mar 2019) https://arxiv.org/abs/1903.10481
We will come back to the use of the distance transform in medical segmentation, so remember this!
Comparison of centerline distance map prediction with and without attention. a) Coronary artery segmentation mask. b) A cross-sectional view of the segmentation mask; note the "staircased profile" and the possibility of a subvoxel NURBS fit. c) Centerline distance map without attention module. d) Centerline distance map with attention module. e) Centerline distance map values along the profile line shown as a double-arrowed line in b). With attention, the centerline distance map shows a high peak around the centerline instead of the plateau produced by the model without attention. The attention really seems to help localize the centerline, instead of the broader "lumen" detection without it.
Distance transform of the vessel mask in Fiji, with the centerline having the lowest (highest) value; Gaussian blur for the branch endpoint; Gaussian blur overlaid on the boundary map of the vessel mask, for comparison of your options.
Petteri's toy demo (distance transform → CLAHE → LOG10 transform): "scale invariance" for the distance map, with the hope of having the same intensity on the centerline of big and small vessels, with some intensity gradient (not that visible on large vessels anymore after the LOG transform). A sketch of this in Python follows.
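A minimal Python version of that toy demo, using the label mask only. The CLAHE step is applied per z-slice to stay compatible with older scikit-image versions, and the log mapping is one arbitrary choice of compression:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage import exposure

def normalized_centerline_target(mask, voxel_size=(1.0, 1.0, 1.0)):
    """Euclidean distance transform of a binary vessel mask, followed
    by CLAHE and log compression, so that the centerlines of thick and
    thin vessels end up at comparable intensities."""
    dist = distance_transform_edt(mask, sampling=voxel_size)
    dist = dist / (dist.max() + 1e-8)                          # to [0, 1]
    eq = np.stack([exposure.equalize_adapthist(s) for s in dist])  # CLAHE per slice
    return np.log10(1.0 + 9.0 * eq)                            # maps [0, 1] -> [0, 1]
```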
  • 145.
Topology enforcement with deep learning
Iterative Deep Retinal Topology Extraction
Carles Ventura, Jordi Pont-Tuset, Sergi Caelles, Kevis-Kokitsi Maninis, Luc Van Gool (2018) https://doi.org/10.1007/978-3-030-00500-9_15 https://github.com/carlesventura/iterative-deep-learning
Download graph annotations for the DRIVE dataset from the website: http://people.duke.edu/~sf59/Estrada_TMI_2015_dataset.htm
Building on top of a global model that performs a dense semantic classification of the pixels of the image, we design a Convolutional Neural Network (CNN) that predicts the local connectivity between the central pixel of an input patch and its border points. By iterating this local connectivity we sweep the whole image and infer the global topology of the filamentary network, inspired by a human delineating a complex network with the tip of their finger.
  • 146.
Vascular topology as Graph
Two-Photon Imaging of Cortical Surface Microvessels Reveals a Robust Redistribution in Blood Flow after Vascular Occlusion
Chris B. Schaffer, Beth Friedman, Nozomi Nishimura, Lee F. Schroeder, Philbert S. Tsai, Ford F. Ebner, Patrick D. Lyden, David Kleinfeld
January 3, 2006 https://doi.org/10.1371/journal.pbio.0040022
In humans, damage to microvessels is a known pathological condition. In particular, occlusion of small-scale arterioles is a likely cause of clinically silent lacunar infarcts that are correlated with an increased risk of dementia and cognitive decline. It is thus interesting that the Rotterdam Scan study, which identified clinically silent lacunar infarcts through magnetic resonance imaging, found that few cortical infarcts were located near the surface, where the vasculature appears to be most redundant. Our results for surface blood flow dynamics suggest an emerging relation between vascular topology and susceptibility to stroke in different regions of the brain.
Examples of Flow Changes that Result from Localized Occlusion of a Cortical Surface Arteriole. (A–C) On the left and right are TPLSM images taken at baseline and after photothrombotic clotting of an individual vessel, respectively. Left center and right center are diagrams of the surface vasculature with RBC speeds (in mm/s) and directions indicated. The red X indicates the location of the clot, and vessels whose flow direction has reversed are indicated with red arrows and labels. In the examples of panels (A) and (B) we show maximal projections of image stacks, whereas the example in panel (C) shows single TPLSM planar images; the streaks evident in the vessels in these latter frames are due to RBC motion, and the dashed box in the diagrams represents the area shown in the images.
  • 147.
Vascular Mesh Modeling
Mainly for clinical patient-specific modeling from MRA, for surgical planning, 3D printing personalized vasculature for surgical training, etc.
  • 148.
Example of a 2-PM Mesh Analysis case
Cerebral microvascular network geometry changes in response to functional stimulation
Liis Lindvere, Rafal Janik, Adrienne Dorr, David Chartash, Bhupinder Sahota, John G. Sled, Bojana Stefanovic
Imaging Research, Sunnybrook Research Institute; Mouse Imaging Centre, The Hospital for Sick Children; Department of Medical Biophysics, University of Toronto
NeuroImage 71 (2013) 248–259 http://dx.doi.org/10.1016/j.neuroimage.2013.01.011
The anatomical data were segmented using semi-automated analysis via commercially available software (Imaris, Bitplane, Zurich). Prior to segmentation, the data were subjected to edge-preserving 3D anisotropic diffusion filtering. Thereafter, the intravascular space was identified based on a range of user-supplied signal intensity thresholds corresponding to the background and foreground signal intensity ranges. The labor-intensive semi-automated segmentation was followed by removal of hair-like terminal branches. The resulting volumes were next skeletonized, with the network sampled roughly every 1 μm, and the aforementioned graph data structure produced. The local tangents to the vessel were evaluated at each vertex following spline interpolation to the vertices' locations.
Voxel presentation of the volume vs. mesh presentation of the volume, with the mean radius quantified along the vessels.
You could try Screened "Poisson Reconstruction" from MeshLab if you do not have an Imaris license, and the "Shape Diameter Function" from CGAL for some segmentation of the mesh. A minimal skeleton-plus-radius sketch follows.
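A minimal open-source stand-in for the Imaris skeletonization step, combining a 3D skeleton with a distance-transform radius estimate (on older scikit-image use `skeletonize_3d` instead of `skeletonize` for 3D masks):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def skeleton_with_radii(vessel_mask):
    """Skeletonize a binary vessel volume and read off a local radius
    estimate at every skeleton voxel from the Euclidean distance
    transform of the mask (radius in voxels)."""
    skel = skeletonize(vessel_mask)           # 3D skeleton (Lee's method)
    radii = distance_transform_edt(vessel_mask)
    return skel, np.where(skel, radii, 0.0)   # radius only on the skeleton
```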
  • 149.
Patient-specific Mesh models
Application of Patient-Specific Computational Fluid Dynamics in Coronary and Intra-Cardiac Flow Simulations: Challenges and Opportunities
Liang Zhong, Jun-Mei Zhang, Boyang Su, Ru San Tan, John C. Allen and Ghassan S. Kassab
National Heart Centre Singapore, National Heart Research Institute of Singapore, Singapore; Duke-NUS Medical School, Singapore; California Medical Innovations Institute, San Diego, CA, United States
https://doi.org/10.3389/fphys.2018.00742
The emergence of patient-specific computational fluid dynamics (CFD) has paved the way for the new field of computer-aided diagnostics. This article provides a review of CFD methods, challenges and opportunities in coronary and intra-cardiac flow simulations. It includes a review of market products and clinical trials. Key components of patient-specific CFD are covered briefly, including image segmentation, geometry reconstruction, mesh generation, fluid-structure interaction, and solver techniques.
Distributions of (A) P (pressure), (B) WPG (wall pressure gradient), (C) WSS (wall shear stress), (D) OSI (oscillatory shear index), (E) RRT (relative residence time), and (F) SPA (stress phase angle) on virtually healthy and diseased left coronary artery trees, respectively.
  • 150.
NURBS and Vasculature Reconstruction?
The airfoils generated with BézierGAN do not look that alien to vasculature anymore when you consider cross-sections of vessels? Cubic curves (1D) -> bicubic surfaces (2D); Bézier curve -> Bézier surface.
NURBS Surface Approximation Using Rational B-spline Neural Networks
Tawfik El-Midany, Mohammed Elkhateeb, et al. (July 2011) www.researchgate.net
NURBS-Python
Onur Rauf Bingol, Adarsh Krishnamurthy
Department of Mechanical Engineering, Iowa State University, United States https://doi.org/10.1016/j.softx.2018.12.005 https://github.com/orbingol/NURBS-Python
We introduce NURBS-Python, an object-oriented, open-source, pure Python NURBS evaluation library with no external dependencies. The library is capable of evaluating single or multiple NURBS curves and surfaces, provides a customizable visualization interface, and enables importing and exporting data using popular CAD file formats.
2D surface → "3D volume"
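For example, fitting a smooth sub-voxel centerline representation with NURBS-Python (the `geomdl` package) takes a few lines; the control point coordinates below are made up for illustration:

```python
from geomdl import BSpline, utilities

# A cubic B-spline curve through control points sampled along one
# vessel centerline (illustrative coordinates, in voxel units).
curve = BSpline.Curve()
curve.degree = 3
curve.ctrlpts = [[0, 0, 0], [5, 2, 1], [10, 1, 3], [15, 4, 4], [20, 3, 6]]
curve.knotvector = utilities.generate_knot_vector(curve.degree,
                                                  len(curve.ctrlpts))
curve.delta = 0.01           # evaluation step along the parameter axis
points = curve.evalpts       # smooth sub-voxel centerline samples
```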
  • 151.
Remember our problem with staircased vasculature
"Anisotropic staircasing": best priors for isotropic reconstruction? https://doi.org/10.2312/VCBM%2FVCBM10%2F083-090
You would like to have a data-driven smooth surface fitted along the staircased Lego corgi (Lego corgi: r/corgi, Reddit), the same "as a dog".
  • 152.
DeepSpline for unsupervised 3D surface reconstruction
DeepSpline: Data-Driven Reconstruction of Parametric Curves and Surfaces
Jun Gao, Chengcheng Tang, Vignesh Ganapathi-Subramanian, Jiahui Huang, Hao Su, Leonidas J. Guibas
University of Toronto; Vector Institute; Tsinghua University; Stanford University; UC San Diego (Submitted on 12 Jan 2019) https://arxiv.org/abs/1901.03781
Reconstruction of geometry based on different input modes, such as images or point clouds, has been instrumental in the development of computer-aided design and computer graphics. Optimal implementations of these applications have traditionally involved the use of spline-based representations at their core. Most such methods attempt to solve optimization problems that minimize an output-target mismatch. However, these optimization techniques require an initialization that is close enough, as they are local methods by nature. We propose a deep learning architecture that adapts to perform spline fitting tasks accordingly, providing complementary results to the aforementioned traditional methods. To tackle challenges in the 2D case, such as multiple splines with intersections, we use a hierarchical Recurrent Neural Network (RNN) [Krause et al. 2017] trained with ground truth labels to predict a variable number of spline curves, each with an undetermined number of control points. In the 3D case, we reconstruct surfaces of revolution and extrusion without self-intersection through an unsupervised learning approach that circumvents the requirement for ground truth labels. We use the Chamfer distance to measure the distance between the predicted point cloud and the target point cloud. This architecture is generalizable, since predicting other kinds of surfaces (like surfaces of sweeping or NURBS) would require only a change of this individual layer, with the rest of the model remaining the same.
  • 153.
Inspiration for Ensemble Models
For each "block" (restoration, segmentation, etc.) you could have multiple models, and you use some sort of consensus of those outputs as the "ensembled output" for the next block.
  • 154.
High-level Segmentation CNN Ensemble
●
Model A, Model B, Model C: an ensemble of 3 independently trained models
●
Average the voxelwise (continuous-value) predictions into the ensembled output
●
Threshold into a binary mask: vessel vs. non-vessel
●
Uncertainty via MC Dropout or "Bayesian" batch normalization; or model it as a (Bayesian) sensor fusion problem?
Textbook ensemble methods: bagging, boosting, stacking. A minimal sketch follows.
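A minimal PyTorch sketch of the averaging-plus-MC-Dropout idea; note that only the dropout layers are switched back to train mode, so that batch-norm statistics stay frozen at test time:

```python
import torch

def enable_mc_dropout(model):
    """Put the model in eval mode but re-activate its dropout layers."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d,
                          torch.nn.Dropout3d)):
            m.train()

def ensemble_predict(models, volume, n_mc=10, threshold=0.5):
    """Average voxelwise probabilities over independently trained models
    and MC-dropout passes; the per-voxel std is a crude uncertainty map."""
    preds = []
    with torch.no_grad():
        for model in models:
            enable_mc_dropout(model)
            for _ in range(n_mc):
                preds.append(torch.sigmoid(model(volume)))
    preds = torch.stack(preds)
    mean = preds.mean(0)
    return mean > threshold, mean, preds.std(0)
```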
  • 155.
Inspiration for Multi-Task Approaches
Within each "block", we could have multiple tasks (learned simultaneously, e.g. centerline, vessel edges and segmentation mask).
  • 156.
High-level Segmentation CNN Multi-Task?
Multi-task with e.g. 3 different targets:
●
Task A: predict the vessel mask ("image segmentation" task)
●
Task B: predict the edge mask ("edge detection" task)
●
Task C: predict the distance map ("centerline detection" task)
A sketch of the joint loss follows.
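A minimal sketch of a joint loss over these three heads (Dice + BCE for the mask, BCE for edges, L1 for the distance map); the weights `w` are hyperparameters to tune, or to learn via uncertainty weighting (Kendall et al. 2018):

```python
import torch
import torch.nn.functional as F

def multitask_loss(outputs, targets, w=(1.0, 0.5, 0.5)):
    """outputs: (seg_logits, edge_logits, dist_pred) from the 3 heads;
    targets: (seg_mask, edge_mask, dist_map) as float tensors."""
    seg, edge, dist = outputs
    seg_t, edge_t, dist_t = targets
    p = torch.sigmoid(seg)
    dice = 1 - (2 * (p * seg_t).sum() + 1) / (p.sum() + seg_t.sum() + 1)
    l_seg = F.binary_cross_entropy_with_logits(seg, seg_t) + dice
    l_edge = F.binary_cross_entropy_with_logits(edge, edge_t)
    l_dist = F.l1_loss(dist, dist_t)
    return w[0] * l_seg + w[1] * l_edge + w[2] * l_dist
```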
  • 157.
What features can you "detect" from the stack?
Main task: 3D semantic segmentation (vessel mask).
"Strong" auxiliary tasks (no further labelling needed, derive these from the binary mask):
●
Vessel edges ("1st derivative")
●
"Curvature" ("2nd derivative", a second-order smoothness prior)
●
Distance map
"Weak" auxiliary tasks (are these that useful? these require manual annotation work):
●
Bifurcations (where? https://doi.org/10.3389/fninf.2011.00003)
●
Branch end points
●
Graph constraints for a watertight segmentation for CFD analysis
●
Inpainting masks / "outlier detection": GAN inpainting, i.e. hallucinate the missing vessels ("vessel breakage") to be continuous (cf. Photoshop Content-Aware Fill)
Idea: we hope to get a better vessel mask with the help of auxiliary tasks, compared to just trying to obtain the vessel mask with one loss function.
  • 158.
Auto-Labelling ImageJ/Fiji Example
●
1st derivative of intensity, "edge" / gradient magnitude: apply Gradient Magnitude from the Differentials plugin
●
2nd derivative of intensity, "curvature" / gradient magnitude of the gradient magnitude: apply the same filter twice
●
Distance map: Process – Binary – Distance Map
●
Quick'n'dirty local normalization of the distance map: Process – Enhance Local Contrast (CLAHE), e.g. maximum slope 1.50 vs. 3.00
The same chain in Python is sketched below.
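The same auto-labelling chain with SciPy, starting from the binary vessel mask alone (sigma is an illustrative smoothing scale):

```python
import numpy as np
from scipy import ndimage

def auxiliary_targets(mask, sigma=1.0):
    """Derive the 'strong' auxiliary targets from a binary vessel mask:
    edge map (gradient magnitude), 'curvature' map (gradient magnitude
    of the gradient magnitude) and a Euclidean distance map."""
    m = ndimage.gaussian_filter(mask.astype(float), sigma)
    edge = ndimage.gaussian_gradient_magnitude(m, sigma)           # 1st derivative
    curvature = ndimage.gaussian_gradient_magnitude(edge, sigma)   # 2nd derivative
    dist = ndimage.distance_transform_edt(mask)
    return edge, curvature, dist
```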
  • 159.
Using these "auxiliary scalar measures" in multi-task learning
Auxiliary Tasks in Multi-task Learning
Lukas Liebel, Marco Körner (Submitted on 16 May 2018) https://arxiv.org/abs/1805.06334
We extend multi-task learning by adding auxiliary tasks, which are of minor relevance for the application, to the set of learned tasks. As a kind of additional regularization, they are expected to boost the performance of the ultimately desired main tasks. Advanced driver assistance systems (ADAS) and autonomous vehicles need to gather information about the surroundings of the vehicle in order to be able to safely guide the driver or the vehicle itself through complex traffic scenes. Apart from traffic signs and lane markings, typical components of such scenes that need to be considered are other road users, i.e., primarily vehicles and pedestrians. When it comes to decision making, additional important local or global parameters will certainly be taken into account. Those include, i.a., object distances and positions, or the current time of day and weather conditions. The application of vision-based methodology seems natural in the context of RSU, since the rules of the road and signage were designed for humans, who mainly rely on visual inspection in this regard. Figure 1 shows an overview of our network architecture and illustrates the concept of auxiliary tasks, with SIDE and semantic segmentation serving as main tasks and the estimation of the time of day and weather conditions as auxiliary tasks.
  • 160.
“Auxiliary Microscope Learning” in a multi-task learning setting. Could knowing the BBB permeability (i.e. the focused ultrasound acoustic pressure, MPa) help the segmentation process by providing auxiliary information about the signal-to-background ratio? Most likely yes, but how much is another question. Nhan et al. (2013) https://doi.org/10.1016/j.jconrel.2013.08.029. Continuous blood pressure (NIBP or invasive) for monitoring anesthesia level and “vascular tone”: helpful for the segmentor network “understanding” the vessel shape? Needed in most cases if you are doing Neurovascular Unit studies, or if you want to quantify functional hyperemia, e.g. studies of the retinal trilaminar vascular network with a flickering-light protocol (Tess E. Kornfield and Eric A. Newman, 2014). Measuring Blood Pressure Using a Noninvasive Tail Cuff Method in Mice: https://doi.org/10.1007/978-1-4939-7030-8_6. Unveiling astrocytic control of cerebral blood flow with optogenetics: https://doi.org/10.1038/srep11455 (2015).
  • 161.
“Auxiliary Clinical Learning” in medical segmentation. Conditioning Convolutional Segmentation Architectures with Non-Imaging Data. Grzegorz Jacenków, Agisilaos Chartsias, Brian Mohr, Sotirios A. Tsaftaris. The University of Edinburgh; Canon Medical Research Europe; The Alan Turing Institute. 17 Apr 2019 (modified: 11 Jun 2019) MIDL 2019 https://openreview.net/forum?id=BklGUoAEcE
We compare two conditioning mechanisms based on concatenation and feature-wise modulation (FiLM, Perez et al., 2017) to integrate non-imaging information into convolutional neural networks for segmentation of anatomical structures. We apply the concatenation-based conditioning at three levels: early fusion with spatial replication of the input-level features, middle fusion at the latent space of the encoder-decoder networks, and late fusion before the last convolutional layer. In FiLM, our work focuses on applying FiLM layers along the decoder path (decoder fusion) and before the final convolutional layer (late fusion). As a proof of concept we provide the distribution of class labels obtained from ground truth masks to ensure strong correlation between the conditioning data and the segmentation maps. We evaluate the methods on the ACDC dataset, and show that conditioning with non-imaging data improves performance of the segmentation networks. We observed that conditioning the U-Net architectures was challenging, where no method gave significant improvement. However, the same architecture without skip connections outperforms the baseline with feature-wise modulation, and the relative performance increases as the training size decreases. (Perez et al., 2017)
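A minimal PyTorch sketch of a FiLM layer in the spirit of Perez et al. (2017), not the authors' exact implementation; the non-imaging vector could hold e.g. acoustic pressure or blood pressure values from the previous slide:

import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, n_nonimaging, n_channels):
        super().__init__()
        # Map the non-imaging vector to per-channel scale (gamma) and shift (beta)
        self.to_gamma_beta = nn.Linear(n_nonimaging, 2 * n_channels)

    def forward(self, feats, nonimaging):
        # feats: B x C x (spatial dims); nonimaging: B x n_nonimaging
        gamma, beta = self.to_gamma_beta(nonimaging).chunk(2, dim=1)
        shape = (-1, feats.shape[1]) + (1,) * (feats.dim() - 2)
        return gamma.view(shape) * feats + beta.view(shape)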
  • 162.
  • 163.
The Goal, with everything jointly learned: IMAGE RESTORATION (denoise, deblur, detect ‘broken vessels’, inpaint broken vessels) → IMAGE SEGMENTATION (3D “Euclidean” CNN on the input voxels, giving a 3D anisotropic voxel mask) → GRAPH RECONSTRUCTION (enforce connectivity; Bayesian graph convolutional neural networks https://arxiv.org/abs/1811.11103 → https://arxiv.org/abs/1902.10042) → MESH RECONSTRUCTION (‘fit shape primitives for isotropic reconstruction’, yielding a 3D isotropic voxel mask and a 3D mesh for CFD). Error propagation and heteroscedastic uncertainty throughout: partial volume and a non-ideal PSF cause problems with edge localization; ‘anisotropic staircasing’, so what are the best priors for isotropic reconstruction? (https://doi.org/10.2312/VCBM%2FVCBM10%2F083-090). Motion artifacts cause the uncertainty to be non-homoscedastic (https://doi.org/10.1038/srep04507). “Do not let image restoration oversmooth”; “constrain segmentation by a physiologically plausible vascular tree”.
  • 164.
  • 165.
Dice? A good “medical metric” for us? AVD? Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. Abdel Aziz Taha and Allan Hanbury, TU Wien. BMC Medical Imaging 2015 15:29 https://doi.org/10.1186/s12880-015-0068-x https://github.com/Visceral-Project/EvaluateSegmentation
Since metrics have different properties (biases, sensitivities), selecting suitable metrics is not a trivial task. This paper provides analysis of the 20 implemented metrics, in particular of their properties and suitability to evaluate segmentations, given particular requirements and segmentations with particular properties. The Dice coefficient [Dice 1945] (DICE), also called the overlap index, is the most used metric in validating medical volume segmentations. In addition to the direct comparison between automatic and ground truth segmentations, it is common to use the DICE to measure reproducibility (repeatability). Zou et al. 2004 used the DICE as a measure of reproducibility, as a statistical validation of manual annotation where segmenters repeatedly annotated the same MRI image. Contour is important: depending on the individual task, the contour can be of interest, that is, the segmentation algorithm should provide segments with boundary delimitation as exact as possible. Metrics that are sensitive to point positions (e.g. HD and AVD) are more suitable for evaluating such segmentations than others. Volumetric similarity (VS) is to be avoided in this case. The Hausdorff Distance (HD) is generally sensitive to outliers. Because noise and outliers are common in medical segmentations, it is not recommended to use the HD directly [Zhang and Lu 2004]. The Average Distance, or Average Hausdorff Distance (AVD), is the HD averaged over all points; the AVD is known to be stable and less sensitive to outliers than the HD. It is defined by AVD(A, B) = max(d(A, B), d(B, A)), where d(A, B) = (1/|A|) Σ_{a∈A} min_{b∈B} ||a − b|| is the directed average distance. The HD and the AVD are based on calculating the distances between all pairs of voxels, which makes them computationally very intensive, especially with large images. Therefore, to efficiently calculate the AVD, we use a modified version of the nearest neighbor (NN) algorithm proposed by Zhao et al. 2014, in which a 3D cell grid is built on the point cloud.
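For reference, a plain numpy/scipy sketch of Dice and a common surface-based AVD approximation (the EvaluateSegmentation tool linked above is the more rigorous option; the anisotropic voxel spacing below is an illustrative assumption):

import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

def average_hausdorff(a, b, spacing=(3.0, 1.0, 1.0)):  # z, y, x voxel size
    a, b = a.astype(bool), b.astype(bool)
    def directed(src, dst):
        # Distance, at every voxel, to the nearest voxel of dst
        dt = ndimage.distance_transform_edt(~dst, sampling=spacing)
        surface = src & ~ndimage.binary_erosion(src)  # boundary voxels of src
        return dt[surface].mean()
    return max(directed(a, b), directed(b, a))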
  • 166.
“Hausdorff distance loss” via distance transform. Reducing the Hausdorff Distance in Medical Image Segmentation with Convolutional Neural Networks. Davood Karimi, Septimiu E. Salcudean. University of British Columbia, Vancouver (submitted 22 Apr 2019) https://arxiv.org/abs/1904.10030
In this paper, we present novel loss functions for training convolutional neural network (CNN)-based segmentation methods with the goal of reducing the Hausdorff Distance (HD) directly. We propose three methods to estimate HD from the segmentation probability map produced by a CNN. One method makes use of the distance transform of the segmentation boundary. Another method is based on applying morphological erosion to the difference between the true and estimated segmentation maps. The third method works by applying circular/spherical convolution kernels of different radii on the segmentation probability maps. Our results show that the proposed loss functions can lead to approximately 18 − 45% reduction in HD without degrading other segmentation performance criteria such as the Dice similarity coefficient. To the best of our knowledge, this is the first work aimed at reducing HD in medical image segmentation. The methods presented in this paper may be improved in several ways. Faster implementation of the HD-based loss functions and more accurate implementation of the loss function based on morphological erosion would be useful. Moreover, extension of the methods to other applications such as vessel segmentation could also be pursued.
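A sketch of the first (distance-transform) estimator, paraphrased from the paper's formulation; dt_target and dt_pred are unsigned distance transforms of the ground-truth and thresholded predicted boundaries, recomputed outside the computation graph:

import torch

def hd_dt_loss(probs, target, dt_target, dt_pred, alpha=2.0):
    # Squared errors weighted by distance to the boundaries: mistakes far
    # from the true surface, which are what inflate HD, are penalized hardest
    weight = dt_target ** alpha + dt_pred ** alpha
    return ((probs - target) ** 2 * weight).mean()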
  • 167.
Combo loss, evaluated by Hausdorff distance but not optimized for it directly. Combo Loss: Handling Input and Output Imbalance in Multi-Organ Segmentation. Saeid Asgari Taghanaki, Yefeng Zheng, S. Kevin Zhou, Bogdan Georgescu, Puneet Sharma, Daguang Xu, Dorin Comaniciu, Ghassan Hamarneh. Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada; Siemens Healthineers (submitted 8 May 2018) https://arxiv.org/abs/1805.02798
As expected, we note that the final segmentations are affected by the choice of the parameter β, and the best results in terms of higher Dice and lower Hausdorff distance were obtained for β = 0.7 and β = 0.6 for the ultrasound and MRI datasets, respectively. As HD is sensitive to outliers, there are sometimes relatively large values in the HD results (i.e., second column in the figure). The key advantage of the proposed Combo loss is that it enforces a desired trade-off between the false positives and negatives (which cuts out post-processing) and avoids getting stuck in bad local minima as it leverages the Dice term. The Combo loss converges considerably faster than the cross-entropy loss during training. Similar to the focal loss, our Combo loss also has two parameters that need to be set. For some reason, the Average Hausdorff distance, which is more robust to outliers, was not used?
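A sketch of a Combo-style loss, a β-weighted cross-entropy term plus a soft Dice term; the sign and weighting conventions here are my paraphrase, so check against the paper before use:

import torch

def combo_loss(probs, target, alpha=0.5, beta=0.7, eps=1e-7):
    probs = probs.clamp(eps, 1 - eps)
    # beta > 0.5 penalizes false negatives more, beta < 0.5 false positives
    wce = -(beta * target * torch.log(probs)
            + (1 - beta) * (1 - target) * torch.log(1 - probs)).mean()
    dsc = (2 * (probs * target).sum() + eps) / (probs.sum() + target.sum() + eps)
    return alpha * wce - (1 - alpha) * dsc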
  • 168.
Boundary loss. Boundary loss for highly unbalanced segmentation. Hoel Kervadec, Jihene Bouchtiba, Christian Desrosiers, Éric Granger, Jose Dolz, Ismail Ben Ayed. ETS Montreal (submitted 17 Dec 2018) https://arxiv.org/abs/1812.07032 https://github.com/LIVIAETS/surface-loss (PyTorch)
Widely used loss functions for convolutional neural network (CNN) segmentation, e.g., Dice or cross-entropy, are based on integrals (summations) over the segmentation regions. Unfortunately, it is quite common in medical image analysis to have highly unbalanced segmentations, where standard losses contain regional terms with values that differ considerably (typically by several orders of magnitude) across segmentation classes, which may affect training performance and stability. The purpose of this study is to build a boundary loss, which takes the form of a distance metric on the space of contours, not regions. We argue that a boundary loss can mitigate the difficulties of regional losses in the context of highly unbalanced segmentation problems because it uses integrals over the boundary between regions instead of unbalanced integrals over regions. Furthermore, a boundary loss provides information that is complementary to regional losses. Unfortunately, it is not straightforward to represent the boundary points corresponding to the regional softmax outputs of a CNN. Our boundary loss is inspired by discrete (graph-based) optimization techniques for computing gradient flows of curve evolution. Following an integral approach for computing boundary variations, we express a non-symmetric L2 distance on the space of shapes as a regional integral, which completely avoids local differential computations involving contour points. Our boundary loss is the sum of linear functions of the regional softmax probability outputs of the network. Therefore, it can easily be combined with standard regional losses and implemented with any existing deep network architecture for N-D segmentation. Our experiments on two challenging and highly unbalanced datasets demonstrated the effectiveness of including the proposed boundary loss term during training. It consistently improved the performance, with a large margin on one dataset, and enhanced training stability. Even though we limited the experiments to 2-D segmentation problems, the proposed framework can be trivially extended to 3-D, which could further improve the performance of deep networks, as more context is analyzed.
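The loss itself reduces to a very small sketch: precompute the signed distance map of the ground-truth boundary (negative inside the object), then integrate it against the softmax output, typically added to a regional loss with a weight that is ramped up during training:

import torch

def boundary_loss(probs, signed_dist_gt):
    # signed_dist_gt: e.g. distance_transform_edt on the mask complement
    # minus distance_transform_edt on the mask, precomputed per volume
    return (probs * signed_dist_gt).mean()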
  • 169.
Sorensen-Dice loss from V-Net with a distance penalty. Distance Map Loss Penalty Term for Semantic Segmentation. Francesco Caliva, Claudia Iriondo, Alejandro Morales Martinez, Sharmila Majumdar, Valentina Pedoia. 17 Apr 2019 (modified: 11 Jun 2019) MIDL 2019 https://openreview.net/forum?id=B1eIcvS45V
Convolutional neural networks for semantic segmentation suffer from low performance at object boundaries. In medical imaging, accurate representation of tissue surfaces and volumes is important for tracking disease biomarkers such as tissue morphology and shape features. In this work, we propose a novel distance-map-derived loss penalty term for semantic segmentation. We propose to use distance maps, derived from ground truth masks, to create a penalty term, guiding the network's focus towards hard-to-segment boundary regions. We investigate the effects of this penalizing factor against cross-entropy, Dice, and focal loss, among others, evaluating performance on a 3D MRI bone segmentation task from the publicly available Osteoarthritis Initiative dataset. We observe a significant improvement in the quality of segmentation, with better shape preservation at bone boundaries and in areas affected by partial volume. We ultimately aim to use our loss penalty term to improve the extraction of shape biomarkers and derive metrics to quantitatively evaluate the preservation of shape. Figure: performance comparison (global and edge regions) of the proposed distance-map-penalizing loss term against the Dice loss, a confident-predictions-penalizing loss, and the focal loss. Remember our multi-task formulation from before: you could formulate the centerline and boundary detection problems as regression problems with the distance function.
  • 170.
Calibration issues: Brier score with uncertainty maps? Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI. Jörg Sander, Bob D. de Vos, Jelmer M. Wolterink, Ivana Išgum. Medical Imaging: Image Processing 2018. DOI: 10.1117/12.2511699
One important reason is the lack of reliability caused by models that fail unnoticed and often locally produce anatomically implausible results that medical experts would not make. Combining segmentations and uncertainty maps and employing a human-in-the-loop setting, we provide evidence that image areas indicated as highly uncertain regarding the obtained segmentation almost entirely cover regions of incorrect segmentations. The fused information can be harnessed to increase segmentation performance. Our results reveal that we can obtain valuable spatial uncertainty maps with low computational effort. In addition, we reveal that a valuable uncertainty measure can be obtained if the applied model is well calibrated, i.e. if the generated probabilities represent the likelihood of being correct. The quality of e-maps and u-maps depends on the calibration of the acquired probabilities. Previous work [6] revealed that loss functions differ regarding how well the generated probabilities represent the likelihood of being correct. Therefore, we trained the model with three different loss functions: soft-Dice (SD), cross-entropy (CE), and the Brier score (BS) [10], which is equal to the average gap between the softmax probabilities and the references. This provides information about both the accuracy and the uncertainty of the model. Computationally, the Brier score loss is equal to the squared error between the one-hot encoding of the correct label and its associated probability. We observe that baseline segmentation performance is highest when the model is trained with the Brier score loss, slightly lower for soft-Dice, and lowest when cross-entropy is used. Except for the soft-Dice loss, we note that u-maps and e-maps follow each other quite closely, which suggests that both carry similar information. Figure: reliability diagrams over all tissue classes together for the Brier, soft-Dice and cross-entropy loss functions. Blue (end-diastole) and green (end-systole) bars quantify the true positive fraction for each probability bin; red bars quantify the miscalibration of the model, where smaller indicates better. If the model were perfectly calibrated, the diagram would match the dashed line.
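A numpy sketch of the Brier score and of the binning behind such reliability diagrams; variable names are illustrative:

import numpy as np

def brier_score(probs, onehot):
    # Mean squared gap between softmax probabilities and one-hot references
    return np.mean(np.sum((probs - onehot) ** 2, axis=-1))

def reliability_bins(confidence, correct, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (confidence >= lo) & (confidence < hi)
        if m.any():  # per-bin confidence vs. accuracy; gaps indicate miscalibration
            print(f"[{lo:.1f},{hi:.1f}) conf={confidence[m].mean():.3f} "
                  f"acc={correct[m].mean():.3f}")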
  • 171.
How to quantify the uncertainty (UQ)? MC Dropout, Deep Ensembles and Conformal Prediction are all relatively easy to use in practice.
  • 172.
Uncertainty in Biology: a stereotypical cartoon of the mindset. RAW DATA → some FILTERING → segmentation by “custom script” → “ground truth”, and we have stats for the area differences between our study groups: Control 150.00 ± 25.88 pixels vs. Intervention 72.00 ± 15.81 pixels. Torture your data until you get a significant p-value. The standard deviations (uncertainty) only come from the differences in area; the uncertainty of how the area was calculated is not typically propagated to the final estimates. In reality, the uncertainty is probably a lot larger than you thought, but of course you are happier to operate with the zero error of your “custom script” :( As long as we have mean ± SD to compare, we are happy. Who cares about uncertainty propagation.
  • 173.
Uncertainty in Clinical Practice: very important! https://twitter.com/EricTopol/status/1119626922827247616
What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use. Sana Tonekaboni, Shalmali Joshi, Melissa D. McCradden, Anna Goldenberg. https://arxiv.org/abs/1905.05134
  • 174.
MC Dropout: a popular method. Does Your Model Know the Digit 6 Is Not a Cat? A Less Biased Evaluation of “Outlier” Detectors (2018). Alireza Shafaei, Mark Schmidt, and James J. Little. https://arxiv.org/abs/1809.04729
VGG-backed and ResNet-backed methods significantly differ in accuracy. The gap indicates the sensitivity of the methods to the underlying networks, which means that image classification accuracy may not be the only relevant factor in the performance of these methods. ODIN is less sensitive to the underlying network. Despite not enforcing mutual exclusivity, training the networks with KL loss instead of CE loss consistently reduces the accuracy of OOD detection methods on average. GitHub: yaringal/ConcreteDropout (cited by 80). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning (2015). Yarin Gal, Zoubin Ghahramani (cited by 896).
  • 175.
UQ: Aleatoric vs. epistemic uncertainty? From https://github.com/yaringal/ConcreteDropout/blob/master/concrete-dropout-keras.ipynb (lightly cleaned; the model's head outputs D mean channels followed by D log-variance channels):

import numpy as np

D = 1        # output dimensionality of the predicted mean
K_test = 20  # "draw" K times, i.e. 20 stochastic forward passes
MC_samples = np.array([model.predict(X_val) for _ in range(K_test)])  # K x N x 2D
means = MC_samples[:, :, :D]                        # K x N x D predicted means
epistemic_uncertainty = np.var(means, 0).mean(0)    # variance across MC samples
logvar = np.mean(MC_samples[:, :, D:], 0)           # predicted log-variance head
aleatoric_uncertainty = np.exp(logvar).mean(0)      # data noise the model predicts

https://arxiv.org/abs/1705.07832: Three types of uncertainty are often encountered in Bayesian modelling. Epistemic uncertainty (known also as ‘model uncertainty’) captures our ignorance about the models most suitable to explain our data; aleatoric uncertainty (known also as ‘risk’) captures noise inherent in the environment (remember alea iacta est); lastly, predictive uncertainty conveys the model's uncertainty in its output. Epistemic uncertainty reduces as the amount of observed data increases, hence its alternative name, “reducible uncertainty”. Aleatoric uncertainty captures noise sources such as measurement noise: noise that cannot be explained away even if more data were available (although this uncertainty can be reduced through the use of higher-precision sensors, for example). This uncertainty is often modelled as part of the likelihood, at the top of the model, where we place some noise corruption process on the function's output. Combining both types of uncertainty gives us the predictive uncertainty: the model's confidence in its prediction, taking into account noise it can explain away and noise it cannot. This uncertainty is often obtained by generating multiple functions from our model and corrupting them with noise (with precision τ). For some critique of this, see the discussion posted by u/sschoener (2018): [D] What is the current state of dropout as Bayesian approximation? https://www.reddit.com/r/MachineLearning/comments/7bm4b2/d_what_is_the_current_state_of_dropout_as/ with Ian Osband, DeepMind (@IanOsband). Alternative? → https://arxiv.org/abs/1806.03335. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? Alex Kendall, Yarin Gal. https://arxiv.org/abs/1703.04977 In (d) our model exhibits increased aleatoric uncertainty on object boundaries and for objects far from the camera. Epistemic uncertainty accounts for our ignorance about which model generated our collected data. This is a notably different measure of uncertainty, and in (e) our model exhibits increased epistemic uncertainty for semantically and visually challenging pixels. The bottom row shows a failure case of the segmentation model, where the model fails to segment the footpath due to increased epistemic, but not aleatoric, uncertainty.
  • 176.
Uncertainty of your uncertainty estimate? Can you trust it? Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek. Google Research, DeepMind (submitted 6 Jun 2019) https://arxiv.org/abs/1906.02530
Using distributional shift to evaluate predictive uncertainty: while previous work has evaluated the quality of predictive uncertainty on OOD inputs (Lakshminarayanan et al., 2017), there has not, to our knowledge, been a comprehensive evaluation of uncertainty estimates from different methods under dataset shift. Indeed, we suggest that effective evaluation of predictive uncertainty is most meaningful under conditions of distributional shift. One reason for this is that post-hoc calibration gives good results in independent and identically distributed (i.i.d.) regimes but can fail under even a mild shift in the input data. And in real-world applications, distributional shift is widely prevalent. Understanding questions of risk, uncertainty, and trust in a model's output becomes increasingly critical as the shift from the original training data grows larger. (SVI) Stochastic variational Bayesian inference, e.g. Wu et al. 2019. (Ensembles, M = 10): ensembles of M networks trained independently on the entire dataset using random initialization (Lakshminarayanan et al. 2016, cited by 245).
  • 177.
Subject-wise calibration issues? Ensembles the nicest. Assessing Reliability and Challenges of Uncertainty Estimations for Medical Image Segmentation. Alain Jungo, Mauricio Reyes (submitted 7 Jul 2019) https://arxiv.org/abs/1907.03338 https://github.com/alainjungo/reliability-challenges-uncertainty
Although many uncertainty estimation methods have been proposed for deep learning, little is known about their benefits and current challenges for medical image segmentation. Therefore, we report results of evaluating common voxel-wise uncertainty measures with respect to their reliability and limitations on two medical image segmentation datasets. Results show that current uncertainty methods perform similarly and, although they are well-calibrated at the dataset level, they tend to be miscalibrated at the subject level. Therefore, the reliability of uncertainty estimates is compromised, highlighting the importance of developing subject-wise uncertainty estimations. Additionally, among the benchmarked methods, we found auxiliary networks to be a valid alternative to common uncertainty methods, since they can be applied to any previously trained segmentation model. Unsurprisingly, the ensemble method yields rank-wise the most reliable results (Tab. 1) and would typically be a good choice (if the resources allow it). The results also revealed that methods based on MC dropout are heavily dependent on the influence of dropout on the segmentation performance. In contrast, auxiliary networks turned out to be a promising alternative to existing uncertainty measures: they perform comparably to other methods but have the benefit of being applicable to any high-performing segmentation network not optimized to predict reliable uncertainty estimates. No significant differences were found between using auxiliary feat. and auxiliary segm. Through a sensitivity analysis performed over all studied uncertainty methods, we could confirm our observation that different uncertainty estimation methods yield different levels of precision and recall. Furthermore, we observed that when using current uncertainty methods for correcting segmentations, the maximum benefit is attained when preferring a combination of low-precision segmentation models and uncertainty-based false positive removal. Our evaluation has several limitations worth mentioning. First, although the experiments were performed on two typical and distinctive datasets, they feature large structures to segment; the findings reported herein may differ for other datasets, especially if these consist of very small structures to be segmented. Second, the assessment of the uncertainty is influenced by the segmentation performance: even though we succeeded in building similarly performing models, their differences cannot be fully decoupled and neglected when analyzing the uncertainty. Overall, we aim with these results to point to the existing challenges for a reliable utilization of voxel-wise uncertainties in medical image segmentation, and to foster the development of subject/patient-level uncertainty estimation approaches under the condition of HDLSS. We recommend that the utilization of uncertainty methods ideally be coupled with an assessment of model calibration at the subject/patient level. The proposed conditions, along with the threshold-free ECE metric, can be adopted to test whether uncertainty estimations can be of benefit for a given task.
Ensembles: another way of quantifying uncertainties is by ensembling multiple models [Lakshminarayanan et al. 2017]. We combined the class probabilities over all K = 10 networks and used the normalized entropy as the uncertainty measure. The individual networks share the same architecture but were trained on different subsets (90%) of the training dataset with different random initialization to enforce variability. Auxiliary network: inspired by [DeVries and Taylor 2018; Robinson et al. 2018], where an auxiliary network is used to predict segmentation performance at the subject level, we apply an auxiliary network to predict voxel-wise uncertainties of the segmentation model by learning from the segmentation errors (i.e., false positives and false negatives). For the experiments, we considered two opposing types of auxiliary networks. The first one, named auxiliary feat., consists of three consecutive 1×1 convolution layers cascaded after the last feature maps of the segmentation network. The second, named auxiliary segm., is a completely independent network (the same U-Net as described in Sec. 2.2) that uses as input the original images and the segmentation masks produced by the segmentation model (generated by five-fold cross-validation). We normalized the output uncertainty subject-wise to [0, 1] for comparability purposes.
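The ensemble recipe above fits in a few lines of numpy; prob_stack is assumed to hold the K members' class probabilities:

import numpy as np

def ensemble_uncertainty(prob_stack):
    # prob_stack: K x N_voxels x C probabilities from K ensemble members
    mean_probs = prob_stack.mean(axis=0)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    # Normalize by log(C) so the uncertainty lies in [0, 1]
    return mean_probs, entropy / np.log(prob_stack.shape[-1])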
  • 178.
Short intro: uncertainty and its connection to clinical decision making. If you jointly did vascular segmentation and pathology classification, this would become relevant: your model outputs now have real-life costs in a health-economics sense. What if your new screening method has a sensitivity and specificity of 0.99? Is that actually useful for both patients and the payer (insurance company or public healthcare provider)?
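A worked example of why 0.99/0.99 may not be enough, using nothing but Bayes' rule; the 0.1% prevalence is an illustrative assumption:

def positive_predictive_value(sens, spec, prevalence):
    tp = sens * prevalence              # fraction of the population: true positives
    fp = (1 - spec) * (1 - prevalence)  # fraction of the population: false positives
    return tp / (tp + fp)

print(positive_predictive_value(0.99, 0.99, 0.001))  # ~0.09: most positives are false alarms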
  • 179.
MC Dropout combined with a task-specific utility function. Loss-Calibrated Approximate Inference in Bayesian Neural Networks. Adam D. Cobb, Stephen J. Roberts, Yarin Gal (submitted 10 May 2018) https://arxiv.org/abs/1805.03901 (cited by 5) https://github.com/AdamCobb/LCBNN
Current approaches in approximate inference for Bayesian neural networks minimise the Kullback-Leibler divergence to approximate the true posterior over the weights. However, this approximation is without knowledge of the final application, and therefore cannot guarantee optimal predictions for a given task. To make more suitable task-specific approximations, we introduce a new loss-calibrated evidence lower bound for Bayesian neural networks in the context of supervised learning, informed by Bayesian decision theory. By introducing a lower bound that depends on a utility function, we ensure that our approximation achieves higher utility than traditional methods for applications that have asymmetric utility functions. Calibrating the network to take the utility into account leads to a smoother transition from diagnosing a patient as healthy to diagnosing them as having moderate diabetes. In comparison, weighting the cross-entropy to avoid false negatives by making errors on the healthy class pushes it to ‘moderate’ more often. This cautiousness leads to an undesirable transition, as shown in Figure 4a. The weighted cross-entropy model only diagnoses a patient as definitely disease-free for extremely obvious test results, which is not a desirable characteristic. Figure: left, standard NN model; middle, weighted cross-entropy model; right, loss-calibrated model. Each confusion matrix displays the resulting diagnosis when averaging the utility function with respect to the dropout samples of each network. We highlight that our utility function captures our preferences by avoiding false negatives on the ‘Healthy’ class. In addition, there is a clear performance gain from the loss-calibrated model, despite the label noise in the training. This compares to both the standard and weighted cross-entropy models, where there is a common failure mode of predicting a patient as ‘Moderate’ when they are ‘Healthy’.
  • 180.
Brier score better than ROC AUC for clinical utility? Yes, but… still sensitive to disease prevalence. The Brier score does not evaluate the clinical utility of diagnostic tests or prediction models. Melissa Assel, Daniel D. Sjoberg and Andrew J. Vickers. Memorial Sloan Kettering Cancer Center, New York, USA. Diagnostic and Prognostic Research 2017 1:19 https://doi.org/10.1186/s41512-017-0020-3
The Brier score is an improvement over other statistical performance measures, such as AUC, because it is influenced by both discrimination and calibration simultaneously, with smaller values indicating superior model performance. The Brier score also estimates a well-defined parameter in the population: the mean squared distance between the observed and expected outcomes. The square root of the Brier score is thus the expected distance between the observed and predicted value on the probability scale. However, the Brier score is prevalence dependent (i.e. sensitive to class imbalance, in machine learning jargon) in such a way that the rank ordering of tests or models may inappropriately vary by prevalence [Wu and Lee 2014]. For instance, if a disease were rare (low prevalence) but very serious and easily cured by an innocuous treatment (strong benefit to detection), the Brier score may inappropriately favor a specific test over one of greater sensitivity. Indeed, this is approximately what was seen in the Zika virus paper [Braga et al. 2017]. We advocate, as an alternative, the use of decision-analytic measures such as net benefit. Net benefit always gave a rank ordering that was consistent with any reasonable evaluation of the preferable test or model in a given clinical situation. For instance, a sensitive test had a higher net benefit than a specific test where sensitivity was clinically important. It is perhaps not surprising that a decision-analytic technique gives results in accord with clinical judgment, because clinical judgment is “hardwired” into the decision-analytic statistic. That said, this measure is not without its own limitations, in particular the assumption that the benefits and harms of treatment do not vary importantly between patients independently of preference.
How should we evaluate prediction tools? Comparison of three different tools for prediction of seminal vesicle invasion at radical prostatectomy as a test case. Giovanni Lughezzani et al. Eur Urol. 2012 Oct; 62(4): 590–596. https://dx.doi.org/10.1016%2Fj.eururo.2012.04.022
Traditional (area under the receiver operating characteristic curve (AUC), calibration plots, the Brier score, sensitivity and specificity, positive and negative predictive value) and novel (risk stratification tables, the net reclassification index, decision curve analysis and predictiveness curves) statistical methods quantified the predictive abilities of the three tested models. Traditional statistical methods (ROC plots and Brier scores), as well as two of the novel statistical methods (risk stratification tables and the net reclassification index), could not provide a clear distinction between the SVI prediction tools. For example, ROC plots and Brier scores seemed biased against the binary decision tool (ESUO criteria) and gave discordant results for the continuous predictions of the Partin tables and the Gallina nomogram. The results of the calibration plots were discordant with those of the ROC plots. Conversely, the decision curve clearly indicated that the Partin tables (Zorn et al. 2009) represent the ideal strategy for stratifying the risk of seminal vesicle invasion (SVI).
  • 181.
Decision curve analysis (DCA): an emerging utility-analysis technique. A Systematic Review of the Literature Demonstrates Some Errors in the Use of Decision Curve Analysis but Generally Correct Interpretation of Findings. Paolo Capogrosso, Andrew J. Vickers. Medical Decision Making (February 28, 2019) https://doi.org/10.1177%2F0272989X19832881
We performed a literature review to identify common errors in the application of DCA and provide practical suggestions for its appropriate use. Despite some common errors in the application of DCA, our finding that almost all studies correctly interpreted the DCA results demonstrates that it is a clear and intuitive method to assess clinical utility. A common task in medical research is to assess the value of a diagnostic test, molecular marker, or prediction model. The statistical methods typically used to do so include metrics such as sensitivity, specificity, and area under the curve (AUC; Hanley and McNeil 1982). However, it is difficult to translate these metrics into clinical practice: for instance, it is not at all clear how high the AUC needs to be to justify use of a prediction model or whether, when comparing 2 diagnostic tests, a given increase in sensitivity is worth a given decrease in specificity (Greenland 2008; Vickers and Cronin 2010). It has been generally argued that because traditional statistical metrics do not incorporate clinical consequences (for instance, the AUC weights sensitivity and specificity as equally important), they cannot be used to guide clinical decisions. In brief, DCA is a plot of net benefit against threshold probability. Net benefit is a weighted sum of true and false positives, the weighting accounting for the differential consequences of each. For instance, it is much more valuable to find a cancer (true positive) than it is harmful to conduct an unnecessary biopsy (false positive), so it is appropriate to give a higher weight to true positives than false positives. Threshold probability is the minimum risk at which a patient or doctor would accept a treatment, and it is considered across a range to reflect variation in preferences. In the case of a cancer biopsy, for example, we might imagine that a patient would refuse a biopsy for a cancer risk of 1%, accept a biopsy for a risk of 99%, but somewhere in between, such as a 10% risk, be unsure one way or the other. The threshold probability is used to determine positives (risk from the model under evaluation of 10% or more) vs. negatives (risk less than 10%), and as the weighting factor in net benefit. Net benefit for a model, test, or marker is compared to 2 default strategies: “treat all” (assume all patients are positive) and “treat none” (assume all patients are negative).
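The net benefit computation itself is tiny; a sketch following the Vickers-style definition quoted above:

def net_benefit(tp, fp, n, threshold):
    # Weighted sum of true and false positives; the weight is the odds
    # of the threshold probability p_t
    return tp / n - (fp / n) * (threshold / (1 - threshold))

def net_benefit_treat_all(prevalence, threshold):
    # Default "treat all" strategy: every patient counts as positive
    return prevalence - (1 - prevalence) * (threshold / (1 - threshold))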
  • 182.
  • 183.
Noisy labels as ‘annotator confusion’. Learning From Noisy Labels By Regularized Estimation Of Annotator Confusion. Ryutaro Tanno, Ardavan Saeedi, Swami Sankaranarayanan, Daniel C. Alexander, Nathan Silberman. University College London, UK; Butterfly Network, New York, USA (submitted 10 Feb 2019) https://arxiv.org/abs/1902.03680
The predictive performance of supervised learning algorithms depends on the quality of labels. In a typical label collection process, multiple annotators provide subjective noisy estimates of the “truth” under the influence of their varying skill levels and biases. Blindly treating these noisy labels as the ground truth limits the accuracy of learning algorithms in the presence of strong disagreement. This problem is critical for applications in domains such as medical imaging, where both the annotation cost and inter-observer variability are high. In this work, we present a method for simultaneously learning the individual annotator model and the underlying true label distribution, using only noisy observations. Each annotator is modeled by a confusion matrix that is jointly estimated along with the classifier predictions. We propose to add a regularization term to the loss function that encourages convergence to the true annotator confusion matrix. We provide a theoretical argument as to why the regularization is essential to our approach, both for the case of a single annotator and for multiple annotators. Future work shall consider imposing structure on the confusion matrices to broaden applicability to massively multi-class scenarios, e.g. introducing taxonomy-based sparsity [Van Horn 2018] and low-rank approximation. We also assumed that there is only one ground truth for each input; this no longer holds when the input images are truly ambiguous. Recent advances in modelling multi-modality of label distributions [Saeedi et al. 2017, Kohl et al. 2018] potentially facilitate relaxation of this assumption. Another limiting assumption is the image independence of the annotator's label noise: the majority of disagreements between annotators arise in the difficult cases. Integrating such input dependence of label noise [Raykar et al. 2009, Xiao et al. 2015] is also a valuable next step.
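A loose PyTorch sketch of the annotator-confusion idea (my paraphrase, not the authors' code): each annotator gets a learnable confusion matrix applied to the base model's class probabilities, with a trace term added to the loss as the regularizer the paper proposes:

import torch
import torch.nn as nn

class AnnotatorConfusion(nn.Module):
    def __init__(self, n_annotators, n_classes):
        super().__init__()
        # One confusion matrix per annotator, initialized near the identity
        self.cm = nn.Parameter(torch.eye(n_classes).repeat(n_annotators, 1, 1))

    def forward(self, true_probs):
        # cm[r, i, j] ~ p(annotator r says i | true class j); columns normalized
        cm = torch.softmax(self.cm, dim=-2)
        noisy = torch.einsum('rij,bj->bri', cm, true_probs)  # per-annotator predictions
        trace = cm.diagonal(dim1=-2, dim2=-1).sum()          # add to the loss as regularizer
        return noisy, trace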
  • 184.
Probabilistic U-Net: building this uncertainty into the model. A Probabilistic U-Net for Segmentation of Ambiguous Images. Simon A. A. Kohl, Bernardino Romera-Paredes, Clemens Meyer, Jeffrey De Fauw, Joseph R. Ledsam, Klaus H. Maier-Hein, S. M. Ali Eslami, Danilo Jimenez Rezende, Olaf Ronneberger (submitted 13 Jun 2018) https://arxiv.org/abs/1806.05034 https://github.com/SimonKohl/probabilistic_unet
“In clinical applications, for example, it might not be clear from a CT scan alone which particular region is cancer tissue. Therefore a group of graders typically produces a set of diverse but plausible segmentations. We consider the task of learning a distribution over segmentations given an input. To this end we propose a generative segmentation model based on a combination of a U-Net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible hypotheses. All in all we see a large field where our proposed Probabilistic U-Net can replace the currently applied deterministic U-Nets. Especially in the medical domain, with its often ambiguous images and highly critical decisions that depend on the correct interpretation of the image, our model's segmentation hypotheses and their likelihoods could 1) inform diagnosis/classification probabilities or 2) guide steps to resolve ambiguities. Our method could prove useful beyond explicitly multi-modal tasks, as the inspectability of the Probabilistic U-Net's latent space could yield insights for many segmentation tasks that are currently treated as a uni-modal problem.”
  • 185.
Probabilistic U-Net: ‘hierarchical latent’ expansion. A Probabilistic U-Net for Segmentation of Ambiguous Images. Simon A. A. Kohl, Bernardino Romera-Paredes, Klaus H. Maier-Hein, Danilo Jimenez Rezende, S. M. Ali Eslami, Pushmeet Kohli, Andrew Zisserman, Olaf Ronneberger (submitted 30 May 2019) https://arxiv.org/abs/1905.13077 (code coming!)
Medical imaging only indirectly measures the molecular identity of the tissue within each voxel, which often produces only ambiguous image evidence for target measures of interest, like semantic segmentation. This diversity and the variations of plausible interpretations are often specific to given image regions and may thus manifest on various scales, spanning all the way from the pixel to the image level. In order to learn a flexible distribution that can account for multiple scales of variation, we propose the Hierarchical Probabilistic U-Net, a segmentation network with a conditional variational auto-encoder (cVAE) that uses a hierarchical latent space decomposition. We show that this model formulation enables sampling and reconstruction of segmentations with high fidelity, i.e. with finely resolved detail, while providing the flexibility to learn complex structured distributions across scales. We demonstrate these abilities on the task of segmenting ambiguous medical scans as well as on instance segmentation of neurobiological and natural images. Our model automatically separates independent factors across scales, an inductive bias that we deem beneficial in structured output prediction tasks beyond segmentation. In terms of KL cost, it is more expensive to model global aspects locally, which, in combination with the hierarchical model formulation itself, is the mechanism that puts the separation of scales into effect. Disentangled representations are regarded as highly desirable across the board, and the proposed model may thus also be interesting for other downstream applications or image-to-image translation tasks. In the medical domain, the HPU-Net could be applied in interactive clinical scenarios, where a clinician could either pick from a set of likely segmentation hypotheses or interact with its flexible latent space to quickly obtain the desired result. The model's ability to faithfully extrapolate conditioned on prior observations could further be employed in spatio-temporal predictions, such as predicting tumor therapy response.
  • 186.
Alternatives to the Probabilistic U-Net emerging. PHiSeg: Capturing Uncertainty in Medical Image Segmentation. Christian F. Baumgartner, Kerem C. Tezcan, Krishna Chaitanya, Andreas M. Hötker, Urs J. Muehlematter, Khoschy Schawkat, Anton S. Becker, Olivio Donati, Ender Konukoglu. Computer Vision Lab, ETH Zürich; Memorial Sloan Kettering Cancer Center; Beth Israel Deaconess Medical Center, Harvard Medical School (submitted 7 Jun 2019) https://arxiv.org/abs/1906.04045 https://github.com/baumgach/PHiSeg-code (TensorFlow)
Segmentation of anatomical structures and pathologies is inherently ambiguous. For instance, structure borders may not be clearly visible, or different experts may have different styles of annotating. The majority of current state-of-the-art methods do not account for such ambiguities but rather learn a single mapping from image to segmentation. In this work, we propose a novel method to model the conditional probability distribution of the segmentations given an input image. We derive a hierarchical probabilistic model in which separate latent spaces are responsible for modelling the segmentation at different resolutions. Inference in this model can be efficiently performed using the variational autoencoder framework. We show that our proposed method can be used to generate significantly more realistic and diverse segmentation samples compared to recent related work, both when trained with annotations from a single annotator and from multiple annotators.
  • 187.
A ‘know-it-all’ clinician expert on your team? They still make errors, and a consensus of experts is a better approach.
Supervised learning from multiple experts: whom to trust when everyone lies a bit. Vikas C. Raykar et al. (2009), Siemens Healthcare. https://doi.org/10.1145/1553374.1553488
Modelling Cognitive Bias in Crowdsourcing Systems. Farah Saab, Imad H. Elhajj, Ayman Kayssi, Ali Chehab (December 2019) https://doi.org/10.1016/j.cogsys.2019.04.004 The work reveals a surprising result: confidence-related approaches lack in performance when compared to other approaches such as simple plurality voting or approaches that consider respondent competence. This inadequacy stems from a psychological phenomenon brought forth by David Dunning and Justin Kruger, related to people's bias in assessing their own cognitive abilities.
Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Jonathan Krause, Varun Gulshan, Ehsan Rahimy, Peter Karth, Kasumi Widner, Greg S. Corrado, Lily Peng, Dale R. Webster. Google Research, Palo Alto Medical Foundation, Oregon Eye Consultants. https://arxiv.org/abs/1710.01711 https://doi.org/10.1016/j.ophtha.2018.01.034 Figure: comparison of the algorithm, ophthalmologists, and retinal specialists using the adjudicated reference standard at various DR severity thresholds. The algorithm's performance is the blue curve; the 3 retina specialists are in shades of orange/red, and the 3 ophthalmologists in shades of blue. N = 1813 fully gradable images.
Dealing with inter-expert variability in retinopathy of prematurity: a machine learning approach. Bolón-Canedo et al. (2015) 10.1016/j.cmpb.2015.06.004 + http://dx.doi.org/10.3414/ME13-01-0081
  • 188.
Making things even more difficult in medical settings: ambiguities in your phenotypes. In other words, when the pathology is not that well-defined, you would rather phenotype by clustering than by using some antiquated ICD diagnosis codes.
  • 189.
Relevant when you use your segmentation as part of diagnostic classification. Clinically applicable deep learning for diagnosis and referral in retinal disease. Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O'Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane & Olaf Ronneberger. Nature Medicine 24, 1342–1350 (2018), cited by 147. https://doi.org/10.1038/s41591-018-0107-6
We demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device (cross-vendor / cross-modal). Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting. The clinical significance of segmentation quality: what if the grading clinician had problems with the diagnosis? Or the phenotypes simply had “intrinsic uncertainty”, as is the case with many neurodegenerative diseases (e.g. Alzheimer's disease)? What would you use as your gold standard for diagnosis? Do you think the current state-of-the-art gold standard captures the “whole pathology” well?
  • 190.
  • 191.
Generative models for synthetic vasculature data #1. VAMPIRE: Automatic Generation of Synthetic Retinal Fundus Images: Vascular Network. https://doi.org/10.1016/j.procs.2016.07.010
Deep Semantic Instance Segmentation of Tree-like Structures Using Synthetic Data. Kerry Halupka, Rahil Garnavi, Stephen Moore. IBM Research, Level 22/60 City Rd, Southbank, Victoria, Australia (submitted 8 Nov 2018) https://arxiv.org/abs/1811.03208
Synthetic samples also as training material for vascular segmentation?
  • 192.
Generative models for synthetic vasculature data #2: “physiology constraint”. Tissue metabolism driven arterial tree generation. Matthias Schneider, Johannes Reichold, Bruno Weber, Gábor Székely, Sven Hirsch. Medical Image Analysis, Volume 16, Issue 7, October 2012, Pages 1397-1414. https://doi.org/10.1016/j.media.2012.04.009 (cited by 19)
We present an approach to generate 3-D arterial tree models based on physiological principles, while at the same time certain morphological properties are enforced at construction time. The driving force of the construction is a simplified angiogenesis model incorporating case-specific information about the metabolic demand within the considered domain. The vascular tree is constructed iteratively by successively adding new segments in chemotactic response to angiogenic growth factors secreted by ischemic cells. Morphometrically confirmed bifurcation statistics of vascular networks are incorporated to optimize the synthetic vasculature. The proposed method is able to generate artificial, yet physiologically plausible, arterial tree models that match the metabolic demand of the embedding tissue and fulfill the prescribed morphological properties at the same time.
  • 193.
Generative models for synthetic vasculature data #3: “physiology constraint”. A new model for the emergence of blood capillary networks. P. Aceves-Sanchez, B. Aymard, D. Peurichard, P. Kennel, A. Lorsignol, F. Plouraboue, L. Casteilla, P. Degond (submitted 24 Dec 2018) https://arxiv.org/abs/1812.09992
We propose a new model for the emergence of blood capillary networks. We assimilate the tissue and extracellular matrix to a porous medium, using Darcy's law to describe both blood and interstitial fluid flows. Oxygen obeys a convection-diffusion-reaction equation describing advection by the blood, diffusion, and consumption by the tissue. The coupling between blood, oxygen flow and capillary elements provides a positive feedback mechanism that triggers the emergence of a network of channels of high hydraulic conductivity, which we identify as new blood capillaries. We provide two different, biologically relevant geometrical settings and numerically analyze the influence of each capillary-creation mechanism in detail. All mechanisms seem to concur towards a harmonious network, but the most important ones are those involving the oxygen gradient and shear stress. As summarized here, this new network-formation model opens many exciting research avenues. It offers a new paradigm for capillary network creation by placing the flow of blood at the central place in the process. This paper provides a proof of concept of the approach and elaborates a road map by which the model can be gradually improved towards a fully fledged simulator of blood capillary network formation. Such a simulator would have huge potential for biological or clinical applications in cancer, wound healing, tissue engineering and regeneration. Besides biological or clinical applications, the approach could also be adapted to plant biology (leaf venation or root formation), physics (lightning) or engineering (dielectric breakdown).
  • 194.
Generative models for synthetic data: vasculature. Transfer learning from synthetic data reduces need for labels to segment brain vasculature and neural pathways in 3D. Johannes C. Paetzold, Oliver Schoppe, Rami Al-Maskari, Giles Tetteh, Velizar Efremov, Mihail I. Todorov, Ruiyao Cai, Hongcheng Mai, Zhouyi Rong, Ali Ertuerk, Bjoern H. Menze. TranslaTUM and Department of Computer Science, Technical University of Munich / Institute for Stroke and Dementia Research, Ludwig Maximilian University of Munich. 10 Apr 2019 (modified: 11 Jun 2019) MIDL 2019 Conference https://openreview.net/forum?id=BJe02gRiY4
Novel microscopic techniques yield high-resolution volumetric scans of complex anatomical structures such as the blood vasculature or the nervous system. Here, we show how transfer learning and synthetic data generation can be used to train deep neural networks to segment these structures successfully in the absence of, or with very limited, training data. A) Synthetic training data was designed to resemble the vasculature of the human brain in MRI scans. B-D) Predicted segmentations for 3 different applications: MRI scans of human brain vasculature (B), 3D LSM of mouse brain vasculature (C), and the peripheral nervous system (D; shown here: innervated muscle fibres). Here, we present results from three widely different applications: human brain vessels (MRI), mouse brain vessels and the mouse peripheral nervous system (both 3D Light Sheet Microscopy, LSM). The same network was trained either on a small labeled set from the respective application (“real data”), on synthetically generated data, or on a combination of both. The synthetic data used is identical for all three applications. We chose DeepVesselNet as our architecture; the schedule for pre-training on synthetic data and refinement on real data matches the methods of Tetteh et al. (2018). The method for generating the synthetic training data is described in Schneider et al. (2012).
  • 195.
Semi-Supervised Training: combine unlabeled data with labeled data (or use a foundation model, fine-tuned or not, for your small dataset).
  • 196.
Semi-supervised learning in a nutshell: sounds perfect in theory, but in practice it has not always lived up to its expectations. lab.rockefeller.edu/strickland, foil.bme.utexas.edu. Stressed animals exhibited a greater BBB permeability to 40-kDa dextran, but not to 70-kDa dextran, which is suggestive of weakened vascular integrity following stress (doi: 10.1038/s41598-018-30875-y). You quickly accumulate a lot of “unstructured data”: thousands of acquired 3D stacks (doi: 10.1038/s41592-018-0115-y), of which maybe only 4 have the vasculature annotated. With a semi-supervised model you would like to get better performance than by using just either the labeled data (supervised learning) or the unlabeled data (unsupervised learning); see the sketch below for one simple recipe.
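A minimal PyTorch sketch of confidence-thresholded pseudo-labeling, one of the simplest semi-supervised recipes; the confidence threshold and loss weight are illustrative assumptions:

import torch
import torch.nn.functional as F

def semi_supervised_step(model, x_lab, y_lab, x_unlab, conf=0.9, w_unsup=0.5):
    # Supervised term on the few annotated stacks
    sup = F.binary_cross_entropy_with_logits(model(x_lab), y_lab)
    # Pseudo-label term on the many unannotated stacks: only keep voxels
    # where the current model is already confident either way
    with torch.no_grad():
        p = torch.sigmoid(model(x_unlab))
        pseudo = (p > 0.5).float()
        keep = ((p > conf) | (p < 1 - conf)).float()
    unsup = F.binary_cross_entropy_with_logits(model(x_unlab), pseudo,
                                               reduction='none')
    return sup + w_unsup * (unsup * keep).mean()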
  • 197.
One-Shot Medical Segmentation Example (MIT) #1. Data augmentation using learned transformations for one-shot medical image segmentation. Amy Zhao, Guha Balakrishnan, Frédo Durand, John V. Guttag, Adrian V. Dalca (submitted 25 Feb 2019, last revised 6 Apr 2019, v2) https://arxiv.org/abs/1902.09383 https://github.com/xamyzhao/brainstorm (see brainstorm/src/segmenter_model.py).
  • 198.
‘Time-dependent’, i.e. functional, multiphoton microscopy: so far we have modelled structural stacks without the temporal dimension. Seek inspiration from video processing papers. And if you have a fast enough microscope with shallow stacks, you could exploit successive frames for better structural stacks as well (super-resolution).
  • 199.
Multiphoton microscope sampling rates: commercial ones can be a bit sluggish if you want to do in vivo optical electrophysiology. Olympus FV1000MPE: XY (not whole FOV) 10-20 Hz; XY (whole FOV) generally slow, ~3 Hz; linescan 800-900 Hz. Custom-built microscopes are better for speed (10.1073/pnas.1514209112): frame rates of 500 Hz (single cell, 240 x 48 px); 80 Hz (population, 450 x 300 px); 40 Hz (600 x 600 px); 200 Hz (dendritic imaging, 360 x 120 px). The 2-PM hardware tech is a bit beyond the scope of this presentation, but you can have a look at more efficient scanning patterns (Lissajous scan), MEMS mirrors instead of bulky galvos (Duan et al. 2018; Li et al. 2017), and miniaturization/MEMSification in general, like the “MEMS-in-the-lens” approach (Dickensheets et al. 2019).
  • 200.
When higher sampling rates are needed #1. 2D planar data is acquired over time at sequential depths (100–400 μm), and then reconstructed to volumes with a spatial resolution of 1.6×1.6×3 μm and an effective temporal resolution of 0.786 s (~1.27 Hz) per volume. In paradigm 2, a train of 7 electrical pulses, played out at 3 Hz, is presented, starting at the third imaging frame (shown) and again starting at frame 24 (not shown). (Lindvere et al. 2013.) Left: average vertex-wise time course in response to the 7-pulse stimulation, with dilations in red and constrictions in blue, alongside the modeled HRF (black) for quickly (A), mid-latency (B), and slowly responding vertices (C); the stimulus presentation period is highlighted in yellow. Right: map of the estimated stimulation-induced change in radius (Δr) for the 7-pulse stimulation, dilations (red) and constrictions (blue). Note the small amplitude: constriction never smaller than 1% of baseline, and dilation never bigger than 2%.
  • 201.
When higher sampling rates are needed #2. Bouchard et al. (2006) Video-rate two-photon microscopy of cortical hemodynamics in vivo. https://doi.org/10.1364/BIO.2006.MI1
Multi-scale imaging of functional hemodynamics: intrinsic imaging of the hemodynamic response to forepaw stimulus reveals a localized region of increased hemoglobin absorption (top left). Two-photon microscopy was then used to closely examine the active region. In a single frame, we were able to repeatedly image an artery, vein and venule together during 5 stimulus repetitions (center right). From this data we extracted the diameters of the vessels as a function of time (bottom). The arteriole shows distinct dilation during the stimulus, in contrast to the vein and venule, which show no measurable diameter changes. Wide-field imaging allows us to image the vessels in the same plane simultaneously, eliminating the possibility that the observed dilation is in fact an out-of-plane movement artifact. The same images can also be analyzed to evaluate blood flow, speed and hematocrit.
  • 202.
What would you like to have as blood-flow measurement speed? Imaging single-cell blood flow in the smallest to largest vessels in the living retina. Aby Joseph, Andres Guevara-Torres, Jesse Schallek. Institute of Optics, Center for Visual Science, Flaum Eye Institute, and Department of Neuroscience, University of Rochester, New York, United States. https://doi.org/10.7554/eLife.45077.001
The transparency of the mammalian eye provides a noninvasive view of the microvessels of the retina, a part of the central nervous system. Despite its clarity, imperfections in the optics of the eye blur microscopic retinal capillaries, and the single blood cells flowing within them. This limits early evaluation of microvascular diseases that originate in capillaries. To break this barrier, we use 15 kHz adaptive optics imaging (the confocal mode of AOSLO) to noninvasively measure single-cell blood flow in one of the most widely used research animals: the C57BL/6J mouse. Measured flow ranged over four orders of magnitude (0.0002–1.55 mL min–1) across the full spectrum of retinal vessel diameters (3.2–45.8 μm), without requiring surgery or contrast dye. Here, we describe the ultrafast imaging, the analysis pipeline and the automated measurement of millions of blood cell speeds.
  • 203.
Remember your multimodal / auxiliary measures to help you segment vasculature better, and get better ‘insights’. Have your (MRI-compatible) physiological monitoring setup data combined with your imaging data. ECG (especially) and respiration are helpful for gating your imaging so that motion artifacts are minimized already at the hardware level (a retrospective-gating sketch follows below). You might have someone saying that you should just use some algorithmic compensation instead; up to you in that situation. https://doi.org/10.1186/2191-219X-2-44 https://www.slideshare.net/PetteriTeikariPhD/instrumentation-for-in-vivo-intravital-microscopy
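A minimal sketch of what retrospective cardiac gating could look like in post-processing, assuming you have logged ECG R-peak times and frame timestamps (the function name, phase window and heart-rate numbers are illustrative assumptions, not taken from the references above):

```python
import numpy as np

def gate_frames(frame_times, r_peaks, phase_window=(0.4, 0.7)):
    """Keep frames whose cardiac phase (0..1 between successive R-peaks)
    falls inside phase_window -- assumed here to be a 'quiet' part of the
    cycle; tune it for your own preparation."""
    keep = []
    for i, t in enumerate(frame_times):
        k = np.searchsorted(r_peaks, t) - 1      # last R-peak before frame
        if 0 <= k < len(r_peaks) - 1:
            phase = (t - r_peaks[k]) / (r_peaks[k + 1] - r_peaks[k])
            if phase_window[0] <= phase <= phase_window[1]:
                keep.append(i)
    return np.asarray(keep)

frame_times = np.arange(0, 10, 1 / 30)   # ~30 Hz imaging for 10 s
r_peaks = np.arange(0, 10.5, 0.1)        # ~600 bpm mouse heart rate
print(len(gate_frames(frame_times, r_peaks)))  # frames surviving the gate
```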
  • 204.
When higher sampling rates are needed: your dyes will also be too slow at some point. Whole-brain, single-cell-resolution calcium transients captured with light-sheet microscopy (Ahrens et al. [118]). (A) Two spherically focused beams rapidly sweep out a four-micron-thick plane orthogonal to the imaging axis. The beam and objective step together along the imaging axis to build up a three-dimensional volume image at 0.8 Hz (B). (C) Rapid light-sheet imaging of GCaMP5G calcium transients revealed a specific hindbrain neuron population (D, green traces) whose traces correlated with spinal cord neuropil activity (black trace). Schultz et al. 2016 https://doi.org/10.1101/036632
  • 205.
Spatiotemporal Neuron Segmentation for 2-PM “Optical Electrophysiology”. Fast and robust active neuron segmentation in two-photon calcium imaging using spatiotemporal deep learning. Somayyeh Soltanian-Zadeh, Kaan Sahingur, Sarah Blau, Yiyang Gong, and Sina Farsiu. Department of Biomedical Engineering, Department of Neurobiology, Duke University. PNAS April 23, 2019, 116 (17) 8554-8563 https://doi.org/10.1073/pnas.1812995116 – https://github.com/soltanianzadeh/STNeuroNet Two-photon calcium imaging is a standard technique of neuroscience laboratories that records neural activity from individual neurons over large populations in awake-behaving animals. Automatic and accurate identification of behaviorally relevant neurons from these recordings is a critical step toward complete mapping of brain activity. To this end, we present a fast deep learning framework which significantly outperforms previous methods and is the first to be as accurate as human experts in segmenting active and overlapping neurons. Here, to exploit the full spatiotemporal information in two-photon calcium imaging movies, we propose a 3D convolutional neural network to identify and segment active neurons. By utilizing a variety of two-photon microscopy datasets, we show that our method outperforms state-of-the-art techniques and is on a par with manual segmentation. Furthermore, we demonstrate that the network trained on data recorded at a specific cortical layer can be used to accurately segment active neurons from another layer with different neuron density. Finally, our work documents significant tabulation flaws in one of the most cited and active online scientific challenges in neuron segmentation. As our computationally fast method is an invaluable tool for a large spectrum of real-time optogenetic experiments, we have made our open-source software and carefully annotated dataset freely available online.
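As a toy illustration of the spatiotemporal idea (this is not the STNeuroNet architecture; the layer sizes are arbitrary assumptions), a 3D CNN can convolve jointly over (t, y, x) of a calcium movie and collapse time into a 2D map of active neurons:

```python
import torch
import torch.nn as nn

class TinySpatiotemporalNet(nn.Module):
    """Treats a calcium-imaging movie as a (T, H, W) volume so that the
    convolutions see temporal dynamics as well as spatial structure."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, base, kernel_size=3, padding=1),
            nn.BatchNorm3d(base), nn.ReLU(inplace=True),
            nn.Conv3d(base, base, kernel_size=3, padding=1),
            nn.BatchNorm3d(base), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(base, 1, kernel_size=1)

    def forward(self, movie):                 # movie: (N, 1, T, H, W)
        x = self.head(self.features(movie))   # (N, 1, T, H, W)
        # Collapse the temporal axis to a 2D probability map of active cells.
        return torch.sigmoid(x.mean(dim=2))   # (N, 1, H, W)

net = TinySpatiotemporalNet()
dummy = torch.randn(1, 1, 16, 64, 64)         # 16 frames of a 64x64 movie
print(net(dummy).shape)                       # torch.Size([1, 1, 64, 64])
```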
  • 206.
‘Intelligent’ Labeling. As already established, labeling is very laborious, and you want a system that makes this process easier, to increase the number of labeled examples for your semi-supervised approach.
  • 207.
Helping the Machine Learning. Help, for example, the initial deep learning result, which in turn improves the algorithm itself and the resulting segmentation. Umbra lets you instantly view complex 3D images on any device. DEAN TAKAHASHI @DEANTAK, OCTOBER 17, 2017, 6:00 AM https://venturebeat.com/2017/10/17/umbra-lets-you-instantly-view-complex-3d-images-on-any-device/ Voxeleron Awarded NIH SBIR Grant for Device-independent Retinal OCT Image Analysis Software. Voxeleron will collaborate with Professor Pablo Villoslada of UCSF/IDIBAPS and Dr. Pearse Keane of Moorfields Eye Hospital to validate the algorithms and ensure clinical utility. February 8, 2017. Voxeleron Orion allows correction of layer boundaries. Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning. Guotai Wang, Wenqi Li, Maria A. Zuluaga, Rosalind Pratt, Premal A. Patel, Michael Aertsen, Tom Doel, Anna L. David, Jan Deprest, Sebastien Ourselin, Tom Vercauteren (Submitted on 11 Oct 2017) https://arxiv.org/abs/1710.04043 The proposed interactive segmentation framework (BIFSeg with PC-Net). 2D images are shown as examples. In the training stage, each instance is cropped with its bounding box, and the CNN model is trained for binary segmentation. In the testing stage, image-specific fine-tuning with optional scribbles and a weighted loss function is used. Note that the object class (e.g. a maternal kidney) in the test image may not have been present in the training set.
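A hedged sketch of image-specific fine-tuning in the spirit of BIFSeg (not the authors' code; the stand-in model, tensor shapes and weighting scheme are assumptions): take the trained network and run a few optimizer steps on the single test volume, supervised only at user-scribbled voxels with a per-voxel weight.

```python
import torch
import torch.nn.functional as F

def finetune_on_scribbles(model, image, scribbles, weight, steps=20, lr=1e-4):
    """scribbles: float tensor shaped like the output, 1/0 at user-marked
    foreground/background voxels and NaN everywhere else; weight: per-voxel
    confidence (e.g. larger near the user's clicks)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    marked = ~torch.isnan(scribbles)          # only scribbled voxels supervise
    target = torch.nan_to_num(scribbles)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(image)
        loss = F.binary_cross_entropy_with_logits(
            logits[marked], target[marked], weight=weight[marked])
        loss.backward()
        opt.step()
    return model

net = torch.nn.Conv3d(1, 1, 3, padding=1)     # stand-in for your trained U-Net
img = torch.randn(1, 1, 4, 16, 16)
scrib = torch.full((1, 1, 4, 16, 16), float('nan'))
scrib[0, 0, 2, 8, 8] = 1.0                    # one foreground 'click'
finetune_on_scribbles(net, img, scrib, torch.ones_like(img), steps=2)
```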
  • 208.
Use the same “main segmentor” model as the basis network (semi-supervised 3D U-Net GAN). Your lab runs tons and tons of experiments with perfectly good vasculature stacks. → No time to label them all, so use active learning to select the most useful stacks for labeling (see the selection sketch below). → Re-train your model (with the new labeled stack and a bunch of unlabeled ones) and you should now have an incremental improvement to your model (if everything goes well). And if this becomes a standard workflow, you might check some continuous integration (CI) software for deep learning models to make this more efficient, e.g. ease.ml/ci by Renggli et al. (2019). doi: 10.1038/s41598-018-30875-y. Your initial “guess”: average Hausdorff distance (AVD) very bad. You want to quickly, with a couple of clicks, show your system where the prediction is wrong (click outliers / inliers) and get a new prediction, “semi-automatically”. Note that the prediction is not actually “that horrible”, but there is a huge discrepancy between ground truth and predicted mask, as ZNN (a continuous value map) finds the faint regions from other slices (“bad z-sectioning”), and a probabilistic model that accounts for label noise could help! https://arxiv.org/abs/1606.02382
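One common way to pick “the most useful stacks” is predictive uncertainty; a minimal sketch assuming a PyTorch segmentor with dropout layers (the MC-dropout criterion and number of passes are illustrative choices, not from the slide's references):

```python
import torch

@torch.no_grad()
def rank_stacks_by_uncertainty(model, stacks, passes=10):
    """Rank unlabeled stacks by mean predictive entropy over MC-dropout
    forward passes; most uncertain first."""
    model.train()  # keep dropout active for MC sampling
    scores = []
    for stack in stacks:               # stack: (1, 1, D, H, W)
        probs = torch.stack([torch.sigmoid(model(stack))
                             for _ in range(passes)])
        p = probs.mean(dim=0)
        entropy = -(p * p.clamp_min(1e-8).log()
                    + (1 - p) * (1 - p).clamp_min(1e-8).log())
        scores.append(entropy.mean().item())
    return sorted(range(len(stacks)), key=lambda i: -scores[i])
```

Send the top-ranked stacks to your annotators first.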
  • 209.
And this process should become continuous:
1) Do new experiments
2) Re-train the model with the new unlabeled data
3) Select new “hard examples”
4) Show the model its errors for the hard examples
5) Re-train the model with your new annotated samples (and the new unlabeled data, if you did not re-train yet in step 2)
→ Get a better-performing system, and over time your manual annotation work should converge to some “Bayes error” annotation time :P (a driver-loop sketch follows below). Continual learning (CL) is the ability to learn continually from a stream of experiential data, building on what was learnt previously, while being able to reapply, adapt and generalize it to new situations. CL is a fundamental step towards artificial intelligence, as it allows the learning agent to continually extend its abilities and adapt them to a continuously changing environment, a hallmark of natural intelligence.
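The same loop as a hypothetical driver; retrain, select_hard_examples and annotate_with_corrections are placeholders for your own pipeline stages, stubbed here only so the sketch runs:

```python
def retrain(model, labeled, unlabeled):            # e.g. semi-supervised fit
    return model

def select_hard_examples(model, unlabeled, k=2):   # e.g. by MC-dropout entropy
    return unlabeled[:k]

def annotate_with_corrections(model, hard):        # human fixes model's guess
    return [(stack, "corrected-mask") for stack in hard]

def continual_loop(model, labeled, unlabeled, rounds=3):
    for _ in range(rounds):
        model = retrain(model, labeled, unlabeled)           # step 2
        hard = select_hard_examples(model, unlabeled)        # step 3
        labeled += annotate_with_corrections(model, hard)    # step 4
        unlabeled = [s for s in unlabeled if s not in hard]
        model = retrain(model, labeled, unlabeled)           # step 5
    return model

continual_loop("model-v0", labeled=[], unlabeled=["s1", "s2", "s3", "s4"])
```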
  • 210.
This iterated self-improvement illustrated #1. Deep learning for cellular image analysis. Erick Moen, Dylan Bannon, Takamasa Kudo, William Graf, Markus Covert and David Van Valen. California Institute of Technology / Stanford University. Nature Methods (2019) https://doi.org/10.1038/s41592-019-0403-1 Here we review the intersection between deep learning and cellular image analysis and provide an overview of both the mathematical mechanics and the programming frameworks of deep learning that are pertinent to life scientists. We survey the field’s progress in four key applications: image classification, image segmentation, object tracking, and augmented microscopy. Our prior work has shown that it is important to match a model’s receptive field size with the relevant feature size in order to produce a well-performing model for biological images. The Python package Talos is a convenient tool for Keras users that helps to automate hyperparameter optimization through grid searches. We have found that modern software development practices have substantially improved the programming experience, as well as the stability of the underlying hardware. Our groups routinely use Git and Docker to develop and deploy deep learning models. Git is version-control software, and the associated web platform GitHub allows code to be jointly developed by team members. Docker is a containerization tool that enables the production of reproducible programming environments. Deep learning is a data science, and few know data better than those who acquire it. In our experience, better tools and better insights arise when bench scientists and computational scientists work side by side (even exchanging tasks) to drive discovery.
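The receptive-field remark above is easy to make concrete; this is the standard textbook recursion (not code from Moen et al.) for a chain of conv/pool layers:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, input-to-output order.
    Standard recursion: rf += (k - 1) * jump, then jump *= stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Two 3x3 convs, 2x2 max-pool, two more 3x3 convs, 2x2 max-pool:
print(receptive_field([(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2)]))  # 16
```

If your vessels, plus the context needed to disambiguate them, span more voxels than this, add layers or downsample.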
  • 211.
This iterated self-improvement illustrated #2. Improving Dataset Volumes and Model Accuracy with Semi-Supervised Iterative Self-Learning. Robert Dupre, Jiri Fajtl, Vasileios Argyriou, Paolo Remagnino. IEEE Transactions on Image Processing (Early Access, May 2019) https://doi.org/10.1109/TIP.2019.2913986 Within this work a novel semi-supervised learning technique is introduced, based on a simple iterative learning cycle together with learned thresholding techniques and an ensemble decision support system. State-of-the-art model performance and increased training data volume are demonstrated through the use of unlabelled data when training deeply learned classification models. The methods presented work independently from the model architectures or loss functions, making this approach applicable to a wide range of machine learning and classification tasks.
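The core mechanism, reduced to a hedged sketch (the fixed confidence threshold and acceptance rule are illustrative assumptions, not the authors' learned thresholding): keep a pseudo-label only where the current model is confident, then fold the accepted volumes into the next training cycle.

```python
import torch

@torch.no_grad()
def harvest_pseudo_labels(model, unlabeled, threshold=0.95):
    """Return (volume, pseudo-mask) pairs for volumes the model labels
    confidently enough to use as extra training data."""
    model.eval()
    accepted = []
    for x in unlabeled:                      # x: (1, 1, D, H, W)
        p = torch.sigmoid(model(x))
        confident = (p > threshold) | (p < 1 - threshold)
        if confident.float().mean() > 0.9:   # most voxels clearly decided
            accepted.append((x, (p > 0.5).float()))
    return accepted

net = torch.nn.Conv3d(1, 1, 1)               # stand-in segmentor
pool = [torch.randn(1, 1, 4, 8, 8) for _ in range(3)]
print(len(harvest_pseudo_labels(net, pool)))
```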
  • 212.
Alternative visualization, old skool. Init cubes labeled by hand (n > 5). Train the initial model, e.g. some semi-supervised or transfer-learnt modification, implemented in PyTorch or TensorFlow. This gives you initial guesses that you want an intelligent correction front-end for, for Mechanical Turkers or students to work on. Label more stacks continuously (n > 10, n > 25, n > 100, n > 500): as the number of labeled stacks increases, so does the model performance, as well as the quality of the initial guesses fed to the intelligent correction step.
  • 213.
  • 214.
Git(hub) short intro, if you are not yet managing your lab’s code with version control. https://datacarpentry.org/semester-biology/materials/git-in-30-minutes/ Common points of confusion: Git vs. GitHub; private GitHub repos (https://education.github.com/). https://books.google.fi/books?id=53OYDwAAQBAJ https://doi.org/10.1186/1751-0473-8-7 https://www.botany.one/2017/03/sharing-scientific-code-short-introduction-git-github/
  • 215.
GitHub with Docker. https://techcrunch.com/2018/10/26/microsoft-closes-its-7-5b-purchase-of-code-sharing-platform-github/ You pull (“download”) the code, but you are on Windows and do not have all the libraries the developer had installed. Put it inside a “container” (Docker being the most commonly used container tech) and the code comes with its OS and environment. A Reproducible R Notebook Using Docker, Carl Boettiger: “My name is Carl Boettiger. I'm a theoretical ecologist in UC Berkeley ESPM working on problems of forecasting and decision-making in ecological systems. My work involves developing new computational and frequently data-intensive approaches to these problems.” https://www.practicereproducibleresearch.org/case-studies/cboettig.html
  • 216.
The actual segmentation model is a tiny block of the whole architecture. Only a small fraction of real-world ML systems is composed of the ML code, as shown by the small black box in the middle; the required surrounding infrastructure is vast and complex. Google (2016) at NIPS: “Hidden Technical Debt in Machine Learning Systems” (cited by 143 at the time of writing).
  • 217.
Very short intro to the MLOps side of things and running things in practice #1. Building Production Machine Learning Systems, Manu Suryavansh, May 17, 2019. Streaming analysis of two-photon calcium imaging run on a Spark cluster in the cloud, by Jeremy Freeman in collaboration with Karel Svoboda and Nicholas Sofroniew. https://www.janelia.org/lab/svoboda-lab https://youtu.be/uUQTSPvD1mc?t=17m An example of large-scale deployment of a two-photon machine learning application on “professional infrastructure”, outside the individual desktops of many labs. http://dx.doi.org/10.1016/j.conb.2015.04.002 Kubeflow is an open-source platform built on top of Kubernetes that allows scalable training and serving of machine learning models. Kubeflow can run on any cloud infrastructure, and one of the key advantages of using Kubeflow is that the system can also be deployed on on-premises infrastructure.
  • 218.
Very short intro to the MLOps side of things and running things in practice #2. Reproducing Machine Learning Research on Binder. Jessica Forde, Matthias Bussonnier, Félix-Antoine Fortin, Brian Granger, Tim Head, Chris Holdgraf, Paul Ivanov, Kyle Kelley, M Pacer, Yuvi Panda, Fernando Perez, Gladys Nalvarte, Benjamin Ragan-Kelley, Zachary Sailer, Steven Silvester, Erik Sundell, Carol Willing. 29 Oct 2018, NIPS 2018 Workshop MLOSS. https://openreview.net/forum?id=BJlR6KTE3X Binder is an open-source project that lets users share interactive, reproducible science. Binder’s goal is to allow researchers to create interactive versions of their code utilizing pre-existing workflows and minimal additional effort. It uses standard software-engineering configuration files to let researchers create interactive versions of code they have hosted on commonly used platforms like GitHub. Binder’s underlying technology, BinderHub, is entirely open-source and utilizes entirely open-source tools. By leveraging tools such as Kubernetes and Docker, it manages the technical complexity around creating containers to capture a repository and its dependencies, generating user sessions, and providing public URLs to share the built images with others. BinderHub combines two open-source projects within the Jupyter ecosystem: repo2docker and JupyterHub. repo2docker builds the Docker image of the git repository specified by the user, installs dependencies, and provides various front-ends to explore the image. JupyterHub then spawns and serves instances of these built images, using Kubernetes to scale as needed. Because each of these pieces is open-source and uses popular tools in cloud orchestration, BinderHub can be deployed on a variety of cloud platforms, or even on your own hardware.
  • 219.
Deployment: a topic of its own, but good for you to know the basics for efficient communication with the ‘MLOps team’. https://xkcd.com/1629/ Favio Vázquez, Jun 15, 2019 https://towardsdatascience.com/https-towardsdatascience-com-the-data-fabric-containers-kubernetes-309674527d16 To simplify things, the take-home message(s): with Docker, researchers can pull (download) the “environments” without having to worry about Windows/Mac/Linux issues, or whether all the required libraries are installed. Kubernetes is a platform-agnostic way to manage many Dockers: if you want to switch from Amazon Cloud to Google Cloud, or to your local server, you “just move” the Kubernetes “config”.
  • 220.
Reproducibility issues: no shortage of articles #1. Toward A Reproducible, Scalable Framework for Processing Large Neuroimaging Datasets. Erik C. Johnson et al. (2019). Johns Hopkins; Georgia Tech; University of Pennsylvania. https://doi.org/10.1101/615161 https://github.com/aplbrain/saber Many neuroscience laboratories lack the computational expertise or resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods. We developed an ecosystem, Scalable Analytics for Brain Exploration Research (SABER), of neuroimaging data analysis pipelines that utilize open source algorithms to create standardized modules and end-to-end optimized approaches.
  • 221.
Reproducibility issues: no shortage of articles #2.
System for Quality-Assured Data Analysis: Flexible, reproducible scientific workflows. Fowler et al. (2018) https://doi.org/10.1002/gepi.22178
Computing environments for reproducibility: Capturing the “Whole Tale”. Brinckman et al. (2019) https://doi.org/10.1016/j.future.2017.12.029
Qresp, a tool for curating, discovering and exploring reproducible scientific papers. Govoni et al. (2019) https://doi.org/10.1038/sdata.2019.2
Knowledge and attitudes among life scientists towards reproducibility within journal articles. Samota and Davey (2019) https://doi.org/10.1101/581033
nf-core: Community curated bioinformatics pipelines. Ewels et al. (2019) https://doi.org/10.1101/610741
Reproducible Research is more than Publishing Research Artefacts: A Systematic Analysis of Jupyter Notebooks from Research Articles. Schröder et al. (2019) https://arxiv.org/abs/1905.00092
Creating reproducible pharmacogenomic analysis pipelines. Mammoliti et al. (2019) https://doi.org/10.1101/614560
Reproducible Data Analysis Pipelines for Precision Medicine. Fjukstad et al. (2019) https://doi.org/10.1109/EMPDP.2019.8671623
Recommendations for the packaging and containerizing of bioinformatics software. Gruening et al. (2019) https://doi.org/10.12688/f1000research.15140.2
Preparing next-generation scientists for biomedical big data: artificial intelligence approaches. Moore et al. (2019) https://doi.org/10.2217/pme-2018-0145
From the Wet Lab to the Web Lab: A Paradigm Shift in Brain Imaging Research. Keshavan and Poline (2019) https://dx.doi.org/10.3389%2Ffninf.2019.00003
Towards Effective Foraging by Data Scientists to Find Past Analysis Choices. Kery et al. (2019) https://doi.org/10.1145/3290605.3300322
Enhancing and accelerating social science via automation: Challenges and opportunities. Yarkoni et al. (2019) https://doi.org/10.31235/osf.io/vncwe
Deploying a Scalable Data Science Environment Using Docker. Martín-Santana et al. (2019) https://doi.org/10.1007/978-3-319-95651-0_7
BioPortainer Workbench: a versatile and user-friendly system that integrates implementation, management, and use of bioinformatics resources in Docker environments. Menegidio et al. (2019) https://doi.org/10.1093/gigascience/giz041
Towards A Methodology and Framework for Workflow-Driven Team Science. Altintas et al. (2019) https://arxiv.org/abs/1903.01403
Ambitious Data Science Can Be Painless. Altintas et al. (2019) https://arxiv.org/abs/1903.01403
Naming the Pain in Developing Scientific Software. Wiese et al. (2019) https://doi.org/10.1109/MS.2019.2899838
trackr: A Framework for Enhancing Discoverability and Reproducibility of Data Visualizations and Other Artifacts in R. Becker et al. (2019) https://doi.org/10.1080/10618600.2019.1585259
  • 222.
  • 223.
Semi-Supervised Voxel Segmentation: two candidate base architectures. (1) “Multi-task” 3D U-Net: more papers around this theme, easier to start with. (2) 3D U-Net GAN: nicer for data augmentation; see e.g. Adi et al. (2019), or go here if you are more excited about GANs in general. Or an ensemble of the two?
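A hedged sketch of what “multi-task” can mean here (layer sizes and auxiliary targets are assumptions, not from a specific paper): one shared trunk, which would be your 3D U-Net body, with separate 1×1×1 heads for the vessel mask and auxiliary targets such as edges and centerlines; the same auxiliary tasks reappear in the ablation grid a few slides on.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Shared feature trunk, one lightweight head per task; auxiliary
    losses on edges/centerlines can regularize thin-structure recovery."""
    def __init__(self, trunk, feat_ch=16):
        super().__init__()
        self.trunk = trunk                       # your 3D U-Net body
        self.mask_head = nn.Conv3d(feat_ch, 1, 1)
        self.edge_head = nn.Conv3d(feat_ch, 1, 1)
        self.centerline_head = nn.Conv3d(feat_ch, 1, 1)

    def forward(self, x):
        f = self.trunk(x)
        return {'mask': self.mask_head(f),
                'edge': self.edge_head(f),
                'centerline': self.centerline_head(f)}

trunk = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())  # toy trunk
model = MultiTaskHead(trunk)
out = model(torch.randn(1, 1, 8, 32, 32))
print({k: v.shape for k, v in out.items()})
```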
  • 224.
What goes around the “Segmentor” block? Database of OME-TIFFs (retro- and prospective) → Restore (denoise, deblur & inpaint; constrain the image restoration with the segmentation mask) → Segmentor (e.g. your ensemble) → Graph (the vasculature tree as a graph; graph convolutional networks?) → Mesh (shape priors for an isotropic mesh from an anisotropic volume?) → CFD.
  • 225.
What you want for the “Product”: a MODEL plus a database of OME-TIFFs (retro- and prospective), wrapped in an ANNOTATION UI with proofreading and active learning. The UI is a web-based interface running on a local intranet cluster or in the cloud; it chooses the unlabeled samples for human annotation, lets you correct errors from the initial segmentation, and saves the annotated and corrected volumes back to the database. As in: what would you like to have to support your “actual neuroscience / pre-clinical research”.
  • 226.
Need for GPU power. We take some baseline approach and start tweaking various things, hopefully getting an idea afterwards of where the system's bottlenecks are. Is there even any point in tweaking your activation functions, or the number of filters per layer and their sizes? The naïve grid (enumerated in the sketch below):
3D U-Net baseline model (3 options): 1) vanilla U-Net; 2) multi-task with auxiliary tasks, edge and centerline detection; 3) hybrid CNN+GRU.
Restore, Noise2Noise (3 options): 1) jointly with U-Net; 2) as a separate preprocessing step; 3) no explicit image restoration.
Smooth, Smooth2Smooth (5 options): 1) jointly with U-Net; 2) jointly with U-Net and Restore; 3) as a separate preprocessing step after restore; 4) jointly with restore but not with U-Net; 5) no explicit image smoothing.
Uncertainty map (3 options): 1) MC Dropout; 2) Bayesian BN; 3) none.
Attention (4 options): 1) additive attention on skip connections; 2) GFF; 3) GCNet; 4) no attention.
Loss function (4 options): 1) weighted CE; 2) focal loss; 3) combo loss; 4) boundary loss.
3 × 3 × 5 × 3 × 4 × 4 = 2,160 training runs with this naïve setting. Make some educated guesses about what you think would work, or what you would like to work :)
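The grid above, enumerated as a sanity check (the option names are shorthand for the slide's factors):

```python
import itertools

grid = {
    'model':       ['vanilla-unet', 'multi-task', 'cnn-gru'],
    'restore':     ['joint', 'preprocess', 'none'],
    'smooth':      ['joint-unet', 'joint-unet-restore', 'preprocess',
                    'joint-restore', 'none'],
    'uncertainty': ['mc-dropout', 'bayesian-bn', 'none'],
    'attention':   ['additive-skip', 'gff', 'gcnet', 'none'],
    'loss':        ['weighted-ce', 'focal', 'combo', 'boundary'],
}
configs = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
print(len(configs))  # 2160 -- why educated guesses beat exhaustive sweeps
```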
  • 227.
Evaluation: ablation study. An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets. Homa Fashandi, Gregory Kuling, Ying Li Lu, Hongbo Wu, and Anne L. Martel. Sunnybrook, Toronto. https://doi.org/10.1002/mp.13375 An example with only 36 combinations (“definitely not working” as a figure annotation). Maybe some CD diagram (Nemenyi test) to get an idea of ‘statistical p-magic’? Then you still have some hyperparameters to optimize: “only 3 combinations” to train, figure from Diamond et al. (2017). You also realize why most of the papers have tested on “simple modifications” and have not had too many confounding variables.
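If you do end up comparing many configurations on the same test volumes, the omnibus step before a Nemenyi/CD diagram is a Friedman test, which is only a few lines (the Dice scores below are made-up placeholders):

```python
from scipy.stats import friedmanchisquare

dice_cfg_a = [0.81, 0.79, 0.84, 0.80, 0.77]   # one entry per test volume
dice_cfg_b = [0.83, 0.80, 0.86, 0.82, 0.79]
dice_cfg_c = [0.78, 0.75, 0.80, 0.76, 0.74]
stat, p = friedmanchisquare(dice_cfg_a, dice_cfg_b, dice_cfg_c)
print(f"Friedman chi2={stat:.2f}, p={p:.3f}")  # Nemenyi post-hoc if p is small
```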