Generalized Dynamic Inversion for Multiaxial Nonlinear Flight Control (ismail_hameduddin)
(Research paper published in the proceedings of the 2011 American Control Conference (ACC) in San Francisco, CA)
The nonlinear problem of aircraft trajectory tracking is tackled in the framework of multiple linear time-varying constrained control using the newly developed paradigm of generalized dynamic inversion. The time differential forms of the multiple constraints encapsulate the control objectives, and are inverted to obtain the reference trajectory-realizing control law. The inversion process utilizes the Moore-Penrose generalized inverse, and it predictably involves the problematic generalized inversion singularity. Thus, a singularity avoidance scheme based on a new type of dynamic scaling factors is introduced that guarantees asymptotically stable tracking and singularity avoidance. The steady state closed loop system allows for two inherently noninterfering control actions working towards a unified goal to exploit the aircraft's control authority over the entire state space. One control action is performed on the range space of the constraint generalized inverse matrix, and it works to impose the prescribed aircraft constrained dynamics. The other control action is performed on the complementary orthogonal nullspace of the constraint matrix, and it provides the aircraft's global inner stability using a novel type of dynamically scaled nullprojection control Lyapunov functions. Numerical simulations of a multiaxial aircraft coordinated maneuver verify the efficacy of designing nonlinear flight control systems via this technique.
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES (cscpconf)
In the first study [1], a combination of K-means, the watershed segmentation method, and a Difference In Strength (DIS) map were used to perform image segmentation and edge detection tasks. We obtained an initial segmentation based on the K-means clustering technique. Starting from this, we used two techniques: the first is the watershed technique with new merging procedures based on mean intensity value to segment the image regions and to detect their boundaries; the second is an edge strength technique to obtain accurate edge maps of our images without using the watershed method. With this technique we solved the problem of the undesirable over-segmentation results produced by the watershed algorithm when used directly on raw data images; moreover, the edge maps we obtained have no broken lines across the entire image. In the second study, level set methods are used for the implementation of curve/interface evolution under various forces. In the third study, the main idea is to detect region (object) boundaries and to isolate and extract individual components from a medical image. This is done using active contours to detect regions in a given image, based on techniques of curve evolution, the Mumford–Shah functional for segmentation, and level sets. After classifying our images into different intensity regions based on a Markov Random Field, we detect regions whose boundaries are not necessarily defined by gradient by minimizing an energy of the Mumford–Shah functional for segmentation, where, in the level set formulation, the problem becomes a mean-curvature flow that stops on the desired boundary. The stopping term does not depend on the gradient of the image, as in the classical active contour. The initial curve of the level set can be anywhere in the image, and interior contours are automatically detected. The final image segmentation is one closed boundary per actual region in the image.
Computer Graphics - Lecture 03 - Virtual Cameras and the Transformation Pipeline (Anton Gerdelan)
Slides from when I was teaching CS4052 Computer Graphics at Trinity College Dublin in Ireland.
These slides aren't used any more so they may as well be available to the public!
There are some mistakes in the slides, I'll try to comment below these.
Weighted Performance comparison of DWT and LWT with PCA for Face Image Retrie... (cscpconf)
This paper compares the performance of a face image retrieval system based on discrete wavelet transforms and lifting wavelet transforms with principal component analysis (PCA). These techniques are implemented and their performances are investigated using frontal facial images from the ORL database. While the Discrete Wavelet Transform is effective in representing image features and is suitable for face image retrieval, it still encounters problems, especially in implementation, e.g. floating point operations and decomposition speed. We use the advantages of the lifting scheme, a spatial approach for constructing wavelet filters, which provides a feasible alternative for the problems facing its classical counterpart. The lifting scheme has such intriguing properties as convenient construction, simple structure, integer-to-integer transform, low computational complexity, and flexible adaptivity, revealing its potential in face image retrieval. Compared to PCA and DWT with PCA, the lifting wavelet transform with PCA gives less computation, while DWT-PCA gives a higher retrieval rate. In particular, the 'sym2' wavelet outperforms all other wavelets.
[3D Study Group @ Kanto] Deep Reinforcement Learning of Volume-guided Progressive View Inpa... (Seiya Ito)
5th 3D Study Group @ Kanto meetup
Deep Reinforcement Learning of Volume-guided Progressive View Inpainting for 3D Point Scene Completion from a Single Depth Image
CVPR 2019 (oral)
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility (SciAstra)
The Indian Statistical Institute (ISI) has extended its application deadline for 2024 admissions to April 2. Known for its excellence in statistics and related fields, ISI offers a range of programs from Bachelor's to Junior Research Fellowships. The admission test is scheduled for May 12, 2024. Eligibility varies by program, generally requiring a background in Mathematics and English for undergraduate courses and specific degrees for postgraduate and research positions. Application fees are ₹1500 for male general category applicants and ₹1000 for females. Applications are open to Indian and OCI candidates.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol..." (Studia Poinsotiana)
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to get this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... (Wasswaderrick3)
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium, and at the general equation of terminal velocity.
Phenomics assisted breeding in crop improvement (IshaGoswami9)
As the population is increasing and will reach about 9 billion by 2050, and due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Seminar on U.V. Spectroscopy (SAMIR PANDA)
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4−0.9 µm) and novel JWST images with 14 filters spanning 0.8−5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3−31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5−15. These objects show compact half-light radii of R_1/2 ∼ 50−200 pc, stellar masses of M⋆ ∼ 10^7−10^8 M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr^−1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
ANOMALOUS SECONDARY GROWTH IN DICOT ROOTS (RASHMI M G)
This presentation covers abnormal or anomalous secondary growth in plants. It defines secondary growth as an increase in plant girth due to the vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.
Nutraceutical market, scope and growth: Herbal drug technology (Lokesh Patil)
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods like functional foods, drinks, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. As healthcare expenses rise, the population ages, and people increasingly want natural and preventative health solutions, this industry is expanding quickly. Product formulation innovations and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... (Ana Luísa Pinho)
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
2. Why Segmentation
- High level understanding of images
- May involve segmentation
- Mid level image understanding
- From low level representations, such as pixels and edges, to provide a representation that is compact and expressive
- Segmentation often involves grouping, perceptual organisation and fitting
- Image partition takes place in the image spatial domain, but grouping, fitting and so on can be in other domains, e.g. frequency or spatio-frequency
3. Image Segmentation
[Figure: original image, bottom-up segmentation, object-level segmentation]
- The Berkeley Segmentation Dataset is a good starting point (evaluation metrics)
4. Image Segmentation
- Segmentation with little constraint
  - Thresholding
  - Region growing, split and merge
  - Watershed
- With weak constraint
  - Graph cut
  - Deformable models, such as active contour
  - Interactive segmentation, such as intelligent scissors, grab cut
- With strong constraint
  - Active shape model, active appearance model based segmentation
  - Atlas model based segmentation
  - Registration populating segmentation
- Two key questions: prior generalisation and model adaptation
[Axis label: bottom-up to model-driven]
5. Outline
- Deformable Segmentation
  - Edge based 2D segmentation
  - Extension to 3D
  - Direct sequential segmentation of temporal volumetric data
  - Incorporating statistical shape prior
  - 3D + T (4D) constrained segmentation
  - Tracking using implicit representation
  - Implicit representation using RBF (region based)
  - Hybrid approach
  - Level set intrinsic regularisation (initialisation invariance)
- Integrated Reconstruction & Segmentation
- Combinatorial Optimisation
  - Minimum path: lymphatic membrane segmentation
  - Optimal surface (minimum cut): coronary segmentation
8. Deformable Segmentation
- Design issues
  - Representation & numerical method
    - Explicit vs. implicit
    - FEM, FDM, spectral methods
    - RBF-level set (Xie & Mirmehdi 07, Xie 11)
  - Boundary description & stopping function
    - Gradient based (Caselles et al. 97, Xie & Mirmehdi 08, Xie 10 & 11, Yeo et al. 11)
    - Region based (Paragios 02, Chan & Vese 01, Xie 09 & 11)
    - Hybrid approach (Wang & Vemuri 04, Xie & Mirmehdi 04)
  - Initialisation and convergence
    - Initialisation independency (Xie 10, Xie 11, Yeo et al. 11)
    - Complex topology & shape (Xie & Mirmehdi 07, 08, Paiement 14)
- These issues are often interdependent
9. Deformable model
- Active contour: dynamic curves within the image domain to recover object shapes
- Deformable surface: its extension to 3D
- Applications:
  - Object localisation
  - Motion tracking
  - Segmentation (e.g. colour/texture)
- Two general types: explicit and implicit models
- Kass et al. 1988, Caselles et al. 1993, and many more…
10. Explicit model
- Parametric snake
  - Represented explicitly as parameterized curves, e.g. splines or polynomial functions
- Example: the classic parametric snake
  - The snake evolves to minimize the internal and external forces (let C(q) be a parameterized planar curve):

  E(C) = ∫ α|C′(q)|² dq + ∫ β|C″(q)|² dq − λ ∫ |∇I(C(q))| dq

  (the first two terms are the internal forces, the last is the external force)
  - Initialisation problem
  - Concavity convergence problem
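As an illustrative sketch only (not code from the slides), the discrete form of this energy for a closed contour can be written in a few lines of Python; the names `snake_energy`, `alpha`, `beta`, `lam` and the `grad_I_mag` callback are hypothetical:

```python
import numpy as np

def snake_energy(C, grad_I_mag, alpha=0.1, beta=0.01, lam=1.0):
    """Discrete classic snake energy for a closed contour.

    C          : (N, 2) array of contour points (periodic).
    grad_I_mag : callable mapping (N, 2) points -> |grad I| at those points.
    """
    # C'(q) by central differences on the closed contour.
    Cp = (np.roll(C, -1, axis=0) - np.roll(C, 1, axis=0)) / 2.0
    # C''(q) by second differences.
    Cpp = np.roll(C, -1, axis=0) - 2.0 * C + np.roll(C, 1, axis=0)
    internal = np.sum(alpha * np.sum(Cp ** 2, axis=1)
                      + beta * np.sum(Cpp ** 2, axis=1))
    external = -lam * np.sum(grad_I_mag(C))  # strong edges lower the energy
    return internal + external
```

Minimising this over the point positions is what drives the snake towards strong image gradients while keeping it short and smooth.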
11. Explicit model
- Point based tracking
  - Resolution problem
  - Addition and deletion of points
- And…
  - Topological problems
  - Non-intrinsic, parameterisation dependent
  - Hard to detect multiple objects simultaneously
- Example: [figure]
12. Explicit model
- Advantages
  - Explicit control
  - Point correspondence
  - Probably easier to impose shape regularisation
  - Computational efficiency
- Should be considered when
  - Known topology
  - No (or predictable) topological changes
  - Open curves
  - …
- Numerical method
  - Finite element method (FEM)
  - Discretise into sub-domains
  - Cohen & Cohen, IEEE T-PAMI, 1993
13. Implicit model
- Popularly based on the level set technique
- Implicit snake models
  - Introduced by Caselles et al. and Malladi et al. (1993)
  - Based on the theory of curve evolution
  - Numerically implemented via level set methods
  - The snake evolves to minimize the weighted length in a Riemannian space with a metric derived from the image content
- Weighted length minimisation example: [figure: two paths between points A and B]
14. Curve evolution
- Curvature flow:

  C_t = κN

  - κ is the curvature, N denotes the inward normal
  - The curvature measures how fast each point moves along its normal direction
  - A simple closed curve will evolve toward a circular shape and disappear
  - It smoothes the curve
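A minimal numerical illustration of this flow, under the simplifying assumption that the contour is roughly arc-length parameterised, so that κN reduces to the second derivative of the point positions (the function name and step size are illustrative):

```python
import numpy as np

def curvature_flow_step(C, dt=0.1):
    """One explicit Euler step of curve-shortening flow C_t = kappa * N.

    For an (approximately) arc-length parameterised closed curve,
    kappa * N equals C'', so the step reduces to Laplacian smoothing.
    C : (N, 2) array of points on a closed curve.
    """
    lap = np.roll(C, -1, axis=0) - 2.0 * C + np.roll(C, 1, axis=0)
    return C + dt * lap

# A circle stays circular but shrinks, as the slide describes.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
C = np.stack([np.cos(theta), np.sin(theta)], axis=1)
for _ in range(50):
    C = curvature_flow_step(C)
radius = np.mean(np.linalg.norm(C - C.mean(axis=0), axis=1))  # below 1: it shrank
```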
15. Curve evolution
- Constant flow:

  C_t = cN

  - c is a constant, N denotes the inward normal
  - Each point moves at a constant speed along its normal direction
  - It can cause a smooth curve to become a singular one
  - A.k.a. the balloon force
16. Level set method
- A computational technique for tracking a propagating interface
- Embed the curve into a surface, a 2D scalar field
- The zero level set corresponds to the embedded curve
- Deform the surface, instead of explicitly deforming the curve
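A minimal sketch of the embedding idea, assuming a signed distance function for the 2D scalar field (the grid size and variable names are illustrative):

```python
import numpy as np

# Embed a circle of radius r as the zero level set of a signed distance
# function phi on an n x n grid (negative inside, positive outside).
n, r = 64, 15
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - n / 2.0) ** 2 + (y - n / 2.0) ** 2) - r
inside = phi < 0           # region enclosed by the embedded curve
area = int(inside.sum())   # roughly pi * r**2
```

Deforming `phi` (rather than a list of contour points) is what lets the embedded curve split and merge freely.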
18. Level set method
- Key ideas introduced by Dervieux & Thomasset
  - Lecture Notes in Physics, 1980
- Well known after the seminal work by Osher & Sethian
  - Osher & Sethian, J. Computational Physics, 1988
  - Fluid dynamics, computational geometry, material science, computer vision, …
- Introduced to snake methods by Caselles et al. & Malladi et al.
  - Caselles et al., Numer. Math., 1993
  - Malladi et al., IEEE T-PAMI, 1995
- Advantages:
  - Implicit, intrinsic, non-parametric
  - Accurate modelling of front propagation
  - Capable of handling topological changes (almost!)
19. Level set method
- General curve evolution:

  Φ_t + F|∇Φ| = 0

  - Φ is the level set function, F is the speed function
- The curvature flow can be re-formulated as:

  Φ_t = ∇·(∇Φ / |∇Φ|) |∇Φ|

- The constant flow can be re-formulated as:

  Φ_t = c|∇Φ|
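The general evolution Φ_t + F|∇Φ| = 0 with a spatially constant speed can be stepped with a first-order Godunov-style upwind scheme. This is a sketch under assumed unit grid spacing and periodic boundaries, not the slides' implementation:

```python
import numpy as np

def level_set_step(phi, F=1.0, dt=0.5):
    """One step of phi_t + F |grad phi| = 0 for a constant speed F,
    using the first-order upwind gradient (grid spacing h = 1,
    periodic boundaries via np.roll)."""
    Dxm = phi - np.roll(phi, 1, axis=1)   # backward difference in x
    Dxp = np.roll(phi, -1, axis=1) - phi  # forward difference in x
    Dym = phi - np.roll(phi, 1, axis=0)
    Dyp = np.roll(phi, -1, axis=0) - phi
    if F > 0:  # take one-sided differences from the upwind side
        grad = np.sqrt(np.maximum(Dxm, 0) ** 2 + np.minimum(Dxp, 0) ** 2
                       + np.maximum(Dym, 0) ** 2 + np.minimum(Dyp, 0) ** 2)
    else:
        grad = np.sqrt(np.minimum(Dxm, 0) ** 2 + np.maximum(Dxp, 0) ** 2
                       + np.minimum(Dym, 0) ** 2 + np.maximum(Dyp, 0) ** 2)
    return phi - dt * F * grad
```

With a signed distance function that is negative inside, a positive F makes the zero level set expand outward, mirroring the constant (balloon) flow above.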
20. Level set method
- Numerical method
  - Finite difference method (FDM)
  - A local method to estimate the partial derivatives
  - Upwind scheme
- Reinitialisation
  - Reshaping the level set surface to retain smoothness
  - Periodically performed
  - Via the fast marching method, or by solving the following PDE:

  Φ_t = sign(Φ)(1 − |∇Φ|)
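The reinitialisation PDE above can be iterated directly. The sketch below is deliberately simplified (a smoothed sign frozen at the initial Φ, and plain central differences instead of a proper upwind discretisation), so it only illustrates the idea of relaxing Φ towards a signed distance function with |∇Φ| → 1 near the interface:

```python
import numpy as np

def reinitialise(phi, iters=20, dt=0.3):
    """Iterate phi_t = sign(phi) * (1 - |grad phi|) so that phi relaxes
    towards a signed distance function near its zero level set."""
    s = phi / np.sqrt(phi ** 2 + 1.0)  # smoothed sign(phi), frozen
    for _ in range(iters):
        gy, gx = np.gradient(phi)      # central differences, for simplicity
        phi = phi + dt * s * (1.0 - np.sqrt(gx ** 2 + gy ** 2))
    return phi
```

Because information propagates outward from the zero level set, only a band around the interface is corrected per unit of pseudo-time, which is why the slides mention performing this periodically.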
21. Level set method
- Challenges:
  - Computational complexity
    - FDM: a local method to estimate the partial derivatives
    - Dense computational grid
    - More expensive in 3D
    - Fast marching, narrow band, additive operator splitting (AOS)
  - More sophisticated topological changes
  - Signed distance function
    - Reinitialisation is necessary to eliminate accumulated numerical error
    - Prevents the level set developing new components
    - Does not allow perturbations away from the zero level set
    - Cannot create new contours
    - E.g. fails to localise internal object boundaries
23. Numerical solution
- The level set time derivative is approximated by a forward difference
- Force fields can be classified into three types:
  - Curvature flow
  - Constant flow
  - Advection force field
- Each requires a different differencing scheme
26. Numerical solution
- Advection flow:
  - Consider an external velocity force field
  - Check the sign of each of its components
  - Construct one-sided upwind differences accordingly
  - E.g. GVF (gradient vector flow), …
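The sign check and one-sided differencing can be sketched for an assumed velocity field (u, v); unit grid spacing, periodic boundaries and the function name are simplifying assumptions:

```python
import numpy as np

def advect_upwind(phi, u, v, dt=0.5):
    """One upwind step of phi_t + u * phi_x + v * phi_y = 0.

    The sign of each velocity component selects the one-sided difference
    taken from the side the information comes from."""
    Dxm = phi - np.roll(phi, 1, axis=1)   # backward difference in x
    Dxp = np.roll(phi, -1, axis=1) - phi  # forward difference in x
    Dym = phi - np.roll(phi, 1, axis=0)
    Dyp = np.roll(phi, -1, axis=0) - phi
    phix = np.where(u > 0, Dxm, Dxp)      # upwind choice per component
    phiy = np.where(v > 0, Dym, Dyp)
    return phi - dt * (u * phix + v * phiy)
```

A bump transported by a constant rightward field drifts right (with some upwind smearing), which is the behaviour the one-sided scheme is designed to keep stable.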
28. Motivation
- Convergence study: the 4-disc problem
[Figure panels: Geodesic, DVF, GGVF, GeoGGVF, CVF, MAC]
- DVF: Cohen & Cohen, IEEE T-PAMI, 1993
- Geodesic: Caselles et al., IJCV, 1997
- GGVF: Xu & Prince, Signal Processing, 1998
- GeoGGVF: Paragios et al., IEEE T-PAMI, 2004
- CVF: Gil & Radeva, EMMCVPR, 2003
- CPM: Jalba et al., IEEE T-PAMI, 2004
- MAC: Xie & Mirmehdi, IEEE Trans. Pattern Analysis & Machine Intelligence, 2008
30. Motivation
- Objectives:
  - Long-range force interaction
  - Dynamic force field, instead of a static one
  - Bidirectional – allow cross-boundary initialisation
  - Efficiency
- Region-based or edge-based?
  - Prior knowledge
  - Boundary assumptions
  - Discontinuity in regional statistics vs. discontinuity in image intensity
  - Application dependent
- Goal: improving edge-based performance
  - Comparable to region-based approaches
  - Benefiting from less prior knowledge, simpler assumptions, and efficiency
  - There are scenarios where the boundary description needs no region support
31. MAC model
- Proposed method:
  - A novel external force field
  - Based on hypothesised magnetic interactions between the object boundary and the snake
  - Significant improvements in initialisation invariance and convergence ability
  - Yet a very simple model
- Grounded in magnetostatics
32. MAC model
- Edge orientation
  - Analogy to current orientation
  - Obtained by rotating the normalised image gradient vectors; a binary parameter selects the direction: 1 for anti-clockwise rotation, 2 for clockwise
  - (the rotated vectors are treated as 3D)
- Current orientation on the snake
  - Estimated as for the edge currents, by rotating the level set gradient vectors
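The gradient-rotation step can be illustrated with a small sketch (the helper name `edge_currents` is hypothetical; the two in-plane 90-degree rotations stand in for the slide's two rotation options):

```python
import numpy as np

def edge_currents(gx, gy, direction=1):
    """Rotate normalised image gradient vectors by 90 degrees to get a
    hypothesised current direction along the edges.

    direction=1: anti-clockwise rotation; direction=2: clockwise
    (matching the two choices on the slide)."""
    mag = np.sqrt(gx**2 + gy**2) + 1e-12  # avoid division by zero
    nx, ny = gx / mag, gy / mag
    if direction == 1:
        return -ny, nx   # (x, y) -> (-y, x): anti-clockwise
    return ny, -nx       # (x, y) -> (y, -x): clockwise
```

The same rotation applied to the level set gradient gives the current orientation on the snake.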
33. MAC model
- Magnetic force on the snake
  - Derive the force exerted on the snake by the image gradients, in terms of:
    - the electric current unit vector on the snake
    - the current magnitude on the snake (constant)
    - the electric current vector on the edges
    - the current magnitude on the edges
    - the unit vector between the two points x and s
    - the permeability constant
- Uniqueness
  - The force on the snake is dynamic: it relies on both spatial position and the evolving contour
  - It is always perpendicular to the snake
  - Global force interaction
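The equations this symbol legend refers to were lost in extraction. As a hedged reconstruction, the force follows the standard Biot–Savart form of magnetostatics on which the model is built (the symbol names below are illustrative, not necessarily the paper's exact notation): edge currents induce a magnetic flux density, and the force on the snake is the cross product of the snake's current direction with that flux, hence always perpendicular to the snake.

```latex
\mathbf{B}(s) \;\propto\; \hat{\mu} \sum_{x \in \mathcal{E}} I(x)\,
  \frac{\hat{\mathbf{O}}(x) \times \hat{\mathbf{R}}_{xs}}{\lVert \mathbf{R}_{xs} \rVert^{2}},
\qquad
\mathbf{F}(s) \;\propto\; \gamma\, \hat{\mathbf{O}}(s) \times \mathbf{B}(s)
```

Here $\mathcal{E}$ is the set of edge pixels, $\hat{\mathbf{O}}$ the current unit vectors, $I$ and $\gamma$ the edge and snake current magnitudes, $\hat{\mathbf{R}}_{xs}$ the unit vector between $x$ and $s$, and $\hat{\mu}$ the permeability constant.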
34. MAC model
- Snake formulation, in terms of the curvature and the snake's inward normal
- Level set representation
- Force field extension
  - The snake is embedded in a 2D scalar function, and its forces are extended accordingly
  - Fast marching: in this case, simply compute the forces for each level set
36. MAC model
- Edge-preserving force diffusion
  - Minimises noise interference
  - Nonlinear diffusion of the magnetic flux density
  - Similar to GGVF, but with an edge-weighting term added to the diffusion control
    - As little diffusion as possible at strong edges
    - Homogeneous and noisy areas, which lack consistent support from edges, receive larger diffusion
37. MAC model
- Edge-preserving force diffusion
- Fast implementation
  - Decompose the magnetic flux term
  - Fast computation in the Fourier domain
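The Fourier-domain speed-up amounts to evaluating the decomposed flux terms as convolutions with FFTs; a generic sketch (circular convolution is assumed here, which is what a plain FFT product computes):

```python
import numpy as np

def fft_convolve(field, kernel):
    """Circular 2-D convolution via the FFT: O(N log N) instead of the
    O(N^2) cost of summing per-edge-pixel contributions directly.

    A field built as a sum of identical spatial profiles centred on
    edge pixels is exactly a convolution of the edge map with that
    profile, which the FFT evaluates cheaply."""
    F = np.fft.fft2(field)
    K = np.fft.fft2(kernel, s=field.shape)  # zero-pad kernel to field size
    return np.real(np.fft.ifft2(F * K))
```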
47. Extension to 3D
Yeo, Xie, Sazonov, Nithiarasu, GPF, IEEE Trans. Image Processing 2011.
48. GPF model
- Geometrical Potential Force
  - Suitable for 3D data
  - Based on a hypothesised geometrically induced force field between the deformable model and the object boundary
  - A generalisation of the MAC model
  - Unique bi-directionality
  - Dynamic force interaction
  - Global view of the object boundary representation
Yeo, Xie, Sazonov, Nithiarasu, GPF, IEEE Trans. Image Processing 2011.
49. GPF model
- The interaction force acting on one point due to another is given in terms of q, the corresponding geometrically induced potential created by that point
69. 4D SPECT Segmentation
- Segmentation of the LV borders allows quantitative analysis of perfusion defects and cardiac function.
- [figure: SPECT slice of the LV ("a doughnut"); cardiac motion (mid-slice)]
Yang, Mirmehdi, Xie, Hall, CI2BM, MICCAI workshop 2009. (MIA under review)
70. 4D SPECT Segmentation
- [figure panels: frontal and top views of the opaque surface; short-axis view; frontal view; correspondence between a short-axis slice and the 3D frontal view; frontal view of the opaque surface overlaid on orthogonal slice planes; frontal view of the transparent surface]
72. 4D SPECT Segmentation
- [training/segmentation pipeline flowchart:]
  - Training: training image and shape sequences → Gaussian analysis, PCA → Gaussian priors, spatiotemporal priors (GtL trans.)
  - Segmentation of an unseen sequence: CACE evolution → constraint (based on the Gaussian and spatiotemporal priors) → update level sets and spatiotemporal parameters → check convergence: if no, iterate; if yes, end
73. 4D SPECT Segmentation
- [figure: segmentation results shown on the input images and against the ground truth]
- CCACE: 85.8%
- SCMS: 63.1%
79. Automatic Bootstrapping & Tracking
Chiverton, Xie & Mirmehdi, IEEE T-IP 2012.
- Online shape learning coupled with automatic bootstrapping
- Finite-size shape memory
- Statistical shape modelling
- Level-set-based tracking – similar to the previous approach
80. RBF-Level Set based Active Contouring
Xie & Mirmehdi, Image and Vision Computing 2011 & BMVC07.
81. Conventional level set technique
- Problems:
  - Computational complexity: dense computational grid, particularly expensive in 3D
    - Some solutions: fast marching, narrow band, AOS schemes, …
  - Cannot handle more sophisticated topological changes
  - Usually requires re-initialisation to maintain a smooth surface and prevent numerical artefacts from contaminating the solution
  - Perturbations away from the zero level set are missed
- [figure] Conventional level set: the hole is missed!
82. RBF-Level Set
- RBF-Level Set:
  - Uses radial basis functions to interpolate the level set
  - Updates expansion coefficients to deform the level set
  - Transfers the PDE to an ODE: efficient
  - Much coarser computational grid, which may even be irregular
  - More complex topological changes readily achievable
- [figure: conventional level set vs. proposed method]
84. RBF-Level Set
- RBF interpolation
  - The level set function is a scalar function, usually obtained from the signed distance transform
  - It is interpolated using a linear combination of a radial basis function, where p(x) is a first-degree polynomial and the remaining weights are the expansion coefficients
  - The interpolation can be expressed in matrix form
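The interpolation can be sketched as below. This is a minimal Gaussian-RBF version with hypothetical helper names `rbf_fit`/`rbf_eval`; the first-degree polynomial term p(x) is omitted for brevity:

```python
import numpy as np

def rbf_fit(centres, values, sigma=1.0):
    """Solve for expansion coefficients alpha so that
    phi(x_j) = sum_i alpha_i g(||x_j - x_i||) matches the samples,
    with a Gaussian basis g(r) = exp(-(r/sigma)^2)."""
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    A = np.exp(-(d / sigma) ** 2)        # interpolation matrix
    return np.linalg.solve(A, values)

def rbf_eval(x, centres, alpha, sigma=1.0):
    """Evaluate the RBF expansion at a single point x."""
    d = np.linalg.norm(x[None, :] - centres, axis=-1)
    return np.exp(-(d / sigma) ** 2) @ alpha
```

Evaluating `rbf_eval` off the centres is exactly the inherent interpolation the framework exploits.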
85. RBF-Level Set
- Updating the RBF level set
  - Original level set evolution, where F is the speed function in the normal direction
  - Transferred evolution:
    - The spatial derivatives can be solved analytically
    - First-order Euler's method
    - Iteratively updating the expansion coefficients evolves the level set
- Benefits:
  - Coarse computational grid, which may be irregular
  - No need for re-initialisation
  - More complex topological changes achievable
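The coefficient update can be sketched as an explicit-Euler step on the collocation system (a sketch under the assumption that A is the RBF interpolation matrix and the right-hand side holds F|∇φ| sampled at the collocation points):

```python
import numpy as np

def evolve_coefficients(alpha, A, speed_rhs, dt=0.1):
    """One explicit-Euler step of the ODE system A @ d(alpha)/dt = b.

    With phi = A @ alpha at the collocation points, the level set PDE
    phi_t = F|grad phi| collapses to an ODE in the expansion
    coefficients; speed_rhs holds F|grad phi| at those points."""
    dalpha = np.linalg.solve(A, speed_rhs)
    return alpha + dt * dalpha
```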
89. RBF-Level Set
- Active modelling using the RBF level set
  - A region-based approach
  - Texem-based modelling
  - Active contour formulation:
    - m is the number of classes
    - 1/m is the average expectation of a class
    - u is the posterior of the class of interest
  - Level set representation
90. RBF-Level Set
- Texems are image representations at various sizes that retain the texture or visual primitives of a given image.
  - A two-layer generative model
  - Each texem is represented by a mean and a variance: m = {µ, ω}
  - A bottom-up learning procedure
- [figure: bottom-up texem learning]
Xie & Mirmehdi, IEEE T-PAMI, 29(8), 2007.
91. RBF-Level Set
- Example learnt texems (7×7)
- Multiscale branch-based texems
- Texem grouping for multi-modal regions
Xie & Mirmehdi, IEEE T-PAMI, 29(8), 2007.
96. RAGS model
- Region-aided geometric snake (RAGS) model
  - Bridges boundary- and region-based techniques
  - Fuses global information into the local boundary description
  - Improvements on weak edges
  - More resilient to noise interference
- [figure: geodesic snake vs. GGVF snake vs. the proposed method]
Xie & Mirmehdi, IEEE Trans. Image Processing, 2004.
97. Integrated Reconstruction, Registration and Segmentation
A. Paiement et al., IEEE Transactions on Image Processing, January 2014.
98. Motivation
- Modelling from 3D/4D imaging data raises two intertwined issues: segmentation and interpolation
- Segmentation
  - Partition the 3D space containing the object and distinguish data points belonging to the object from background points
  - Data typically come as 2D slices, ranging from simple stacks of parallel slices to more complicated spatial configurations
99. Motivation
- Segmentation
  - Independent 2D segmentation is often not desirable; all slices are better segmented simultaneously in 3D/4D
  - However, interpolation is then necessary, since the data often do not span the whole 3D space (only partial support from data)
- Imaging conditions
  - Some modalities require integration over a thick slice to improve the signal-to-noise ratio; e.g. 1.5T cine cardiac MRI has a typical slice thickness of 7 mm, hence a spacing of 7 mm or more
  - Large spacing is also desirable to reduce acquisition time (patient discomfort, motion artefacts)
  - In the 4D case, data must also be interpolated between the available time frames
100. Motivation
- Argument: "the success of one stage (segmentation or interpolation) depends on the accuracy of the other"
- Approaches
  - Two sequential approaches, performing the two stages in opposing orders:
    - some first segment slices independently, then interpolate from the 2D contours (shape interpolation; notable work: Liu et al., surface reconstruction from non-parallel curve networks, CGF 27(2), 2008)
    - others combine registration and segmentation: segment sparse volumes made up of 2D slices by registering and deforming a model on the images (e.g. ASM); a prior is often necessary
  - Level-set-based method: the foundation (earlier work) for what is presented here
    - Integrates segmentation and interpolation in a new RBF-interpolated level set framework
    - Simplicity and flexibility of the level set; stability of the RBF; inherent interpolation provided by the RBF
101. Proposed Method
- Interpolate the level set function using a Strictly Positive Definite (SPD) RBF
- Integrate segmentation and interpolation in a new RBF-interpolated level set framework
  - Simplicity and flexibility of the level set
  - Stability of the RBF
  - Inherent interpolation provided by the RBF
102. Proposed Method
- Instead of evolving phi through the expansion coefficients (which involves inverting a large matrix), evolve alpha by minimising an energy functional E
  - F may be any functional and is defined by the chosen segmentation method
- Conventional variational level set method: using the chain rule, a gradient descent method yields the alpha evolution equation
103. Proposed Method
- Rename the resulting term as S: the speed of the moving front, generally defined on the contour C only
- The alpha evolution function can thus be simplified
  - An approximation of the Dirac function imposes the restriction of S to the contour C
- Practical choice of regularised Dirac function: epsilon = 1 for a sharp RBF; epsilon = 3 for a flatter RBF
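A common regularised Dirac of this kind is the cosine-bump choice sketched below (an assumption for illustration; the deck's exact formula is not shown in the extracted text):

```python
import numpy as np

def dirac_eps(phi, eps=1.0):
    """Regularised Dirac delta: nonzero only in a band of width
    2*eps around the zero level set, so the speed S acts on the
    contour C only (eps = 1 for a sharp RBF, eps = 3 for a flatter one)."""
    out = np.zeros_like(phi)
    band = np.abs(phi) <= eps
    out[band] = (1.0 + np.cos(np.pi * phi[band] / eps)) / (2.0 * eps)
    return out
```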
104. Proposed Method
- RBF-based interpolation methods usually define one control point per data point
- Here, one control point is defined per voxel of a discrete space, which allows the interpolation to be rewritten as a convolution
- The initial expansion coefficients can then be computed in the Fourier domain
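With one control point per voxel, phi = kernel * alpha is a (circular) convolution, so the initial coefficients follow by division in the Fourier domain; a minimal sketch, assuming the RBF kernel's spectrum has no zeros:

```python
import numpy as np

def initial_coefficients(phi, kernel):
    """Recover expansion coefficients alpha from phi = kernel * alpha
    (circular convolution, one control point per voxel) by pointwise
    division in the Fourier domain -- avoiding a large matrix inverse."""
    K = np.fft.fft2(kernel, s=phi.shape)  # zero-pad kernel to grid size
    return np.real(np.fft.ifft2(np.fft.fft2(phi) / K))
```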
114. Acknowledgement
- Team: Dr. Feng Zhao, Mr. Ehab Essa, Mr. Jingjing Deng, Mr. Mike Edwards, Mr. Robert Palmer, Mr. Yaxi Ye, Mr. David James, Mr. Jonathan Jones
- Alumni: Dr. Ben Daubney, Dr. Huazizhong Zhang, Dr. Dongbin Chen, Dr. Si Yong Yeo, Dr. Cyril Charron, Dr. John Chiverton, Dr. Ronghua Yang, Mr. Liu Ren, Mr. Arron Lacey
- Clinical collaborators:
  - Swansea Singleton Hospital; ABM UHT at Morriston
  - Bristol Royal Infirmary
  - Cardiff Hospital
  - Plymouth Hospital
csvision.swan.ac.uk