Complementing Mutual Information Techniques
to Improve Deformable Registration Processing
           for Clinical Applications

            Martin Dudziak, PhD

                  Version 1.0

                 August 4, 2005




                                                1
Introduction: Goal




 Improved localization and tracking

  over multiple types of time intervals (months, weeks, seconds)

  of specific objects (e.g., thoracic, abdominal tumors)

  that are challenging to image, to be compared and registered, to be tracked

  by robotic or semi-robotic therapeutic devices

  (or during human-guided administration of treatment).




                                                                                2
Introduction: Quest




 How to use multimodal image types to clarify and stabilize the key features and
 attributes that are:

  (a) critical for planning, targeting, treatment

  (b) most useful to identify significant medical transformations (growth, remission,
  recurrence, metastasis) especially post-therapy tumor dynamics

  (c) most useful for automated deformable registration processes

  (d) suitable for extrapolating forward to simulated future object behavior (geometry)




                                                                                 3
Introduction: Proposed Solution




Staged, hierarchical evaluation of pairs or triplets of models corresponding to
specific components of observed objects

    measures of mutual agreement or discord at each stage

   ♦ conventional image data parameters (α,β,γ,δ,ε…)

    ♦ network elements describing transform operators (Γ,∆,Η,Θ,Φ,…) that can be
 interpreted as a way to encode one object (or segment) into another

    sets of functions that can be applied for fast, accurate transformations
 forward/backward in time representing spatial and temporal behavior of the
 observed objects


 This is something like an encryption scheme applied to geometrical objects that
 follow very complex, dynamic rules (in this case: organic, metabolic)
                                                                                  4
Introduction: Requirements (Ideal)




Improve image registration accuracy AND utility by clinicians

Improve computational performance and accuracy

Not lead to more complicated protocols for imaging or interpretation

Versatility to accommodate progressive novel developments in diagnostic
     radiology, alternative spectral imaging, non-radiological therapeutics




                                                                              5
Constraints and Caveats


•    Cannot get too general or too basic – the aim is to have something that can really work in a
     realistic time frame – some step forward for at least one particular sub-branch of oncology
     that could be implemented into the clinical world in 1, 2 years, not much more

•    But if this can lead to something that has the potential for extension, even generalization, to
     multiple forms of diagnosis, therapeutics, even outside oncology, then this is also good

•    A massive amount of work has been done already by many people much more experienced in
     every aspect of medical imaging and radiological oncology – much of what is suggested here
     as some “new” approach has not yet had the opportunity to “connect” with real medical data

•    The extensive prior and current work by others could actually provide the basis for a healthy
     “synthesis” approach, using established algorithms, software, databases, which could rule out
     the need or benefit for any radically different approach (such as what may be suggested here)

•    Even if a lot of what follows is full of holes, it might lead to a practical theorem or two that
     ultimately can be useful to improve upon some of the existing technology


                                                                                                  6
Some Foundations


•    Mutual information is, first of all, the only information really, because information is always
     about something. So I[X;Y] tells us what we have in X that can tell us about Y. It is
     also telling us about uncertainty, insofar as I[X;Y] = H(X) + H(Y) – H(X,Y)

•    This leads into relative entropy, or information divergence, which can be thought of as a
     "distance" between the joint probability p(x,y) and the probability q(x,y) = p(x)p(y),
     which is the joint probability under the assumption of independence

•    The converse of the distance or separation is the mutual dependence or agreement between,
     for instance, two object models. Here we can treat p(x) and p(y) in terms of some object or
     artifact that may / may not be present in a portion of an image.

•    For any p(x) we are concerned with p(x0) for some object and its absence, p(x1) = 1-p(x0). Is
     some object (feature) of interest present or not?

•    The feature may be a parameter set, a range of values. Color, location, diameter of a
     definable marker, a fractal texture, an estimable volume, all characteristics that can be found
     in both the input data (e.g., image) and the object model, compared, and examined for
     mutual agreement.
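
As a concrete numerical illustration of the quantities above (added for this write-up, not part of the original work), the sketch below estimates I(X;Y) = H(X) + H(Y) - H(X,Y) from the joint histogram of two co-registered image patches; the array names and bin counts are arbitrary choices.

```python
# Minimal histogram-based mutual information estimate between two intensity arrays.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """I(A;B) = H(A) + H(B) - H(A,B), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()          # joint probability p(x, y)
    p_x = p_xy.sum(axis=1)              # marginal p(x)
    p_y = p_xy.sum(axis=0)              # marginal p(y)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy)

def mutual_information_kl(img_a, img_b, bins=32):
    """Same quantity written as the divergence between p(x,y) and q(x,y) = p(x)p(y)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    q_xy = np.outer(p_xy.sum(axis=1), p_xy.sum(axis=0))   # independence model
    nz = p_xy > 0
    return np.sum(p_xy[nz] * np.log2(p_xy[nz] / q_xy[nz]))
```

The two functions agree up to floating-point error, which is exactly the I[X;Y] = H(X) + H(Y) - H(X,Y) versus relative-entropy identity stated above.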

                                                                                                    7
Foundation Lit
•   Collignon, A., et al., "Automated multi-modality image registration based on information
    theory," in Bizais, Y. (ed.), Proceedings of the Information Processing in Medical Imaging
    Conference, Kluwer Academic Publishers, pp. 263-274.
•   Collignon, A., Vandermeulen, D., Suetens, P., and Marchal, G., "3D multi-modality medical
    image registration using feature space clustering," in Ayache, N. (ed.), Computer Vision,
    Virtual Reality and Robotics in Medicine, Springer Verlag, pp. 195-204.
•   Cover, T. and Thomas, J., Elements of Information Theory, John Wiley and Sons, 1991.
•   Duda, R. and Hart, P., Pattern Classification and Scene Analysis, John Wiley and Sons, 1973.
•   Hyvärinen, Aapo, "Survey on Independent Component Analysis," Helsinki University of
    Technology, http://www.cis.hut.fi/aapo/papers/NCS99web/NCS99web.html
•   Kruppa, Hannes and Schiele, Bernt, "Hierarchical combination of object models using mutual
    information," in Proceedings of the 12th British Machine Vision Conference, pp. 103-112,
    2001.
•   Studholme, C., Hill, D., and Hawkes, D., "Multiresolution voxel similarity measures for MR-
    PET registration," in Bizais, Y. (ed.), Proceedings of the Information Processing in Medical
    Imaging Conference, Kluwer Academic Publishers.
•   Thévenaz, P. and Unser, M., "An Efficient Mutual Information Optimizer for Multiresolution
    Image Registration," in Proceedings of the 1998 IEEE International Conference on Image
    Processing (ICIP'98), Chicago IL, USA, October 4-7, 1998, vol. I, pp. 833-837.
•   Viola, P. and Wells III, W., "Alignment by maximization of mutual information," in
    Proceedings of the 5th International Conference on Computer Vision.
•   Wells III, W., Atsumi, H., Nakajima, S., and Kikinis, R.,
    "Multi-Modal Volume Registration by Maximization of Mutual Information,"
    http://splweb.bwh.harvard.edu:8000/pages/papers/swells/mia-html/mia.html
                                                                                              8
Promising Work

•   Hannes Kruppa and Bernt Schiele, “Mutual Information for Evidence Fusion,”
    Perceptual Computing and Computer Vision Group, ETH Zurich, CH
    http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/KRUPPA1/cvonline.html

•   A simpler place to start – faces in rooms – can the computer find them?
    (Important – first find the faces before trying to identify them)

•   They take a mutual information approach using pairwise combinations of different
    object models




                                                                                       9
                                                                        Kruppa & Schiele
Pairing Up Models

 •        Goal: Find where the human faces are in the image
 •        Use skin color, facial shape, and a template matcher
 •        First use color and template (stage 1), compare observed with expected, deforming
          to fit for a maximum MI




                                                             Observed distributions are compared to
                                                             expected, and fitting (moving) can occur to
                                                             maximize mutual information,
                                                             but sometimes for a given observed-
                                                             expected pairing, there may be too much
                                                             interference from other observables



Kruppa & Schiele




                                                                                              10
Overcoming by Stepping Up

•    Absences or overabundance (shadows, bright lights) will skew some of the
     pairings, and this is why there are parallel and hierarchical pairings of models
•    The key to overcoming failures in one model is to do the pairing and then
     introduce another level based on the results from the first pairings, and so on (a
     sketch of this hierarchical pairing follows below)
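
To make the hierarchical pairing concrete, here is a hedged sketch in the spirit of Kruppa & Schiele (it is not their published algorithm; the cue names, the averaging fusion rule, and the helper mi() are illustrative choices only).

```python
# Combine several per-pixel "cue maps" (e.g., skin color, shape, template responses)
# pairwise, most mutually informative pair first, feeding each fused map to the next level.
import numpy as np
from itertools import combinations

def mi(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / np.outer(p.sum(1), p.sum(0))[nz]))

def hierarchical_combine(cues):
    """cues: dict of name -> 2D response map in [0, 1]."""
    cues = dict(cues)
    while len(cues) > 1:
        # pick the pair of models whose responses agree most (highest MI)
        na, nb = max(combinations(cues, 2), key=lambda ab: mi(cues[ab[0]], cues[ab[1]]))
        fused = 0.5 * (cues.pop(na) + cues.pop(nb))     # toy fusion rule for the sketch
        cues[f"({na}+{nb})"] = fused                    # fused evidence enters the next level
    return cues.popitem()                               # (combination order, final map)
```

A failing cue (shadowed color, washed-out template) contributes little mutual information at its level, so the pairing order routes around it rather than letting it veto the detection.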




                                                                            Kruppa & Schiele




                                                                                       11
Best-fitting among best pairs




                                Kruppa & Schiele



                                          12
About Face…




                                           Kruppa & Schiele




                                       ?


Can we apply something from “out
there” with entirely different image
feature types and matching
problems, to something on
seemingly a different ‘order’
altogether?

                                                      13
All images taken from … a UPMC paper coincidentally!
HCC Example
                  “Fibrolamellar Hepatocellular Carcinoma: Pre- and Posttherapy Evaluation with CT
                  and MR Imaging” – Tomoaki Ichikawa, MD, Michael Federle, MD, Luigi Grazioli,
                  MD, & Wallis Marsh, MD (UPMC and Yamanashi Medical Univ.)
                  Radiology 2000; 217:145-151




   Pretreatment




                                      1 Year Later

                                                                                         14
Transforms of Interest



•      Temporal: M1(t1) → M1(t2)

•      Displacement: M1(x0, y0, z0) → M1(x1, y1, z1)

•      Orientation: Observer(xi, yi, zi) [M1(x0, y0, z0)] → Observer(xi+j, yi+j, zi+j) [M1(x0, y0, z0)]

•      Modality (MRI, CT, PET, US): M1 → M2



In each of these transforms there can be many other transforms occurring. It is desirable to
       establish some of the rules by which they can occur, what the dependencies are, and
       what the likely paths are by which to deform and reshape the expected distributions of
       parameters for a given model or sub-model in order to optimize (maximize) the mutual
       information between the two.
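
As a minimal, hedged illustration of the simplest case above (a rigid in-plane displacement between two acquisitions), the sketch below scores candidate integer shifts of a "moving" slice against a "fixed" slice by mutual information; the exhaustive search is only for clarity, and a real deformable registration would replace the shifts with a parameterized deformation field.

```python
# Brute-force search for the translation that maximizes mutual information.
import numpy as np

def mi(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / np.outer(p.sum(1), p.sum(0))[nz]))

def best_shift(fixed, moving, max_shift=10):
    """Return ((dy, dx), score) maximizing MI between fixed and the shifted moving image."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = mi(fixed, shifted)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score
```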



                                                                                                      15
Challenges




•   FACES – seemingly a much easier problem, more ‘tractable’ challenges:
    (a) mostly the same shape, and normalizable in geometry
    (b) color, lighting, shadow can throw off one model or another

•   Multimodal imaging of TUMORS (and other anatomical objects)
    (a) mapping from MRI to CT to US
    (b) changes in coordinate systems
    (c) growth, coloration changes, hematoma, hemorrhage, dislocation
    (d) effects from surgery and other therapy
    (e) almost by definition, models Mi and Mj will differ in qualitative ways that cannot be
    easily compared

•   So there must be some transform functions defined to map the attributes of M1 to those
    of M2, since the two cannot be compared as if they were simple scalar or vector transforms
    of each other



                                                                                            16
A Clue from the Biometric World



•    Compartmentalize the features and the tasks to detect, track, and recognize – don’t try to
     bundle it all together, but use the higher-scale to handle discrepancies at lower details

•    Faces for instance –
     Localize where they are, pinpoint in the world-space and relative to others
     Rule out illogical, impossible things (2 faces in same space, insufficient size, etc.)
     Then tackle orientation and axes – can eyes, nose, mouth even be seen, or what transforms
     must be applied
     Find key features in the face, and apply transform rules if necessary
     Then lock onto key features and start comparing
     For discrepancies, look into what could have changed, when, how
     For inconsistencies, go back and re-examine for the impossible, the orientation, the projective
     geometry aspects

•    Anatomical objects – similar approach, different specifics
     Location
     What must be vs. what cannot be
     Orientation
     Key features expected to stay the same or increase/decrease
     Features that are expected to change or have changed
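
A skeletal sketch of the staged checking just outlined (the stage names, the check-function signature, and the backtracking rule are illustrative assumptions, not an existing implementation):

```python
# Run ordered (name, check) stages; on a discrepancy, back up and re-examine the
# coarse-scale stages (location, impossibility, orientation) before continuing.
def staged_analysis(observation, model, stages, max_backtracks=2):
    context, i, backtracks = {}, 0, 0
    while i < len(stages):
        name, check = stages[i]
        ok, info = check(observation, model, context)   # each check returns (bool, details)
        context[name] = info
        if not ok and backtracks < max_backtracks:
            backtracks += 1
            i = 0                                       # restart from the coarse-scale checks
            continue
        i += 1
    return context

# usage sketch (check functions are hypothetical placeholders):
# stages = [("location", locate), ("impossibility", rule_out), ("orientation", orient),
#           ("key_features", compare_features), ("changes", explain_changes)]
```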
                                                                                              17
GENET Overview

•   Consider two objects, static 3D models M1 and M2, generated from transformations of 2D
    data sets (e.g., MRI and CT, or multiple CT). Let M1 and M2 be static representations of the
    same anatomical object Λi, an instance of a type Λ object (e.g., hepatoma).

•   Objective: to derive a geometrical encoding network that aids in adaptive deformable
    registration from M1 → M2 and that is extensible to other models Mi of the same anatomical
    object Λi, s.t.

    (a) Mutual agreement between M1 and M2 can be maximized (this is the classic goal; mutual
    information is maximal, and there is a useful relation established between the objects)

    (b) a deformation function fD can be applied to variations of M1 and M2 (M1(k) and M2(k)) that
    are individually generated - following sets of rules governing objects of type Λ - but that are
    independent of each other (namely M2(k) and M1(k) )

•   CLAIM: such a network, if it exists or can be approximated, will potentially be useful as a
    means of rapid registration between members of a larger class of models representing Λi
    including incomplete models Mx that individually may not be readily classified into
    agreement with any other model Mx+n derived from similar data sets

                                                                                             18
In Other Words
•   If we can develop some generalizable tools (the GENET or Geometrical Encoding Network)
    that work very well for describing, hierarchically, attributes of some well-defined images (the
    models Mi) that are known to be derived from viewing, under different circumstances (time,
    sensing modality) the same fundamental object (Λi),

•   Then we may hope to find certain patterns (within the Network that thus describes relations
    within some of these Mi (intramodular) and between different Mi (intermodular)).

•   These patterns should not correspond only to the individual images used in the process of
    deriving (constructing) the GENET; they should have some applicability to transforming other
    images that we know to be connected, albeit with some fuzziness, distortion, or
    incompleteness, to the same fundamental object Λi

•   This is still a hypothesis, prior work has been limited, but there is work by others that shows
    promise. It also has some unusual connections, to network and graph theory, to solitons, and
    to cryptology. Hmmm.

•   It might be way off base, or it could shorten computations, handle noisy, missing and
    ambiguous image data, and help therapies like IMRT and tissue-specific nanomaterial
    implants to work more accurately and under robotic control for extracranial regions that are
    hard to motion-stabilize during treatment.

                                                                                             19
Two Interesting Current Approaches



•    Multi-Modal Volume Registration by Maximization of Mutual Information
     William M. Wells III, Paul Viola, Hideki Atsumi, Shin Nakajima, Ron Kikinis
     An approach based upon estimating entropies and their derivatives, then applying a stochastic
     analog of gradient descent to achieve a local mutual information maximum.
     Not requiring segmentation, the goal is to be free of idiosyncratic properties of specific
     imaging modalities.
     Applied experimentally to MRI-CT and MRI-PET, with a focus on neurosurgery applications.

•    An Efficient Mutual Information Optimizer for Multiresolution Image Registration
     Thévenaz, P., Unser, M.
     An approach to registration in a multiresolution context that is based upon the Kullback-Leibler
     measure of mutual information, which in turn rests on the use of joint histograms of test and
     reference image data.
     Applies Parzen window functions and a Marquardt-Levenberg-style optimizer, driven by the
     Parzen-based criterion rather than by least squares.
     Applied to CT and MRI (using PD, T1 and T2 modalities) for brain image registration.
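
As a crude, hedged stand-in for the Parzen-window idea (this is not the Thévenaz-Unser implementation, which uses B-spline Parzen windows inside a Marquardt-Levenberg-style optimizer), one can smooth the joint histogram with a Gaussian kernel before computing the entropies; even this rough version makes the MI criterion smooth enough for gradient-style optimization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def parzen_mi(a, b, bins=64, sigma=1.5):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    joint = gaussian_filter(joint, sigma)        # kernel smoothing of the joint histogram
    p = joint / joint.sum()
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / np.outer(p.sum(1), p.sum(0))[nz]))
```

The Viola-Wells approach differs mainly in how it obtains the densities (sampled Parzen estimates) and in using a stochastic approximation of the MI gradient rather than a deterministic optimizer; that machinery is not reproduced here.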



                                                                                               20
Diffusion-Attraction

•   Based upon J. P. Thirion et al. (1995)
•   "Maxwell's demon" --- akin to simulated annealing,
    basically a hybrid of diffusion and attraction
    competing in the same object space (a minimal update sketch follows below)


                                              Suggestion – this should be done in
                                              parallel at multiple scales and
                                              spanning multiple subregions of
                                              two models M1 and M2
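
Below is a minimal single-resolution sketch of one demons-style iteration (a simplified reading of the diffusion-attraction idea, not Thirion's actual implementation): the fixed image's gradient supplies the attraction force, and Gaussian smoothing of the displacement field plays the role of diffusion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=2.0, eps=1e-8):
    """One update of a 2D displacement field disp with shape (2, H, W)."""
    yy, xx = np.meshgrid(np.arange(fixed.shape[0]), np.arange(fixed.shape[1]), indexing="ij")
    warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]], order=1, mode="nearest")
    gy, gx = np.gradient(fixed)                      # attraction comes from the fixed image
    diff = warped - fixed
    denom = gy**2 + gx**2 + diff**2 + eps            # demons-style normalization
    disp = disp - np.stack([diff * gy / denom, diff * gx / denom])
    return gaussian_filter(disp, sigma=(0, sigma, sigma))   # diffusion / regularization

# iterate, e.g.:
# disp = np.zeros((2,) + fixed.shape)
# for _ in range(200):
#     disp = demons_step(fixed, moving, disp)
```

Running the same loop at several resolutions and over several subregions of M1 and M2, as suggested above, would mean calling it on downsampled or cropped copies and upsampling the resulting field.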




                                                                              21
Abstracting the Diffusion-Attractor-Breather

•    Consider two images of one object, for which we have defined one or more
     subnets of the GENET to accommodate several models (color, template,
     granularity, fractal measure, volumetric relations). Each subnet
     defines how a portion of Mi can transform into some Mj.
•    The functions that operate along the arcs of the graph (subnet) may be
     computationally implemented through diffusion, attraction, stochastic
     gradients, Parzen windows, or Morse functions – at this point they are
     abstract functions, not entirely removed conceptually from
     "overloaded functions" in programming languages (a skeletal sketch
     follows this list).
•    The functions are like a coding algorithm, and working from subnets
     and GENETs toward increasingly generalizable deformation
     functions fD is a bit like working from a "crib" toward a full-fledged
     decoding of the cipher.
•    One subnet could govern the dynamics of how diffusion and attraction
     interplay to deform one part of an object topology into another and in
     the process reduce the blindness and iterative burden of the
     computations involved.
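
A skeletal sketch of such a subnet with operator-valued arcs (the class, the node names, and the example operators are hypothetical illustrations of the concept described above, not an existing library):

```python
# Nodes are model fragments; arcs carry transform operators; composing the operators
# along a path yields a candidate deformation function fD.
class Subnet:
    def __init__(self):
        self.arcs = {}                       # (src, dst) -> callable transform operator

    def add_arc(self, src, dst, op):
        self.arcs[(src, dst)] = op

    def compose(self, path):
        """Compose the arc operators along a node path into a single callable."""
        ops = [self.arcs[(a, b)] for a, b in zip(path, path[1:])]
        def f_d(fragment):
            for op in ops:
                fragment = op(fragment)      # e.g., a diffusion step, an attraction step, a Parzen fit
            return fragment
        return f_d

# usage sketch (operators are placeholders):
# net = Subnet()
# net.add_arc("M1.color", "M2.color", correct_gamma)
# net.add_arc("M2.color", "M2.volume", estimate_volume)
# fD = net.compose(["M1.color", "M2.color", "M2.volume"])
```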



                                     Imagine that these two
                                     images are not really both
                                     CT, or that they differ more
                                     significantly in view and features
                                                                              22
Smartening Up the Diffusion-Attractor-Breather

                Operatively we want to perform diffusion only on certain elements of
                the topology, attraction on others. We want to interpolate, but
                intelligently. We want to create a "breather" function for the way we
      M1        are trying to change M1 → M2, in such a way that not only do we
                achieve a one-time straightforward deformable registration of M1 → M2, but
                also we get a deformation function fD[t0 → t1, V0 → V1,… ] that can be
                applied to Mi and not limited only to the specific transformation
                and registration of M1 → M2. That was only the carrot for the
                numbers to go do their trotting and pull the wagon…




                                                                         M2




                                                                                23
Is there a neural network in the room?


Yes, there might be, but it is not a typical one.
The function fD is mapping multiple variables that are essentially vector spaces,
to others on the basis of some classification of image features and possible
topological configurations (volume fitting). It may be that what we are dealing
with are higher-ranked tensors, and these could be the objects that are classifiable
as patterns/configurations using some very simple neural networks.
The bi-stable coupled associative matrix model, based upon
                                          dx_i/dt = − ∂/∂x_i [ x_i (y_i − x_i) ] + Σ_j w_ij y_j,   (1)
where
                                                        y_j = sin(x_j),                    (2)
                                                        w_ij = Σ_k y_i y_j ,               (3)
yields a simple but extensible tool for doing very fast classifications; once learned and
codified, these classifiers could be applied in a software/hardware environment useful to
(for instance) clinical radiology applications.
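
A hedged Euler-integration sketch of Eqs. (1)-(3) follows. Two reading choices are assumptions rather than statements of the original model: the coupling sum in Eq. (1) is taken outside the gradient term, and the sum over k in Eq. (3) is read as a Hebbian sum over a set of stored patterns.

```python
import numpy as np

def hebbian_weights(stored_patterns):
    """Eq. (3) read as w_ij = sum_k y_i^(k) y_j^(k), with y = sin(x) per stored pattern x."""
    Y = np.sin(np.asarray(stored_patterns, dtype=float))   # shape (num_patterns, n)
    return Y.T @ Y

def run(x0, W, dt=0.01, steps=2000):
    """Explicit Euler integration of Eq. (1) with y = sin(x) as in Eq. (2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = np.sin(x)
        grad = (y - x) + x * (np.cos(x) - 1.0)   # d/dx_i [ x_i (sin x_i - x_i) ]
        x = x + dt * (-grad + W @ y)
    return x
```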
                                                                                          24
A Few More Thoughts on GENET



•   Is there only one optimizer for all volumes and image modalities?
•   How fast can we improve the computations? We have considered thus far only the
    algorithmic aspects. We can also engage MIMD parallelism, 16-way and 32-way
    multiprocessing, now compact enough and cheap enough to fit into the clinical
    budget stream.
•   Can we tune a GENET to be smart in the adaptive sense, using a look-ahead neural
    network for instance to adjust for patient breathing, thus making CyberKnife apps
    more practical for liver, colon, spleen, pancreas, stomach
•   Can we use a GENET approach to go after smaller metastases – abdominal, breast,
    lymphatic, and not only with RT but with tissue-specific, guidable nano-dispensers,
    something like the TNT (tagged nanothread) concept (now being implemented in
    different forms) or with polymer-embedded drug delivery protocols, possibly
    tissue-regional patches with timed release through "smart" PET and PVB?
•   At least we may have a step forward with one deformable registration at a time, and that
    would not be so bad after all…

                                                                                           25
GENET Speculations



•   Are the dynamics (growth, remission as well) of organic structures behaving
    topologically in a manner that follows a solitonic KdV path? (A numerical sketch of the
    equation follows this list.)
                             u_t + u_xxx + k·u·u_x = 0,   u = u(x,t)
•   Can we skip some of the image calculations and compose surfaces using what we
    may know to be necessarily the case from a Morse-theoretic perspective about the
    topologies?
•   Volumetric boundaries are very interesting at the cellular level. What can we learn
    about the geometry of growth including metastasis but also normal cell
    differentiation by examining abstractly the boundary relationships that we can see
    dynamically through MRI, CT, and particularly US?
•   Could this help us to evolve some kind of solitonic tensegrity network (STN) that
    defines how cells and clusters of cells can define and measure (chemically)
    topological boundary zones, thereby determining to-grow-and-divide or to-die
    decision-protocols (and affecting, through signals, genetic switches, the chemical
    activations to carry out those protocols)?
•   Ultimately question # 4 is very, very interesting (for the author, for over 40 years).
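
To make the KdV question in the first bullet concrete, here is a small split-step Fourier toy for u_t + u_xxx + k·u·u_x = 0 (added purely as a numerical illustration of the equation itself; no connection to tumor dynamics is claimed):

```python
import numpy as np

def kdv_evolve(u0, L=50.0, k=6.0, dt=1e-4, steps=20000):
    """Split-step integration: exact Fourier step for u_t + u_xxx = 0, Euler step for k*u*u_x."""
    n = u0.size
    kappa = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # spatial wavenumbers
    lin = np.exp(1j * kappa**3 * dt)                    # exact dispersive step
    u_hat = np.fft.fft(u0)
    for _ in range(steps):
        u_hat = lin * u_hat
        u = np.real(np.fft.ifft(u_hat))
        u_hat = u_hat - dt * 1j * kappa * np.fft.fft(k * u**2 / 2)   # nonlinear term
    return np.real(np.fft.ifft(u_hat))

# single-soliton initial condition for k = 6 (it translates without changing shape):
# x = np.linspace(0, 50, 256, endpoint=False); c = 1.0
# u0 = (c / 2) / np.cosh(np.sqrt(c) / 2 * (x - 15.0))**2
```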
                                                                                    26
Conclusions



•   Several recent developments in formal mutual-information-theoretic and
    computational solutions for deformable image registration have very promising
    results (e.g., Kruppa & Schiele; Thévenaz & Unser; Viola & Wells, but this is just the
    short-list)
•   Most of this work has a lot in common, mathematically, computationally, and in
    the targets (MRI+C, MRI+PET, CT+US; brain mainly)
•   We are looking at work that can definitely be integrated and developed through
    today's technology for reasonable-term clinical applications, with a timeframe measurable
    in several months to a few years and project scopes sized for an R01 or SBIR FastTrack
•   Integration of such work with clinical IMRT including CyberKnife type devices is
    a reasonable outcome, but this can serve non-radiological D/T as well
•   And then there is the possibility that with “dedicated slave labor” something like
    GENET might actually amount to something – shortcut for the CPU, at least we
    hope, breakthrough if we are blessed
•   All this could also be useful outside of the field of oncology, just to keep in mind.

                                                                                            27
Next Steps


SOME OPTIONS

•     Plan a new R01 proposal to NIH/NCI
•     Enfold things into existing project underway or ready to submit
•     SBIR/STTR pathway, possible FastTrack
      NIH naturally
      DOE (e.g., pathway pursued by JNAL, Hampton, ODU)
      DHS (HSARPA) – a twist but there are matches and connections
         (consider upcoming Center of Excellence opps)
•     Private sector – various, great environment in Pittsburgh
•     ISTC funding for enhanced team (maths, software, clinical trials) outsourcing,
      offshore (get 100% of phase 1 paid, agree to cover phase 2, can apply to grants)

GOAL
Get started, have real data, environment, ability to focus and to Do It

                                                                                    28
Thank You



                          Martin (is-at) @ forteplan.com

                               (804) 740-0342 home

                               (202) 415-7295 cell




Maybe inventor of this GENET                               Not this Genet



                                                                        29
Backup/Reference Material




                            30
BioScan Tech Summary

1.     First generation work was focused on developing algorithms for optimizing image
       capture with a Nogatech CCD-based video chipset and a custom-design device for
       wireless data capture and transmission to a PC.
2.     Image optimization was guided by parameter sets applied to lens, lighting and capture
       control, obviating the need for operator intervention or decision-making.
3.     Images were automatically archived, displayed, and processed with recognition software
       (OpenNet suite, developed and marketed initially by SDC).
4.     The BioScan workstation provided internet connectivity to an e-collaboration application
       (e-Presents) for real-time consultation by Ob/Gyn and oncology experts to assist with
       interpreting image results (current samples plus historical comparison).
5.     The BioScan imaging database was designed for real-time updating and agent-based
       notification triggering using the ADaM (Active Data Mover) software subsequently
       completed and redirected to Intel Corp. applications.
6.     BioScan enables the use of real-time, ad hoc imaging by medical specialists during
       clinical procedures, as well as informal at-home imaging by individuals, to be
       integrated and processed through automated knowledge management and discovery
       logic, pattern recognition logic, and expert intervention, for the purpose of providing
       “heads up” warnings or definitive diagnostic indicators to multiple forms of cancer or
       other surface-observable diseases such as STDs.

                                                                                           31
Version 1 Architecture

[Flattened block diagram of the CerviScan Version 1 hardware; recoverable content:]

•  CerviScan HEAD: NT1004 video chip (NT1004 or NT1003 options); TLWA1100 LED (array)
•  CerviScan STEM: Image Recognition/Classifier Processor Module, Cam/LED Control Processor
   Module, Data Collection Processor Module; candidate parts for each logical module function:
   ST-20/40, ST FIVE, ARM7, StrongARM (Dragonball), CY8C2xxxx, xX256, TE502 (SoC or 16/32
   micro + Flash + SRAM chipset solutions)
•  CerviScan BASE: USB cable interface, Belkin USB VideoBus II logic, charger interface,
   Li-ion battery, Lucent/Proxim wireless logic

                                                                                            32
Resources

1.      The following images and charts give a snapshot introduction to a few of the tool
        components that were developed and applied in the BioScan R&D process. Not all
        of these images reflect BioScan directly, cervical cancer, or skin-related imaging.
2.      These images are provided to show some of what was produced and can be deployed
        now to either a new Bioscan initiative or to other projects, unrelated to BioScan, for
        which the same expertise (including mathematical modeling, image analysis,
        electronics design and testing, database and knowledgebase implementation) can be
        very easily applied.


                                    Wireless Telemed Interface




       Macromolecular
     Networks Simulation                                                 Verite interactive pattern
                                                                          detection/classification


                                                                                                  33
Resources (More)




Another Verite application, with EKG                          e-Presents conferencing and
                                                              multi-channel video streaming

[Bar chart: "ADaM's exceptional performance" – Rows/Sec (0 to 16,000) plotted by Test Type;
 series: Typical Fastload, Typical Tpump, Typical Mixed, Peak Fastload, Peak Tpump,
 Peak Fstld & Tpump, Transparent FastL, Transparent Tpump, Special FastL, "Kitchen Sink",
 Peak ETL]

SQL (Oracle) Data Server interface for image data mining
                                                                                                                                                                                                 34
Resources (Still More)   Screenshots of SOAR-based production-rule system




                                                                            35
Resources sans Images; Current Tech Status


1.     There are images archived from some of the earlier BioScan Bayesian Network
       modeling and the pattern classifier work, all from the 1997 – 2004 period. These can
       be provided later once de-archived.
2.     All of the mathematical and software work can be reconstituted and re-run on
       general-purpose Windows and Linux platforms. It is simply a matter of having the
       resources and adequate funding support for the project to resume or to take on a new
       course different from BioScan per se (e.g., a project in oncological imaging and
       treatment of interest to the readers of this presentation, possibly unrelated to surface
       images, etc.)
3.     The current thinking about BioScan is to take advantage of superior micro-imaging
       that is commercially available, to take full advantage of the wireless capabilities, and
       to integrate the diagnostic and pre-diagnostic functions into established clinical
       procedures and patient disease management programs as well.
4.     The results of studying inverse method models for image registration, correlation,
       and tracking enable one to consider the practicality of redirecting some of this early
       work toward multimodal radiation-based treatments and tissue-specific
       chemotherapies, quite different from BioScan but making use of the same
       mathematical and computational resources, including perhaps the integrated
        Bayesian, production-rule and neural net models.



                                                                                             36
Why ADaM is important
1.       If you cannot collate, coordinate and efficiently access the patient image data, in real-time, free-
         form (with respect to views and users) and without blocking users during backup and archiving
         periods, then you have a very inefficient medical database and it is not conducive to the open-
         ended purposes of BioScan.
2.       The ADaM software outperformed the NCR-Teradata tools running against their own data
         warehouse product. It also outperformed Ab Initio, a leader in the field of Extract-Transfer-Load
         for Fortune 100 VLDB applications.
[Flattened block diagram of the ADaM architecture; recoverable components:]

•  Setup and configuration elements: CONFIG (ETLJOBs, ETLSPECs), ADB (System & Meta Data),
   SETUP (Initialize), MONITOR
•  ORCHESTRATOR (ORCH) with Generator, Banker, and Monitor; ETLP functional space with Actor
   objects (nodes)
•  ADaM runtime components/modules: Transformers, Extractors, Insertors, Loaders; Control
   Memory, Data Memory, Thread Pool, Machine space
•  ETL Sets (with ETLPs and actors); external sources/destinations: Docs and Files, Databases
•  Legend distinguishes ADaM runtime modules, setup/configuration elements, internal data flow,
   and external sources/destinations

                                                                                            37
References
 1.   Early overview document (product oriented, high-level)
      http://tetradgroup.com/library/bioscan.doc
 2.   Technical documents and notes available, on archived CDs
 3.   Early published paper on the neural net component
      http://tetradgroup.com/library/bistablecam_ijcnn99.doc
 4.   ADaM extract-transfer-load system, critical for the super-fast
      movement of image data, triggering of agents, and coordination of
      images within patient-specific and feature-specific database views
      http://tetradgroup.com/library/ADaM_Design_Description1-1.doc
 5.   ADaM performance optimization, a key part of the system enabling
      massive throughput and parallelism for high-density imaging (not
      only for BioScan but more for MRI, CT, PET, 3d-ultrasound, digital
      x-ray) http://tetradgroup.com/ADaM_PerfOpt.doc
 6.   Additional material available, in published, pre-published, archived,
      and workbook forms, along with databases and simulation/modeling
      software (Matlab, Mathematica, Maple)
                                                                         38
Current Program Status

 1.   The electronics hardware for the mobile wireless image capture and
      collection has been radically simplified.
 2.   Pre-contract agreements with suppliers and partners in the electronics
      hardware domain have been established.
 3.   Matching fund agreements for phase-1 work have been obtained.
 4.   The software development has proceeded extensively during 2001-2004
      and includes work using SOAR, GeNie, BNJ, JESS, and PNL, plus
      extensive work in the application of inverse method models (for more
      advanced applications than BioScan will require in most applications).
 5.   Project work has been in hiatus due to funding issues.
 6.   Project work can be resumed and a substantial team of technical personnel
      can be activated within 1 to 3 months.
 7.   All prior work, intellectual property, patent applications, data, algorithms,
      designs, etc. remain the property of the author, M. Dudziak and are
      assignable under contract.


                                                                                  39
Some I3 Material




                   40
Making Sense of the Data (I)

•   Basic diffusion equation - usable as starting point for inverse problems

             ∂²u/∂x² = (1/k) ∂u/∂t,      u(x,0) = f(x),      u(0,t) = u(a,t) = 0

                                                             (Particular credits - Roger Dufour, MIT)
•   Time-transition is accomplished in the Fourier domain

             f(x) = Σ_{n=1..∞} f_n sin(πn x/a),      f_n = (2/a) ∫_0^a f(x) sin(πn x/a) dx

             u(x,t) = Σ_{n=1..∞} f_n e^{−k(πn/a)² t} sin(πn x/a)
•   Transition backwards in time requires amplification of high frequency
    components - most likely to be noisy and skewed
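
A short numerical check of the forward problem (an added illustration, assuming the reconstruction of the formulas above is correct): each sine mode decays by exp(−k(πn/a)²t), so the backward-in-time problem would have to multiply each mode by the reciprocal factor, amplifying exactly the noisy high-frequency components.

```python
import numpy as np

def heat_sine_series(f, a=1.0, k=1.0, t=0.01, n_modes=64):
    """Forward solution of u_t = k*u_xx with u(0,t) = u(a,t) = 0 via the sine series above."""
    x = np.linspace(0.0, a, f.size)
    u = np.zeros_like(f, dtype=float)
    for n in range(1, n_modes + 1):
        fn = (2.0 / a) * np.trapz(f * np.sin(np.pi * n * x / a), x)   # Fourier sine coefficient
        u += fn * np.exp(-k * (np.pi * n / a) ** 2 * t) * np.sin(np.pi * n * x / a)
    return u
```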



                                                                                          41
Making Sense of the Data (II)
•   Heuristic and a priori constraints needed to maintain physical realism and
    suppress distortions from inverse process




                                                                                      Particular credits - Roger Dufour, MIT
•   First-pass solution best match or interpolation among a set of acceptable
    alternatives

       x̂ = arg min_x ||Ax − y||      s.t.      x ∈ X

•   Final solution may minimize the residual error and the regularization term

       x̂ = arg min_x ||Ax − y||_2^2 + λ ||L(x − x̄)||_2^2

       Regularization balances fidelity to the observed data against an
    a priori determined (e.g., higher-scale-observed) solution model
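
A minimal sketch of the regularized estimate above, solved through the normal equations (notation follows the slide; A, L, and the reference model x̄, written x_ref below, are assumed given):

```python
import numpy as np

def regularized_solve(A, y, L, x_ref, lam=1e-2):
    """argmin_x ||A x - y||^2 + lam * ||L (x - x_ref)||^2"""
    lhs = A.T @ A + lam * (L.T @ L)
    rhs = A.T @ y + lam * (L.T @ L) @ x_ref
    return np.linalg.solve(lhs, rhs)
```

The constrained first-pass form (x ∈ X) would project or clip the result back onto the admissible set after the solve, or use a constrained solver instead.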

                                                                                 42
Making Sense of the Data (III)


               •   Diffusion-Attraction
               •   Modeling situations and schemas
                   as composite "images" in n-D

                                                             (Particular credits - J. P. Thirion, INRIA)
               •   Iterative process with
                   exploration of parallel tree paths
                    – Speculative track; not required
                      for Nomad Eyes sensor fusion
                      to be useful to analysts
                    – Purpose is to enable automation
                      of the analysis and forecasting
                      post-collection process
                    – Area of active current research




                                                        43
Making Sense of the Data (IV) - I3BAT

                                                       •   Multiple modalities
           Sensor 1                         Sensor 2        – Acoustic, EM, Optical, Text,
                                                              NLP, SQL, AI-reasoning…
                                                       •   All looking at the same topic of
                                                           interest (aka “region”)
                                                       •   Each sensitive to different
                                                           physical/logical properties
                                          Property 3        –   “Trigger” data
                                                            –   Contiguity (space/time)
                                                            –   Inference relations
                                         Property 2         –   “Hits” with conventional DB
      Property 1                                                queries (immigration, known
                                                                associations, other
                                        Background              investigations)
                                                       •   Compare with Terrorist Cadre
                                                           Tactic models (schemas, maps)
Particular credits - Eric Miller, NEU



                                                                                      44
Contact information:



Martin J. Dudziak, PhD
28 Chase Gayton Circle, Suite 736
Richmond, VA 23238-6533 (USA)

(804) 740-0342
(202) 415-7295 cell

martin@forteplan.com
martin “at” forteplan.com




                                    45
