Software Engineering Department
Analysis of PHANTOM images in order
to determine the reliability of
PET/SPECT cameras
Authors
Archil Pirmisashvili (ID: 317881407)
Gleb Orlikov (ID: 317478014)
Supervisor
Dr. Miri Cohen Weiss
Table of contents:
1. Introduction
2. Theory
2.1 Background
2.1.1 Image registration by maximization of combined mutual information and gradient information [1]
2.1.2 Multi-modal volume registration by maximization of mutual information [2]
2.1.3 An automatic technique for finding and localizing externally attached markers in CT and MR volume images of the head [3]
2.1.4 Use of the Hough transformation to detect lines and curves in pictures [4]
2.2 Detailed description
2.2.1 Introduction
2.2.2 The problem
2.2.3 Our solution
2.3 Expected results
3. Software engineering documents
3.1 Requirements (use case)
3.2 GUI
3.3 Program structure – architecture, design
3.3.1 UML class diagram
3.3.2 Sequence diagram
3.3.3 Activity diagram
3.4 Testing plan
3.4.1 Test scenario for – Main interface
3.4.2 Test scenario for – Program Option
3.4.3 Test scenario for – Mask Generator
3.4.4 Test scenario for – DICOM images selection
3.4.5 Test scenario for – Manual correction
4. Result and conclusion
4.1 QA testing process
4.2 Problems and solutions
4.2.1 Working with a set of DICOM images
4.2.2 Creation of PET/CT mask
4.2.3 Find the best slices
4.2.4 Fit the MASK to the best slice
4.2.5 Retrieving SUV (Standardized Uptake Values) from the DICOM image
4.3 Running/Simulation
4.3.1 Simulation 1
4.3.2 Simulation 2
4.3.3 Simulation 3
4.4 Final conclusion
References
1. Introduction
Imaging visualization methods are widely used in modern medicine. These methods make it possible to obtain images of normal and pathological human organs and systems. Besides CT and MRI, nuclear diagnostics is a branch of diagnostic imaging in which multi-modality imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are widely used. These two methods use gamma cameras to produce 2D/3D images. The maintenance of these cameras requires periodic QA tests. Today this procedure takes at least 4 hours per camera; our goal is therefore to automate the procedure and reduce that time.
Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also be referred to as molecular medicine or molecular imaging & therapeutics. Nuclear medicine uses certain properties of isotopes and the energetic particles emitted from radioactive material to diagnose or treat various pathologies. In contrast to the typical concept of anatomic radiology, nuclear medicine enables assessment of physiology. This function-based approach to medical evaluation has useful applications in most subspecialties, notably oncology, neurology, and cardiology. Gamma cameras are used in scintigraphy, SPECT, and PET, for example, to detect regions of biologic activity that may be associated with disease. A relatively short-lived isotope, such as ¹²³I, is administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue in the body, and can be used to identify tumors or fracture points in bone. Images are acquired after collimated photons are detected by a crystal that gives off a light signal, which is in turn amplified and converted into count data.
Scintigraphy is a form of diagnostic test wherein radioisotopes are taken internally, for example
intravenously or orally. Then, gamma cameras capture and form two-dimensional images from the
radiation emitted by the radiopharmaceuticals.
Single-photon emission computed tomography (SPECT) is a 3D tomographic technique that uses gamma camera data from many projections, which can be reconstructed in different planes. A dual-detector-head gamma camera combined with a CT scanner, which provides localization of functional SPECT data, is termed a SPECT/CT camera and has shown utility in advancing the field of molecular imaging. In most other medical imaging modalities, energy is passed through the body and the reaction or result is read by detectors. In SPECT imaging, the patient is injected with a radioisotope, most commonly thallium-201 (²⁰¹Tl), technetium-99m (⁹⁹ᵐTc), iodine-123 (¹²³I), or gallium-67 (⁶⁷Ga). Gamma rays are emitted from the body as these isotopes naturally decay, and are captured by detectors that surround the body. This essentially means that the human is now the source of the radioactivity, rather than a medical imaging device such as an X-ray or CT machine.
Positron emission tomography (PET) uses coincidence detection to image functional processes. A short-lived positron-emitting isotope, such as ¹⁸F, is incorporated into an organic substance such as glucose, creating ¹⁸F-fluorodeoxyglucose, which can be used as a marker of metabolic utilization. Images of activity distribution throughout the body can show rapidly growing tissue, such as tumors, metastases, or infections. PET images can be viewed alongside computed tomography scans to determine an anatomic correlate. Modern scanners combine PET with CT, or even MRI, to optimize the image reconstruction involved with positron imaging. This is performed on the same equipment without physically moving the patient off the gantry. The resulting hybrid of functional and anatomic imaging information is a useful tool in non-invasive diagnosis and patient management.
Figure 1: Positron annihilation event in PET
Imaging phantoms, or simply "phantoms", are specially designed objects that are scanned or
imaged in the field of medical imaging to evaluate, analyze, and tune the performance of various
imaging devices. These objects are more readily available and provide more consistent results
than the use of a living subject or cadaver, and likewise avoid subjecting a living subject to direct
risk. Phantoms were originally employed for use in 2D x-ray based imaging techniques such as
radiography or fluoroscopy, though more recently phantoms with desired imaging characteristics
have been developed for 3D techniques such as MRI, CT, Ultrasound, PET, and other imaging
methods or modalities.
Figure 2: PHANTOM
A phantom used to evaluate an imaging device should respond in a similar manner to how human
tissues and organs would act in that specific imaging modality. For instance, phantoms made for
2D radiography may hold various quantities of x-ray contrast agents with similar x-ray absorbing
properties to normal tissue to tune the contrast of the imaging device or modulate the patients’
exposure to radiation. In such a case, the radiography phantom would not necessarily need to
have similar textures and mechanical properties since these are not relevant in x-ray imaging
modalities. However, in the case of ultrasonography, a phantom with similar rheological and
ultrasound scattering properties to real tissue would be essential, but x-ray absorbing properties
would not be needed.
Physicists perform the PHANTOM studies in PET and SPECT cameras, each producing a stack of
images that shows the 3D radioactive distribution as produced by the camera. The results can be
measured and compared to either the ideal results or to previous results.
Aim of the QA test: Tomographic image quality is determined by a number of performance parameters, primarily the scanner sensitivity, tomographic uniformity, contrast and spatial resolution, and the process used to reconstruct the images. Because of the complexity of the variation in the uptake of radiopharmaceuticals and the large range of patient sizes and shapes, the characteristics of radioactivity distributions can vary greatly, and a single phantom study cannot simulate all clinical imaging conditions. Cameras produce images simulating those obtained in a total-body imaging study involving both hot and cold lesions. Image quality is assessed by calculating image contrast and background variability ratios for both hot and cold spheres. This test allows assessment of the accuracy of the absolute quantification of radioactivity concentration in the uniform volume of interest inside the phantom.
2. Theory
2.1 Background
The goal of the test is to determine the two "best" slices from the collection of image slices provided by the camera. The best slice is the image slice that best matches a template of the regions of interest (ROIs). Accordingly, we first need to define the template and then use it to find the two "best" slices. The template contains the positions of the hot and cold ROI cylinders. Several published algorithms work with CT and PET images:
2.1.1 Image registration by maximization of combined mutual information and gradient
information [1]:
Mutual information has developed into an accurate measure for rigid and affine mono- and
multimodality image registration. The robustness of the measure is questionable, however.
A possible reason for this is the absence of spatial information in the measure. The present
paper proposes to include spatial information by combining mutual information with a term
based on the image gradient of the images to be registered. The gradient term not only seeks
to align locations of high gradient magnitude, but also aims for a similar orientation of the
gradients at these locations.
Method: The definition of the mutual information I of two images A and B combines the
marginal and joint entropies of the images in the following manner:
I(A,B) = H(A) + H(B) − H(A,B)
Here, H(A) and H(B) denote the separate entropy values of A and B, respectively. H(A,B) is the joint entropy, i.e. the entropy of the joint probability distribution of the image intensities.
Correct registration of the images is assumed to be equivalent to maximization of the mutual
information of the images. This implies a balance between minimization of the joint entropy
and maximization of the marginal entropies.
Recently, it was shown that the mutual information measure is sensitive to the amount of
overlap between the images and normalized mutual information measures were introduced to
overcome this problem. Examples of such measures are the normalized mutual information
introduced by Studholme:
π‘Œ(𝐴, 𝐡) =
𝐻(𝐴) + 𝐻(𝐡)
𝐻(𝐴, 𝐡)
and the entropy correlation coefficient used by Maes:
𝐸𝐢𝐢(𝐴, 𝐡) =
2𝐼(𝐴, 𝐡)
𝐻(𝐴) + 𝐻(𝐡)
These two measures have a one-to-one correspondence.
Image locations with a strong gradient are assumed to denote a transition of tissues, which
are locations of high information value. The gradient is computed on a certain spatial scale.
We have extended mutual information measures (both standard and normalized) to include
spatial information that is present in each of the images. This extension is accomplished by
multiplying the mutual information with a gradient term. The gradient term is based not only on
the magnitude of the gradients, but also on the orientation of the gradients.
The gradient vector is computed for each sample point x ={x1, x2, x3} in one image and its
corresponding point in the other image, x`, which is found by geometric transformation of
x. The three partial derivatives that together form the gradient vector are calculated by convolving the image with the appropriate first derivatives of a Gaussian kernel of scale σ. The angle α_{x,x'}(σ) between the gradient vectors is defined by:

α_{x,x'}(σ) = arccos( ∇x(σ) · ∇x'(σ) / (|∇x(σ)| |∇x'(σ)|) )

with ∇x(σ) denoting the gradient vector at point x of scale σ and |·| denoting magnitude.
The proposed registration measure is defined by:

I_new(A,B) = G(A,B) I(A,B)

with

G(A,B) = Σ_{(x,x')∈(A∩B)} ω(α_{x,x'}(σ)) min(|∇x(σ)|, |∇x'(σ)|)

Similarly, the combination of normalized mutual information and gradient information is defined as:

Y_new(A,B) = G(A,B) Y(A,B)
2.1.2 Multi-modal volume registration by maximization of mutual Information [2]:
This approach works directly with image data; no pre-processing or segmentation is required.
This technique is, however, more flexible and robust than other intensity-based techniques like
correlation. Additionally, it has an efficient implementation that is based on stochastic
approximation. Experiments are presented that demonstrate the approach registering
magnetic resonance (MR) images with computed tomography (CT) images, and with positron-
emission tomography (PET) images.
Consider the problem of registering two different MR images of the same individual. When
perfectly aligned these signals should be quite similar. One simple measure of the quality of a
hypothetical registration is the sum of squared differences between voxel values. This
measure can be motivated with a probabilistic argument. If the noise inherent in an MR image
were Gaussian, independent and identically distributed, then the sum of squared differences is
negatively proportional to the likelihood that the two images are correctly registered.
Unfortunately, squared difference and the closely related operation of correlation are not
effective measures for the registration of different modalities. Even when perfectly registered,
MR and CT images taken from the same individual are quite different. In fact MR and CT are
useful in conjunction precisely because they are different.
This is not to say the MR and CT images are completely unrelated. They are after all both
informative measures of the properties of human tissue. Using a large corpus of data,
or some physical theory, it might be possible to construct a function F(Β·) that predicts CT from
the corresponding MR value, at least approximately. Using F we could evaluate registrations
by computing F(MR) and comparing it via sum of squared differences (or correlation) with the
CT image. If the CT and MR images were not correctly registered, then F would not be good
at predicting one from the other. While theoretically it might be possible to find F and use it in
this fashion, in practice prediction of CT from MR is a difficult and under-determined problem.
The following derivation refers to the two volumes of image data that are to be registered as the reference volume and the test volume. A voxel of the reference volume is
denoted u(x), where the x are the coordinates of the voxel. A voxel of the test volume is
denoted similarly as v(x). Given that T is a transformation from the coordinate frame of the
reference volume to the test volume, v(T (x)) is the test volume voxel associated with the
reference volume voxel u(x). Note that in order to simplify some of the subsequent equations
we will use T to denote both the transformation and its parameterization.
We seek an estimate of the transformation that registers the reference volume u and test
volume v by maximizing their mutual information:
(1) T̂ = arg max_T I(u(x), v(T(x)))
Mutual information is defined in terms of entropy in the following way:
(2) I(u(x), v(T(x))) ≡ h(u(x)) + h(v(T(x))) − h(u(x), v(T(x)))

h(·) is the entropy of a random variable, defined as h(x) ≡ −∫ p(x) ln(p(x)) dx, while the joint entropy of two random variables x and y is h(x,y) ≡ −∫∫ p(x,y) ln(p(x,y)) dx dy.
Entropy can be interpreted as a measure of uncertainty, variability, or complexity.
The mutual information defined in Equation (2) has three components. The first term on the
right is the entropy in the reference volume, and is not a function of T. The second term is the
entropy of the part of the test volume into which the reference volume projects. It encourages
transformations that project u into complex parts of v. The third term, the (negative) joint
entropy of u and v, contributes when u and v are functionally related.
The entropies described above are defined in terms of integrals over the probability densities
associated with the random variables u(x) and v(T (x)). When registering medical image data
we will not have direct access to these densities.
The first step in estimating entropy from a sample is to approximate the underlying probability
density p(z) by a superposition of functions centered on the elements of a sample A drawn
from z:
(3) p(z) ≈ P*(z) ≡ (1/N_A) Σ_{z_j∈A} R(z − z_j)
where N_A is the number of trials in the sample A and R is a window function which integrates to 1. P*(z) is widely known as the Parzen window density estimate.
Unfortunately, the entropy integral cannot be evaluated directly; instead it is approximated with a sample mean:

(4) h(z) ≈ −E_z[ln P*(z)] ≈ −(1/N_B) Σ_{z_i∈B} ln P*(z_i)

where N_B is the size of a second sample B. The sample mean converges toward the true expectation at a rate proportional to 1/√N_B.
We may now write an approximation for the entropy of a random variable z as follows:
(5) h(z) ≈ h*(z) ≡ −(1/N_B) Σ_{z_i∈B} ln [ (1/N_A) Σ_{z_j∈A} G_ψ(z_i − z_j) ]

where G_ψ is the Gaussian density function:

G_ψ(z) ≡ (2π)^{−n/2} |ψ|^{−1/2} exp(−(1/2) zᵀ ψ^{−1} z)
Next we examine the entropy of v(T (x)), which is a function of the transformation T . In order
to find a maximum of entropy or mutual information, we may ascend the gradient with respect
to the transformation T. After some manipulation, the derivative of the entropy may be written
as follows:
(6) d/dT h*(v(T(x))) = (1/N_B) Σ_{x_i∈B} Σ_{x_j∈A} W_v(v_i, v_j) (v_i − v_j)ᵀ ψ^{−1} d/dT (v_i − v_j)
Using the following definitions:
v_i ≡ v(T(x_i)), v_j ≡ v(T(x_j)), v_k ≡ v(T(x_k))

and

W_v(v_i, v_j) ≡ G_ψv(v_i − v_j) / Σ_{x_k∈A} G_ψv(v_i − v_k)
The entropy approximation described in Equation (5) may now be used to evaluate the mutual
information between the reference volume and the test volume [Equation (2)]. In order to seek
a maximum of the mutual information, we will calculate an approximation to its derivative,
d/dT I(T) ≈ d/dT h*(u(x)) + d/dT h*(v(T(x))) − d/dT h*(u(x), v(T(x)))
Given these definitions we can obtain an estimate for the derivative of the mutual information
as follows:
dÎ/dT = (1/N_B) Σ_{x_i∈B} Σ_{x_j∈A} (v_i − v_j)ᵀ [W_v(v_i, v_j) ψ_v^{−1} − W_uv(w_i, w_j) ψ_uv^{−1}] d/dT (v_i − v_j)

where w_i ≡ [u(x_i), v(T(x_i))]ᵀ are joint samples of intensities from the two volumes.
The weighting factors are defined as:
π‘Šπ‘£(𝑣𝑖, 𝑣𝑗) ≑
𝐺 πœ“ 𝑣
(𝑣𝑖 βˆ’ 𝑣𝑗)
βˆ‘ 𝐺 πœ“ 𝑣
(𝑣𝑖 βˆ’ 𝑣 π‘˜)π‘₯ π‘˜βˆˆπ΄
π‘Šπ‘€(𝑀𝑖, 𝑀𝑗) ≑
𝐺 πœ“ 𝑣
(𝑉𝑖 βˆ’ 𝑉𝑗)
βˆ‘ 𝐺 πœ“ 𝑣
(𝑉𝑖 βˆ’ π‘‰π‘˜)π‘₯ π‘˜βˆˆπ΄
If we are to increase the mutual information, then the first term in the brackets may be
interpreted as acting to increase the squared distance between pairs of samples that are
nearby in test volume intensity, while the second term acts to decrease the squared distance
between pairs of samples whose intensities are nearby in both volumes. It is important to
emphasize that these distances are in the space of intensities, rather than coordinate
locations.
The term d/dT (v_i − v_j) will generally involve gradients of the test volume intensities and the derivative of transformed coordinates with respect to the transformation.
We seek a local maximum of mutual information by using a stochastic analog of gradient
descent. Steps are repeatedly taken that are proportional to the approximation of the
derivative of the mutual information with respect to the transformation:
Repeat:
A ← {sample of size NA drawn from x}
B ← {sample of size NB drawn from x}
T ← T + λ dÎ/dT
The parameter λ is called the learning rate. The above procedure is repeated a fixed number
of times or until convergence is detected. When using this procedure, some care must be
taken to ensure that the parameters of transformation remain valid.
In addition to the learning rate λ, the covariance matrices of the Parzen window functions are
important parameters of this technique. It is not difficult to determine suitable values for these
parameters by empirical adjustment, and that is the method we usually use. Referring back to
Equation (3), ψ should be chosen so that P*(z) provides the best estimate for p(z). In other
words ψ is chosen so that a sample B has the maximum possible likelihood. Assuming that
the trials in B are chosen independently, the log likelihood of ψ is:
(7) ln Π_{z_i∈B} P*(z_i) = Σ_{z_i∈B} ln P*(z_i)
This equation bears a striking resemblance to Equation (4); in fact, the log likelihood of ψ is maximized precisely when the entropy estimator h*(z) is minimized.
It was assumed that the covariance matrices are diagonal:

(8) ψ = DIAG(σ₁², σ₂², …)
Following a derivation almost identical to the one described above, an equation analogous to Equation (6) is derived:

(9) d/dσ_k h*(z) = (1/N_B) Σ_{z_b∈B} Σ_{z_a∈A} W_z(z_b, z_a) (1/σ_k) ([z]_k²/σ_k² − 1)
where [z]_k is the kth component of the vector z. In practice both the transformation T and the covariance ψ can be adjusted simultaneously: while T is adjusted to maximize the mutual information I(u(x), v(T(x))), ψ is adjusted to minimize h*(v(T(x))).
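As an illustration of Equations (3) and (5), the following minimal C# sketch estimates the entropy of a scalar random variable with a Gaussian Parzen window, using two samples A and B; the sample sizes and the variance ψ are arbitrary illustrative choices.

```csharp
using System;
using System.Linq;

static class ParzenEntropy
{
    // 1-D Gaussian kernel G_psi(z) with variance psi.
    static double Gauss(double z, double psi)
        => Math.Exp(-0.5 * z * z / psi) / Math.Sqrt(2 * Math.PI * psi);

    // Parzen density estimate P*(z) from sample A (Equation (3), scalar case).
    static double ParzenDensity(double z, double[] a, double psi)
        => a.Average(zj => Gauss(z - zj, psi));

    // Entropy estimate h*(z) = -(1/N_B) * sum over B of ln P*(z_i) (Equation (5)).
    public static double Estimate(double[] a, double[] b, double psi)
        => -b.Average(zi => Math.Log(ParzenDensity(zi, a, psi)));

    static void Main()
    {
        var rng = new Random(1);
        // Two samples drawn from the same (here: uniform) intensity distribution.
        double[] a = Enumerable.Range(0, 50).Select(_ => rng.NextDouble()).ToArray();
        double[] b = Enumerable.Range(0, 50).Select(_ => rng.NextDouble()).ToArray();
        Console.WriteLine(Estimate(a, b, 0.01));
    }
}
```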
2.1.3 An Automatic Technique for Finding and Localizing Externally Attached Markers
in CT and MR Volume Images of the Head [3]:
Different imaging modalities provide different types of information that can be combined to aid
diagnosis and surgery. Bone, for example, is seen best on X-ray computed tomography (CT)
images, while soft-tissue structures are seen best on magnetic resonance (MR) images.
Because of the complementary nature of the information in these two modalities, the
registration of CT images of the head with MR images is of growing importance for diagnosis
and for surgical planning. Furthermore, registration of images with patient anatomy is used in
new interactive image-guided surgery techniques to track in real time the changing position of
a surgical instrument or probe on a display of preoperative image sets of the patient. The
definition of registration as the determination of a one-to-one mapping between the
coordinates in one space and those in another, such that points in the two spaces that
correspond to the same anatomic point are mapped to each other.
Point-based registration involves the determination of the coordinates of corresponding points
in different images and the estimation of the geometrical transformation using these
corresponding points. The points may be either intrinsic, or extrinsic. Intrinsic points are
derived from naturally occurring features, e.g., anatomic landmark points. Extrinsic points are
derived from artificially applied markers, e.g., tubes containing copper sulfate. We use external
fiducial markers that are rigidly attached through the skin to the skull. We call the points used for registration fiducial points, or fiducials, as distinguished from "fiducial markers," and we pick as the fiducials the geometric centers of the markers. Determining the coordinates of the fiducials, which we call fiducial localization, may be done in image space or in physical space.
Several techniques have been developed for determining the physical space coordinates of
external markers.
The algorithm finds markers in image volumes of the head. A three-dimensional (3-D) image
volume typically consists of a stack of two-dimensional (2-D) image slices. The algorithm finds
markers whose image intensities are higher than their surroundings. It is also tailored to find
markers of a given size and shape. All of the marker may be visible in the image, or it may
consist of both imageable and nonimageable parts. It is the imageable part that is found by the
algorithm, and it is the size and shape of this imageable part that is important to the algorithm.
Henceforth when we use the term "marker" we are referring to only the imageable portion of
the marker. Three geometrical parameters specify the size and shape of the marker
adequately for the purposes of this algorithm: 1) the radius rm, of the largest sphere that can
be inscribed within the marker, 2) the radius Rm, of the smallest sphere that can circumscribe
the marker, and 3) the volume V_m of the marker. Cylindrical markers with diameter d and height h are used for clinical experiments. For these markers:

r_m = min(d, h)/2,   R_m = √(d² + h²)/2,   V_m = πd²h/4
First, we must search the entire image volume to find marker-like objects. Second, for each
marker-like object, we must decide whether it is a true marker or not and accurately localize
the centroid for each true one. Therefore, the algorithm consists of two parts. Part One finds "candidate voxels". Each candidate voxel lies within a bright region that might be the image of
a marker. The requirements imposed by Part One are minimal with the result that, for the
M markers in that image, there are typically many more than M candidate points identified.
Part Two selects from these candidates M points that are most likely to lie within actual
markers and provides a centroid for each one. Part One is designed so that it is unlikely to
miss a true marker. Part Two is designed so that it is unlikely to accept a false marker.
Part One takes the following input: the image volume of the head of a patient; the type of image (CT or MR); the voxel dimensions Δx_v, Δy_v, and Δz_v; the marker's geometrical parameters r_m, R_m and V_m; and the intensity of an empty voxel. Part One produces as output a set of candidate voxels.
Part Two takes the same input as Part One, plus two additional pieces of information: the set
of candidate voxels produced by Part One and the number of external markers M known a
priori to be present in the image. Part Two produces as output a list of M "fiducial points". Each fiducial point is a 3-D position (x_f, y_f, z_f) that is an estimate of the centroid of a marker.
The list is ordered with the first member of the list being most likely to be a marker and the last
being the least likely.
Part One operates on the entire image volume.
1. If the image is an MR image, a 2-D, three-by-three median filter is applied within each
slice to reduce noise.
2. To speed up the search, a new, smaller image volume is formed by subsampling. The subsampling rate in x is calculated as ⌊r_m/Δx_v⌋. The subsampling rates in y and z are similarly calculated.
3. An intensity threshold is determined. For CT images, the threshold is the one that
minimizes the within-group variance. For MR images, the threshold is computed as the
mean of two independently determined thresholds. The first is the threshold that
minimizes the within-group variance. The second is the threshold that maximizes the
Kullback information value.
4. This threshold is used to produce a binary image volume with higher intensities in the
foreground. Foreground voxels are typically voxels that are part of the image of markers
or of the patient’s head.
5. If the original image is an MR image, spurious detail tends to appear in the binary image
produced by the previous step. The spurious detail is composed of apparent holes in the
head caused by regions that produce weak signal, such as the skull and sinuses. Thus, if
the original image is an MR image, these holes in the binary image are filled. In this step
each slice is considered individually. A foreground component is a two-dimensionally
connected set of foreground voxels. The holes are background regions completely
enclosed within a slice by a single foreground component. This step reduces the number
of false markers.
6. Two successive binary, 2-D, morphological operations are performed on each slice. The
operations taken together have the effect of removing small components and small
protrusions on large components. In particular, the operations are designed to remove
components and protrusions whose cross sections are smaller than or equal to the largest
cross section of a marker. The operations are erosion and dilation, in that order. The
structuring element is a square. The x dimension (in voxels) of the erosion structuring element is calculated as ⌈2R_m/Δx_v′⌉ (⌈·⌉ is the ceiling function; the prime refers to the subsampled image). The y dimension is similarly calculated. The size of the dilation structuring element in each dimension is the size of the erosion element plus one.
7. The binary image that was output by the previous step is subtracted from the binary image
that was input to the previous step. That is, a new binary image is produced in which
those voxels that were foreground voxels in the input image but background in the output
image are set to foreground. The remaining voxels are set to background. The result is a
binary image consisting only of the small components and protrusions that were removed
in the previous step.
8. For the entire image volume, the foreground is partitioned into 3-D connected
components. The definition of connectedness can be varied. We have found that including
the eight 2-D eight-connected neighbors within the slice plus the two 3-D six-connected
neighbors on the neighboring slices works well for both CT and MR images.
9. The intensity-weighted centroid of each selected component is determined using the voxel
intensities in the original image. The coordinates of the centroid position (xc, yc, zc) are
calculated independently as follows:
π‘₯ 𝑐 =
βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)π‘₯𝑖𝑖
βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑖
, 𝑦𝑐 =
βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑦𝑖𝑖
βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑖
, 𝑧𝑐 =
βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑧𝑖𝑖
βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑖
10. The voxels that contain the points (xc, yc, zc) are identified.
The voxels identified in the last step are the candidate voxels.
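As referenced in Step 9 above, here is a minimal C# sketch of the intensity-weighted centroid (the names are illustrative, not taken from [3]):

```csharp
using System;

static class Centroid
{
    // Intensity-weighted centroid of a connected component, computed from the
    // original voxel intensities. `voxels` holds the component's (x, y, z)
    // coordinates, `intensity` returns I_i for a voxel, and i0 is the
    // empty-voxel intensity I_0.
    public static (double xc, double yc, double zc) WeightedCentroid(
        (int x, int y, int z)[] voxels,
        Func<(int x, int y, int z), double> intensity, double i0)
    {
        double sw = 0, sx = 0, sy = 0, sz = 0;
        foreach (var v in voxels)
        {
            double w = intensity(v) - i0;        // weight I_i - I_0
            sw += w;
            sx += w * v.x; sy += w * v.y; sz += w * v.z;
        }
        return (sx / sw, sy / sw, sz / sw);      // (x_c, y_c, z_c)
    }
}
```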
The steps of Part Two:
Part Two operates on a region of the original image around each candidate voxel. It is desirable to use the smallest region possible to improve speed. The region must contain all voxels whose
centers are closer to the center of the candidate voxel than the longest marker dimension
(2Rm), plus all voxels that are adjacent to these voxels. For convenience, we use a rectangular
parallelepiped that is centered about the candidate voxel. The x dimension (in voxels) is
calculated as 2⌈2R_m/Δx_v⌉ + 3. The 3 represents the center voxel, plus an adjacent voxel on
each end. The y and z dimensions are similarly calculated. For each of these regions Part Two
performs the following steps:
1. It is determined whether or not there exists a "suitable" threshold for the candidate voxel.
This determination can be made by a brute-force checking of each intensity value in the
available range of intensities. In either case a suitable threshold is defined as follows. For
a given threshold the set of foreground (higher-intensity) voxels that are three-
dimensionally connected to the candidate voxel are identified. The threshold is considered
suitable if the size and shape of this foreground component is sufficiently similar to that of
a marker. There are two rules that determine whether the size and shape of the
component are sufficiently similar.
a) The distance from the center of the candidate voxel to the center of the most distant
voxel of the component must be less than or equal to the longest marker dimension
(2Rm).
b) The volume, V_c, of the component, determined by counting its voxels and multiplying by the volume of a single voxel V_v = Δx_v × Δy_v × Δz_v, must be within the range [αV_m, βV_m].
2. If no such threshold exists, the candidate point is discarded. If there are multiple suitable
thresholds, the smallest one (which produces the largest foreground component) is
chosen in order to maximally exploit the intensity information available within the marker.
3. If the threshold does exist, the following steps are taken
a) The intensity-weighted centroid of the foreground component is determined using
the voxel intensities in the original image. The coordinates of the centroid position
(xf, yf, zf ) are calculated as in Step 9 of Part One of the algorithm but with the
foreground component determined in Step 1.
b) The average intensity of the voxels in the foreground component is calculated
using the voxel intensities in the original image.
4. The voxel that contains the centroid (xf, yf, zf) is iteratively fed back to Step 1 of Part Two.
If two successive iterations produce the same centroid, the centroid position and its
associated average intensity is recorded. If two successive iterations have not produced
the same centroid by the fourth iteration, the candidate is discarded.
The centroid positions (xf, yf, zf) are ranked according to the average intensity of their
components. The M points with the highest intensities are declared to be fiducial points and
are output in order by rank. A candidate with a higher intensity is considered more likely to be
a fiducial point.
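A minimal C# sketch of the two suitability rules of Part Two, Step 1 (the names and the millimeter coordinate convention are our illustrative assumptions):

```csharp
using System;
using System.Linq;

static class MarkerCheck
{
    // Suitability rules for a thresholded component: (a) the farthest component
    // voxel must lie within the longest marker dimension (2*R_m) of the candidate
    // voxel, and (b) the component volume must fall inside [alpha*V_m, beta*V_m].
    public static bool IsSuitable((double x, double y, double z)[] component,
        (double x, double y, double z) candidate, double voxelVolume,
        double Rm, double Vm, double alpha, double beta)
    {
        double maxDist = component.Max(v => Math.Sqrt(
            (v.x - candidate.x) * (v.x - candidate.x) +
            (v.y - candidate.y) * (v.y - candidate.y) +
            (v.z - candidate.z) * (v.z - candidate.z)));
        double volume = component.Length * voxelVolume;   // V_c
        return maxDist <= 2 * Rm && volume >= alpha * Vm && volume <= beta * Vm;
    }
}
```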
2.1.4 Use of the Hough transformation to detect lines and curves in pictures [4]:
The set of all straight lines in the picture plane constitutes a two-parameter family. If we fix a parameterization for the family, then an arbitrary straight line can be represented by a single point in the parameter space. For reasons that will become obvious, we prefer the so-called normal parameterization. As illustrated in Fig. 3, this parameterization specifies a straight line by the angle θ of its normal and its algebraic distance r from the origin. The equation of a line corresponding to this geometry is:

x cos θ + y sin θ = r

If we restrict θ to the interval [0, π), then the normal parameters for a line are unique. With this restriction, every line in the x-y plane corresponds to a unique point in the θ−r plane.
Suppose, now, that we have some set {(x₁, y₁), …, (x_n, y_n)} of n figure points and we want to find a set of straight lines that fit them. We transform the points (x_i, y_i) into the sinusoidal curves in the θ−r plane defined by:

(1) r = x_i cos θ + y_i sin θ

It is easy to show that the curves corresponding to collinear figure points have a common point of intersection. This point in the θ−r plane, say (θ₀, r₀), defines the line passing through the collinear points. Thus, the problem of detecting collinear points can be converted to the problem of finding concurrent curves.
Figure 3: The normal parameters for the line
A dual property of the point-to-curve transformation can also be established. Suppose we have a set of points in the θ−r plane, all lying on the curve:

r = x₀ cos θ + y₀ sin θ

Then it is easy to show that all these points correspond to lines in the x-y plane passing through the point (x₀, y₀). We can summarize these interesting properties of the point-to-curve transformation as follows:
1. A point in the picture plane corresponds to a sinusoidal curve in the parameter plane.
2. A point in the parameter plane corresponds to a straight line in the picture plane.
3. Points lying on the same straight line in the picture plane correspond to curves through a common point in the parameter plane.
4. Points lying on the same curve in the parameter plane correspond to lines through the same point in the picture plane.
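As an illustration of the point-to-curve transformation, the following minimal C# sketch accumulates votes in a discretized θ−r parameter plane; peaks in the accumulator correspond to lines through many collinear figure points. The bin counts and the r range are illustrative choices.

```csharp
using System;
using System.Collections.Generic;

static class HoughLines
{
    // Each figure point (x_i, y_i) votes along its sinusoid
    // r = x_i*cos(theta) + y_i*sin(theta), with theta in [0, pi).
    public static int[,] Accumulate(List<(int x, int y)> points,
        int thetaBins, int rBins, double rMax)
    {
        var acc = new int[thetaBins, rBins];
        foreach (var (x, y) in points)
            for (int t = 0; t < thetaBins; t++)
            {
                double theta = Math.PI * t / thetaBins;
                double r = x * Math.Cos(theta) + y * Math.Sin(theta);
                // Map r from [-rMax, rMax] to a bin index.
                int rIdx = (int)Math.Round((r + rMax) / (2 * rMax) * (rBins - 1));
                if (rIdx >= 0 && rIdx < rBins) acc[t, rIdx]++;
            }
        return acc;
    }
}
```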
2.2 Detailed description
2.2.1 Introduction:
Physicians use a phantom in the test in order to simulate human organs. They fill its cylinders with different volumes of radiopharmaceutical (depicted in figure 4).
Figure 4: PET phantom viewed from above
The phantom is placed into the PET camera, and the scan begins. As a result of the scan we
get a set of slice images (figure 5).
Figure 5: Image slice received from the PET camera
The best slice is the image slice that has no noise and in which all cylinders are clearly visible. So, the physicians need to select the "best slice" from the set of received slices and work with it. They mark all clearly visible cylinders and take the minimum, maximum, and mean SUV (Standardized Uptake Value) statistics from the marked regions. This data is needed for further calculations, such as ratios. At the end of the test they produce a report with the image slice attached as a hard copy. If all the results meet the criteria, the camera has passed the test.
2.2.2 The problem:
1. Define the template. The template is, in effect, the MASK that is applied to the slice image in order to define the ROIs and to choose the best slice from the set of images.
Figure 6: Applied template mask
2. Fit the template to the PET slice size (scaling, rotating, and moving).
3. According to the template, choose the "best" slice from all slices provided by the camera.
2.2.3 Our solution:
1. Find at least three spots in the CT image using the algorithm described in [3]. This algorithm produces the centroids of all found cylinders. From the Z-coordinate of the centroids and the known thickness of a slice image we can get the number of the CT slice on which to build a template from the found centroids. Given that slice, we then find all the circles on it using the Hough transform algorithm [4]. The eight circles found give us the needed template (figure 6).
2. To fit the template to the PET slice size, the following steps are applied (a sketch of the resulting transform follows this list):
a) Extract the slice corresponding to the slice found from the PET camera (the template's slice).
b) Color the inner space of the phantom in the image white.
c) Find the center of this circle (the center of the phantom) using the Hough transform algorithm [4].
d) Get the size of this white circle and transform (scale) the template to that size.
3. Check all slices against the template in order to find the "best" slice, the one with the fewest errors and the least noise. Get the needed values from the found ROIs. Then do all the calculations needed for the report. Finally, produce the report with the image slice attached as a hard copy.
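A minimal C# sketch of the template transform referenced in step 2 above (illustrative names; this is not the project's CircleMask implementation): each template circle is scaled about the template center, rotated by the found rotation factor, and translated onto the phantom center found in the slice.

```csharp
using System;

static class TemplateFit
{
    // Applies the found scale/rotation/translation factors to one template
    // circle, given as (center x, center y, radius).
    public static (double x, double y, double r) Transform(
        (double x, double y, double r) circle,
        (double x, double y) templateCenter, (double x, double y) phantomCenter,
        double scale, double angleRad)
    {
        double dx = (circle.x - templateCenter.x) * scale;   // scale about template center
        double dy = (circle.y - templateCenter.y) * scale;
        double cos = Math.Cos(angleRad), sin = Math.Sin(angleRad);
        return (phantomCenter.x + dx * cos - dy * sin,       // rotate, then translate
                phantomCenter.y + dx * sin + dy * cos,
                circle.r * scale);                           // radii scale by the same factor
    }
}
```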
2.3 Expected results
To illustrate the expected results (the "best" slice image of the phantom, containing "clear" (best-fitted) information), we show two slices. The first (figure 7) is a bad one, and the second (figure 8) is good enough to be an expected result.
Figure 7: Bad slice image (not selected to be the best)
Figure 8: Good slice image (candidate to be the best slice)
We get the best slice image needed for the QA test, with marked ROIs (figure 9).
Figure 9: Hard copy of final ROIs
3. Software Engineering documents
3.1 Requirements (Use case)
3.2 GUI
This is the main window of the program with filled test parameters:
You can change the application settings using the options window:
There is an option to generate MASKs with the MASK generation application:
The program automatically finds the best slice and fits the selected MASK to it, but there is an option to edit the applied MASK if the user does not like how it was applied:
During loading, if more than one series is found in the search directory, the program pops up the "series selection" window:
For troubleshooting, there is a help window with explanations:
3.3 Program structure – Architecture, Design
3.3.1 UML class diagram
[UML class diagram. Recoverable elements: an image-processing utility class with methods CenterClosingCT, ClosingImage, ConvertFromImageCoordinates, FindBestSlice(slices : List<DicomFile>, mask : CircleMask), FitCircleMask(img : Image<Gray, Byte>, msk : CircleMask), MakeBinaryImage, SearchPhantomCenter, and SearchPhantomRadius; a CircleMask class holding shapes : List<Shape> with constructors CircleMask(shapes) and CircleMask(center : PointF, radius : Single, shapes); and form classes for series selection (ChooseSeries, SortList, lists of DicomFile) and manual fitting (SliceFitForm) holding 2D/3D PET image lists and shape/mask dictionaries.]
3.3.2 Sequence diagram
3.3.3 Activity diagram
3.4 Testing plan
This section presents the test scenarios that cover the common user requirements.
3.4.1 Test scenario for – Main interface
| # | Taken Action | Expected Results | Pass/Fail |
| 1 | Start the application | An empty (cleared-parameters) GUI opens and the application is ready for use. The "Run test" and "Correct Manually" buttons are disabled; all other GUI components are enabled. | Pass |
| 2 | "File->Program option" | The program options window opens. All buttons are enabled. Text fields show the paths the user has defined. | Pass |
| 3 | "File->Exit" | Closes the application. | Pass |
| 4 | "Mask->Generate Mask" | Opens the MASK generation application. All GUI components are enabled. | Pass |
| 5 | "Help->About" | Opens the "about" window. All text fields are shown correctly. The "OK" button is enabled. | Pass |
| 6 | "Help->Help" | Opens the help (.chm) window. | Pass |
| 7 | Paths "Browse" button | Opens the browse window. All GUI components are enabled. After the selection, the full path is shown in the program window. | Pass |
| 8 | Wrong or empty test parameter values | An error message pops up. | Pass |
| 9 | Mask combo box clicked | Opens the combo box dialog with the list of MASKs that exist in the MASKs folder. | Pass |
| 10 | "Load images" button with empty paths or no MASK chosen | An error message pops up. | Pass |
| 11 | "Load images" button with correct paths filled and a MASK chosen | Loads the images; the test log is updated and shown, and the progress bar runs during loading. After loading completes, the best slices are found, MASKs are fitted to them, and they are shown in the program main window; the "Load images" button is disabled and the "Run test" and "Correct manually" buttons are enabled. While images are loading, the "Clear" button is disabled. If more than one series is found in the DICOM images folder during loading, the selection window pops up (all GUI parameters are correct; the slider is disabled). | Pass |
| 12 | "Correct manually" button | Opens the manual MASK-fitting window. All GUI parameters are enabled. The best slice is shown in the window with the automatically fitted MASK. | Pass |
| 13 | "Clear" button | At every step of the test, the button clears all test parameters. | Pass |
| 14 | "Run test" button | Opens the test result (.pdf) file, filled with all the correct calculation results. | Pass |
| 15 | Exit program button | Closes the application. | Pass |
3.4.2 Test scenario for – Program Option
| # | Taken Action | Expected Results | Pass/Fail |
| 1 | "Browse" the path button | Opens the browse window. Text fields are disabled and show the path the user chose during installation. All GUI parameters are shown correctly. After the browse selection, the text fields show the chosen path. | Pass |
| 2 | Exit button | Closes the application. | Pass |
3.4.3 Test scenario for – Mask Generator
| # | Taken Action | Expected Results | Pass/Fail |
| 1 | "File->Load Background" | Opens a file dialog. All GUI parameters are correct. Shows the background image once the user has selected it. | Pass |
| 2 | "File->Load Mask" | Opens a file dialog. All GUI parameters are correct. Shows the mask once the user has selected it. | Pass |
| 3 | "File->Save Mask" | Opens the browse dialog to save the (.msk) file of the created Mask. | Pass |
| 4 | "File->Exit" | Closes the application. | Pass |
| 5 | "Help->About" | Opens the about dialog. All GUI parameters are correct. The OK button is enabled. | Pass |
| 6 | Selection of ROI objects | Highlights the chosen object and opens the transformation options for it. | Pass |
| 7 | Mouse right button | Opens the object transformation dialog (if no object was selected beforehand, the None option is checked). | Pass |
| 8 | Object selected + no transformation selected + (Up/Down key pressed, or left mouse button clicked and cursor moved up and down) | Nothing happens. | Pass |
| 9 | Object selected + transformation selected + (Up/Down key pressed, or left mouse button clicked and cursor moved up and down) | The transform of the chosen object works correctly. | Pass |
| 10 | Orientation check box checked (default) | Shows the PHANTOM outline circle. | Pass |
| 11 | Orientation check box unchecked | Does not show the PHANTOM outline circle. | Pass |
| 12 | ROIs check box checked (default) | Shows the ROI circles. | Pass |
| 13 | ROIs check box unchecked | Does not show the ROI circles. | Pass |
| 14 | Exit button | Closes the application. | Pass |
3.4.4 Test scenario for – DICOM images selection
| # | Taken Action | Expected Results | Pass/Fail |
| 1 | Combo box | Drops down the found series. Shows images and enables the slider when a series is selected. | Pass |
| 2 | Slider moving | Shows the series' DICOM images. | Pass |
| 3 | Exit/Cancel button | Closes the window, pops up an error message, and stops the loading. | Pass |
3.4.5 Test scenario for – Manual correction
| # | Taken Action | Expected Results | Pass/Fail |
| 1 | Slider moving | Shows the DICOM slice images of the chosen series. Shows the number of the image in the "Slice" text field. | Pass |
| 2 | Green direction buttons | Moves the Mask on the chosen image according to each button's direction (Up/Down/Left/Right). | Pass |
| 3 | Purple rotation buttons | Rotates the Mask on the chosen image according to each button's direction (left = counterclockwise, right = clockwise). | Pass |
| 4 | Scale selection | Scales the Mask on the chosen image. Up values (>0) increase the Mask size; down values (<0) decrease it. | Pass |
| 5 | OK button | Saves the current mask position and the chosen image as the best slice. Closes the window. | Pass |
| 6 | Exit/Cancel button | Closes the window. | Pass |
4. Result and conclusion
During the work on the project we dealt with a number of problems. In this chapter we describe them and show our solutions.
4.1 QA Testing Process
1. Generate/Choose the PHANTOM Mask.
2. Load Series of 2D/3D image slices from source directories.
3. Find the best slice in each image slice series (2D/3D). As mentioned above, the best slice is the slice that contains "clear" (best-fitted) information.
4. Fit the masks.
5. Retrieve test values from the ROIs according to the chosen Masks.
6. Generate report.
4.2 Problems and solutions
4.2.1 Working with a set of DICOM images:
Problem:
In order to complete the QA test, the user must select the path to the 2D PET DICOM images and to the 3D PET DICOM images. But the source images directory can contain more than one series of PHANTOM slices.
Solution:
The program will load all image series and ask the user to choose one. The user can scroll through the slices to see the quality of each set and choose the best one.
Note: If there is only one set of 2D PET DICOM images and one set of 3D PET DICOM images in the folder, the system detects this automatically and does not show the popup window.
4.2.2 Creation of PET/CT mask:
Problem:
Our test program uses a PHANTOM MASK for choosing the best slices and for calculating the ROI SUV values. At the start of our work we had no MASK, so there was nothing to apply.
Solution:
At the RAMBAM medical center the tester works with only one kind of PHANTOM, but that may change in the future. For this MASK, and for all future MASKs, we have created a MASK generation tool.
4.2.3 Find the best slices:
Problem:
According to Part A of our project, we wanted to use the "An Automatic Technique for Finding and Localizing Externally Attached Markers in CT and MR Volume Images of the Head" algorithm in order to obtain the best slice. Unfortunately, the algorithm did not work, so another way to solve the problem was needed.
Solution:
At first we wanted to use the Hough algorithm to find all visible circles in the image. But different PET slice series have varying intensities and are noisy, and it is very difficult to determine whether a detected circle is real or noise. We therefore needed to supply new parameters to the Hough algorithm for every new series; there was no regularity in those parameters, so this method was unusable.
Another idea was to find the slice containing the highest-intensity voxel. The problem here is that every slice contains a maximum value, and it can be real or caused by noise.
During our experiments we noticed that changing the image contrast affects the visibility of image parts. So, by setting a specific image contrast we can mark only the hot spots. By counting the hot spots in each slice we can score the quality of the slice and nominate candidates for the best slice. For each candidate and its neighbors, we determine how many visible circles the slice has (using the Hough algorithm) and the difference from the neighboring slices' circles. We compare the numbers of visible circles and choose, as the best slice, the slice with the maximum circle count (not above 4 in our case) and the lowest difference from its neighboring slices' circles.
Note: The contrast in a DICOM image is defined by two parameters: window width and window center. These parameters define a window of gray levels. There are two PET/CT camera machines in the RAMBAM hospital: GE Discovery 690 (new model) and GE Discovery LS (old model). For marking the hot spots we used the following settings:
D690 (new model) – window width = 1, window center = 4000 + 9085 − (window width provided in the DICOM file (tag (0028, 1051))).
LS (old model) – window width = 1, window center = 40 · (400 − (energy window upper limit (tag (0054, 0016)) − energy window lower limit (tag (0054, 0014)))).
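A minimal C# sketch of the hot-spot counting idea (a simplification of full DICOM VOI windowing; the names are illustrative):

```csharp
using System;

static class HotSpots
{
    // With a very narrow gray-level window (width = 1), every voxel at or above
    // the top of the window is rendered fully white; counting those voxels gives
    // a per-slice "hotness" score used to nominate best-slice candidates.
    public static int CountHot(double[,] slice, double windowCenter, double windowWidth)
    {
        double hi = windowCenter + windowWidth / 2.0;   // top of the gray-level window
        int count = 0;
        foreach (double v in slice)
            if (v >= hi) count++;                       // voxel is displayed as white
        return count;
    }
}
```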
4.2.4 Fit the MASK to the best slice:
Problem:
After finding the best slice we need to fit the MASK to it. Initially we took a CT image, applied image processing ("opening", "closing", and filtering), used the Hough algorithm to find the "bone cylinder" and the center of the PHANTOM slice, and then calculated the scale/translation/rotation factors. But this solution was not good, because all these operations change the source image and caused discrepancies.
After this work we understood that it is impossible to transfer a CT-fitted MASK to a PET image, because these images have different sizes. So we decided to fit the MASK directly on the PET image.
Solution: First we found the topmost/bottommost/leftmost/rightmost points of the PHANTOM, giving its bounding square. From the square we obtained the center of the PHANTOM and its radius, which serves as the scale factor for the MASK. Then we converted the image to a binary image by applying a threshold and found some of the hot-spot circles (center and radius). From the center of the PHANTOM and the centers of these circles we found the rotation factor for the MASK.
Note: To find the rotation angle we determine the angle differences between each hot-spot center and the Y axis (as shown in figure 10; a code sketch follows the figure). Call the angle between the center of a hot spot on the PET image and the Y axis α, and the angle between the center of the same hot spot on the MASK and the Y axis β. The difference between the angles is Δ = α − β, and the rotation angle is the average of all the hot spots' Δ's.
Afterwards we fit the MASK by applying all the transformations with the found factors.
Figure 10: Rotation angle
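A minimal C# sketch of the rotation-factor estimate from figure 10 (illustrative; it assumes the PET and MASK hot spots are already matched by index, and angle wrap-around is ignored for simplicity):

```csharp
using System;
using System.Linq;

static class MaskRotation
{
    // Signed angle between the upward Y axis and the ray from a center to a point,
    // in image coordinates (y grows downward).
    static double AngleToYAxis((double x, double y) center, (double x, double y) p)
        => Math.Atan2(p.x - center.x, -(p.y - center.y));

    // For each matched hot spot, take delta = alpha - beta between its angle on the
    // PET image and its angle on the MASK; the rotation angle is the average delta.
    public static double RotationAngle(
        (double x, double y) petCenter, (double x, double y)[] petSpots,
        (double x, double y) maskCenter, (double x, double y)[] maskSpots)
        => petSpots.Zip(maskSpots,
               (p, m) => AngleToYAxis(petCenter, p) - AngleToYAxis(maskCenter, m))
           .Average();
}
```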
4.2.5 Retrieving SUV (Standardized Uptake Values) from the DICOM image:
Problem:
The values stored in a DICOM image are in Bq/ml, but our QA test needs these values in units of SUV, so we must convert from Bq/ml to SUV.
Solution:
If the original image units are Bq/ml and all necessary data are present, PET images can be
displayed in units of SUVs.
If the PET image units field (DICOM tag (0054, 1001)) is set to BQML, then the PET images may be displayed in SUVs or as uptake in Bq/ml. The application must do the conversion from activity concentration to SUV. GE applications (we work only with GE cameras) provide the following SUV types:
1. SUV Body Weight (SUVbw) – this value we need for our test.
2. SUV Body Surface Area (SUVbsa).
3. SUV Lean Body Mass (SUVlbm).
Calculations:

SUVbw = (PET image pixels · weight in grams) / injected dose

PET image pixels and injected dose are decay-corrected to the start of the scan. PET image pixels are in units of activity/volume. Images converted to SUVbw are displayed with units of g/ml.
Images with initial units of uptake (Bq/ml) may be converted to SUVs and back to uptake or
to another SUV type. However, if the images are loaded in units other than uptake, then no
conversion is allowed, even if the units are the same as SUV units, because there is no way
to know exactly how the SUVs were calculated.
SUV computation requires the following DICOM attributes to be filled in:
weight = patient weight = Patient Weight (0010,1010)
tracer activity = Total Dose (0018,1074)
measured time = Radiopharmaceutical Start Time (0018,1072)
administered time = Radiopharmaceutical Start Time (0018,1072)
half life = Radionuclide Half Life (0018,1075)
scan time = Series Date (0008,0021) + Series Time (0008,0031)
Note: The Series Date/Time can be overwritten if the original PET images are post-processed
and a new series is generated. The software needs to check that the Acquisition Date/Time
((0008,0023) and (0008,0033)) is equal to or later than the Series Date/Time. If it is not, the
Series Date/Time has been overwritten, and for GE PET images the software should use a
GE private attribute (0009x,100d) for the scan start DATETIME.
Proceed to calculate SUVs as below. The formula we use for the SUV factor is:

$$\mathrm{SUV_{bw}} = \frac{\text{pixel} \cdot \text{weight}}{\text{actual activity}}$$

$$\text{actual activity} = \text{tracer activity} \cdot 2^{-(\text{scan time} - \text{measured time})/\text{half life}}$$
Note: In GE PET images, Total Dose (0018,1074) is the net activity administered to the patient
at the Series Time (0008,0031).
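A minimal sketch of this conversion with pydicom, under the attribute list above. It assumes the pixel values are already decay-corrected activity in Bq/ml, pairs the date-less start time with the Series Date, and omits the GE private fallback (0009x,100d); standard pydicom keyword names are used for the attributes.

import datetime
import pydicom

def _to_datetime(date_str, time_str):
    # DICOM DA + TM values, dropping any fractional seconds.
    return datetime.datetime.strptime(date_str + time_str.split('.')[0], '%Y%m%d%H%M%S')

def suv_bw_factor(ds):
    # ds = pydicom.dcmread(path_to_pet_slice)
    weight_g = float(ds.PatientWeight) * 1000.0          # patient weight, kg -> grams
    rp = ds.RadiopharmaceuticalInformationSequence[0]
    tracer_activity = float(rp.RadionuclideTotalDose)    # (0018,1074), Bq
    half_life = float(rp.RadionuclideHalfLife)           # (0018,1075), seconds
    measured = _to_datetime(ds.SeriesDate, rp.RadiopharmaceuticalStartTime)  # (0018,1072)
    scan = _to_datetime(ds.SeriesDate, ds.SeriesTime)    # (0008,0021) + (0008,0031)
    elapsed = (scan - measured).total_seconds()
    actual_activity = tracer_activity * 2.0 ** (-elapsed / half_life)
    return weight_g / actual_activity

# Each Bq/ml pixel multiplied by this factor yields SUVbw in g/ml, e.g.:
# suv_image = ds.pixel_array * float(ds.RescaleSlope) * suv_bw_factor(ds)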
4.3 Running/Simulation
4.3.1 Simulation 1
Date of QA test: 03/12/12
Camera: Discovery D690
FOV2:
FOV1:
Test Result: The test passed successfully.
4.3.2 Simulation 2
Date of QA test: 03/12/12
Camera: Discovery LS
FOV2:
FOV1:
Test Result: The test passed successfully.
4.3.3 Simulation 3
Date of QA test: 08/05/13
Camera: Discovery D690
We deliberately rotated the MASK in order to make the QA test fail. As the pictures show, the
calculated results did not meet the criteria, so the test failed.
FOV2:
FOV1:
Test Result: Test Failed.
4.4 Final conclusion
As we saw during our project, the β€œAn Automatic Technique for Finding and Localizing
Externally Attached Markers in CT and MR Volume Images of the Head” algorithm [3] is not
applicable to our project. The algorithm is probably fine for working with CT images, but our
project focuses on PET images, so we found a solution better suited to them.
When working with image processing, keep in mind that the results are generally not exact;
where precision is needed, apply additional techniques in order to double-check your results.
In addition, when similar images differ in quality, you need to adjust the contrast in order to
improve the image.
Our work was based on only two kinds of GE cameras, so the project is oriented toward them.
Supporting additional cameras may require further changes to the algorithms and calculations.
References
[1] J. P. Pluim, J. B. Maintz, and M. A. Viergever, β€œImage registration by maximization of
combined mutual information and gradient information,” IEEE Trans. Med. Imaging 19,
809–814 (2000).
[2] W. M. Wells III, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis, β€œMulti-modal volume
registration by maximization of mutual information,” Med. Image Anal. 1, 35–51 (1996).
[3] M. Y. Wang, C. R. Maurer, Jr., J. M. Fitzpatrick, and R. J. Maciunas, β€œAn automatic
technique for finding and localizing externally attached markers in CT and MR volume images
of the head,” IEEE Trans. Biomed. Eng. 43(6), June 1996.
[4] R. O. Duda and P. E. Hart, β€œUse of the Hough transformation to detect lines and curves in
pictures,” Technical Note 36, Artificial Intelligence Center, April 1971.
More Related Content

What's hot

SPECT SCAN
SPECT SCANSPECT SCAN
SPECT SCANsensuiii
Β 
Radiation diagnostics diseases of the brain and spinal cord
Radiation diagnostics diseases of the brain and spinal cord Radiation diagnostics diseases of the brain and spinal cord
Radiation diagnostics diseases of the brain and spinal cord ShieKh Aabid
Β 
nuclear medicine
nuclear medicine nuclear medicine
nuclear medicine Fhood Al-matbe
Β 
Introduction to radiology
Introduction to radiologyIntroduction to radiology
Introduction to radiologyShahbaz Ali
Β 
Changing how researchers think about MRI: Utilizing a simple to use, compact...
Changing how researchers think about MRI:  Utilizing a simple to use, compact...Changing how researchers think about MRI:  Utilizing a simple to use, compact...
Changing how researchers think about MRI: Utilizing a simple to use, compact...Scintica Instrumentation
Β 
2016.12.pdf
2016.12.pdf2016.12.pdf
2016.12.pdfwil son
Β 
Nuclear medicine in dento maxillofacial region
Nuclear medicine in dento maxillofacial regionNuclear medicine in dento maxillofacial region
Nuclear medicine in dento maxillofacial regionFathimath Zahra
Β 
Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...
Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...
Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...Scintica Instrumentation
Β 

What's hot (10)

SPECT SCAN
SPECT SCANSPECT SCAN
SPECT SCAN
Β 
Radiation diagnostics diseases of the brain and spinal cord
Radiation diagnostics diseases of the brain and spinal cord Radiation diagnostics diseases of the brain and spinal cord
Radiation diagnostics diseases of the brain and spinal cord
Β 
CT vs MRI Scan
CT vs MRI ScanCT vs MRI Scan
CT vs MRI Scan
Β 
nuclear medicine
nuclear medicine nuclear medicine
nuclear medicine
Β 
Introduction to radiology
Introduction to radiologyIntroduction to radiology
Introduction to radiology
Β 
10lab spect
10lab spect10lab spect
10lab spect
Β 
Changing how researchers think about MRI: Utilizing a simple to use, compact...
Changing how researchers think about MRI:  Utilizing a simple to use, compact...Changing how researchers think about MRI:  Utilizing a simple to use, compact...
Changing how researchers think about MRI: Utilizing a simple to use, compact...
Β 
2016.12.pdf
2016.12.pdf2016.12.pdf
2016.12.pdf
Β 
Nuclear medicine in dento maxillofacial region
Nuclear medicine in dento maxillofacial regionNuclear medicine in dento maxillofacial region
Nuclear medicine in dento maxillofacial region
Β 
Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...
Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...
Overview of Scintica’s Preclinical Imaging Product Portfolio: Technical Capab...
Β 

Viewers also liked

Six Things About Ayyress
Six Things About AyyressSix Things About Ayyress
Six Things About AyyressAyyress10
Β 
Uno intl marketing plan
Uno intl marketing planUno intl marketing plan
Uno intl marketing planNel Castillo
Β 
IQ vs AI
IQ vs AIIQ vs AI
IQ vs AIPhani Sai
Β 
Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')
Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')
Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')Илья Π“ΠΎΠ»ΠΎΠ²Π»Ρ‘Π²
Β 
Veterinary Dental Imaging Advantages
Veterinary Dental Imaging AdvantagesVeterinary Dental Imaging Advantages
Veterinary Dental Imaging Advantagesmaryettaglockner
Β 
Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄ - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''
Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄  - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄  - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''
Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄ - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''Илья Π“ΠΎΠ»ΠΎΠ²Π»Ρ‘Π²
Β 
Examen
ExamenExamen
Examenliuzerg
Β 
Presentation at Montreal Stock Exchange
Presentation at Montreal Stock ExchangePresentation at Montreal Stock Exchange
Presentation at Montreal Stock ExchangeRoss H. McMeekin
Β 
GCOT 2015 - Experiential Marketing
GCOT 2015 - Experiential MarketingGCOT 2015 - Experiential Marketing
GCOT 2015 - Experiential MarketingHeather Ainardi
Β 
Asif Khan CV
Asif Khan CVAsif Khan CV
Asif Khan CVAsif Khan
Β 
ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111
ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111
ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111Pao Panja
Β 
Cremation of IBM Stock Certificates
Cremation of IBM Stock CertificatesCremation of IBM Stock Certificates
Cremation of IBM Stock CertificatesDaniel Stumpf
Β 
Selenium Success
Selenium SuccessSelenium Success
Selenium SuccessSurinder Kaur
Β 

Viewers also liked (16)

Six Things About Ayyress
Six Things About AyyressSix Things About Ayyress
Six Things About Ayyress
Β 
Uno intl marketing plan
Uno intl marketing planUno intl marketing plan
Uno intl marketing plan
Β 
IQ vs AI
IQ vs AIIQ vs AI
IQ vs AI
Β 
Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')
Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')
Π˜Ρ€Π²ΠΈΠ½Π³, Π”ΠΆΠΎΠ½ - ''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π”ΠΎΠΌΠ° сидра'' (''ΠŸΡ€Π°Π²ΠΈΠ»Π° Π²ΠΈΠ½ΠΎΠ΄Π΅Π»ΠΎΠ²'')
Β 
CNN.com - Transcripts
CNN.com - TranscriptsCNN.com - Transcripts
CNN.com - Transcripts
Β 
Veterinary Dental Imaging Advantages
Veterinary Dental Imaging AdvantagesVeterinary Dental Imaging Advantages
Veterinary Dental Imaging Advantages
Β 
report
reportreport
report
Β 
Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄ - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''
Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄  - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄  - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''
Π›ΠΎpΠ° Π€Π»opΠ°Π½Π΄ - ''Π€paΠ½Ρ†yΠΆΠ΅Π½ΠΊΠΈ Π½e ΠΊpaΠ΄ΡƒΡ‚ шoΠΊoΠ»Π°Π΄''
Β 
Examen
ExamenExamen
Examen
Β 
Presentation at Montreal Stock Exchange
Presentation at Montreal Stock ExchangePresentation at Montreal Stock Exchange
Presentation at Montreal Stock Exchange
Β 
Olavides-ML-MMS102Final
Olavides-ML-MMS102FinalOlavides-ML-MMS102Final
Olavides-ML-MMS102Final
Β 
GCOT 2015 - Experiential Marketing
GCOT 2015 - Experiential MarketingGCOT 2015 - Experiential Marketing
GCOT 2015 - Experiential Marketing
Β 
Asif Khan CV
Asif Khan CVAsif Khan CV
Asif Khan CV
Β 
ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111
ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111
ΰΈ„ΰΈ³ΰΈ¨ΰΈ±ΰΈžΰΈ—ΰΉŒΰΈžΰΈ·ΰΉ‰ΰΈ™ΰΈΰΈ²ΰΈ™ ΰΈŠΰΈ±ΰΉ‰ΰΈ™ΰΈ›ΰΈ£ΰΈ°ΰΈ–ΰΈ‘ΰΈ¨ΰΈΆΰΈΰΈ©ΰΈ²ΰΈ›ΰΈ΅ΰΈ—ΰΈ΅ΰΉˆ ΰΉ”111
Β 
Cremation of IBM Stock Certificates
Cremation of IBM Stock CertificatesCremation of IBM Stock Certificates
Cremation of IBM Stock Certificates
Β 
Selenium Success
Selenium SuccessSelenium Success
Selenium Success
Β 

Similar to Project Book

Micro robotic cholesteatoma surgery
Micro robotic cholesteatoma surgeryMicro robotic cholesteatoma surgery
Micro robotic cholesteatoma surgeryPrasanna Datta
Β 
Pet appilcation[1]
Pet  appilcation[1]Pet  appilcation[1]
Pet appilcation[1]SanzzuTimilsina
Β 
aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...
aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...
aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...AsifaAndleeb
Β 
M1 - Photoconductive Emitters
M1 - Photoconductive EmittersM1 - Photoconductive Emitters
M1 - Photoconductive EmittersThanh-Quy Nguyen
Β 
M2 - Graphene on-chip THz
M2 - Graphene on-chip THzM2 - Graphene on-chip THz
M2 - Graphene on-chip THzThanh-Quy Nguyen
Β 
Analysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural NetworkAnalysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural NetworkZHENG YAN LAM
Β 
Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...
Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...
Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...James Seyforth
Β 
IGS_final_report_bronchoscopy
IGS_final_report_bronchoscopyIGS_final_report_bronchoscopy
IGS_final_report_bronchoscopyEduard Cortes
Β 
FUSION IMAGING
FUSION IMAGINGFUSION IMAGING
FUSION IMAGINGVibhuti Kaul
Β 
BSc Thesis Jochen Wolf
BSc Thesis Jochen WolfBSc Thesis Jochen Wolf
BSc Thesis Jochen WolfJochen Wolf
Β 
pet scanner machine
pet scanner machinepet scanner machine
pet scanner machineKalebKetema
Β 
Nuclear imaging in dentistry
Nuclear imaging in dentistryNuclear imaging in dentistry
Nuclear imaging in dentistryMammootty Ik
Β 
Positron Emission Tomography (PET).pdf
Positron Emission Tomography (PET).pdfPositron Emission Tomography (PET).pdf
Positron Emission Tomography (PET).pdfSELF-EXPLANATORY
Β 
nuclear medicine 10 marks questions and answers.docx
nuclear medicine 10 marks questions and answers.docxnuclear medicine 10 marks questions and answers.docx
nuclear medicine 10 marks questions and answers.docxGanesan Yogananthem
Β 
MWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdf
MWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdfMWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdf
MWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdfDr. MWEBAZA VICTOR
Β 
Nuclear Medicine - PET/CT
Nuclear Medicine - PET/CTNuclear Medicine - PET/CT
Nuclear Medicine - PET/CT@Saudi_nmc
Β 
Optimisation of X-Ray CT within SPECTCT Studies
Optimisation of X-Ray CT within SPECTCT StudiesOptimisation of X-Ray CT within SPECTCT Studies
Optimisation of X-Ray CT within SPECTCT StudiesLayal Jambi
Β 

Similar to Project Book (20)

Micro robotic cholesteatoma surgery
Micro robotic cholesteatoma surgeryMicro robotic cholesteatoma surgery
Micro robotic cholesteatoma surgery
Β 
Pet appilcation[1]
Pet  appilcation[1]Pet  appilcation[1]
Pet appilcation[1]
Β 
aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...
aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...
aguidefordelineationoflymphnodalclinicaltargetvolumeinradiationtherapy-100218...
Β 
Lymphnodes
LymphnodesLymphnodes
Lymphnodes
Β 
M1 - Photoconductive Emitters
M1 - Photoconductive EmittersM1 - Photoconductive Emitters
M1 - Photoconductive Emitters
Β 
M2 - Graphene on-chip THz
M2 - Graphene on-chip THzM2 - Graphene on-chip THz
M2 - Graphene on-chip THz
Β 
Analysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural NetworkAnalysis and Classification of ECG Signal using Neural Network
Analysis and Classification of ECG Signal using Neural Network
Β 
Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...
Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...
Realisation of a Digitally Scanned Laser Light Sheet Fluorescent Microscope w...
Β 
IGS_final_report_bronchoscopy
IGS_final_report_bronchoscopyIGS_final_report_bronchoscopy
IGS_final_report_bronchoscopy
Β 
FUSION IMAGING
FUSION IMAGINGFUSION IMAGING
FUSION IMAGING
Β 
BSc Thesis Jochen Wolf
BSc Thesis Jochen WolfBSc Thesis Jochen Wolf
BSc Thesis Jochen Wolf
Β 
pet scanner machine
pet scanner machinepet scanner machine
pet scanner machine
Β 
Spect technology
Spect technologySpect technology
Spect technology
Β 
Nuclear imaging in dentistry
Nuclear imaging in dentistryNuclear imaging in dentistry
Nuclear imaging in dentistry
Β 
Positron Emission Tomography (PET).pdf
Positron Emission Tomography (PET).pdfPositron Emission Tomography (PET).pdf
Positron Emission Tomography (PET).pdf
Β 
nuclear medicine 10 marks questions and answers.docx
nuclear medicine 10 marks questions and answers.docxnuclear medicine 10 marks questions and answers.docx
nuclear medicine 10 marks questions and answers.docx
Β 
MWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdf
MWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdfMWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdf
MWEBAZA VICTOR - Oncologic and Cardiologic PET CT Diagnosis.pdf
Β 
Nuclear Medicine - PET/CT
Nuclear Medicine - PET/CTNuclear Medicine - PET/CT
Nuclear Medicine - PET/CT
Β 
Spect technology
Spect technologySpect technology
Spect technology
Β 
Optimisation of X-Ray CT within SPECTCT Studies
Optimisation of X-Ray CT within SPECTCT StudiesOptimisation of X-Ray CT within SPECTCT Studies
Optimisation of X-Ray CT within SPECTCT Studies
Β 

Project Book

  • 1. 1 Software Engineering Department Analysis of PHANTOM images in order to determine the reliability of PET/SPECT cameras Authors Archil Pirmisashvili (ID: 317881407) Gleb Orlikov (ID: 317478014) Supervisor Dr. Miri Cohen Weiss
  • 2. 2 Software Engineering Department Table of contents: 1. INTRODUCTION............................................................................................................................3 2. THEORY.........................................................................................................................................5 2.1 BACKGROUND ............................................................................................................................................ 5 2.1.1 Image registration by maximization of combined mutual information and gradient information [1]: 5 2.1.2 Multi-modal volume registration by maximization of mutual Information [2]:.................................. 6 2.1.3 An Automatic Technique for Finding and Localizing Externally Attached Markers in CT and MR Volume Images of the Head [3]:............................................................................................................... 9 2.1.4 Use of the Hough transformation to detect lines and curves in pictures [4]:.................................... 12 2.2 DETAILED DESCRIPTION ............................................................................................................................... 13 2.2.1 Introduction:................................................................................................................................. 13 2.2.2 The problem is:.............................................................................................................................. 14 2.2.3 Our solution to problem is: ............................................................................................................ 14 2.3 EXPECTED RESULTS..................................................................................................................................... 14 3. SOFTWARE ENGINEERING DOCUMENTS.................................................................................16 3.1 REQUIREMENTS (USE CASE) ......................................................................................................................... 16 3.2 GUI....................................................................................................................................................... 17 3.3 PROGRAM STRUCTURE – ARCHITECTURE, DESIGN.............................................................................................. 20 3.3.1 UML class diagram........................................................................................................................ 20 3.3.2 Sequence diagram......................................................................................................................... 22 3.3.3 Activity diagram............................................................................................................................ 22 3.4 TESTING PLAN........................................................................................................................................... 23 3.4.1 Test scenario for: Main interface ................................................................................................... 23 3.4.2 Test scenario for – Program Option ............................................................................................... 24 3.4.3 Test scenario for – Mask Generator............................................................................................... 
24 3.4.4 Test scenario for – DICOM images selection................................................................................... 25 3.4.5 Test scenario for – Manually correction ......................................................................................... 26 4. RESULT AND CONCLUSION ......................................................................................................27 4.1 QA TESTING PROCESS ................................................................................................................................ 27 4.2 PROBLEMS AND SOLUTIONS.......................................................................................................................... 27 4.2.1 Working with set of DICOM images: .............................................................................................. 27 4.2.2 Creation of PET/CT mask: .............................................................................................................. 27 4.2.3 Find the best slices: ....................................................................................................................... 27 4.2.4 Fit the MASK to Best slice: ............................................................................................................. 28 4.2.5 Retrieving SUV (Standardized uptake values) from DICOM image: ................................................. 29 4.3 RUNNING/SIMULATION .............................................................................................................................. 30 4.3.1 Simulation 1.................................................................................................................................. 30 4.3.2 Simulation 2.................................................................................................................................. 31 4.3.3 Simulation 3.................................................................................................................................. 32 4.4 FINAL CONCLUSION .................................................................................................................................... 33 REFERENCES .................................................................................................................................34
  • 3. 3 Software Engineering Department 1. Introduction Imaging visualization methods are widely used in modern medicine. These methods allow get images of human normal and pathological organs and systems. Beside CT and MRI methods, nuclear diagnostic is a branch of imaging diagnostic, in which multi-modality imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are widely used. These two methods use gamma cameras in order to provide 2D/3D images. The maintenance of these cameras requires periodical QA tests. Today this procedure takes at least 4 hours per camera. Therefore our goal is to automate this procedure to reduce time. Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also be referred to as molecular medicine or molecular imaging & therapeutics. Nuclear medicine uses certain properties of isotopes and the energetic particles emitted from radioactive material to diagnose or treat various pathology. Different from the typical concept of anatomic radiology, nuclear medicine enables assessment of physiology. This function-based approach to medical evaluation has useful applications in most subspecialties, notably oncology, neurology, and cardiology. Gamma cameras are used in e.g. scintigraphy, SPECT and PET to detect regions of biologic activity that may be associated with disease. Relatively short lived isotope, such as 123 I is administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue in the body, and can be used to identify tumors or fracture points in bone. Images are acquired after collimated photons are detected by a crystal that gives off a light signal, which is in turn amplified and converted into count data. Scintigraphy is a form of diagnostic test wherein radioisotopes are taken internally, for example intravenously or orally. Then, gamma cameras capture and form two-dimensional images from the radiation emitted by the radiopharmaceuticals. Single-Photon Emission Computed Tomography (SPECT) is a 3D tomographic technique that uses gamma camera data from many projections and can be reconstructed in different planes. A dual detector head gamma camera combined with a CT scanner, which provides localization of functional SPECT data, is termed a SPECT/CT camera, and has shown utility in advancing the field of molecular imaging. In most other medical imaging modalities, energy is passed through the body and the reaction or result is read by detectors. In SPECT imaging, the patient is injected with a radioisotope, most commonly Thallium 201 TI, Technetium 99m TC, Iodine 123 I, and Gallium 67 Ga. The radioactive gamma rays are emitted through the body as the natural decaying process of these isotopes takes place. The emissions of the gamma rays are captured by detectors that surround the body. This essentially means that the human is now the source of the radioactivity, rather than the medical imaging devices such as X-Ray or CT. Positron emission tomography (PET) uses coincidence detection to image functional processes. Short-lived positron emitting isotope, such as 18 F, is incorporated with an organic substance such as glucose, creating F18-fluorodeoxyglucose, which can be used as a marker of metabolic utilization. Images of activity distribution throughout the body can show rapidly growing tissue, like tumor, metastasis, or infection. 
PET images can be viewed in comparison to computed tomography scans to determine an anatomic correlate. Modern scanners combine PET with a CT, or even MRI, to optimize the image reconstruction involved with positron imaging. This is performed on the same equipment without physically moving the patient off of the gantry. The resultant hybrid of functional and anatomic imaging information is a useful tool in non-invasive diagnosis and patient management.
  • 4. 4 Software Engineering Department Figure 1: Positron annihilation event in PET Imaging phantoms, or simply "phantoms", are specially designed objects that are scanned or imaged in the field of medical imaging to evaluate, analyze, and tune the performance of various imaging devices. These objects are more readily available and provide more consistent results than the use of a living subject or cadaver, and likewise avoid subjecting a living subject to direct risk. Phantoms were originally employed for use in 2D x-ray based imaging techniques such as radiography or fluoroscopy, though more recently phantoms with desired imaging characteristics have been developed for 3D techniques such as MRI, CT, Ultrasound, PET, and other imaging methods or modalities. Figure 2: PHATOM A phantom used to evaluate an imaging device should respond in a similar manner to how human tissues and organs would act in that specific imaging modality. For instance, phantoms made for 2D radiography may hold various quantities of x-ray contrast agents with similar x-ray absorbing properties to normal tissue to tune the contrast of the imaging device or modulate the patients’ exposure to radiation. In such a case, the radiography phantom would not necessarily need to have similar textures and mechanical properties since these are not relevant in x-ray imaging modalities. However, in the case of ultrasonography, a phantom with similar rheological and ultrasound scattering properties to real tissue would be essential, but x-ray absorbing properties would not be needed. Physicists perform the PHANTOM studies in PET and SPECT cameras, each producing a stack of images that shows the 3D radioactive distribution as produced by the camera. The results can be measured and compared to either the ideal results or to previous results. Aim of QA test - Tomographic image quality is determined by a number of different performance parameters, primarily the scanner sensitivity, tomographic uniformity, contrast and spatial resolution, and the process that is used to reconstruct the images. Because of the complexity of the variation in the uptake of radiopharmaceuticals and the large range of patient sizes and shapes, the characteristics of radioactivity distributions can vary greatly and a single study with a phantom cannot simulate all clinical imaging conditions. Cameras produce images simulating those obtained in a total body imaging study involving both hot and cold lesions. Image quality is assessed by calculating image contrast and background variability ratios for both hot and cold
  • 5. 5 Software Engineering Department spheres. This test allows assessment of the accuracy of the absolute quantification of radioactivity concentration in the uniform volume of interest inside the phantom. 2. Theory 2.1 Background The goal of the test is to determine the two β€œbest” slices from the collection of image slices provided by the camera. Best slice is the slice image, which best matches to template of the ROI (regions-of-interest). Accordingly, we need firstly to define the template and then use it in order to find the two β€œbest” slices. Template contains positions of hot and cold ROI cylinders. There are some algorithms that work with CT and PET images: 2.1.1 Image registration by maximization of combined mutual information and gradient information [1]: Mutual information has developed into an accurate measure for rigid and affine mono- and multimodality image registration. The robustness of the measure is questionable, however. A possible reason for this is the absence of spatial information in the measure. The present paper proposes to include spatial information by combining mutual information with a term based on the image gradient of the images to be registered. The gradient term not only seeks to align locations of high gradient magnitude, but also aims for a similar orientation of the gradients at these locations. Method: The definition of the mutual information I of two images A and B combines the marginal and joint entropies of the images in the following manner: 𝐼(𝐴, 𝐡) = 𝐻(𝐴) + 𝐻(𝐡) βˆ’ 𝐻(𝐴, 𝐡) Here, H(A) and H(B) denote the separate entropy values of A and B respectively. H(A,B) is he joint entropy, i.e. the entropy of the joint probability distribution of the image intensities. Correct registration of the images is assumed to be equivalent to maximization of the mutual information of the images. This implies a balance between minimization of the joint entropy and maximization of the marginal entropies. Recently, it was shown that the mutual information measure is sensitive to the amount of overlap between the images and normalized mutual information measures were introduced to overcome this problem. Examples of such measures are the normalized mutual information introduced by Studholme: π‘Œ(𝐴, 𝐡) = 𝐻(𝐴) + 𝐻(𝐡) 𝐻(𝐴, 𝐡) and the entropy correlation coefficient used by Maes: 𝐸𝐢𝐢(𝐴, 𝐡) = 2𝐼(𝐴, 𝐡) 𝐻(𝐴) + 𝐻(𝐡) These two measures have a one-to-one correspondence. Image locations with a strong gradient are assumed to denote a transition of tissues, which are locations of high information value. The gradient is computed on a certain spatial scale. We have extended mutual information measures (both standard and normalized) to include spatial information that is present in each of the images. This extension is accomplished by multiplying the mutual information with a gradient term. The gradient term is based not only on the magnitude of the gradients, but also on the orientation of the gradients. The gradient vector is computed for each sample point x ={x1, x2, x3} in one image and its corresponding point in the other image, x`, which is found by geometric transformation of
  • 6. 6 Software Engineering Department x. The three partial derivatives that together form the gradient vector are calculated by convolving the image with the appropriate first derivatives of a Gaussian kernel of scale Οƒ. The angle Ξ±x,x` (Οƒ) between the gradient vectors is defined by: ∝ π‘₯,π‘₯` (𝜎) = π‘Žπ‘Ÿπ‘π‘π‘œπ‘  βˆ‡π‘₯(𝜎) βˆ™ βˆ‡π‘₯`(𝜎) |βˆ‡π‘₯(𝜎)||βˆ‡π‘₯`(𝜎)| with βˆ‡x(Οƒ) denoting the gradient vector at point x of scale Οƒ and | Β· | denoting magnitude. The proposed registration measure defined by: 𝐼 𝑛𝑒𝑀(𝐴, 𝐡) = 𝐺(𝐴, 𝐡)𝐼(𝐴, 𝐡) with 𝐺(𝐴, 𝐡) = βˆ‘ πœ”(𝛼 π‘₯,π‘₯`(𝜎)) (π‘₯,π‘₯`)∈(𝐴∩𝐡) π‘šπ‘–π‘›(|βˆ‡π‘₯(𝜎)|, |βˆ‡π‘₯`(𝜎)|) Similarly, the combination of normalized mutual information and gradient information is defined: π‘Œπ‘›π‘’π‘€(𝐴, 𝐡) = 𝐺(𝐴, 𝐡)π‘Œ(𝐴, 𝐡) 2.1.2 Multi-modal volume registration by maximization of mutual Information [2]: This approach works directly with image data; no pre-processing or segmentation is required. This technique is, however, more flexible and robust than other intensity-based techniques like correlation. Additionally, it has an efficient implementation that is based on stochastic approximation. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images, and with positron- emission tomography (PET) images. Consider the problem of registering two different MR images of the same individual. When perfectly aligned these signals should be quite similar. One simple measure of the quality of a hypothetical registration is the sum of squared differences between voxel values. This measure can be motivated with a probabilistic argument. If the noise inherent in an MR image were Gaussian, independent and identically distributed, then the sum of squared differences is negatively proportional to the likelihood that the two images are correctly registered. Unfortunately, squared difference and the closely related operation of correlation are not effective measures for the registration of different modalities. Even when perfectly registered, MR and CT images taken from the same individual are quite different. In fact MR and CT are useful in conjunction precisely because they are different. This is not to say the MR and CT images are completely unrelated. They are after all both informative measures of the properties of human tissue. Using a large corpus of data, or some physical theory, it might be possible to construct a function F(Β·) that predicts CT from the corresponding MR value, at least approximately. Using F we could evaluate registrations by computing F(MR) and comparing it via sum of squared differences (or correlation) with the CT image. If the CT and MR images were not correctly registered, then F would not be good at predicting one from the other. While theoretically it might be possible to find F and use it in this fashion, in practice prediction of CT from MR is a difficult and under-determined problem. The the following derivation is referred to the two volumes of image data that are to be registered as the reference volume and the test volume. A voxel of the reference volume is denoted u(x), where the x are the coordinates of the voxel. A voxel of the test volume is denoted similarly as v(x). Given that T is a transformation from the coordinate frame of the reference volume to the test volume, v(T (x)) is the test volume voxel associated with the reference volume voxel u(x). 
Note that in order to simplify some of the subsequent equations we will use T to denote both the transformation and its parameterization. We seek an estimate of the transformation that registers the reference volume u and test volume v by maximizing their mutual information: (1) 𝑇̂ = arg π‘šπ‘Žπ‘₯ 𝑇 𝐼(𝑒(π‘₯), 𝑣(𝑇(π‘₯))) Mutual information is defined in terms of entropy in the following way: (2) 𝐼 (𝑒(π‘₯), 𝑣(𝑇(π‘₯))) ≑ β„Ž(𝑒(π‘₯)) + β„Ž (𝑣(𝑇(π‘₯))) βˆ’ β„Ž(𝑒(π‘₯), 𝑣(𝑇(π‘₯)))
  • 7. 7 Software Engineering Department h(Β·) is the entropy of a random variable, and is defined as β„Ž(π‘₯) ≑ βˆ’ ∫ 𝑝(π‘₯) ln(𝑝(π‘₯)) 𝑑π‘₯ , while the joint entropy of two random variables x and y is β„Ž(π‘₯) ≑ βˆ’ ∫ 𝑝(π‘₯, 𝑦)ln(𝑝(π‘₯, 𝑦)) 𝑑π‘₯ 𝑑𝑦. Entropy can be interpreted as a measure of uncertainty, variability, or complexity. The mutual information defined in Equation (2) has three components. The first term on the right is the entropy in the reference volume, and is not a function of T. The second term is the entropy of the part of the test volume into which the reference volume projects. It encourages transformations that project u into complex parts of v. The third term, the (negative) joint entropy of u and v, contributes when u and v are functionally related. The entropies described above are defined in terms of integrals over the probability densities associated with the random variables u(x) and v(T (x)). When registering medical image data we will not have direct access to these densities. The first step in estimating entropy from a sample is to approximate the underlying probability density p(z) by a superposition of functions centered on the elements of a sample A drawn from z: (3) 𝑝(𝑧) β‰ˆ π‘ƒβˆ— (𝑧) ≑ 1 𝑁𝐴 βˆ‘ 𝑅(𝑧 βˆ’ 𝑧𝑗) 𝑧 π‘—βˆˆπ΄ where NA is the number of trials in the sample A and R is a window function which integrates to 1. Pβˆ—(z) is widely known as the Parzen window density estimate. Unfortunately, evaluating the entropy integral: (4) β„Ž(𝑧) β‰ˆ βˆ’πΈπ‘§[𝑙𝑛𝑃 βˆ— (𝑧)] β‰ˆ βˆ’ 1 𝑁𝑏 βˆ‘ π‘™π‘›π‘ƒβˆ— (𝑧𝑖) π‘§π‘–βˆˆπ΅ where NB is the size of a second sample B. The sample mean converges toward the true expectation at a rate proportional to 1/√NB. We may now write an approximation for the entropy of a random variable z as follows: (5) β„Ž(𝑧) β‰ˆ β„Žβˆ—(𝑧) ≑ βˆ’ 1 𝑁𝑏 βˆ‘ 𝑙𝑛 1 𝑁𝐴 βˆ‘ 𝐺 πœ“(𝑧𝑖 βˆ’ 𝑧𝑗) 𝑧 π‘—βˆˆπ΄π‘§π‘–βˆˆπ΅ Where (Gaussian density function): 𝐺 πœ“(𝑧) ≑ (2πœ‹) βˆ’πœ‹ 2 | πœ“|βˆ’0.5 exp(βˆ’ 1 2 𝑧 𝑇 πœ“βˆ’1 𝑧) Next we examine the entropy of v(T (x)), which is a function of the transformation T . In order to find a maximum of entropy or mutual information, we may ascend the gradient with respect to the transformation T. After some manipulation, the derivative of the entropy may be written as follows: (6) 𝑑 𝑑𝑇 β„Ž βˆ— (𝑣(𝑇(π‘₯))) = 1 𝑁 𝐡 βˆ‘ βˆ‘ π‘Šπ‘’(𝑉𝑖, 𝑉𝑗)(𝑉𝑖 βˆ’ 𝑉𝑗) 𝑇 π‘₯ π‘—βˆˆπ΄π‘₯ π‘–βˆˆπ΅ πœ“βˆ’1 𝑑 𝑑𝑇 (𝑉𝑖 βˆ’ 𝑉𝑗 ) Using the following definitions: 𝑣𝑖 ≑ 𝑣(𝑇(π‘₯𝑖)), 𝑣𝑗 ≑ 𝑣 (𝑇(π‘₯𝑗)), 𝑣 π‘˜ ≑ 𝑣(𝑇(π‘₯ π‘˜)) And π‘Šπ‘£(𝑉𝑖, 𝑉𝑗) ≑ 𝐺 πœ“ 𝑣 (𝑉𝑖 βˆ’ 𝑉𝑗) βˆ‘ 𝐺 πœ“ 𝑣 (𝑉𝑖 βˆ’ π‘‰π‘˜)π‘₯ π‘˜βˆˆπ΄ The entropy approximation described in Equation (5) may now be used to evaluate the mutual information between the reference volume and the test volume [Equation (2)]. In order to seek a maximum of the mutual information, we will calculate an approximation to its derivative,
  • 8. 8 Software Engineering Department 𝑑 𝑑𝑇 𝐼(𝑇) β‰ˆ 𝑑 𝑑𝑇 β„Ž βˆ— (𝑒(π‘₯)) + 𝑑 𝑑𝑇 β„Ž βˆ— (𝑣(𝑇(π‘₯))) βˆ’ 𝑑 𝑑𝑇 β„Ž βˆ— (𝑒(π‘₯), 𝑣(𝑇(π‘₯))) Given these definitions we can obtain an estimate for the derivative of the mutual information as follows: 𝑑𝐼 𝑑𝑇 Μ‚ = 1 𝑁 𝐡 βˆ‘ βˆ‘ (𝑉𝑖 βˆ’ 𝑉𝑗) 𝑇 π‘₯ π‘—βˆˆπ΄π‘₯ π‘–βˆˆπ΅ Γ— [π‘Šπ‘£(𝑣𝑖, 𝑣𝑗) πœ“ 𝑣 βˆ’1 βˆ’ π‘Šπ‘€(𝑀𝑖, 𝑀𝑗) πœ“ 𝑣𝑣 βˆ’1 ] 𝑑 𝑑𝑇 (𝑣𝑖 βˆ’ 𝑣𝑗) The weighting factors are defined as: π‘Šπ‘£(𝑣𝑖, 𝑣𝑗) ≑ 𝐺 πœ“ 𝑣 (𝑣𝑖 βˆ’ 𝑣𝑗) βˆ‘ 𝐺 πœ“ 𝑣 (𝑣𝑖 βˆ’ 𝑣 π‘˜)π‘₯ π‘˜βˆˆπ΄ π‘Šπ‘€(𝑀𝑖, 𝑀𝑗) ≑ 𝐺 πœ“ 𝑣 (𝑉𝑖 βˆ’ 𝑉𝑗) βˆ‘ 𝐺 πœ“ 𝑣 (𝑉𝑖 βˆ’ π‘‰π‘˜)π‘₯ π‘˜βˆˆπ΄ If we are to increase the mutual information, then the first term in the brackets may be interpreted as acting to increase the squared distance between pairs of samples that are nearby in test volume intensity, while the second term acts to decrease the squared distance between pairs of samples whose intensities are nearby in both volumes. It is important to emphasize that these distances are in the space of intensities, rather than coordinate locations. The term 𝑑 𝑑𝑇 (𝑣𝑖 βˆ’ 𝑣𝑗) will generally involve gradients of the test volume intensities, and the derivative of transformed coordinates with respect to the transformation. We seek a local maximum of mutual information by using a stochastic analog of gradient descent. Steps are repeatedly taken that are proportional to the approximation of the derivative of the mutual information with respect to the transformation: Repeat: A ← {sample of size NA drawn from x} B ← {sample of size NB drawn from x} T ← 𝑇 + πœ† 𝑑𝐼̂ 𝑑𝑇 The parameter Ξ» is called the learning rate. The above procedure is repeated a fixed number of times or until convergence is detected. When using this procedure, some care must be taken to ensure that the parameters of transformation remain valid. In addition to the learning rate Ξ», the covariance matrices of the Parzen window functions are important parameters of this technique. It is not difficult to determine suitable values for these parameters by empirical adjustment, and that is the method we usually use. Referring back to Equation (3), ψ should be chosen so that Pβˆ—(z) provides the best estimate for p(z). In other words ψ is chosen so that a sample B has the maximum possible likelihood. Assuming that the trials in B are chosen independently, the log likelihood of ψ is: (7) 𝑙𝑛 ∏ 𝑃 βˆ— (𝑧𝑖) = βˆ‘ ln 𝑃 βˆ— (𝑧𝑖) π‘§π‘–βˆˆπ΅π‘§π‘–βˆˆπ΅ This equation bears a striking resemblance to Equation (4), and in fact the log likelihood of ψ is maximized precisely when the entropy estimator hβˆ—(z) is minimized. Was assumed that the covariance matrices are diagonal, (8) πœ“ = 𝐷𝐼𝐴𝐺(𝜎1 2 , 𝜎2 2 , … ) Following a derivation almost identical to the one described above derived an equation analogous to Equation (6), (9) 𝑑 𝑑 𝜎 π‘˜ β„Žβˆ— (𝑧) = 1 𝑁𝑏 βˆ‘ βˆ‘ π‘Šπ‘§(𝑧 𝑏, 𝑧 π‘Ž) π‘₯ π‘Žβˆˆπ‘Žπ‘₯ π‘βˆˆπ‘ ( 1 𝜎 𝐾 )( [𝑍] 𝐾 2 𝜎 𝐾 2 βˆ’ 1)
  • 9. 9 Software Engineering Department where [z]k is the z`th component of the vector z. In practice both the transformation T and the covariance ψ can be adjusted simultaneously; so while T is adjusted to maximize the mutual information, I (u(x), v(T (x))), ψ is adjusted to minimize hβˆ—(v(T (x))). 2.1.3 An Automatic Technique for Finding and Localizing Externally Attached Markers in CT and MR Volume Images of the Head [3]: Different imaging modalities provide different types of information that can be combined to aid diagnosis and surgery. Bone, for example, is seen best on X-ray computed tomography (CT) images, while soft-tissue structures are seen best on magnetic resonance (MR) images. Because of the complementary nature of the information in these two modalities, the registration of CT images of the head with MR images is of growing importance for diagnosis and for surgical planning. Furthermore, registration of images with patient anatomy is used in new interactive image-guided surgery techniques to track in real time the changing position of a surgical instrument or probe on a display of preoperative image sets of the patient. The definition of registration as the determination of a one-to-one mapping between the coordinates in one space and those in another, such that points in the two spaces that correspond to the same anatomic point are mapped to each other. Point-based registration involves the determination of the coordinates of corresponding points in different images and the estimation of the geometrical transformation using these corresponding points. The points may be either intrinsic, or extrinsic. Intrinsic points are derived from naturally occurring features, e.g., anatomic landmark points. Extrinsic points are derived from artificially applied markers, e.g., tubes containing copper sulfate. We use external fiducial markers that are rigidly attached through the skin to the skull. The points used for registration fiducial points or fiducials, as distinguished from β€œfiducial markers,” and pick as the fiducials the geometric centers of the markers. Determining the coordinates of the fiducials, which we callJiducia2 localization, may be done in image space or in physical space. Several techniques have been developed for determining the physical space coordinates of external markers. The algorithm finds markers in image volumes of the head. A three-dimensional (3-D) image volume typically consists of a stack of two-dimensional (2-D) image slices. The algorithm finds markers whose image intensities are higher than their surroundings. It is also tailored to find markers of a given size and shape. All of the marker may be visible in the image, or it may consist of both imageable and nonimageable parts. It is the imageable part that is found by the algorithm, and it is the size and shape of this imageable part that is important to the algorithm. Henceforth when we use the term β€œmarker” we are referring to only the imageable portion of the marker. Three geometrical parameters specify the size and shape of the marker adequately for the purposes of this algorithm: 1) the radius rm, of the largest sphere that can be inscribed within the marker, 2) the radius Rm, of the smallest sphere that can circumscribe the marker, and 3) the volume Vm, of the marker. Cylindrical markers with diameter d and height h for clinical experiments. 
For these markers: π‘Ÿ π‘š = min(𝑑, β„Ž) 2 , 𝑅 π‘š = βˆšπ‘‘2 + β„Ž2 2 , π‘‰π‘š = πœ‹π‘‘2 β„Ž 4 First, we must search the entire image volume to find marker-like objects. Second, for each marker-like object, we must decide whether it is a true marker or not and accurately localize the centroid for each true one. Therefore, the algorithm consists of two parts. Part one finds β€œcandidate voxels”. Each candidate voxel lies within a bright region that might be the image of a marker. The requirements imposed by Part One are minimal with the result that, for the M markers in that image, there are typically many more than M candidate points identified. Part Two selects from these candidates M points that are most likely to lie within actual
  • 10. 10 Software Engineering Department markers and provides a centroid for each one. Part One is designed so that it is unlikely to miss a true marker. Part Two is designed so that it is unlikely to accept a false marker. Part One takes the following input: The image volume of the head of a patient. The type of image (CT or MR). The voxel dimensions βˆ†π‘₯ 𝑣, βˆ†π‘¦π‘£, and βˆ†π‘§ 𝑣. The marker’s geometrical parameters rm, Rm and Vm. The intensity of an empty voxel. Part One produces as output a set of candidate voxels. Part Two takes the same input as Part One, plus two additional pieces of information: the set of candidate voxels produced by Part One and the number of external markers M known a priori to be present in the image. Part Two produces as output a list of M β€œfiducial points”. Each fiducial point is a 3-D position (zf, yf, zf ) that is an estimate of the centroid of a marker. The list is ordered with the first member of the list being most likely to be a marker and the last being the least likely. Part One operates on the entire image volume. 1. If the image is an MR image, a 2-D, three-by-three median filter is applied within each slice to reduce noise. 2. To speed up the search, a new, smaller image volume is formed by subsampling. The subsampling rate in x is calculated as ⌊ π‘Ÿ π‘š βˆ†π‘₯ 𝑣 βŒ‹. The subsampling rates in y and z are similarly calculated. 3. An intensity threshold is determined. For CT images, the threshold is the one that minimizes the within-group variance. For MR images, the threshold is computed as the mean of two independently determined thresholds. The first is the threshold that minimizes the within-group variance. The second is the threshold that maximizes the Kullback information value. 4. This threshold is used to produce a binary image volume with higher intensities in the foreground. Foreground voxels are typically voxels that are part of the image of markers or of the patient’s head. 5. If the original image is an MR image, spurious detail tends to appear in the binary image produced by the previous step. The spurious detail is composed of apparent holes in the head caused by regions that produce weak signal, such as the skull and sinuses. Thus, if the original image is an MR image, these holes in the binary image are filled. In this step each slice is considered individually. A foreground component is a two-dimensionally connected set of foreground voxels. The holes are background regions completely enclosed within a slice by a single foreground component. This step reduces the number of false markers. 6. Two successive binary, 2-D, morphological operations are performed on each slice. The operations taken together have the effect of removing small components and small protrusions on large components. In particular, the operations are designed to remove components and protrusions whose cross sections are smaller than or equal to the largest cross section of a marker. The operations are erosion and dilation, in that order. The structuring element is a square. The x dimension (in voxels) of the erosion structuring element is calculated as ⌈2𝑅 π‘š/βˆ†π‘₯ 𝑣 | βŒ‰ (the ceiling function and the prime refers to the subsampled image). The y dimension is similarly calculated. The size of the dilation structuring element in each dimension is the size of the erosion element plus one. 7. The binary image that was output by the previous step is subtracted from the binary image that was input to the previous step. 
That is, a new binary image is produced in which those voxels that were foreground voxels in the input image but background in the output image are set to foreground. The remaining voxels are set to background. The result is a binary image consisting only of the small components and protrusions that were removed in the previous step. 8. For the entire image volume, the foreground is partitioned into 3-D connected components. The definition of connectedness can be varied. We have found that including
  • 11. 11 Software Engineering Department the eight 2-D eight-connected neighbors within the slice plus the two 3-D six-connected neighbors on the neighboring slices works well for both CT and MR images. 9. The intensity-weighted centroid of each selected component is determined using the voxel intensities in the original image. The coordinates of the centroid position (xc, yc, zc) are calculated independently as follows: π‘₯ 𝑐 = βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)π‘₯𝑖𝑖 βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑖 , 𝑦𝑐 = βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑦𝑖𝑖 βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑖 , 𝑧𝑐 = βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑧𝑖𝑖 βˆ‘ (𝐼𝑖 βˆ’ 𝐼0)𝑖 10. The voxels that contain the points (xc, yc, zc) are identified. The voxels identified in the last step are the candidate voxels. The step of Part two: Part Two operates on a region of the original image around each candidate voxel. Desired to use the smallest region possible to improve speed. The region must contain all voxels whose centers are closer to the center of the candidate voxel than the longest marker dimension (2Rm), plus all voxels that are adjacent to these voxels. For convenience, we use a rectangular parallelepiped that is centered about the candidate voxel. The x dimension (in voxels) is calculated as 2⌈2𝑅 π‘š/βˆ†π‘₯ π‘£βŒ‰ + 3. The 3 represents the center voxel, plus an adjacent voxel on each end. The y and z dimensions are similarly calculated. For each of these regions Part Two performs the following steps: 1. It is determined whether or not there exists a β€œsuitable” threshold for the candidate voxel. This determination can be made by a brute-force checking of each intensity value in the available range of intensities. In either case a suitable threshold is defined as follows. For a given threshold the set of foreground (higher-intensity) voxels that are three- dimensionally connected to the candidate voxel are identified. The threshold is considered suitable if the size and shape of this foreground component is sufficiently similar to that of a marker. There are two rules that determine whether the size and shape of the component are sufficiently similar. a) The distance from the center of the candidate voxel to the center of the most distant voxel of the component must be less than or equal to the longest marker dimension (2Rm). b) The volume, Vc, of the component, determined by counting its voxels and multiplying by the volume of a single voxel 𝑉𝑣 = βˆ†π‘₯ 𝑣 Γ— βˆ†π‘¦π‘£ Γ— βˆ†π‘§ 𝑣, must be within the range ⌈ π›Όπ‘‰π‘š, π›½π‘‰π‘šβŒ‰. 2. If no such threshold exists, the candidate point is discarded. If there are multiple suitable thresholds, the smallest one (which produces the largest foreground component) is chosen in order to maximally exploit the intensity information available within the marker. 3. If the threshold does exist, the following steps are taken a) The intensity-weighted centroid of the foreground component is determined using the voxel intensities in the original image. The coordinates of the centroid position (xf, yf, zf ) are calculated as in Step 9 of Part One of the algorithm but with the foreground component determined in Step 1. b) The average intensity of the voxels in the foreground component is calculated using the voxel intensities in the original image. 4. The voxel that contains the centroid (xf, yf, zf) is iteratively fed back to Step 1 of Part Two. If two successive iterations produce the same centroid, the centroid position and its associated average intensity is recorded. 
If two successive iterations have not produced the same centroid by the fourth iteration, the candidate is discarded. The centroid positions (xf, yf, zf) are ranked according to the average intensity of their components. The M points with the highest intensities are declared to be fiducial points and are output in order by rank. A candidate with a higher intensity is considered more likely to be a fiducial point.
  • 12. 12 Software Engineering Department 2.1.4 Use of the Hough transformation to detect lines and curves in pictures [4]: The set of all straight lines in the picture plane constitutes two-parameter family. If we fix a parameterization for the family, then an arbitrary straight line can be represented by a single point in the parameter space. For reasons that become obvious, we prefer the so-called normal parameterization. As illustrated in Fig. 3, this parameterization specifies a straight line by the angle πœƒ of its normal and its algebraic distance p from the origin. The equation of a line corresponding to this geometry is: π‘₯π‘π‘œπ‘ πœƒ + π‘¦π‘ π‘–π‘›πœƒ = π‘Ÿ If we restrict πœƒ to the interval [0,Ο€), then the normal parameters for a line are unique. With this restriction, every line in the x-y plane corresponds to a unique point in the πœƒ βˆ’ π‘Ÿ plane. Suppose, now, that we have some set {(π‘₯1, 𝑦1), … , (π‘₯ 𝑛, 𝑦𝑛)} of n figure points and we want to find a set of straight lines that fit them. We transform the points (π‘₯𝑖, 𝑦𝑖) into the sinusoidal curves in the πœƒ βˆ’ π‘Ÿ plane defined by: (1) π‘Ÿ = π‘₯𝑖 π‘π‘œπ‘ πœƒ + 𝑦𝑖 π‘ π‘–π‘›πœƒ It is easy to show that the curves corresponding to co-linear figure points have a common point of intersection. This point in the πœƒ βˆ’ π‘Ÿ plane, say (πœƒ0, π‘Ÿ0) defines the line passing through the colinear points. Thus, the problem of detecting co-linear points can be converted to the problem of finding concurrent curves. Figure 3.The normal parameters for the line A dual property of the point-to-curve transformation can also be established. Suppose we of points in the πœƒ βˆ’ π‘Ÿ plane, all lying on the curve: π‘Ÿ = π‘₯0 π‘π‘œπ‘ πœƒ + 𝑦0 π‘ π‘–π‘›πœƒ Then it is easy to show that all these points correspond to lines in the x-y plane passing through the point (π‘₯0, 𝑦0). We can summarize these interesting properties of the point-to- curve transformation as follows: 1. A point in the picture plane corresponds to a sinusoidal curve in the parameter plane. 2. A point in the parameter plane corresponds to a straight line in the picture plane. 3. Points lying on the same straight line in the picture plane correspond to curves through a common point in the parameter plane. 4. Point s lying on the same curve in the parameter plane correspond to lines through the same point in the picture plane.
2.2 Detailed description
2.2.1 Introduction:
Physicians use a phantom in the test in order to simulate human organs. They fill its cylinders with different volumes of radiopharmaceutical (depicted in figure 4).

Figure 4: PET phantom viewed from above

The phantom is placed into the PET camera and the scan begins. As a result of the scan we get a set of slice images (figure 5).

Figure 5: Image slice received from the PET camera

The best slice is the image slice that has no noise and in which all cylinders are clearly visible. The physicians need to select this "best slice" from the set of received slices to work with. They mark all clearly visible cylinders and obtain minimum, maximum and mean SUV (Standardized Uptake Value) statistics from the marked regions. These data are needed for further calculations, such as ratios. At the end of the test they produce a report with the hard-copy image slice attached. If all the results meet the criteria, the camera has passed the test.
2.2.2 The problem:
1. Define the template. The template is the MASK that is applied to the slice image in order to define the ROIs and to choose the best slice from the set of images.

Figure 6: Applied template mask

2. Fit the template to the PET slice size (scaling, rotation and translation).
3. Using the template, choose the "best" slice from all slices produced by the camera.

2.2.3 Our solution:
1. Find at least three spots in the CT image using the algorithm described in [3]; it returns the centroids of all cylinders found. From the Z coordinate of the centroids and the known slice thickness we obtain the number of the CT slice on which to build the template from the found centroids. On that slice we then find all circles using the Hough transformation algorithm [4]; the eight circles found give us the needed template (figure 6).
2. To fit the template to the PET slice size, the following steps are applied:
a) Extract the PET slice that matches the template's slice.
b) Color the inner space of the phantom in the image white.
c) Find the center of this circle (the center of the phantom) using the Hough transformation algorithm [4].
d) Get the size of this white circle and transform (scale) the template to that size (see the sketch after this list).
3. Check all slices against the template in order to find the "best" slice, the one with the fewest errors and least noise. Retrieve the needed values from the ROIs found, perform all calculations needed for the report, and finally produce the report with the hard-copy image slice attached.
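Step 2d amounts to translating and scaling the template's circles by the ratio between the phantom radius found on the PET slice and the phantom radius of the template. A minimal C# sketch follows, with a hypothetical CircleRoi type standing in for the project's mask shapes:

```csharp
using System;
using System.Collections.Generic;

// CircleRoi is a hypothetical record standing in for the project's
// mask shapes: a circle ROI with center (Cx, Cy) and radius R.
record CircleRoi(double Cx, double Cy, double R);

static class TemplateFit
{
    // Given the phantom circle found in the PET slice (petCenter, petRadius)
    // and the phantom circle of the CT-derived template (ctCenter, ctRadius),
    // translate and scale every ROI circle by the ratio of the two radii.
    public static List<CircleRoi> ScaleToPhantom(
        IEnumerable<CircleRoi> template,
        (double X, double Y) ctCenter, double ctRadius,
        (double X, double Y) petCenter, double petRadius)
    {
        double s = petRadius / ctRadius;                   // scale factor
        var fitted = new List<CircleRoi>();
        foreach (var roi in template)
        {
            // Move each ROI with the phantom center, then scale about it.
            double cx = petCenter.X + s * (roi.Cx - ctCenter.X);
            double cy = petCenter.Y + s * (roi.Cy - ctCenter.Y);
            fitted.Add(new CircleRoi(cx, cy, s * roi.R));
        }
        return fitted;
    }
}
```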
2.3 Expected results
To illustrate the expected results (a "best" slice image of the phantom containing "clear", best-fitted information), we show two slices: the first (figure 7) is a bad one, and the second (figure 8) is good enough to be an expected result.

Figure 7: Bad slice image (not selected as the best)
Figure 8: Good slice image (candidate for the best slice)
We get the best slice image needed for the QA test, with marked ROIs (figure 9).

Figure 9: Hard copy of final ROIs

3. Software Engineering documents
3.1 Requirements (Use case)
3.2 GUI
This is the main window of the program with the test parameters filled in:

You can change the application settings using the options window:
There is an option to generate MASKs with the MASK generation application:

The program automatically finds the best slice and fits the selected MASK to it, but there is an option to edit the applied MASK if the user does not like how it was applied:
During loading, if more than one series is found in the search directory, the program pops up the "series selection" window:

For problem solving there is a help window with all explanations:
3.3 Program structure – Architecture, Design
3.3.1 UML class diagram
(UML class diagram figure. The legible fragments show an image-processing helper class with methods such as CenterClosingCT, ClosingImage, ConvertFromImageCoordinates, FindBestSlice, FitCircleMask, MakeBinaryImage, SearchPhantomCenter and SearchPhantomRadius, operating on Image<Gray, Byte> and DicomFile lists; a CircleMask class built from a list of Shape objects; and form classes such as SliceFitForm that hold the sorted PET image lists and masks.)
3.3.2 Sequence diagram

3.3.3 Activity diagram
3.4 Testing plan
This section presents the test scenarios written for the common user requirements.

3.4.1 Test scenario for: Main interface

# | Taken Action | Expected Results | Pass/Fail
1 | Start the application | An empty (cleared-parameters) GUI opens and the application is ready for use. The "Run test" and "Correct Manually" buttons are disabled; all other GUI components are enabled. | Pass
2 | "File->Program option" | The program options window opens. All buttons are enabled. The text fields show the paths the user has defined. | Pass
3 | "File->Exit" | The application closes. | Pass
4 | "Mask->Generate Mask" | The MASK generation application opens. All GUI components are enabled. | Pass
5 | "Help->About" | The "about" window opens. All text fields are shown correctly. The "OK" button is enabled. | Pass
6 | "Help->Help" | The "help" (.chm) window opens. | Pass
7 | Paths "Browse" button | The browse window opens. All GUI components are enabled. After selection, the full path is shown in the program window. | Pass
8 | Wrong or empty test-parameter values | An error message pops up. | Pass
9 | Mask combo box clicked | The combo box opens with the list of MASKs that exist in the MASKs folder. | Pass
10 | "Load images" button with empty paths or no MASK chosen | An error message pops up. | Pass
11 | "Load images" button with correct paths filled in and a MASK chosen | The images are loaded; the test log is updated and shown; the progress bar runs during loading. After loading completes, the best slices are found, MASKs are fitted to them and shown in the main window, and the "Load images" button is disabled. The "Run test" and "Correct manually" buttons are enabled. While images are loading the "Clear" button is disabled. If there is more than one series in the DICOM images folder, the selection window pops up (all GUI parameters correct, slider disabled). | Pass
12 | "Correct manually" button | The manual MASK-fitting window opens. All GUI parameters are enabled. The best slice is shown in the window with the automatically fitted MASK. | Pass
13 | "Clear" button | At any step of the test, the button clears all test parameters. | Pass
14 | "Run test" button | The test result (.pdf) file opens, filled with all correct calculation results. | Pass
15 | Exit program button | The application closes. | Pass

3.4.2 Test scenario for – Program Option

# | Taken Action | Expected Results | Pass/Fail
1 | Path "Browse" button | The browse window opens. The text fields are disabled and show the path the user chose during installation. All GUI parameters are shown correctly. After the browse selection, the text fields show the chosen path. | Pass
2 | Exit button | The application closes. | Pass

3.4.3 Test scenario for – Mask Generator

# | Taken Action | Expected Results | Pass/Fail
1 | "File->Load Background" | The file dialog opens. All GUI parameters are correct. The background image is shown once the user has selected it. | Pass
2 | "File->Load Mask" | The file dialog opens. All GUI parameters are correct. The mask is shown once the user has selected it. | Pass
3 | "File->Save Mask" | The browse dialog opens to save the created Mask as a (.msk) file. | Pass
4 | "File->Exit" | The application closes. | Pass
5 | "Help->About" | The about dialog opens. All GUI parameters are correct. The OK button is enabled. | Pass
6 | Selection of ROI objects | The chosen object is highlighted. The transformation options open for this object. | Pass
7 | Mouse right button | The object transformation dialog opens (if no object was selected beforehand, the None option is checked). | Pass
8 | Object selected + no transformation selected + (Up/Down key pressed, or left mouse button clicked and cursor moved up and down) | Nothing happens. | Pass
9 | Object selected + transformation selected + (Up/Down key pressed, or left mouse button clicked and cursor moved up and down) | The chosen object is transformed correctly. | Pass
10 | Orientation check box checked (the default) | The PHANTOM outline circle is shown. | Pass
11 | Orientation check box unchecked | The PHANTOM outline circle is not shown. | Pass
12 | ROIs check box checked (the default) | The ROI circles are shown. | Pass
13 | ROIs check box unchecked | The ROI circles are not shown. | Pass
14 | Exit button | The application closes. | Pass

3.4.4 Test scenario for – DICOM images selection

# | Taken Action | Expected Results | Pass/Fail
1 | Combo box | The series found are dropped down. Images are shown and the slider is enabled when a series is selected. | Pass
2 | Slider moving | The DICOM images of the series are shown. | Pass
3 | Exit/Cancel button | The window closes, an error message pops up and the loading stops. | Pass
3.4.5 Test scenario for – Manual correction

# | Taken Action | Expected Results | Pass/Fail
1 | Slider moving | The DICOM slice images of the chosen series are shown. The image number is shown in the "Slice" text field. | Pass
2 | Green direction buttons | The Mask moves on the chosen image according to each button's direction (Up/Down/Left/Right). | Pass
3 | Purple rotation buttons | The Mask rotates on the chosen image according to each button's direction (left = counter-clockwise, right = clockwise). | Pass
4 | Scale selection | The Mask is scaled on the chosen image. Values above 0 increase the Mask size; values below 0 decrease it. | Pass
5 | OK button | The current mask position and the chosen image are saved as the best slice. The window closes. | Pass
6 | Exit/Cancel button | The window closes. | Pass
4. Result and conclusion
During the work on the project we dealt with a number of problems. In this chapter we describe them and present our solutions.

4.1 QA Testing Process
1. Generate/choose the PHANTOM Mask.
2. Load the series of 2D/3D image slices from the source directories.
3. Find the best slice in each image slice series (2D/3D). As mentioned above, the best slice is the slice that contains "clear" (best-fitted) information.
4. Fit the masks.
5. Retrieve the test values from the ROIs according to the chosen Masks.
6. Generate the report.

4.2 Problems and solutions
4.2.1 Working with a set of DICOM images:
Problem: In order to complete the QA test, the user must select the paths to the 2D and 3D PET DICOM images, but a source directory can contain more than one series of PHANTOM slices.
Solution: The program loads all image series and asks the user to choose one. The user can browse the slices to judge the quality of each set and choose the best one.
Note: If the folder contains only one set of 2D PET DICOM images and one set of 3D PET DICOM images, the system detects this automatically and does not show the popup window.

4.2.2 Creation of the PET/CT mask:
Problem: Our test program uses a PHANTOM MASK for choosing the best slices and for calculating the ROI SUV values. At the start of our work we had no MASK, so there was nothing to apply.
Solution: In the RAMBAM medical center the tester works with only one kind of PHANTOM, but this may change in the future. For this MASK and all future MASKs we created a MASK generation tool.

4.2.3 Finding the best slices:
Problem: According to Part A of our project we intended to use the "An Automatic Technique for Finding and Localizing Externally Attached Markers in CT and MR Volume Images of the Head" algorithm [3] to obtain the best slice, but unfortunately the algorithm did not work, so another approach was needed.
Solution: At first we wanted to use the Hough algorithm to find all visible circles on the image, but different PET slice series have varying intensities and are noisy, which makes it very difficult to tell a real structure from noise. We would have had to supply new parameters to the Hough algorithm for every new series, and since there was no regularity in those parameters, this method was unusable. Another idea was to find the slice containing the highest-intensity voxel, but every slice contains a maximum value that may be real or caused by noise. During our experiments we noticed that changing the image contrast affects the visibility of image parts, so by setting a specific contrast we can mark the hot spots only. By counting the hot spots in each slice we can grade the slice quality and nominate best-slice candidates. For each candidate and its neighbors we determine how many visible circles the slice has (using the Hough algorithm) and the difference from the neighboring slices' circle counts. We then choose as the best slice the one with the maximum number of visible circles (at most 4 in our case) and the lowest difference from its neighbors.
Note: The contrast of a DICOM image is defined by two parameters, window width and window center, which define a window of gray levels. There are two PET/CT camera machines in the RAMBAM hospital: the GE Discovery 690 (new model) and the GE Discovery LS (old model). For marking the hot spots we used the following settings (a small sketch follows this note):
D690 (new model): window width = 1, window center = 4000 + 9085 βˆ’ the window width provided in the DICOM file (tag (0028, 1051)).
LS (old model): window width = 1, window center = 40 Γ— (400 βˆ’ (energy window upper limit (tag (0054, 0016)) βˆ’ energy window lower limit (tag (0054, 0014)))).
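A minimal C# sketch of the hot-spot marking, taking the window parameters exactly as the settings above specify them; the DICOM attribute values are passed in as plain numbers rather than read through any particular DICOM library, and the pixel layout is a hypothetical ushort[,]:

```csharp
using System;

static class BestSlice
{
    // Window centers per camera model, as the report specifies them.
    public static double WindowCenterD690(double windowWidthFromFile)       // tag (0028, 1051)
        => 4000 + 9085 - windowWidthFromFile;

    public static double WindowCenterLS(double upperLimit, double lowerLimit) // tags (0054, 0016) / (0054, 0014)
        => 40 * (400 - (upperLimit - lowerLimit));

    // With width = 1 the gray-level window degenerates to a hard threshold:
    // only pixels at or above the center survive, i.e. the hot spots.
    // Slices with more hot pixels become best-slice candidates.
    public static int CountHotPixels(ushort[,] slice, double center)
    {
        int hot = 0;
        foreach (ushort v in slice)
            if (v >= center) hot++;
        return hot;
    }
}
```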
4.2.4 Fitting the MASK to the best slice:
Problem: After finding the best slice we need to fit the MASK. Initially we took a CT image and applied image processing ("opening", "closing" and filtering), then used the Hough algorithm to find the "bone cylinder" and the center of the PHANTOM slice, and from those calculated the scale/translation/rotation factors. This solution was not good: all these operations change the source image and cause discrepancies. We then realized it is impossible to transfer a CT-fitted MASK to the PET image because the two images have different sizes, so we decided to fit the MASK directly on the PET image.
Solution: First we find the highest, lowest, leftmost and rightmost points of the PHANTOM, which give a bounding square; from the square we obtain the center of the PHANTOM and a radius that serves as the scale factor for the MASK. Then we convert the image to a binary image by applying a threshold and find some of the hot-spot circles (center and radius). From the center of the PHANTOM and the centers of these circles we find the rotation factor for the MASK.
Note: To find the rotation angle we determine the angle difference between each hot-spot center and the Y axis (as shown in figure 10). Call the angle between the center of a hot spot on the PET image and the Y axis Ξ±, and the angle between the center of the corresponding hot spot on the MASK and the Y axis Ξ². The difference between the angles is Ξ” = Ξ± βˆ’ Ξ², and the rotation angle is the average of all hot spots' Ξ”'s (a sketch follows the figure caption below). Afterwards we fit the MASK by applying all the transformations with the factors found.

Figure 10: Rotation angle
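A sketch of the rotation-factor computation described in the note, assuming the PET hot spots and the MASK hot spots have already been matched up by index; angle wrap-around is ignored in this sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class MaskRotation
{
    // Angle of a hot-spot center about the phantom center, measured
    // against the Y axis as in figure 10.
    static double AngleToYAxis((double X, double Y) center, (double X, double Y) spot)
        => Math.Atan2(spot.X - center.X, spot.Y - center.Y);

    // For each hot spot, delta = alpha - beta (alpha on the PET image,
    // beta on the MASK); the rotation angle is the average of the deltas.
    public static double RotationAngle(
        (double X, double Y) petCenter, IList<(double X, double Y)> petSpots,
        (double X, double Y) maskCenter, IList<(double X, double Y)> maskSpots)
    {
        return petSpots.Zip(maskSpots,
                (p, m) => AngleToYAxis(petCenter, p) - AngleToYAxis(maskCenter, m))
            .Average();
    }
}
```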
4.2.5 Retrieving SUV (Standardized Uptake Values) from the DICOM image:
Problem: The values stored in a DICOM image are in Bq/ml, but our QA test needs them in units of SUV, so we must convert Bq/ml to SUV.
Solution: If the original image units are Bq/ml and all necessary data are present, PET images can be displayed in units of SUV. If the PET image units field (DICOM tag (0054, 1001)) is set to BQML, then the PET images may be displayed either in SUVs or as uptake in Bq/ml; the application must perform the conversion from activity concentration to SUV itself. GE applications (we work only with GE cameras) provide the following SUV types:
1. SUV Body Weight (SUVbw) – the value we need for our test.
2. SUV Body Surface Area (SUVbsa).
3. SUV Lean Body Mass (SUVlbm).
Calculations:

$$\mathrm{SUV_{bw}} = \mathrm{PET\ image\ pixel} \cdot \frac{\mathrm{weight\ in\ grams}}{\mathrm{injected\ dose}}$$

The PET image pixels and the injected dose are decay-corrected to the start of the scan. PET image pixels are in units of activity/volume; images converted to SUVbw are displayed with units of g/ml. Images with initial units of uptake (Bq/ml) may be converted to SUVs and back to uptake or to another SUV type. However, if the images are loaded in units other than uptake, no conversion is allowed, even if the units are the same as SUV units, because there is no way to know exactly how the SUVs were calculated. SUV computation requires the following DICOM attributes to be filled in:
weight = patient weight = Patient Weight (0010, 1010)
tracer activity = Total Dose (0018, 1074)
measured time = Radiopharmaceutical Start Time (0018, 1072)
administered time = Radiopharmaceutical Start Time (0018, 1072)
half life = Radionuclide Half Life (0018, 1075)
scan time = Series Date (0008, 0021) + Series Time (0008, 0031)
Note: The Series Date/Time can be overwritten if the original PET images are post-processed and a new series is generated. The software needs to check that the Acquisition Date/Time ((0008, 0023) and (0008, 0033)) is equal to or later than the Series Date/Time; if it is not, the Series Date/Time has been overwritten, and for GE PET images the software should use a GE private attribute (0009x, 100d) for the scan start DATETIME. SUVs are then calculated as below. The formulas we use for the SUV factors are:

$$\mathrm{SUV_{bw}} = \mathrm{pixel} \cdot \frac{\mathrm{weight}}{\mathrm{actual\ activity}}$$

$$\mathrm{actual\ activity} = \mathrm{tracer\ activity} \cdot 2^{-\frac{\mathrm{scan\ time} - \mathrm{measured\ time}}{\mathrm{half\ life}}}$$

Note: In GE PET images, Total Dose (0018, 1074) = the net activity administered to the patient at Series Time (0008, 0031).
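The two formulas translate directly into code. A small C# sketch follows, with all DICOM attribute values assumed to be already extracted by the caller:

```csharp
using System;

static class Suv
{
    // Decay-correct the injected dose from measurement time to scan start:
    // actual activity = tracer activity * 2^(-(scan - measured) / half life).
    public static double ActualActivity(double tracerActivityBq,
                                        DateTime measuredTime, DateTime scanTime,
                                        double halfLifeSeconds)
    {
        double elapsed = (scanTime - measuredTime).TotalSeconds;
        return tracerActivityBq * Math.Pow(2, -elapsed / halfLifeSeconds);
    }

    // SUVbw = pixel * weight / actual activity; with Bq/ml pixels and
    // weight in grams the result is displayed in g/ml.
    public static double SuvBw(double pixelBqPerMl, double weightGrams,
                               double actualActivityBq)
        => pixelBqPerMl * weightGrams / actualActivityBq;
}
```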
4.3 Running/Simulation
4.3.1 Simulation 1
Date of QA test: 03/12/12
Camera: Discovery D690
FOV2:
FOV1:
Test Result: Test was successfully passed.

4.3.2 Simulation 2
Date of QA test: 03/12/12
Camera: Discovery LS
FOV2:
FOV1:
Test Result: Test was successfully passed.

4.3.3 Simulation 3
Date of QA test: 08/05/13
Camera: Discovery D690
We deliberately rotated the MASK in order to fail the QA test. As the pictures show, the calculated results did not meet the criteria, so the test failed.
FOV2:
FOV1:
Test Result: Test failed.

4.4 Final conclusion
As we saw during our project, the "An Automatic Technique for Finding and Localizing Externally Attached Markers in CT and MR Volume Images of the Head" algorithm [3] is not applicable to our project. The algorithm is probably fine for work with CT images, but our project focuses on PET images, so we found the solution best suited to them. When working with image processing, keep in mind that the results are generally not exact; where precision is needed, additional techniques should be used to double-check the results. In addition, when working with similar images of differing quality, the contrast should be adjusted to improve the image. Our work was based on only two kinds of GE cameras, so the project is oriented toward them; adding others may require further changes to the algorithms and calculations.
References
[1] J. P. Pluim, J. B. Maintz, and M. A. Viergever, "Image registration by maximization of combined mutual information and gradient information," IEEE Trans. Med. Imaging 19, 809–814 (2000).
[2] W. M. Wells III, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis, "Multi-modal volume registration by maximization of mutual information," Med. Image Anal. 1, 35–51 (1996).
[3] M. Y. Wang, C. R. Maurer, Jr., J. M. Fitzpatrick, and R. J. Maciunas, "An automatic technique for finding and localizing externally attached markers in CT and MR volume images of the head," IEEE Trans. Biomed. Eng. 43(6), June 1996.
[4] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Technical Note 36, Artificial Intelligence Center, April 1971.