Received October 17, 2016, accepted November 10, 2016, date of publication December 1, 2016, date of current version January 4, 2017.
Digital Object Identifier 10.1109/ACCESS.2016.2633272
A Novel Fractional-Order Differentiation Model
for Low-Dose CT Image Processing
YANLING WANG1,2, YANLING SHAO3, ZHIGUO GUI1,2, QUAN ZHANG1,2,
LINHONG YAO3, AND YI LIU1,2
1National Key Laboratory for Electronic Measurement Technology, School of Information and Communication Engineering,
North University of China, Taiyuan 030051, China
2Key Laboratory of Instrumentation Science and Dynamic Measurement, School of Information and Communication Engineering,
North University of China, Taiyuan 030051, China
3School of Science, North University of China, Taiyuan 030051, China
Corresponding author: Y. Liu (liuyi1987827@gmail.com)
This work was supported in part by the National Nature Science Foundation of China under Grant 61271357, in part by the National Key
Scientific Instrument and Equipment Development Project under Grant 2014YQ24044508, in part by the Opening Project of the State
Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, under Grant KFJJ13-11M, in part by the Natural
Science Foundation of Shanxi Province under Grant 2015011046, in part by the Shanxi Province Science Foundation for Youths under
Grant 201601D021080, and in part by the Research Project supported by the Shanxi Scholarship Council of China under Grant 2016-085.
ABSTRACT Low-dose CT (LDCT) images tend to be degraded by excessive mottle noise and streak artifacts.
In this paper, we propose a novel fractional-order differentiation model that can be applied to LDCT image
processing as a post-processing technique. The anisotropic diffusion model (proposed by Perona and Malik,
i.e., PM model) has good performance in flat regions, total variation (TV) model works better in edge
preservation, and fractional-order differentiation models can mitigate block effect while preserving fine
details and more structure. The proposed model is based on the weighted combinations of the fractional-
order PM model and the fractional-order TV model, which maintains the advantages of PM model, TV
model, and fractional-order differentiation models. Moreover, the local intensity variance was added to both
weighted coefficient and diffusion coefficient of the proposed model to properly preserve edges and details.
A variety of simulated phantom data, including the Shepp–Logan head phantom, the pelvis phantom, and
the actual thoracic phantom, were used for experimental validation. The results of numerical simulation and
clinical data experiments demonstrate that the proposed approach has a better performance in both noise
suppression and detail preservation, when compared with several other existing methods.
INDEX TERMS Low-dose CT, image processing, fractional-order differentiation model, edge and detail
preservation.
I. INTRODUCTION
Although X-ray computed tomography (CT) has gained wide application in the medical field, concern over
X-ray dose is growing, as high-dose radiation may increase stochastic risks during radiological procedures.
The radiation dose delivered to patients during CT examinations therefore needs to be reduced [1]. Among
all the methods for reducing radiation dose (such as reducing the tube current, tube voltage, and scanning
time), the simplest way is to lower the mA (milliampere)/mAs (milliampere-second) setting. This approach,
however, often leads to degraded reconstructed images with increased mottle noise, non-stationary streak
artifacts, and decreased contrast-to-noise ratio (CNR) [2]. Streak artifacts
occur most frequently in the bony structures at the base of
the skull and petrous bone regions, because the very dense
structures are only partially included in the slice, resulting in
high contrast errors [3]. Many techniques have been proposed
to remove noise and artifacts in LDCT. They are generally
divided into three categories: projection processing meth-
ods, iterative reconstruction algorithms, and post-processing
methods.
The first category treats the projection data as an image
(called sinogram) and reconstructed images can be obtained
from the processed projections [4], [5]. Noise reduction in
sinogram space before filtered back projection (FBP) is an
effective way to obtain high quality reconstructed LDCT
images. Nonlinear filtering [6], penalized-likelihood filter-
ing [7], and fuzzy filtering [3] were respectively proposed to
VOLUME 4, 2016
2169-3536
2016 IEEE. Translations and content mining are permitted for academic research only.
Personal use is also permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
suppress excessive quantum noise and keep edges in the sino-
gram. The second one, i.e., iterative reconstruction algorithm,
achieves noise suppression in the procedure of reconstruction.
Specifically speaking, it looks for an optimal solution by
maximizing or minimizing a prior-regularized cost function
that is constructed according to the noise properties of the
projections [8]–[18]. In the past decade, studies in this area
mainly focus on the design of priors. Many valuable priors
have been proposed, such as the anisotropic prior [8], the
TV based priors [9]–[11], and the nonlocal priors [12], [13].
Although yielding excellent results through incorporating
image prior information into optimization, these algorithms
can’t be broadly used due to the requirement of more detailed
information for reconstruction, such as scanning geometry,
correction physics, and photon statistics [14]. The third one,
i.e., post-processing method, by contrast, is more repro-
ducible and can be performed on different scanning sys-
tems. Since noise and streak artifacts seriously damage the
structures, the key issue of these methods is to keep the
structures well when reducing noise and artifacts, and in
the meantime no new artifacts and blurred details are intro-
duced. A few of outstanding filters considering both noise
suppression and edge preservation have been proposed to
improve the quality of LDCT images, for instance, nonlocal
means filtering [14], [19], dictionary learning based filter-
ing [20], [21], and partial differential equations (PDEs) based
filtering [22], [23].
Over the past two decades, extensive research has been
conducted on PDEs for image denoising. Two well-known
second-order PDEs are the PM model and the TV model.
The PM model proposed by Perona and Malik in 1990 [24]
is based on the following partial differential equation:
$$\frac{\partial u}{\partial t} = \mathrm{div}\left[c(|\nabla u|)\cdot\nabla u\right], \qquad (1)$$
where div is the divergence operator, $|\nabla u|$ is the magnitude of the gradient of the image $u$, and the diffusion coefficient $c(\cdot)$ is a monotonically decreasing smooth function of the local gradient magnitude. A possible diffusion coefficient function is given by
$$c(|\nabla u|) = \frac{1}{1 + |\nabla u|^2/k^2}, \qquad (2)$$
where $k$ is the gradient threshold.
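The diffusion step (1)-(2) can be sketched numerically. The following is a minimal NumPy illustration, not the authors' implementation: the function name `pm_step`, the periodic boundary handling via `np.roll`, and the parameter values are assumptions made for the sketch.

```python
import numpy as np

def pm_step(u, k=30.0, dt=0.1):
    """One explicit Perona-Malik diffusion step, eqs. (1)-(2)."""
    # Forward differences approximate the image gradient.
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    # Diffusion coefficient (2): small across strong edges, ~1 in flat regions.
    c = 1.0 / (1.0 + (ux ** 2 + uy ** 2) / k ** 2)
    # Divergence of c * grad(u) via backward differences.
    div = (c * ux - np.roll(c * ux, 1, axis=1)
           + c * uy - np.roll(c * uy, 1, axis=0))
    return u + dt * div
```

Iterating `pm_step` smooths homogeneous regions while the coefficient (2) throttles diffusion across strong gradients.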
The TV model proposed by Rudin, Osher, and Fatemi
in 1992 [25] has the following energy function:
$$E(u) = \int_\Omega\left(|\nabla u| + \frac{\lambda}{2}|u-u_0|^2\right)dx\,dy, \qquad (3)$$
where the first term is a regularization term that denotes the total variation of the denoised image $u$, the second term is a fidelity term, $u_0$ is the noisy image, and $\lambda$ is the regularization parameter. Using the gradient descent method, the TV denoising model is obtained as follows:
$$\frac{\partial u}{\partial t} = \nabla\cdot\left(\frac{\nabla u}{|\nabla u|}\right) - \lambda(u-u_0). \qquad (4)$$
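A gradient-descent step of (4) can be sketched in the same style. This is a hedged NumPy illustration: the name `tv_step`, the small constant `eps` regularizing the division by $|\nabla u|$, the periodic boundaries, and the parameter values are all assumptions, not part of the original model.

```python
import numpy as np

def tv_step(u, u0, lam=0.05, dt=0.1, eps=1e-8):
    """One gradient-descent step of the TV model, eq. (4)."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2) + eps    # |grad u|, regularized
    px, py = ux / mag, uy / mag               # normalized gradient field
    # Divergence of the normalized gradient (curvature term of (4)).
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return u + dt * (div - lam * (u - u0))    # descent step with fidelity term
```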
The PM model has good performance in flat regions with
uniform intensity distribution, and the TV model works better
in preserving edges. Zhang et al. [26] proposed a novel model
(i.e., PMTV model) by weighted combinations of PM model
and TV model. Yahya et al. [27] proposed a new denoising
technique by blending isotropic diffusion, the PM model,
and TV model. Although the above second-order PDEs can
reduce noise level while preserving the image features, they
tend to make the processed image look ‘‘blocky’’, because
the images used by second-order PDEs to approximate an
observed image are often piecewise constant. In order to
reduce the blocky effect, a class of fourth-order PDEs was
introduced by You and Kaveh in 2000 [28], but these methods
often lead to a speckle effect.
To overcome those aforementioned limitations, fractional-
order PDEs have recently been researched and applied to the
field of image processing and computer vision. For example,
Bai and Feng [29] proposed a class of FPM models for image
denoising, in which the energy function is defined as
$$E(u) = \int_\Omega f(|D^\alpha u|)\,d\Omega, \qquad (5)$$
where $\Omega$ is the image support region, $D^\alpha u = (D_x^\alpha u,\, D_y^\alpha u)$, $|D^\alpha u| = \sqrt{(D_x^\alpha u)^2 + (D_y^\alpha u)^2}$, and $f(|D^\alpha u|)\ge 0$ is an increasing function associated with the diffusion coefficient through
$$c(t) = \frac{f'(\sqrt{t})}{\sqrt{t}}. \qquad (6)$$
When $\alpha = 1$, equation (5) is precisely the PM model in [24];
when $\alpha = 2$, equation (5) is precisely the fourth-order
PDE in [28]; when $1 < \alpha < 2$, the FPM model (5)
can be considered an interpolation between the second-order
and the fourth-order anisotropic diffusion equations.
Zhang and Wei [30] developed a FTV model as follows:
$$E(u) = \int_\Omega\left(|D^\alpha u| + \frac{\lambda}{2}|u-u_0|^2\right)dx\,dy. \qquad (7)$$
The FTV model can be seen as a generalization of the TV
model, with the variation order changed from the integer one
in the TV model to a fraction $\alpha$ in the FTV model. Zhang et al. [31]
first applied fractional calculus to medical image processing.
They proposed two fractional-order equations for CT metal
artifacts reduction [31], [32], and gave two novel fractional-
order models for CT image reconstruction [33], [34]. Hu [35]
proposed a fractional-order diffusion scheme for sinogram
restoration of LDCT.
Although the above fractional-order differentiation methods
can, to a certain extent, reach a good trade-off between
noise removal and edge preservation, some noise and streak
artifacts still remain. As mentioned above, the
PM model performs well in flat regions, the TV model
works better in edge preservation, and fractional-order
differentiation models can mitigate the block effect while
preserving fine details and more structure. In order to keep the
advantages of these models, we integrated the FPM model and
FTV model to obtain our new model (namely, the FPMTV
model). Additionally, the local intensity variance can usually
be used to distinguish desirable image features from
artifacts. In our study, the local intensity variance was added
to the weighted coefficient of the proposed model to properly
adjust the weights of the FPM and FTV models. On the
other hand, both intensity variance and gradient were treated
as two local pixel characteristics in the diffusion coefficient
of the proposed model to further preserve edges and
details. The proposed model is appropriate for applications in
LDCT image processing because it can effectively preserve
edges and fine details while removing mottle noise and streak
artifacts.
The remainder of this paper is organized as follows.
Section 2 first briefly overviews the definitions of
fractional-order calculus. The proposed fractional-order dif-
ferentiation model and its numerical computation are then
discussed. Section 3 presents experimental results from mul-
tiple samples including the Shepp-Logan head phantom, the
pelvis phantom, and the actual thoracic phantom. Finally,
Section 4 gives a brief conclusion of this study.
II. MATERIALS AND METHODS
A. FRACTIONAL-ORDER CALCULUS
Fractional-order calculus has been studied for centuries.
Though defined in a number of ways, most definitions fall
into two categories: time-domain definitions and frequency-domain
definitions. A discrete Fourier transform was used to
calculate the fractional differential values in [29], but the
frequency-domain definition often incurs a high computational
cost. The Grünwald-Letnikov (G-L) and the Riemann-Liouville
(R-L) definitions are the most famous and universal time-domain
definitions. The G-L definition expresses the fractional derivative
as a weighted sum of function values around the evaluation point,
which makes it suitable for applications in image processing. According to [36],
the $\alpha$-order differential of a signal $f(x)$ was defined in the G-L sense as
$$ {}_a^{G}D_t^\alpha f(x) \triangleq \lim_{h\to 0}\frac{1}{h^\alpha}\sum_{m=0}^{(t-a)/h}(-1)^m\binom{\alpha}{m}f(x-mh), \qquad (8)$$
where $\alpha\in\mathbb{R}$, the duration of $f(x)$ is $[a,t]$ ($a<t$, $a\in\mathbb{R}$, $t\in\mathbb{R}$), $G$ denotes the G-L definition, ${}_a^{G}D_t^\alpha$ denotes the G-L-based fractional-order differential operator, $h=(t-a)/n$ is the step size, and $\binom{\alpha}{m}$ is the binomial coefficient defined as
$$\binom{\alpha}{m} = \frac{\Gamma(\alpha+1)}{\Gamma(m+1)\,\Gamma(\alpha-m+1)}, \qquad (9)$$
where $\Gamma(\cdot)$ is the gamma function, with $\Gamma(m) = (m-1)!$ for a positive integer $m$.
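Because $(-1)^m\binom{\alpha}{m}$ satisfies the recurrence $c_m = c_{m-1}(m-1-\alpha)/m$, the G-L weights in (8)-(9) can be generated without evaluating gamma functions. The sketch below applies them to a 1-D signal; the function names and the unit step $h$ are illustrative assumptions.

```python
import numpy as np

def gl_coeffs(alpha, n):
    """First n Grunwald-Letnikov weights (-1)^m * C(alpha, m) from (8)-(9),
    generated by the recurrence c_m = c_{m-1} * (m - 1 - alpha) / m."""
    c = np.empty(n)
    c[0] = 1.0
    for m in range(1, n):
        c[m] = c[m - 1] * (m - 1 - alpha) / m
    return c

def gl_derivative(f, alpha, h=1.0):
    """Discrete alpha-order G-L derivative of a 1-D signal f, step size h."""
    w = gl_coeffs(alpha, len(f)) / h ** alpha
    # Each output sample is a weighted sum over the signal's past values.
    return np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(len(f))])
```

For $\alpha=1$ the weights reduce to $[1,-1,0,\ldots]$ (a backward difference) and for $\alpha=2$ to $[1,-2,1,0,\ldots]$, recovering the integer-order operators mentioned above.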
B. THE PROPOSED FRACTIONAL-ORDER
DIFFERENTIATION MODEL
In this section, we present a modified fractional-order dif-
ferentiation model based on the weighted combinations of
FPM model and FTV model. Moreover, the local intensity
variance in [37] is introduced in both weighted coefficient
and diffusion coefficient of the proposed model to properly
preserve edges and details. The local intensity variance can
distinguish desirable image features from artifacts because
desirable image features in the neighborhood of the image
usually have larger intensity variance than the artifacts. The
energy function of the proposed model is given by
$$E(u) = \int_\Omega\left[\phi\,|D^\alpha u| + \gamma f(|D^\alpha u|) + \frac{\lambda}{2}|u-u_0|^2\right]dx\,dy, \qquad (10)$$
where
$$\phi = \eta(2-\eta), \qquad \gamma = (\eta-1)^2, \qquad (11)$$
and
$$\eta = e^{-1/(\sigma_{t,N}^2/L)}. \qquad (12)$$
Here $L$ is a positive constant. $\sigma_{t,N}$ keeps changing as the iterations proceed, and $\sigma_{t,N}$ in a fine-detail area or edge is usually larger than in the noisy background or flat regions. Since the larger $\sigma_{t,N}$ is, the larger $\eta$ is, $\eta$ can be used to determine whether a region is a flat area or an edge. The local intensity variance $\sigma_{t,N}$ in each region of the image is computed using (13) and (14) as a measure of the amount of image detail.
$$\sigma_t^2(x,y) = \frac{1}{9}\sum_{i=-1}^{1}\sum_{j=-1}^{1}\left[u_t(x+i,\,y+j) - \bar u_t(x,y)\right]^2, \qquad (13)$$
$$\sigma_{t,N}^2(x,y) = 1 + \frac{\sigma_t^2(x,y) - \mathrm{Min}\,\sigma_t^2}{\mathrm{Max}\,\sigma_t^2 - \mathrm{Min}\,\sigma_t^2}\cdot 254, \qquad (14)$$
where $\mathrm{Max}\,\sigma_t^2$ and $\mathrm{Min}\,\sigma_t^2$ denote the maximal and minimal intensity variances of the image at iteration $t$, and $\bar u_t(x,y)$ is the mean of the gray levels in a $3\times 3$ neighborhood window.
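Equations (13)-(14) amount to a 3x3 variance filter followed by a rescaling to [1, 255]. A NumPy sketch follows; the function name and the periodic boundary handling via `np.roll` are assumptions, and a non-constant input image is assumed so that the normalization in (14) is well defined.

```python
import numpy as np

def local_variance(u):
    """3x3 local variance (eq. (13)) and its normalization to [1, 255] (eq. (14))."""
    u = u.astype(float)
    # Gather the 9 shifted copies of the image covering the 3x3 neighborhood.
    nb = [np.roll(np.roll(u, i, axis=0), j, axis=1)
          for i in (-1, 0, 1) for j in (-1, 0, 1)]
    mean = sum(nb) / 9.0                            # neighborhood mean
    var = sum((v - mean) ** 2 for v in nb) / 9.0    # eq. (13)
    var_n = 1.0 + (var - var.min()) / (var.max() - var.min()) * 254.0  # eq. (14)
    return var, var_n
```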
The local intensity variance was added to weighted coeffi-
cient of the proposed model to properly adjust the weights of
FPM model and FTV model. In the flat region where σt,N is
small, η is also small according to (12), and when η is close
to zero, the proposed model will highlight the importance of
FPM model. In the fine detail area or edge where σt,N is large,
η is also large according to (12), and when η is close to one,
the proposed model will emphasize the role of FTV model.
It is very difficult to calculate fractional-order differential
equations directly from the G-L definition. In this paper, we
establish a fractional-order differential model by combining
the G-L definition with a convolution integral. According
to [38] and (8), we can obtain the discrete fractional-order
differential equation
$$D^\alpha f = \sum_{k\ge 0}(-1)^k\binom{\alpha}{k}f(x-k). \qquad (15)$$
Let the convolution kernel function be
$$v^\alpha(z) = \begin{cases}(-1)^z\binom{\alpha}{z}, & z\ge 0,\\[2pt] 0, & z<0.\end{cases} \qquad (16)$$
We have
$$v^\alpha * f = \int_{-\infty}^{+\infty} v^\alpha(z)\,f(x-z)\,dz. \qquad (17)$$
Equation (15) can be viewed as an approximate discretization
of convolution integral (17), namely, equation (17) may be
treated as an approximate representation of the G-L defi-
nition (8). Equation (17) is easier to calculate and analyze
than (8). Replacing the fractional gradient Dαu of (10) with
convolution integral vα ∗ u, we obtain the fractional-order
differential model based on convolution integral as follows
$$E(u) = \int_\Omega\left[\phi\,|v^\alpha * u| + \gamma f(|v^\alpha * u|) + \frac{\lambda}{2}|u-u_0|^2\right]dx\,dy, \qquad (18)$$
where
$$|v^\alpha * u| = \sqrt{(v^\alpha * u)_x^2 + (v^\alpha * u)_y^2},$$
$$(v^\alpha * u)_x = \int_{-\infty}^{+\infty} v^\alpha(z)\,u(x-z,\,y)\,dz, \qquad (v^\alpha * u)_y = \int_{-\infty}^{+\infty} v^\alpha(z)\,u(x,\,y-z)\,dz. \qquad (19)$$
To solve problem (18), we take any test function $\varphi\in C^\infty(\Omega)$ and define
$$g(\varepsilon) := E(u+\varepsilon\varphi) = \int_\Omega\left[\phi\,|v^\alpha * (u+\varepsilon\varphi)| + \gamma f\big(|v^\alpha * (u+\varepsilon\varphi)|\big) + \frac{\lambda}{2}|u+\varepsilon\varphi-u_0|^2\right]dx\,dy. \qquad (20)$$
Setting $g'(0) = 0$, we have
$$\phi\int_\Omega\frac{(v^\alpha*u)_x(v^\alpha*\varphi)_x + (v^\alpha*u)_y(v^\alpha*\varphi)_y}{|v^\alpha*u|}\,dx\,dy + \gamma\int_\Omega f'\big(|v^\alpha*u|\big)\cdot\frac{(v^\alpha*u)_x(v^\alpha*\varphi)_x + (v^\alpha*u)_y(v^\alpha*\varphi)_y}{|v^\alpha*u|}\,dx\,dy + \lambda\int_\Omega(u-u_0)\cdot\varphi\,dx\,dy = 0. \qquad (21)$$
According to (6), we get
$$\phi\int_\Omega\frac{(v^\alpha*u)_x(v^\alpha*\varphi)_x + (v^\alpha*u)_y(v^\alpha*\varphi)_y}{|v^\alpha*u|}\,dx\,dy + \gamma\int_\Omega c\big(|v^\alpha*u|^2\big)\cdot\big[(v^\alpha*u)_x(v^\alpha*\varphi)_x + (v^\alpha*u)_y(v^\alpha*\varphi)_y\big]\,dx\,dy + \lambda\int_\Omega(u-u_0)\cdot\varphi\,dx\,dy = 0. \qquad (22)$$
We can enlarge the support region of the image $u_0(x,y)$ from $\Omega$ to $\mathbb{R}^2$ as follows:
$$u_0(x,y) = \begin{cases}u_0(x,y), & (x,y)\in\Omega,\\[2pt] 0, & (x,y)\notin\Omega.\end{cases} \qquad (23)$$
Using the Parseval equation
$$\int_{\mathbb{R}^2} f\cdot g\,dx\,dy = \int_{\mathbb{R}^2}\hat f\cdot\overline{\hat g}\,d\omega_1\,d\omega_2,$$
we can obtain
$$\phi\int_{\mathbb{R}^2}\left(\overline{\widehat{\frac{(v^\alpha*u)_x}{|v^\alpha*u|}}}\cdot\widehat{(v^\alpha*\varphi)_x} + \overline{\widehat{\frac{(v^\alpha*u)_y}{|v^\alpha*u|}}}\cdot\widehat{(v^\alpha*\varphi)_y}\right)d\omega_1\,d\omega_2 + \gamma\int_{\mathbb{R}^2} c\big(|v^\alpha*u|^2\big)\cdot\left[\overline{\widehat{(v^\alpha*u)_x}}\cdot\widehat{(v^\alpha*\varphi)_x} + \overline{\widehat{(v^\alpha*u)_y}}\cdot\widehat{(v^\alpha*\varphi)_y}\right]d\omega_1\,d\omega_2 + \lambda\int_{\mathbb{R}^2}\big(\overline{\hat u}-\overline{\hat u_0}\big)\cdot\hat\varphi\,d\omega_1\,d\omega_2 = 0. \qquad (24)$$
In the frequency domain, the convolution integral (17) satisfies
$$\widehat{v^\alpha * u_0} = \hat v^\alpha\cdot\hat u_0,$$
where $\hat u_0$ and $\hat v^\alpha$ denote the Fourier transforms of $u_0$ and $v^\alpha$, respectively. Using the frequency-domain properties of the convolution integral, we get
$$\phi\int_{\mathbb{R}^2}\left[\overline{\widehat{\frac{(v^\alpha*u)_x}{|v^\alpha*u|}}}\cdot\hat v^\alpha(\omega_1)\cdot\hat\varphi + \overline{\widehat{\frac{(v^\alpha*u)_y}{|v^\alpha*u|}}}\cdot\hat v^\alpha(\omega_2)\cdot\hat\varphi\right]d\omega_1\,d\omega_2 + \gamma\int_{\mathbb{R}^2} c\big(|v^\alpha*u|^2\big)\cdot\left[\overline{\widehat{(v^\alpha*u)_x}}\cdot\hat v^\alpha(\omega_1)\cdot\hat\varphi + \overline{\widehat{(v^\alpha*u)_y}}\cdot\hat v^\alpha(\omega_2)\cdot\hat\varphi\right]d\omega_1\,d\omega_2 + \lambda\int_{\mathbb{R}^2}\big(\overline{\hat u}-\overline{\hat u_0}\big)\cdot\hat\varphi\,d\omega_1\,d\omega_2 = 0. \qquad (25)$$
Taking the conjugate of both sides of (25), we can obtain the Euler-Lagrange equation of (10) as follows:
$$\phi\left[\widehat{\frac{(v^\alpha*u)_x}{|v^\alpha*u|}}\cdot\overline{\hat v^\alpha(\omega_1)} + \widehat{\frac{(v^\alpha*u)_y}{|v^\alpha*u|}}\cdot\overline{\hat v^\alpha(\omega_2)}\right] + \gamma\,c\big(|v^\alpha*u|^2\big)\cdot\left[\widehat{(v^\alpha*u)_x}\cdot\overline{\hat v^\alpha(\omega_1)} + \widehat{(v^\alpha*u)_y}\cdot\overline{\hat v^\alpha(\omega_2)}\right] + \lambda(\hat u - \hat u_0) = 0. \qquad (26)$$
Since $\overline{\hat v^\alpha(\omega)} = \hat v^\alpha(-\omega)$, we have
$$\phi\left[\widehat{\frac{(v^\alpha*u)_x}{|v^\alpha*u|}}\cdot\hat v^\alpha(-\omega_1) + \widehat{\frac{(v^\alpha*u)_y}{|v^\alpha*u|}}\cdot\hat v^\alpha(-\omega_2)\right] + \gamma\,c\big(|v^\alpha*u|^2\big)\cdot\left[\widehat{(v^\alpha*u)_x}\cdot\hat v^\alpha(-\omega_1) + \widehat{(v^\alpha*u)_y}\cdot\hat v^\alpha(-\omega_2)\right] + \lambda(\hat u - \hat u_0) = 0. \qquad (27)$$
Taking the inverse Fourier transform of (27) by using the frequency-domain properties of the convolution integral, we get
$$\phi\left[\left(v^\alpha(-z) * \frac{(v^\alpha*u)_x}{|v^\alpha*u|}\right)_x + \left(v^\alpha(-z) * \frac{(v^\alpha*u)_y}{|v^\alpha*u|}\right)_y\right] + \gamma\cdot c\big(|v^\alpha*u|^2\big)\cdot\Big[\big(v^\alpha(-z) * (v^\alpha*u)_x\big)_x + \big(v^\alpha(-z) * (v^\alpha*u)_y\big)_y\Big] + \lambda(u-u_0) = 0. \qquad (28)$$
According to (17), we obtain
$$\phi\cdot\left[\int_{-\infty}^{+\infty} v^\alpha(z)\cdot\frac{(v^\alpha*u)_x}{|v^\alpha*u|}(x+z,\,y)\,dz + \int_{-\infty}^{+\infty} v^\alpha(z)\cdot\frac{(v^\alpha*u)_y}{|v^\alpha*u|}(x,\,y+z)\,dz\right] + \gamma\cdot c\big(|v^\alpha*u|^2\big)\cdot\left[\int_{-\infty}^{+\infty} v^\alpha(z)\cdot(v^\alpha*u)_x(x+z,\,y)\,dz + \int_{-\infty}^{+\infty} v^\alpha(z)\cdot(v^\alpha*u)_y(x,\,y+z)\,dz\right] + \lambda(u-u_0) = 0. \qquad (29)$$
Since $v^\alpha(z) = 0$ for $z<0$, we have
$$\phi\cdot\left[\int_{0}^{+\infty} v^\alpha(z)\cdot\frac{(v^\alpha*u)_x}{|v^\alpha*u|}(x+z,\,y)\,dz + \int_{0}^{+\infty} v^\alpha(z)\cdot\frac{(v^\alpha*u)_y}{|v^\alpha*u|}(x,\,y+z)\,dz\right] + \gamma\,c\big(|v^\alpha*u|^2\big)\cdot\left[\int_{0}^{+\infty} v^\alpha(z)\cdot(v^\alpha*u)_x(x+z,\,y)\,dz + \int_{0}^{+\infty} v^\alpha(z)\cdot(v^\alpha*u)_y(x,\,y+z)\,dz\right] + \lambda(u-u_0) = 0, \qquad (30)$$
where
$$c\big(|v^\alpha*u|^2\big) = \frac{1}{1 + |v^\alpha*u|^2/\kappa^2}, \qquad (31)$$
and $\kappa$ is a constant that acts as an edge-strength threshold in the diffusion coefficient function.
In order to further preserve edges and details, both intensity variance and gradient are treated as two local pixel characteristics in the diffusion coefficient $c(\cdot)$ of the FPMTV model. According to [37], the diffusion coefficient $c(\cdot)$ is revised as
$$c\big(|v^\alpha*u|^2,\,\sigma_{t,N}^2\big) = \frac{1}{1 + |v^\alpha*u|^2\cdot\sigma_{t,N}^4/k_1^2}, \qquad (32)$$
where $k_1 = k_0 e^{-\Delta t(n-1)}$. Here $k_0$ is a positive constant, $\Delta t$ is the step size, and $n$ is the number of iterations. The ratio $k_1/\sigma_{t,N}^2$ can be seen as an adaptive version of $\kappa$ in (31). Let
$$U^\alpha(x,y) := \phi\cdot\left[\int_{0}^{+\infty} v^\alpha(z)\cdot\frac{(v^\alpha*u)_x}{|v^\alpha*u|}(x+z,\,y)\,dz + \int_{0}^{+\infty} v^\alpha(z)\cdot\frac{(v^\alpha*u)_y}{|v^\alpha*u|}(x,\,y+z)\,dz\right] + \gamma\,c\big(|v^\alpha*u|^2,\,\sigma_{t,N}^2\big)\cdot\left[\int_{0}^{+\infty} v^\alpha(z)\cdot(v^\alpha*u)_x(x+z,\,y)\,dz + \int_{0}^{+\infty} v^\alpha(z)\cdot(v^\alpha*u)_y(x,\,y+z)\,dz\right]. \qquad (33)$$
With the revised diffusion coefficient (32), the Euler-Lagrange equation (30) can be written as
$$U^\alpha(x,y) + \lambda(u-u_0) = 0. \qquad (34)$$
The Euler-Lagrange equation (34) can be solved through the following gradient descent procedure:
$$\frac{\partial u}{\partial t} = -U^\alpha(x,y) - \lambda(u-u_0). \qquad (35)$$
C. NUMERICAL COMPUTATION
To proceed with the numerical computation for solving (35), we assume that both the discrete noisy image $u_0(i,j)$ and the denoised image $u(i,j)$ are $M \times N$ pixels, where $i = 0,1,\ldots,M-1$ and $j = 0,1,\ldots,N-1$. Let $N_1 = \min\{M,N\}$; the numerical computation of the proposed FPMTV algorithm is implemented in (36)-(39).
According to (15), we can discretize $(v^\alpha*u)_x$ and $(v^\alpha*u)_y$ as follows:
$$(v^\alpha*u)_x(i,j) = \sum_{k=0}^{N_1-1}(-1)^k\binom{\alpha}{k}u(i-k,\,j), \qquad (v^\alpha*u)_y(i,j) = \sum_{k=0}^{N_1-1}(-1)^k\binom{\alpha}{k}u(i,\,j-k), \qquad (36)$$
where $u(i,j) = 0$ for $i > M-1$, $j > N-1$, or $i,j < 0$. Let
$$\mathrm{TVD}_x(i,j) = \frac{(v^\alpha*u)_x(i,j)}{\sqrt{G(i,j)} + \varepsilon}, \qquad \mathrm{TVD}_y(i,j) = \frac{(v^\alpha*u)_y(i,j)}{\sqrt{G(i,j)} + \varepsilon}, \qquad (37)$$
$$\mathrm{PMD}_x(i,j) = c\big(|v^\alpha*u|^2,\,\sigma_{t,N}^2\big)\cdot(v^\alpha*u)_x(i,j) = \frac{k_1^2\cdot(v^\alpha*u)_x(i,j)}{k_1^2 + G(i,j)\cdot\sigma_{t,N}^4 + \varepsilon},$$
$$\mathrm{PMD}_y(i,j) = c\big(|v^\alpha*u|^2,\,\sigma_{t,N}^2\big)\cdot(v^\alpha*u)_y(i,j) = \frac{k_1^2\cdot(v^\alpha*u)_y(i,j)}{k_1^2 + G(i,j)\cdot\sigma_{t,N}^4 + \varepsilon}, \qquad (38)$$
where
$$G(i,j) = \big[(v^\alpha*u)_x(i,j)\big]^2 + \big[(v^\alpha*u)_y(i,j)\big]^2,$$
and $\varepsilon$ is a very small positive number. The discretization of $U^\alpha(x,y)$ is therefore
$$U_{i,j}^\alpha(u) = \sum_{k=0}^{N_1-1}(-1)^k\binom{\alpha}{k}\cdot\Big\{\phi\cdot\big[\mathrm{TVD}_x(i+k,\,j) + \mathrm{TVD}_y(i,\,j+k)\big] + \gamma\cdot\big[\mathrm{PMD}_x(i+k,\,j) + \mathrm{PMD}_y(i,\,j+k)\big]\Big\}. \qquad (39)$$
To summarize, the FPMTV algorithm consists of the following steps.
(1) Initialization: set $u^{(0)} = u_0$ and determine the values of the parameters $\alpha$, $\lambda$, $L$, $k_0$ and the iteration step length $\Delta t$.
(2) Iteration: for $n = 1, 2, 3, \ldots$, compute $u^{(n+1)}$ according to the following steps:
Step 1: Compute $U_n^\alpha = \big(U_{i,j}^\alpha(u^{(n)})\big)_{i=0,\ldots,M-1;\;j=0,\ldots,N-1}$ by using (36)-(39);
Step 2: Compute
$$u^{(n+1)} = u^{(n)} - \Delta t\cdot\big[U_n^\alpha + \lambda\big(u^{(n)} - u_0\big)\big].$$
FIGURE 1. Visualization of the phantoms and images used in our study. (a) original Shepp-Logan head
phantom; (b) original pelvis phantom; (c) processed HDCT image by the AS-LNLM method; (d) and (e) are
the reconstructed LDCT images from simulated noisy sinograms by FBP (Hanning filter with cutoff at 80%
Nyquist frequency); and (f) original LDCT image (30 mAs).
If $u^{(n+1)}$ satisfies a given stopping condition, we terminate the iteration and output $u^{(n+1)}$; otherwise, let $n := n+1$ and return to Step 1.
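The whole iteration (36)-(39) plus the update rule can be sketched as follows. This is a simplified NumPy illustration, not the authors' MATLAB implementation: the G-L weights are truncated after a few terms, boundaries are periodic rather than zero-padded, the stopping rule is a fixed iteration count, and $k_1$ uses $e^{-\Delta t\cdot n}$; all of these are simplifying assumptions.

```python
import numpy as np

def fpmtv(u0, alpha=1.05, lam=0.01, L=0.1, k0=100.0, dt=0.05,
          n_iter=50, eps=1e-8):
    """Simplified sketch of the FPMTV iteration, eqs. (36)-(39) and (35)."""
    n1 = min(u0.shape)
    # G-L weights (-1)^k C(alpha, k), truncated: they decay rapidly.
    w = np.empty(n1)
    w[0] = 1.0
    for k in range(1, n1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    w = w[:8]

    def frac(a, axis, sign):
        # sign=+1: sum_k w_k a(i-k), forward G-L sum of eq. (36);
        # sign=-1: sum_k w_k a(i+k), the shifted sum used in eq. (39).
        return sum(wk * np.roll(a, sign * k, axis=axis)
                   for k, wk in enumerate(w))

    u = u0.astype(float).copy()
    for n in range(n_iter):
        # Local variance, eqs. (13)-(14), normalized to [1, 255].
        nb = [np.roll(np.roll(u, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)]
        mean = sum(nb) / 9.0
        var = sum((v - mean) ** 2 for v in nb) / 9.0
        var_n = 1.0 + (var - var.min()) / (var.max() - var.min() + eps) * 254.0
        eta = np.exp(-L / var_n)                         # eq. (12)
        phi, gam = eta * (2.0 - eta), (eta - 1.0) ** 2   # eq. (11)

        dx = frac(u, axis=0, sign=1)                     # (v^a * u)_x, eq. (36)
        dy = frac(u, axis=1, sign=1)                     # (v^a * u)_y
        g = dx ** 2 + dy ** 2                            # G(i, j)
        k1 = k0 * np.exp(-dt * n)
        tvd_x = dx / (np.sqrt(g) + eps)                  # eq. (37)
        tvd_y = dy / (np.sqrt(g) + eps)
        # var_n ** 2 plays the role of sigma^4 in eq. (38).
        pmd_x = k1 ** 2 * dx / (k1 ** 2 + g * var_n ** 2 + eps)
        pmd_y = k1 ** 2 * dy / (k1 ** 2 + g * var_n ** 2 + eps)
        # eq. (39): G-L-weighted sum over the shifted TVD/PMD terms.
        U = (frac(phi * tvd_x + gam * pmd_x, axis=0, sign=-1)
             + frac(phi * tvd_y + gam * pmd_y, axis=1, sign=-1))
        u = u - dt * (U + lam * (u - u0))                # update of Step 2
    return u
```

For $\alpha = 1$ and the TV branch alone, one can check that the per-pixel update reduces to gradient descent on the discrete total variation, which is what makes the scheme a smoothing flow.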
III. EXPERIMENTS AND ANALYSIS
In this section, experiments based on both digital phantom
simulations and clinical datasets were performed to validate
the proposed FPMTV method. Fig. 1(a) shows the Shepp-
Logan phantom that is composed of 256 pixels×256 pixels.
Fig. 1(b) shows the pelvis phantom with 256 pixels×252
pixels. Fig. 1(d) and Fig. 1(e) show the LDCT images that
are reconstructed from simulated noisy sinograms by using
the FBP with Hanning filter (cutoff frequency equal to 80%
Nyquist frequency). An anatomical model of a human chest
torso was used in our experiments as the thoracic phantom,
and CT images were obtained from a multi-detector row
Siemens Somatom Sensation 16 CT scanner with a tube
voltage of 120 kVp. The original high-dose CT (HDCT)
image was collected with a higher tube current of 240 mAs.
Fig. 1(c) shows the processed HDCT image by the artifact
suppressed large-scale nonlocal means (AS-LNLM) method.
The processed HDCT image has a better performance in
noise and artifacts suppression than the original one, there-
fore, the processed HDCT image can be taken as the ref-
erence image. Fig. 1(f) illustrates the LDCT image which
was obtained with a reduced tube current of 30 mAs. Both
Fig. 1(c) and (f) are composed of 512 pixels×512 pixels.
All experiments were implemented in MATLAB 2012b
on a PC with Intel(R) Pentium(R) CPU 2.60 GHz and
4 GB RAM.
A. RELATED PARAMETERS AND ASSESSMENT CRITERIA
For quantitative analyses, the peak signal-to-noise
ratio (PSNR) and the structural similarity (SSIM) [39], which
have been typically used in CT reconstructed image quality
evaluation, were utilized in this paper. The SSIM index was
used to measure the structure similarity between two images
and can be calculated by
$$\mathrm{SSIM} = \frac{\big(2\,\bar u_{\mathrm{original}}\,\bar u + c_1\big)\big(2\,\sigma_{u_{\mathrm{original}}u} + c_2\big)}{\big(\bar u_{\mathrm{original}}^2 + \bar u^2 + c_1\big)\big(\sigma^2 + \sigma_{\mathrm{original}}^2 + c_2\big)}, \qquad (40)$$
where
$$\sigma_{u_{\mathrm{original}}u} = \mathrm{Cov}\big(u_{\mathrm{original}},\,u\big) = \frac{1}{N-1}\sum_{n=1}^{N}\big(u_n - \bar u\big)\big(u_{\mathrm{original}_n} - \bar u_{\mathrm{original}}\big),$$
$$\bar u = \frac{1}{N}\sum_{n=1}^{N}u_n, \qquad \sigma^2 = \frac{1}{N-1}\sum_{n=1}^{N}\big(u_n - \bar u\big)^2,$$
$$\bar u_{\mathrm{original}} = \frac{1}{N}\sum_{n=1}^{N}u_{\mathrm{original}_n}, \qquad \sigma_{\mathrm{original}}^2 = \frac{1}{N-1}\sum_{n=1}^{N}\big(u_{\mathrm{original}_n} - \bar u_{\mathrm{original}}\big)^2,$$
here c1 and c2 are constants set according to [39]. un and
uoriginal are the pixel values of the denoised image and the
FIGURE 2. The SSIM curves of the FPMTV iterative algorithm with
different fractional orders α.
original image respectively, and N is the total number of
pixels in the reconstructed image.
The PSNR can be calculated via the mean squared error (MSE), which is computed according to the following formula:
$$\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N}\big(u_n - u_{\mathrm{original}_n}\big)^2. \qquad (41)$$
Then
$$\mathrm{PSNR} = 10\log_{10}\frac{\big[\max\big(u_n,\,u_{\mathrm{original}_n}\big)\big]^2}{\mathrm{MSE}}. \qquad (42)$$
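Equations (41)-(42) translate directly into code. A small sketch follows; the function name and the peak-value convention (the maximum pixel value over both images, as read off (42)) are assumptions:

```python
import numpy as np

def psnr(u, ref):
    """PSNR via the MSE of eq. (41)-(42)."""
    mse = np.mean((u.astype(float) - ref.astype(float)) ** 2)  # eq. (41)
    peak = max(u.max(), ref.max())          # peak value over both images
    return 10.0 * np.log10(peak ** 2 / mse)  # eq. (42)
```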
The selection of the fractional order $\alpha$ is very important for
obtaining a high-quality denoised image. In this paper, we
determined the value of $\alpha$ from both the SSIM curves and
the visual effect of the denoised image. Taking the Shepp-Logan
head phantom for example, Fig. 2 shows SSIM curves of the
FPMTV iterative algorithm with different fractional orders
α that changes from 0.85 to 1.85 with an interval of 0.2.
As can be seen from Fig. 2, with α increasing from 1.05 to
1.85, the SSIM value becomes smaller and smaller, and the
SSIM value at α =0.85 is smaller than that at α = 1.05.
Obviously, the optimal SSIM value is obtained at α = 1.05.
On the other hand, the LDCT image (Fig. 1(d)) was processed
by FPMTV algorithm with different fractional orders α, and
the processed images are shown in Fig. 3. We can see that
Fig. 3(a) (α = 0.85) suffers from very obvious blocky effect.
From Fig. 3(b) (α = 1.05), we can observe that block effect
and streak artifacts are eliminated, and that edges and details
are preserved. Fig. 3(c) ($\alpha = 1.25$) shows only a few streak
artifacts. However, we can easily see from Fig. 3(d)-(f) ($\alpha =
1.45$, $\alpha = 1.65$, $\alpha = 1.85$) that streak artifacts become more
and more obvious as $\alpha$ increases. Judging from both the
SSIM values and the visual effect of the denoised images,
$\alpha = 1.05$ is found to be the best choice in this experiment.
The iteration stopping point was chosen by the maximum
SSIM value in our experiments, because the SSIM metric
has been widely shown to be consistent with
qualitative visual assessment. A larger SSIM value
indicates better structure similarity between denoised image
and original ground truth image. The SSIM curve of the
iterative algorithm can be used to select the optimal iteration
point to stop the whole iteration process. For instance, we
can conclude from the SSIM curve with α = 1.05 in Fig. 2
that the iteration should be stopped at iter=80 in the Shepp-
Logan head phantom study. At the same time, Fig. 2 shows
FIGURE 3. Comparison of processed images by using the proposed FPMTV algorithm with different
fractional orders α on the LDCT image (shown in Fig. 1(d)). (a) α = 0.85, (b) α = 1.05, (c) α = 1.25,
(d) α = 1.45, (e) α = 1.65, and (f) α = 1.85.
FIGURE 4. The comparative experiments on a pelvis phantom. (a) original phantom,
(b) LDCT image, (c) processed image by TV method (240 steps), (d) processed image by
PMTV method (160 steps), (e) processed image by FTV method (250 steps), and
(f) processed image by FPMTV method (50 steps). From left to right, the images in the
second, third, and fourth columns show the zoomed ROIs specified in (a), and all of the
zoomed images are from the corresponding images of the first column.
that the best numbers of iterations with different α are almost
the same, so we can use the same method to select the iteration
stop time in other phantom studies.
The other parameters of FPMTV model were set manually
by comprehensive analysis of the SSIM index, the PSNR
index and the visual effect of processed images. We set
TABLE 1. PSNR, SSIM values of three ROIs (marked by red squares in the following pelvis phantom) of processed images by TV, PMTV, FTV, and FPMTV.
1t = 0.05, λ = 0.01, L = 0.1, and k0 = 100 for FPMTV
model in all phantom studies.
B. EXPERIMENTAL RESULTS
1) THE PELVIS PHANTOM STUDY
To assess the performance of the proposed FPMTV algo-
rithm, the following methods including TV model [25],
PMTV model [26], and FTV model [30] were chosen for the
comparative experiments on LDCT images. The parameters
in TV, PMTV, and FTV were set according to the suggestions
in [25], [26], [30]. The processing results of the LDCT image
(Fig. 1(e)) are shown in Fig. 4, in which (c), (d), (e), and (f) are
the processed images by TV, PMTV, FTV, FPMTV methods
respectively. To further compare the performance of multiple
denoising algorithms, three regions of interest (ROIs) and
their corresponding zoomed images are shown in Fig. 4. The
three ROIs identified by red squares are shown in the original
pelvis phantom (Fig. 4 (a)), and their zoomed images are
shown in Fig. 4 (a1-a3). Fig. 4 (b-b3) show the LDCT image
and its zoomed ROIs. By comparing the images in
Fig. 4 (a-a3) with those in (b-b3), we can see that mottle noise and
streak artifacts severely degrade the reconstructed images.
Fig. 4 (c-c3), processed by TV, suffer from an obvious blocky
effect (indicated by blue arrows). Mottle noise and streak
artifacts (indicated by blue arrows) can be clearly seen in
Fig. 4 (d-d3), processed by PMTV. Obviously, the images
processed by FTV (Fig. 4 (e-e3)) perform better than those
processed by TV and PMTV, but their performance in preserving
edges and details is noticeably worse (indicated by blue arrows). In addition,
it is easily observed from the processed images by FPMTV
(Fig. 4 (f-f3)) that the FPMTV algorithm can robustly reduce
mottle noise, streak artifacts, and the blocky effect while
preserving edges and details. With the original phantom as
reference, we can see that the images processed by FPMTV
are the closest to the reference images. Fig. 4 demonstrates
that the proposed FPMTV model performs better than the
other algorithms.
For further quantitative analysis, Table 1 shows the PSNR
and SSIM values of three ROIs (marked by red squares in
the pelvis phantom in Table 1) of processed images by TV,
PMTV, FTV, and FPMTV. In order to intuitively illustrate
FIGURE 5. Histogram of PSNR, SSIM values in Table 1. The corresponding
algorithms are shown in figure legend.
the PSNR and SSIM values of different denoising tech-
niques, Fig. 5 plots the histogram of the PSNR and SSIM
values in Table 1. As we can see from Table 1 and Fig. 5,
FPMTV has the highest PSNR/SSIM for all of the ROIs.
This means that the image processed by FPMTV is the
closest to the original phantom. Based on both visual effect
and quantitative analysis, the experimental results demon-
strate that the FPMTV algorithm can effectively smooth
noisy background while preserving edges and details in the
LDCT images.
FIGURE 6. Comparative experiments on an actual thoracic phantom. (a) HDCT image processed by the
AS-LNLM method, (b) LDCT image, (c) image processed by TV (250 steps), (d) image processed by PMTV
(200 steps), (e) image processed by FTV (260 steps), and (f) image processed by FPMTV (60 steps). From left to
right, the images in the second, third, fourth, and fifth columns show the zoomed ROIs specified in (a); all of
the zoomed images are taken from the corresponding images of the first column.
2) THE ACTUAL THORACIC PHANTOM STUDY
In this study, we took an anatomical model of a human chest
torso as the thoracic phantom, and the parameter settings were
the same as those in the pelvis phantom study. Fig. 6 shows
processing results of the LDCT image (Fig. 1(f)) by the four
discussed algorithms (TV, PMTV, FTV, and FPMTV). The
processed HDCT image by AS-LNLM method is shown in
Fig. 6(a) as the reference image, in which four ROIs are
marked by red squares. Fig. 6 (a1-a4) show the zoomed
images of the four ROIs. The original LDCT image (30 mAs)
and its zoomed ROIs are shown in Fig. 6 (b-b4), in which
obvious mottle noise and streak artifacts are present, indicating
that LDCT scanning severely degrades the reconstructed
images. The images processed by TV (Fig. 6(c-c4)) suffer
TABLE 2. PSNR and SSIM values of five ROIs (marked by red squares in the actual thoracic phantom) of the images processed by TV, PMTV, FTV, and
FPMTV.
FIGURE 7. Histogram of the PSNR and SSIM values in Table 2. The
corresponding algorithms are shown in the figure legend.
from an obvious blocky effect. Fig. 6(d-d4) and Fig. 6(e-e4)
are the images processed by PMTV and FTV, respectively.
By comparison, PMTV and FTV perform better than TV
in suppressing the blocky effect; however, PMTV introduces
new artifacts into the processed images, whereas FTV blurs
edges and details. On the basis of the above analysis, TV,
PMTV, and FTV are all inappropriate for LDCT image
processing, because discarding detailed features may
lead to misdiagnosis. The image processed by the proposed
FPMTV model is illustrated in Fig. 6(f), and its correspond-
ing zoomed images of ROIs are shown in Fig. 6(f1-f4).
We can see that mottle noise and streak artifacts are suppressed
effectively, while edges and fine details are well preserved.
This observation verifies the effectiveness of
the FPMTV method in LDCT image processing. Moreover,
Fig. 6 confirms the same conclusion as Fig. 4 that the FPMTV
method has superior performance over other denoising meth-
ods (see the areas pointed by blue arrows in the third column
in Fig. 6).
For further comparison, Table 2 displays the PSNR and
SSIM values of five ROIs (marked by red squares in the
actual thoracic phantom in Table 2) of the images processed
by TV, PMTV, FTV, and FPMTV. Fig. 7 shows the
PSNR and SSIM values in Table 2 in a different and more
intuitive way. It is worth mentioning that the quantitative
results in Table 2 and Fig. 7 have a similar trend as in the
pelvis phantom study. The proposed algorithm has the highest
PSNR/SSIM for all of the ROIs.
IV. CONCLUSION
PDEs are generally regarded as good candidates for noise
removal in LDCT image processing; however, traditional
integral-order differentiation methods often cause blocky
and speckle effects, which inevitably blur the edges and fine
details of an image. Yet the details blurred by integral-order
algorithms may have important clinical value in medical
images. To overcome these disadvantages, fractional-order
PDEs have recently been researched and applied to medical
image processing. Although several fractional-order
differentiation methods such as FPM and FTV can, to a
certain extent, reach a good trade-off between noise removal
and edge preservation, some mottle noise and streak artifacts
still remain. To address this problem, we integrated the FPM
model and the FTV model to obtain the FPMTV model.
Additionally, the local intensity variance was incorporated
into the weighted coefficient and the diffusion coefficient of
the FPMTV model to properly preserve edges and details.
Through the weighted coefficient, the FPMTV model
adaptively alternates between the FPM model and the FTV
model in accordance with the image features. In the
flat regions, the FPMTV model highlights the importance
of the FPM model; in fine-detail areas and at edges, it
emphasizes the role of the FTV model. The experimental
results show that, compared with the other denoising
algorithms (TV, PMTV, and FTV), the proposed FPMTV
method achieves superior performance in terms of both noise
suppression and edge preservation in LDCT images.
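The adaptive mechanism described above, leaning on FPM in flat regions and on FTV near edges, can be sketched with a local-variance weight map. The exponential form exp(-v/k^2) and the scale k below are illustrative assumptions, not the paper's exact weighted coefficient; they only show how a variance map can steer the blend between the two branches.

```python
import numpy as np

def local_variance(u, r=1):
    """Local intensity variance over a (2r+1)x(2r+1) window, via box means."""
    u = u.astype(np.float64)
    p = np.pad(u, r, mode='edge')          # replicate borders
    n = (2 * r + 1) ** 2
    H, W = u.shape
    mean = np.zeros((H, W))
    mean_sq = np.zeros((H, W))
    for dy in range(2 * r + 1):            # accumulate shifted windows
        for dx in range(2 * r + 1):
            win = p[dy:dy + H, dx:dx + W]
            mean += win
            mean_sq += win ** 2
    mean /= n
    mean_sq /= n
    return mean_sq - mean ** 2             # Var = E[x^2] - E[x]^2

def fpm_weight(u, k=10.0):
    """Weight toward the FPM branch: near 1 in flat regions (low variance),
    near 0 at edges/details (high variance). k is a hypothetical scale."""
    return np.exp(-local_variance(u) / (k ** 2))
```

A single update step would then blend the two diffusion terms pixel-wise, e.g. `u += dt * (w * fpm_term + (1 - w) * ftv_term)` with `w = fpm_weight(u)`, so flat areas are driven by the FPM-style update and edges by the FTV-style one.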
The computational cost should also be mentioned.
Fractional-order differentiation methods suffer from a heavy
computational burden, because computing fractional-order
PDEs involves many more pixels than computing integral-order
PDEs. FPMTV has the same per-iteration computational
complexity as FTV; however, in our study the overall cost
of FPMTV was greatly reduced by cutting down the number
of iterations. From Fig. 4, we can see that 50 steps and
250 steps are required by FPMTV and FTV, respectively,
in the pelvis phantom study. Moreover, Fig. 6 shows that
60 steps and 260 steps are needed by FPMTV and FTV,
respectively, in the actual thoracic phantom study.
In future work, more ways should be explored to reduce
the computational cost. For example, the algorithm could be
implemented on a graphics processing unit (GPU) to raise
the calculation speed; many studies have shown that GPUs
can enormously accelerate the iterative procedure [40], [41].
With such a powerful tool, fractional-order differentiation
methods will be more suitable for clinical applications.
It is worth mentioning that the selection of the fractional
order α is very important for obtaining a high-quality
processed image. In our study, the order was set based on
both the SSIM improvement and the visual effect of the
denoised image; the adaptive selection of the fractional
order requires further study. On the other hand, the proposed
fractional-order differentiation model can be applied to
LDCT image processing as a post-processing technique.
In the future, we will also extend the application of
fractional-order PDEs to other categories of LDCT image
processing techniques, including projection-processing
methods and iterative reconstruction algorithms.
ACKNOWLEDGMENT
The authors would like to thank the anonymous reviewers for
their valuable suggestions and comments which improved the
quality of this paper greatly.
REFERENCES
[1] H. Lu, X. Li, I.-T. Hsiao, and Z. Liang, ‘‘Analytical noise treatment for low-
dose CT projection data by penalized weighted least-square smoothing in
the K-L domain,’’ Proc. SPIE, vol. 4682, pp. 146–152, May 2002.
[2] R. D. Lee, ‘‘Common image artifacts in cone beam CT,’’ AADMRT
Newslett., pp. 1–7, Jul. 2008.
[3] Y. Liu, Z. Gui, and Q. Zhang, ‘‘Noise reduction for low-dose X-ray CT
based on fuzzy logical in stationary wavelet domain,’’ Opt. Int. J. Light
Electron, vol. 124, no. 18, pp. 3348–3352, 2013.
[4] A. Manduca et al., ‘‘Projection space denoising with bilateral filtering and
CT noise modeling for dose reduction in CT,’’ Med. Phys., vol. 36, no. 11,
pp. 4911–4919, 2009.
[5] L. Yu et al., ‘‘Sinogram smoothing with bilateral filtering for low-dose
CT,’’ Proc. SPIE, vol. 6913, p. 691329, Mar. 2008.
[6] T. Li et al., ‘‘Nonlinear sinogram smoothing for low-dose X-ray CT,’’ IEEE
Trans. Nucl. Sci., vol. 51, no. 5, pp. 2505–2513, Oct. 2004.
[7] E. C. Ehman et al., ‘‘Noise reduction to decrease radiation dose and
improve conspicuity of hepatic lesions at contrast-enhanced 80-kV hepatic
CT using projection space denoising,’’ AJR, vol. 198, no. 2, pp. 405–411,
2012.
[8] J. Wang, T. Li, and L. Xing, ‘‘Iterative image reconstruction for CBCT
using edge-preserving prior,’’ Med. Phys., vol. 36, no. 1, pp. 252–260,
2009.
[9] J. H. Jørgensen, E. Y. Sidky, and X. Pan, ‘‘Quantifying admissible under-
sampling for sparsity-exploiting iterative image reconstruction in X-ray
CT,’’ IEEE Trans. Med. Imag., vol. 32, no. 2, pp. 460–473, 2013.
[10] Y. Liu, H. Shangguan, Q. Zhang, H. Zhu, H. Shu, and Z. Gui, ‘‘Median
prior constrained TV algorithm for sparse view low-dose CT reconstruc-
tion,’’ Comput. Biol. Med., vol. 60, pp. 117–131, May 2015.
[11] X. Han et al., ‘‘Algorithm-enabled low-dose micro-CT imaging,’’ IEEE
Trans. Med. Imag., vol. 30, no. 3, pp. 606–620, Mar. 2011.
[12] Y. Chen, J. Ma, Q. Feng, L. Luo, P. Shi, and W. Chen, ‘‘Nonlocal prior
Bayesian tomographic reconstruction,’’ J. Math. Imag. Vis., vol. 30, no. 2,
pp. 133–146, 2008.
[13] Y. Chen et al., ‘‘Bayesian statistical reconstruction for low-dose
X-ray computed tomography using an adaptive-weighting nonlocal prior,’’
Comput. Med. Imag. Graph., vol. 33, no. 7, pp. 495–500, 2009.
[14] Z. Li et al., ‘‘Adaptive nonlocal means filtering based on local noise level
for CT denoising,’’ Med. Phys., vol. 41, no. 1, p. 011908, 2014.
[15] L. Ouyang, T. Solberg, and J. Wang, ‘‘Noise reduction in low-dose
cone beam CT by incorporating prior, volumetric image information,’’
Med. Phys., vol. 39, no. 5, pp. 2569–2577, 2012.
[16] E. Y. Sidky and X. Pan, ‘‘Image reconstruction in circular cone-beam
computed tomography by constrained total-variation minimization,’’ Phys.
Med. Biol., vol. 53, no. 17, pp. 4777–4807, 2008.
[17] Y. Liu and Z. Gui, ‘‘A statistical iteration approach with energy
minimization to sinogram noise reduction for low-dose X-ray CT,’’
Opt. Int. J. Light Electron, vol. 123, no. 23, pp. 2174–2178, 2012.
[18] S. Schafer et al., ‘‘Intraoperative imaging for patient safety and QA: Detec-
tion of intracranial hemorrhage using C-arm cone-beam CT,’’ Proc. SPIE,
vol. 8671, p. 86711X, Mar. 2013.
[19] Y. Chen et al., ‘‘Improving low-dose abdominal CT images by weighted
intensity averaging over large-scale neighborhoods,’’ Eur. J. Radiol.,
vol. 80, no. 2, pp. e42–e49, 2011.
[20] Y. Chen et al., ‘‘Artifact suppressed dictionary learning for low-dose
CT image processing,’’ IEEE Trans. Med. Imag., vol. 33, no. 12,
pp. 2271–2292, Dec. 2014.
[21] S. Ghadrdan, J. Alirezaie, J.-L. Dillenseger, and P. Babyn, ‘‘Low-dose
computed tomography image denoising based on joint wavelet and sparse
representation,’’ in Proc. EMBC, Aug. 2014, pp. 3325–3328.
[22] J. Gu, L. Zhang, G. Yu, Y. Xing, and Z. Chen, ‘‘X-ray CT metal artifacts
reduction through curvature based sinogram inpainting,’’ J. X-Ray Sci.
Technol., vol. 14, no. 2, pp. 73–82, 2006.
[23] Y. Kim, S. Yoon, and J. Yi, ‘‘Effective sinogram-inpainting for metal
artifacts reduction in X-ray CT images,’’ in Proc. ICIP, Sep. 2010,
pp. 597–600.
[24] P. Perona and J. Malik, ‘‘Scale-space and edge detection using anisotropic
diffusion,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7,
pp. 629–639, Jul. 1990.
[25] L. I. Rudin, S. Osher, and E. Fatemi, ‘‘Nonlinear total variation based noise
removal algorithms,’’ Phys. D, Nonlinear Phenomena, vol. 60, nos. 1–4,
pp. 259–268, 1992.
[26] X. Zhang, R. Wang, and L. C. Jiao, ‘‘Partial differential equation model
method based on image feature for denoising,’’ in Proc. M2RSM, Jan. 2011,
pp. 1–4.
[27] A. A. Yahya, J. Tan, and M. Hu, ‘‘A blending method based on partial dif-
ferential equations for image denoising,’’ Multimedia Tools Appl., vol. 73,
no. 3, pp. 1843–1862, 2014.
[28] Y.-L. You and M. Kaveh, ‘‘Fourth-order partial differential equations
for noise removal,’’ IEEE Trans. Image Process., vol. 9, no. 10,
pp. 1723–1730, Oct. 2000.
[29] J. Bai and X. C. Feng, ‘‘Fractional-order anisotropic diffusion for image
denoising,’’ IEEE Trans. Image Process., vol. 16, no. 10, pp. 2492–2502,
Oct. 2007.
[30] J. Zhang and Z. Wei, ‘‘Fractional variational model and algorithm for
image denoising,’’ in Proc. ICNC, Oct. 2008, pp. 524–528.
[31] Y. Zhang, Y. Pu, J. Hu, Y. Liu, and J. Zhou, ‘‘A new CT metal arti-
facts reduction algorithm based on fractional-order sinogram inpainting,’’
J. X-Ray Sci. Technol., vol. 19, no. 3, pp. 373–384, 2011.
[32] Y. Zhang, Y.-F. Pu, J.-R. Hu, Y. Liu, Q.-L. Chen, and J.-L. Zhou, ‘‘Efficient
CT metal artifact reduction based on fractional-order curvature diffusion,’’
Comput. Math. Method Med., vol. 2011, p. 173748, Jul. 2011.
[33] Y. Zhang, W. Zhang, Y. Lei, and J. Zhou, ‘‘Few-view image reconstruction
with fractional-order total variation,’’ J. Opt. Soc. Amer. A, vol. 31, no. 5,
pp. 981–995, 2014.
[34] Y. Zhang, Y. Wang, W. Zhang, F. Lin, Y. Pu, and J. Zhou, ‘‘Statistical
iterative reconstruction using adaptive fractional order regularization,’’
Biomed. Opt. Exp., vol. 7, no. 3, pp. 1015–1029, 2016.
[35] S. Hu, ‘‘External fractional-order gradient vector Perona-Malik diffusion
for sinogram restoration of low-dosed X-ray computed tomography,’’ Adv.
Math. Phys., vol. 2013, p. 516919, 2013.
[36] Y. Pu, W. Wang, J. Zhou, Y. Wang, and H. Jia, ‘‘Fractional differential
approach to detecting texture features of digital image and its fractional
differential filter implementation,’’ Sci. China F Inf. Sci., vol. 51, no. 9,
pp. 1319–1339, 2008.
[37] S. M. Chao and D.-M. Tsai, ‘‘An improved anisotropic diffusion model for
detail and edge-preserving smoothing,’’ Pattern Recognit. Lett., vol. 31,
no. 13, pp. 2012–2023, 2010.
[38] M. D. Ortigueira, ‘‘A coherent approach to noninteger order derivatives,’’
Signal Process., vol. 86, no. 10, pp. 2505–2515, 2006.
[39] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, ‘‘Image quality
assessment: From error visibility to structural similarity,’’ IEEE Trans.
Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[40] W. Xu and K. Mueller, ‘‘A performance-driven study of regularization
methods for GPU-accelerated iterative CT,’’ in Proc. Workshop High
Perform. Image Reconstruct. (HPIR), 2009, pp. 20–23.
[41] W. Xu, Z. Zheng, E. Papenhausen, S. Ha, and K. Mueller, ‘‘Iterative cone
beam CT reconstruction on GPUs—A computational perspective,’’ in
Graphics Processing Unit-Based High Performance Computing in Radi-
ation Therapy. Boca Raton, FL, USA: CRC Press, 2015.
YANLING WANG received the M.S. degree in
applied mathematics from the North University of
China in 2010, where she is currently pursuing the
Ph.D. degree in signal and information processing.
Her research interest is image processing with partial
differential equations.
YANLING SHAO received the Ph.D. degree in
mathematics from the Beijing Institute of Tech-
nology in 2005. She joined the Department of
Mathematics, North University of China, in 1983,
where she is now a Professor. She has authored
more than one hundred scientific publications. Her
primary research interests include matrix theory
and graph theory.
ZHIGUO GUI received the Ph.D. degree in sig-
nal and information processing from the North
University of China in 2004. He is currently a
Professor with the North University of China. His
research interests include signal and information
processing, image processing and recognition, and
image reconstruction.
QUAN ZHANG received the Ph.D. degree in computer
science and technology from Southeast University in 2014.
He is currently an Associate Professor with the North
University of China, engaged in teaching and research. His
research interests include medical imaging reconstruction
and medical image analysis.
LINHONG YAO received the Ph.D. degree in fun-
damental mathematics from Beijing Normal Uni-
versity in 2012. She joined the College of Science,
North University of China. Her research interests include
partial differential equations, nonlinear analysis, and
mathematical methods in image processing.
YI LIU received the Ph.D. degree in signal and
information processing from the North University
of China in 2014. She joined the School of Infor-
mation and Communication Engineering, North
University of China, in 2015. Her research inter-
ests include CT imaging and image processing.