ICTS 2005 Proceedings
ISSN 1858-1633
Proceeding of Annual International Conference
Information and Communication Technology Seminar
Volume 1, Number 1, August 2005
Executive Board
Rector of ITS
Dean of Information Technology Faculty (FTIF) ITS
Head of Informatics Department FTIF ITS
Editorial Board
Achmad Benny Mutiara
Gunadharma University, Indonesia
Agus Zainal
Sepuluh Nopember Institute of Technology,
Indonesia
Akira Asano
Hiroshima University, Japan
Archi Delphinanto
Eindhoven University of Technology, The
Netherlands
Arif Djunaidy
Sepuluh Nopember Institute of Technology,
Indonesia
Daniel Siahaan
Sepuluh Nopember Institute of Technology,
Indonesia
Handayani Tjandrasa
Sepuluh Nopember Institute of Technology
Happy Tobing
Cendrawasih University, Indonesia
Hideto Ikeda
Ritsumeikan University, Japan
Johny Moningka
University of Indonesia, Indonesia
Joko Lianto
Sepuluh Nopember Institute of Technology,
Indonesia
Kridanto Surendro
Bandung Institute of Technology, Indonesia
Marco J Patrick
V-SAT Company, Portugal
Mauridhi Hery Purnomo
Sepuluh Nopember Institute of Technology,
Indonesia
Muchammad Husni
Sepuluh Nopember Institute of Technology,
Indonesia
Nanik Suciati
Sepuluh Nopember Institute of Technology,
Indonesia
Riyanarto Sarno
Sepuluh Nopember Institute of Technology,
Indonesia
Rothkrantz
Delft University of Technology, The Netherlands
Retantyo Wardoyo
Gadjah Mada University, Indonesia
Siska Fitriana
Delft University of Technology, The
Netherlands
Supeno Djanali
Sepuluh Nopember Institute of Technology,
Indonesia
Zainal Hasibuan
University of Indonesia, Indonesia
Yudhi Purwananto
Sepuluh Nopember Institute of Technology,
Indonesia
Editor-in-Chief
Umi Laili Yuhana
Sepuluh Nopember Institute of Technology, Indonesia
Contact Address
Informatics Department FTIF, ITS
Gedung Teknik Informatika ITS, Jl. Raya ITS, Sukolilo
Surabaya 60111, INDONESIA
Telp. (031) 5939214
Fax (031) 5913804
Homepage: http://if.its.ac.id/icts
email: icts@if.its.ac.id
PREFACE
These proceedings contain selected papers from the Information and Communication Technology
Seminar (ICTS) 2005. ICTS 2005 is the first annual international event of the Informatics
Department, Faculty of Information Technology, ITS. The event is a forum in which the computer
science and information and communication technology community can discuss and exchange
information and knowledge in their areas of interest. It aims to promote research, development,
and application activities in computer science and information and communication technology.
This year, the seminar is held to celebrate the 20th Anniversary of the Informatics Department,
Faculty of Information Technology, ITS.
Forty-one papers were accepted, and 36 of them were finally judged fit to be presented. Their
topics are: (1) Artificial Intelligence, (2) Image Processing, (3) Computing, (4) Computer
Network and Security, (5) Software Engineering, and (6) Mobile Computing.
We would like to thank the keynote speakers, the authors, the participants, and all parties who
contributed to the success of ICTS 2005.
Editorial Team
Proceeding of Annual International Conference
Information and Communication Technology Seminar
Volume 1, Number 1, August 2005
Table of Content
Mathematical Morphology And Its Applications ...................................................................1-9
Akira Asano
Molecular Dynamics Simulation On A Metallic Glass-System: Non-Ergodicity
Parameter ...............................................................................................................................10-16
Achmad Benny Mutiara
Tomographic Imaging Using Infra Red Sensors.................................................................17-19
Dr. Sallehuddin Ibrahim and Md. Amri Md. Yunus
Mammographic Density Classification Using Multiresolution Histogram Technique ...20-23
Izzati Muhimmah, Erika R.E. Denton, and Reyer Zwiggelaar
Ann Soft Sensor To Predict Quality Of Product Based On Temperature Or Flow Rate
Correlation..............................................................................................................................24-28
Totok R. Biyanto
Application Of Soft Classification Techniques For Forest Cover Mapping ....................29-36
Arief Wijaya
Managing Internet Bandwidth: Experience In Faculty Of Industrial Technology, Islamic
University Of Indonesia.........................................................................................................37-40
Mukhammad Andri Setiawan
Mue: Multi User Uml Editor.................................................................................................41-45
Suhadi Lili, Sutarsa, and Siti Rochhimah
Designing Secure Communication Protocol For Smart Card System, Study Case: E-Purse
Application..............................................................................................................................46-48
Daniel Siahaan, and I Made Agus
Fuzzy Logics Incorporated To Extended Weighted-Tree Similarity Algorithm For Agent
Matching In Virtual Market.................................................................................................49-54
Sholeh Hadi Setyawan and Riyanarto Sarno
Shape Matching Using Thin-Plate Splines Incorporated To Extended Weighted-Tree
Similarity Algorithm For Agent Matching In Virtual Market..........................................55-61
Budianto and Riyanarto Sarno
Text-To-Video: Text To Facial Animation Video Conversion.....................................62-67
Hamdani Winoto, Hadi Suwastio, and Iwan Iwut T.
Share-It: A UPnP Application For Content Sharing..........................................................68-71
Daniel Siahaan
Modified Bayesian Optimization Algorithm For Nurse Scheduling .................................72-75
I N. Sutapa, I. H. Sahputra, and V. M. Kuswanto
Politeness In Phoning By Using Wap And Web..................................................................76-80
Amaliah Bilqis, and Husni Muhammad
Implementation Of Hierarchy Color Image Segmentation For Content Based Image
Retrieval System.....................................................................................................................81-85
Nanik Suciati and Shanti Dewi
Decision Support System For Stock Investment On Mobile Device.................................86-90
Ivan Satria and Dedi Trisnawarman
Fuzzy Logic Approach To Quantify Preference Type Based On Myers Briggs Type
Indicator (MBTI)....................................................................................................................91-93
Hindriyanto Dwi Purnomo, Srie Yulianto Joko Prasetyo
Security Concern Refactoring.............................................................................................94-100
Putu Ashintya Widhiartha, and Katsuhisa Maruyama
A Parallel Road Traffic Simulator Core..........................................................................101-104
Dwi Handoko, Wahju Sediono, and Made Gunawan
C/I Performance Comparison Of An Adaptive And Switched Beam In The Gsm Systems
Employing Smart Antenna................................................................................................105-110
Tito Yuwono, Mahamod Ismail, and Zuraidah bt Zainuddin
Identification Of Solvent Vapors Using Neural Network Coupled SiO2
Resonator Array.................................................................................................................111-114
Muhammad Rivai, Ami Suwandi JS , and Mauridhi Hery Purnomo
Comfortable Dialog For Object Detection.......................................................................115-122
Rahmadi Kurnia
A Social Informatics Overview Of E-Government Implementation: Its Social Economics
And Restructuring Impact ................................................................................................123-129
Irwan Sembiring and Krismiyati
Agent Based Programming For Computer Network Monitoring .................................130-134
Adang Suhendra
Computer Assisted Diagnosis System Using Morphology Watershed For Breast Carcinoma
Tumor..................................................................................................................................135-139
Sri Yulianto and Hindriyanto
Evaluation Of Information Distribution Algorithms Of A Mobile Agent-Based Demand-
Oriented Information Service System.............................................................................140-144
I. Ahmed and M J. Sadiq
Online Mobile Tracking On Geographics Information System Using Pocket Pc........145-150
M. Endi Nugroho and Riyanarto Sarno
A Simple Queuing System To Model The Traffic Flow At The Toll-Gate: Preliminary
Results .................................................................................................................................151-153
Wahju Sediono, Dwi Handoko
Multimodal-Eliza Perceives And Responds To Emotion ...............................................154-158
S. Fitrianie and L.J.M. Rothkrantz
Motor Dc Position Control Based On Moving Speed Controlled By Set Point Changing
Using Fuzzy Logics Control System................................................................................159-166
Andino Maseleno, Fajar Hayyin, Hendra, Rahmawati Lestari, Slamet Fardyanto, and Yuddy
Krisna Sudirman
A Variable-Centered Intelligent Rule System.................................................................167-174
Irfan Subakti
Multiple Null Values Estimating In Generating Weighted Fuzzy Rules Using Genetic
Simulated Annealing..........................................................................................................175-180
Irfan Subakti
Genetic Simulated Annealing For Null Values Estimating In Generating Weighted Fuzzy
Rules From Relational Database Systems ......................................................................181-188
Irfan Subakti
Image Thresholding By Measuring The Fuzzy Sets Similarity.....................................189-194
Agus Zainal Arifin and Akira Asano
Development Of Scheduler For Linux Virtual Server In Ipv6 Platform Using Round Robin
Adaptive Algorithm, Study Case: Web Server ...............................................................195-198
Royyana Muslim Ijtihadie and Febriliyan Samopa
Information and Communication Technology Seminar, Vol. 1 No. 1, August 2005
ISSN 1858-1633 @2005 ICTS
MATHEMATICAL MORPHOLOGY AND ITS APPLICATIONS
Akira Asano
Division of Mathematical and Information Sciences, Faculty of Integrated Arts and Sciences,
Hiroshima University
Kagamiyama 1-7-1, Higashi-Hiroshima, Hiroshima 739-8521, JAPAN
email: asano@mis.hiroshima-u.ac.jp
ABSTRACT
This invited talk presents the concept of
mathematical morphology, which is a mathematical
framework of quantitative image manipulations. The
basic operations of mathematical morphology, the
relationship to image processing filters, the idea of
size distribution and its application to texture analysis
are explained.
1. INTRODUCTION
Mathematical morphology treats an effect on an
image as an effect on the shape and size of objects
contained in the image. Mathematical morphology is a
mathematical system to handle such effects
quantitatively, based on set operations [1–5]. The word
stem “morpho-” originates in a Greek word meaning
“shape,” and it appears in the word “morphing,”
which is a technique of modifying an image into
another image smoothly.
The founders of mathematical morphology, G.
Matheron and J. Serra, were researchers at the École
Nationale Supérieure des Mines de Paris in France,
and conceived mathematical morphology as a method
of evaluating the geometrical characteristics of
minerals in ores [6]. Matheron is also the founder of
random closed set theory, a fundamental theory for
treating random shapes, and of kriging, a statistical
method for estimating the spatial distribution of
mineral deposits from trial diggings. Mathematical
morphology is related to these theories and has been
developed as a theoretical framework for treating the
spatial shapes and sizes of objects. The International
Symposium on Mathematical Morphology (ISMM), the
topical international symposium devoted to
mathematical morphology, has been organized almost
every two years; its seventh edition was held in April
2005 in Paris as a celebration of the 40th anniversary
of mathematical morphology [7].
This paper explains the framework of mathematical
morphology in Sec. 2, especially opening, the
fundamental operation for describing operations on the
shapes and sizes of objects quantitatively. Section 3
proves the "filter theorem," which guarantees that all
practical image processing filters can be constructed by
combinations of morphological operations; examples
expressing median filters and average filters by
combinations of morphological operations are also
shown in that section. Section 4 explains granulometry,
a method of measuring the distribution of object sizes
in an image, and shows an application to texture
analysis by the author.
2. BASIC OPERATIONS OF
MATHEMATICAL MORPHOLOGY
The fundamental operation of mathematical
morphology is "opening," which discriminates and
extracts object shapes with respect to object size. We
first explain opening on binary images, together with
the basic operations used to describe it.
2.1 Opening
In the context of mathematical morphology, an
object in a binary image is regarded as the set of
vectors corresponding to the points composing the
object. In the case of usual digital images, a binary
image is expressed as the set of white pixels, or pixels
of value one. Another image set, expressing an effect
on the image set above, is considered; it is called the
structuring element. The structuring element
corresponds to the window of an image processing
filter, and is considered to be much smaller than the
target image to be processed.
Let the target image set be X and the structuring
element be B. The opening of X by B, denoted X_B, has
the following property:

  X_B = ∪ { B_z | B_z ⊆ X }    (1)

where B_z indicates the translation of B by z, defined
as follows:

  B_z = { b + z | b ∈ B }    (2)

This property indicates that the opening of X with
respect to B is the locus of B itself sweeping the whole
interior of X; it removes white regions smaller than the
structuring element, as illustrated in Fig. 1. Since
opening eliminates structures and bright peaks smaller
than the structuring element, it has a quantitative
smoothing ability.
Fig. 1. Effect of opening.
2.2. Fundamental Operations of
Mathematical Morphology
Although the property of opening in Eq. (1) is
intuitively understandable, it is not a pixelwise
operation. Opening is therefore defined as a
composition of simpler pixelwise operations. To define
opening, Minkowski set subtraction and addition are
introduced as the fundamental operations of
mathematical morphology:

  X ⊖ B = ∩_{b∈B} X_b,   X ⊕ B = ∪_{b∈B} X_b    (3)

Minkowski set subtraction has the following
property: x ∈ X_b if and only if x − b ∈ X. Thus the
definition of Minkowski set subtraction in Eq. (3) can
be rewritten as the following pixelwise operation:

  X ⊖ B = { x | x − b ∈ X for all b ∈ B }    (4)

The reflection of B, denoted B̌, is defined as
follows:

  B̌ = { −b | b ∈ B }    (5)

Using this, Minkowski set subtraction is expressed
as follows:

  X ⊖ B = { x | B̌_x ⊆ X }    (6)

Since the definition of reflection in Eq. (5) gives
(B̌)̌ = B, substituting B̌ for B in Eq. (6) yields

  X ⊖ B̌ = { x | B_x ⊆ X }    (7)

This relationship indicates that X ⊖ B̌ is the locus of
the origin of B when B sweeps the whole interior of X.
For Minkowski set addition, it follows that
x ∈ X ⊕ B if and only if x = a + b for some a ∈ X and
b ∈ B. Thus we get

  X ⊕ B = ∪_{x∈X} B_x    (8)

which indicates that X ⊕ B is composed by pasting a
copy of B at every point of X.

Using the above operations, the erosion and dilation
of X with respect to B are defined as X ⊖ B̌ and
X ⊕ B̌, respectively. We get from Eq. (7) that the
erosion X ⊖ B̌ is the locus of the origin of B when B
sweeps the whole interior of X. The opening X_B is
then defined using these fundamental operations as
follows:

  X_B = (X ⊖ B̌) ⊕ B    (9)

Fig. 2. Opening composed of fundamental operations.

The above definition of opening is illustrated in
Fig. 2, in which a black dot indicates a pixel composing
an image object. As shown above, the erosion of X by
B is the locus of the origin of B when B sweeps the
whole inside of X. Thus the erosion in the first step of
opening produces every point at which a copy of B
included in X can be located. The Minkowski addition
in the second step pastes a copy of B at every point of
X ⊖ B̌. Thus the opening of X with respect to B is the
locus of B itself sweeping the whole interior of X, as
described at the beginning of this section. In other
words, opening removes the regions of X that are too
small to include a copy of B and preserves the others.
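As an illustration (not part of the original paper), the set-based definitions above can be sketched in Python, with a binary image and structuring element represented as sets of pixel coordinates; all names here are ours:

```python
def translate(B, z):
    # B_z = { b + z | b in B }
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def erosion(X, B):
    # X eroded by B (X ⊖ B̌): the locus of the origin of B while B
    # sweeps the interior of X, i.e. { z | B_z ⊆ X }
    candidates = {(x[0] - b[0], x[1] - b[1]) for x in X for b in B}
    return {z for z in candidates if translate(B, z) <= X}

def dilation(X, B):
    # Minkowski set addition X ⊕ B: paste a copy of B at every point of X
    return {(x[0] + b[0], x[1] + b[1]) for x in X for b in B}

def opening(X, B):
    # opening = erosion followed by Minkowski addition
    return dilation(erosion(X, B), B)

# a 3x3 square plus an isolated pixel; a 2x2 structuring element
X = {(r, c) for r in range(3) for c in range(3)} | {(5, 5)}
B = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(opening(X, B) == {(r, c) for r in range(3) for c in range(3)})  # True
```

The isolated pixel disappears because it is too small to include a copy of B, while the square, which a copy of B fits inside, is preserved; this is exactly the behavior described above.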
The counterpart of opening is called closing,
defined as follows:

  X^B = (X ⊕ B̌) ⊖ B    (10)

The closing of X with respect to B is equivalent to
the opening of the background, and removes spots
smaller than the structuring element inside image
objects. This is because the following relationship
between opening and closing holds:

  X^B = ((X^c)_B)^c    (11)

where X^c indicates the complement of X, defined as
X^c = { x | x ∉ X }. The relationship of Eq. (11) is
called the duality of opening and closing.¹

¹ There is another notation system which denotes
opening as X ◦ B and closing as X • B.
Figure 3 summarizes the effects of the basic
morphological operations.²
Fig. 3. Effects of erosion, dilation, opening, and closing
Fig. 4. Umbra. The spatial axis x is illustrated one-dimensional for
simplicity.
2.3. In the Case of Gray Scale Images
In the case of gray scale images, an image object is
defined by its umbra set. If the pixel value distribution
of an image object is denoted f(x), where x is a pixel
position, its umbra U[f(x)] is defined as follows:

  U[f(x)] = { (x, t) | t ≤ f(x) }    (12)

Consequently, if we assume a "solid" whose support
is the same as that of the gray scale image object and
whose height at each pixel position is the pixel value at
that position, the umbra is equivalent to this solid
together with the whole volume below it within the
support, as illustrated in Fig. 4.
A gray scale structuring element is defined in the
same manner. Let f(x) be the gray scale pixel value at
pixel position x and g(y) be that of the structuring
element. The erosion of f by g is defined for the
umbrae similarly to the binary case, and reduces to the
following operation [4, 5]:

  (f ⊖ g)(x) = inf_{y ∈ w(g)} [ f(x + y) − g(y) ]    (13)

Dilation similarly reduces to

  (f ⊕ g)(x) = sup_{y ∈ w(g)} [ f(x − y) + g(y) ]    (14)

where w(g) is the support of g. These equations
indicate that the logical AND and OR operations in the
definitions for binary images are replaced with the
infimum and supremum operations (equivalent to
minimum and maximum in the case of digital images),
respectively.
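These min/max reductions can be sketched in Python for 1-D signals (an illustrative sketch restricted to the valid region, so no boundary padding is needed; with a flat structuring element g ≡ 0 they reduce to a sliding minimum and maximum):

```python
def grey_erosion(f, g):
    # (f ⊖ g)(x) = min over y in the support of g of f(x + y) - g(y)
    n, m = len(f), len(g)
    return [min(f[x + y] - g[y] for y in range(m))
            for x in range(n - m + 1)]

def grey_dilation(f, g):
    # (f ⊕ g)(x) = max over y in the support of g of f(x - y) + g(y)
    n, m = len(f), len(g)
    return [max(f[x - y] + g[y] for y in range(m))
            for x in range(m - 1, n)]

f = [1, 5, 2, 8, 3]
g = [0, 0, 0]               # flat structuring element
print(grey_erosion(f, g))   # [1, 2, 2]  (sliding minimum)
print(grey_dilation(f, g))  # [5, 8, 8]  (sliding maximum)
```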
Expanding this idea, morphological operations can
be defined on any set for which the infimum and
supremum of elements are defined in some sense. For
example, morphological operations for color images
cannot be defined straightforwardly from the above
classical definitions, since a color pixel value is a
vector and its infimum and supremum are not trivially
defined; the operations can be defined once the
infimum and supremum among colors are fixed [8, 9].
Such a set is called a lattice, and mathematical
morphology is generally defined as operations on a
lattice [10].
3. MATHEMATICAL MORPHOLOGY
AND IMAGE PROCESSING FILTER
3.1. Morphological filter
An image processing filter is generally an operation
performed at each pixel: a calculation is applied to the
pixel and the pixels in its neighborhood, and the pixel
value is replaced with the result, for purposes such as
noise removal. A morphological filter in the broader
sense is restricted to translation-invariant and
increasing operations. An operation Ψ on a set (image)
X is translation-invariant if

  Ψ(X_z) = Ψ(X)_z for all z    (15)

In other words, the effect of the operation is the same
wherever it is applied. An operation Ψ is increasing if

  X ⊆ Y implies Ψ(X) ⊆ Ψ(Y)    (16)

In other words, the relationship of inclusion is
preserved by the operation.
Consider a noise removing filter, for example. Since
noise objects in an image should be removed wherever
they are located, translation-invariance is naturally
required of noise removing filters. An increasing
operation can express an operation that removes
smaller objects and preserves larger ones, but not one
that removes larger objects and preserves smaller ones.
Noise objects are, however, usually smaller than
meaningful objects, so it is also natural to consider
increasing operations only.³
A morphological filter in the narrower sense is a
translation-invariant, increasing, and idempotent
operation. A filter Ψ is idempotent if

  Ψ(Ψ(X)) = Ψ(X)    (17)

² There is another definition of the morphological operations
which denotes the erosion in this text as X ⊖ B and calls the
Minkowski set addition in this text "dilation." Erosion and dilation
are not dual in that definition.
³ An edge detecting filter is not increasing, since it removes the
interior parts of objects.
Consequently, iterating Ψ is equivalent to applying
Ψ once. Opening and closing are the most basic
morphological filters in the narrower sense.
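These defining properties, including idempotence, can be checked numerically on a random binary image with a small self-contained sketch (illustrative, not from the paper):

```python
import random

def dilate(X, B):
    return {(x[0] + b[0], x[1] + b[1]) for x in X for b in B}

def erode(X, B):
    cand = {(x[0] - b[0], x[1] - b[1]) for x in X for b in B}
    return {z for z in cand
            if {(z[0] + b[0], z[1] + b[1]) for b in B} <= X}

def opening(X, B):
    return dilate(erode(X, B), B)

def shift(S, z):
    return {(s[0] + z[0], s[1] + z[1]) for s in S}

random.seed(0)
X = {(random.randrange(8), random.randrange(8)) for _ in range(30)}
B = {(0, 0), (0, 1), (1, 0)}
Y = opening(X, B)

# translation-invariant: opening a shifted image shifts the result
print(opening(shift(X, (2, 5)), B) == shift(Y, (2, 5)))  # True
# increasing: enlarging X can only enlarge its opening
print(Y <= opening(X | {(3, 3)}, B))                     # True
# idempotent: opening twice equals opening once
print(opening(Y, B) == Y)                                # True
```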
3.2. Filter theorem
The filter theorem states that every morphological
filter (in the broader sense) can be expressed as an OR
(union) of erosions, and also as an AND (intersection)
of dilations. It guarantees that almost all practical
filters can be expressed by morphological operations,
i.e. mathematical morphology is really a fundamental
operation set for manipulating image objects. Let Ψ(X)
be a filter on the image X. The theorem states that for
every Ψ there exists a set family Ker[Ψ] satisfying

  Ψ(X) = ∪_{B ∈ Ker[Ψ]} (X ⊖ B̌)    (18)

It also states that Ψ(X) can be expressed as an
intersection of dilations over a corresponding set
family:

  Ψ(X) = ∩_{B ∈ Ker[Ψ*]} (X ⊕ B̌)    (19)

where Ψ*(X) = (Ψ(X^c))^c denotes the dual filter of Ψ.
Here the set family Ker[Ψ] is called the kernel of the
filter Ψ, defined as follows:

  Ker[Ψ] = { B | 0 ∈ Ψ(B) }    (20)
where "0" indicates the origin. The proof of the filter
theorem in Eq. (18) is presented in the following; a
more general proof is found in Chap. 4 of [10].

Let us consider an arbitrary vector (pixel)
h ∈ X ⊖ B̌ for a structuring element B ∈ Ker[Ψ].
From the definition of erosion, B_h ⊆ X. Since Ψ is
increasing, the relationship B_h ⊆ X is preserved by the
filter, and thus Ψ(B_h) ⊆ Ψ(X). Since B ∈ Ker[Ψ], we
have 0 ∈ Ψ(B), and since Ψ is translation-invariant,
h ∈ Ψ(B)_h = Ψ(B_h) by translating by h. From the
above discussion, h ∈ Ψ(X) for every h ∈ X ⊖ B̌ and
every structuring element B ∈ Ker[Ψ]. Thus

  ∪_{B ∈ Ker[Ψ]} (X ⊖ B̌) ⊆ Ψ(X).

Let us now consider an arbitrary vector h ∈ Ψ(X).
Since Ψ is translation-invariant,
0 ∈ Ψ(X)_{−h} = Ψ(X_{−h}). By denoting X_{−h} by B, we
get B ∈ Ker[Ψ]. Moreover, B_h = X, and since X ⊆ X
is trivially satisfied, we get h ∈ X ⊖ B̌. Consequently,
there exists a structuring element B ∈ Ker[Ψ] such that
h ∈ X ⊖ B̌, i.e. any pixel in Ψ(X) is included in
X ⊖ B̌ for a certain structuring element B ∈ Ker[Ψ].
Thus

  Ψ(X) ⊆ ∪_{B ∈ Ker[Ψ]} (X ⊖ B̌).

From the above discussion, it holds that

  Ψ(X) = ∪_{B ∈ Ker[Ψ]} (X ⊖ B̌).
3.3. Morphological expressions of median
filter and average filter
The filter theorem guarantees that all translation-
invariant increasing filters can be constructed by
morphological operations. However, the kernel is
generally redundant, and each practical filter can often
be expressed by morphological operations with a
smaller number of structuring elements. In this
subsection, morphological expressions of the median
filter and the average filter are shown with examples.
Since these filters are usually applied to gray scale
images, the morphological and logical operations
reduce to minimum and maximum operations. Details
of the proofs are found in [11, 12].
3.3.1. Median filter: The median filter whose
window size is n pixels is expressed as the minimum of
maxima (or, equivalently, the maximum of minima)
over every possible subwindow of ⌊n/2⌋ + 1 pixels in
the window.
The operations deriving the maximum and the
minimum in each subwindow at every pixel are the
Minkowski set addition and set subtraction using the
subwindow as the structuring element, respectively.
Since the maximum and minimum operations are
extensions of the logical OR and AND operations in
the sense of fuzzy logic, the median filter is expressed
by a combination of morphological and logical
operations, as shown in Figs. 5 and 6.
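This construction is easy to verify for a 1-D window; in the sketch below (illustrative only), every subwindow of ⌊n/2⌋ + 1 pixels is taken as a subset of the window of that size:

```python
from itertools import combinations
import statistics

def median_as_max_of_mins(window):
    # maximum over all subwindows of floor(n/2) + 1 pixels
    # of the minimum inside the subwindow
    k = len(window) // 2 + 1
    return max(min(s) for s in combinations(window, k))

def median_as_min_of_maxs(window):
    # dual form: minimum of maxima over the same subwindows
    k = len(window) // 2 + 1
    return min(max(s) for s in combinations(window, k))

w = [7, 2, 9, 4, 5]
print(median_as_max_of_mins(w), median_as_min_of_maxs(w),
      statistics.median(w))  # 5 5 5
```

The subset of the ⌊n/2⌋ + 1 largest values attains the maximum of minima, and every other subset of that size has a smaller or equal minimum, which is why both forms recover the median.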
Fig. 5. Subwindows of ⌊n/2⌋ + 1 pixels.
3.3.2. Average filter: The simplest average filter
operation, that is, the average of two pixel values x and
y, is expressed by the minimum and the supremum as
follows:

  (x + y)/2 = sup_a min(x − a, y + a)    (21)

or

  (x + y)/2 = inf_a max(x − a, y + a)    (22)

as shown in Fig. 7.
Fig. 6. Median expressed by the maximum and minimum.
Fig. 7. Average expressed by the maximum and minimum.
4. GRANULOMETRY AND TEXTURE
ANALYSIS
A texture is an image composed of repetitively
appearing small structures, for example the surfaces of
textiles or microscopic images of ores. Texture
analysis is a fundamental application of mathematical
morphology, since morphology was originally
developed for the analysis of minerals in ores. In this
section, the concept of size in the sense of
mathematical morphology and the idea of
granulometry, which measures the granularity of image
objects, are explained. An example of texture analysis
applying granulometry, by the author, is also presented.
4.1. Granulometry and size distribution
The opening of image X with respect to structuring
element B is the residue of X obtained by removing
structures smaller than B. Opening thus works as a
filter distinguishing object structures by their sizes. Let
2B, 3B, . . . , be homothetic magnifications of the basic
structuring element B.
We then perform the opening of X with respect to
the homothetic structuring elements, and obtain the
image sequence X_B, X_2B, X_3B, . . . . In this sequence,
X_B is obtained by removing the regions smaller than
B, X_2B by removing the regions smaller than 2B, X_3B
by removing the regions smaller than 3B, and so on. If
B is convex, it holds that X ⊇ X_B ⊇ X_2B ⊇ X_3B ⊇ . . . .
The size of rB is defined as r, and this sequence of
openings is called a granulometry [10]. We then
calculate the ratio of the area (in the binary case) or the
sum of pixel values (in the gray scale case) of X_rB to
that of the original X at each r. The area of an image is
the area occupied by the image object, i.e. the number
of pixels composing the object in the case of discrete
images. The function mapping a size r to the
corresponding ratio is monotonically decreasing, and
equals unity when the size is zero. This function is
called the size distribution function. The value of the
size distribution function at size r indicates the area
ratio of the regions whose sizes are greater than or
equal to r.
Fig. 8. Granulometry and size density function
The r-times magnification of B, denoted rB, is
usually defined in the context of mathematical
morphology as follows:

  rB = B ⊕ B ⊕ · · · ⊕ B ((r − 1) additions), 0B = {0}    (23)

where {0} denotes a single dot at the origin.
Let us consider the differentiation of the size
distribution function. In the case of discrete sizes, it is
equivalent to the area differences of the image pairs
corresponding to adjacent sizes in X_B, X_2B, X_3B, . . . .
For example, the area difference between X_2B and X_3B
corresponds to the part included in X_2B but excluded
from X_3B, that is, the part whose size is exactly 2. The
sequence of areas corresponding to each exact size,
derived as above, is called the pattern spectrum [13],
and the sequence of these areas relative to the area of
the original object is called the size density function
[14]. An example of a granulometry and size density
function is illustrated in Fig. 8.
The size distribution function and size density
function have properties similar to those of a
probability distribution function and probability
density function, respectively, which is why these
names are given. Similarly to probability distributions,
the average and the variance of the size of objects in an
image can be considered. Higher moments of a size
distribution can also be defined; these are called
granulometric moments, and image objects can be
characterized using them [14–16].
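A granulometry and its size distribution can be sketched with the same set-based operations (an illustrative toy example, not from the paper: a 2×2 primitive B, its homothetic magnifications rB, and an image containing a 4×4 and a 2×2 square):

```python
def dilate(X, B):
    return {(x[0] + b[0], x[1] + b[1]) for x in X for b in B}

def erode(X, B):
    cand = {(x[0] - b[0], x[1] - b[1]) for x in X for b in B}
    return {z for z in cand
            if {(z[0] + b[0], z[1] + b[1]) for b in B} <= X}

def opening(X, B):
    return dilate(erode(X, B), B)

def magnify(B, r):
    # rB = B ⊕ B ⊕ ... ⊕ B  ((r - 1) Minkowski additions)
    M = B
    for _ in range(r - 1):
        M = dilate(M, B)
    return M

def size_distribution(X, B, rmax):
    # F(r) = area(X opened by rB) / area(X), with F(0) = 1
    area = len(X)
    return [1.0] + [len(opening(X, magnify(B, r))) / area
                    for r in range(1, rmax + 1)]

# a 4x4 square (size 3: 3B = 4x4 just fits) and a 2x2 square (size 1)
X = ({(r, c) for r in range(4) for c in range(4)} |
     {(r, c) for r in range(6, 8) for c in range(6, 8)})
B = {(0, 0), (0, 1), (1, 0), (1, 1)}
F = size_distribution(X, B, 4)
print(F)  # [1.0, 1.0, 0.8, 0.8, 0.0]
# size density (pattern spectrum relative to area): mass at sizes 1 and 3
print([round(F[r] - F[r + 1], 3) for r in range(4)])  # [0.0, 0.2, 0.0, 0.8]
```

F is monotonically decreasing from unity, and its discrete differences put mass 0.2 at size 1 (the 2×2 square, 4 of the 20 pixels) and 0.8 at size 3 (the 4×4 square), as the definitions above predict.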
4.2. Application to texture analysis
As described in Sec. 2, morphological opening is a
regeneration of an image by an arrangement of the
structuring element; it removes white regions (in the
binary case) or regions brighter than their
neighborhood (in the gray scale case) that are smaller
than the structuring element. Opening is therefore
effective for eliminating noisy pixels that are brighter
than their neighborhood.
Since opening generates the resultant image as an
arrangement of the structuring element, the shape and
pixel value distribution of the structuring element
appear directly in the resultant image. This causes
artifacts if the shape and pixel value distribution are
not related to the original image.
These artifacts can be suppressed by using a
structuring element resembling the shapes and pixel
value distributions contained in the original image.
Such a structuring element cannot be defined in
general, but it can be estimated for texture images,
since a texture is composed of an arrangement of small
objects appearing repetitively.
We explain in this subsection a method of
developing the optimal artifact-free opening for noise
removal in texture images [17]. This method estimates
a structuring element resembling the small objects
appearing repetitively in the target texture. This is
achieved based on the Primitive, Grain, and Point
Configuration (PGPC) texture model, which we have
proposed to describe a texture, and an optimization
method with respect to the size distribution function.
The optimal artifact-suppressing opening is achieved
by using the estimated structuring element. In the case
of noise removal, the primitive cannot be estimated
from the target image itself, since the original
uncorrupted image corresponding to the target is
unknown. This problem is similar to that of image
processing by learning, which estimates the optimal
filter parameters by giving an example of a corrupted
image and its ideal output to a learning mechanism
[18–20]. In the case of a texture image, however, if an
uncorrupted sample of a texture similar to the target
corrupted image is available, the primitive can be
estimated from this sample, since the sample and the
target image are different realizations sharing common
textural characteristics.
4.2.1. PGPC texture model and estimation of the
optimal structuring element: The PGPC texture model
regards a texture as an image composed of a regular or
irregular arrangement of objects that are much smaller
than the image and resemble one another. The objects
arranged in a texture are called grains, and the grains
are regarded as derived from one or a few typical
objects called primitives.
We assume here that the grains are derived from
one primitive by homothetic magnification. We also
assume that the primitive is expressed by a structuring
element B, and let X be the target texture image. In this
case, X_rB is regarded as the texture image composed
by an arrangement of rB only. It follows that
X_rB − X_(r+1)B indicates the region included in the
arrangement of rB but not in that of (r+1)B.
Consequently, X_rB − X_(r+1)B is the region where r-size
grains are arranged if X is expressed by an
arrangement of grains that are preferably large
magnifications of the primitive. The sequence X − X_B,
X_B − X_2B, . . . , X_rB − X_(r+1)B, . . . , is the
decomposition of the target texture into arrangements
of grains of each size.
Since the sequence can be derived by using any
structuring element, it is necessary to estimate the
appropriate primitive that is a really typical
representative of the grains. We employ an idea that
the structuring element yielding the simplest grain
arrangement is the best estimate of the primitive,
similarly to the principle of minimum description
length (MDL). A simple arrangement locates a small
number of large magnifications to express a
large part of the texture image, in contrast to an
arrangement of a large number of small-size
magnifications. We derive the estimate by finding the
structuring element minimizing the integral of 1 −
F(r), where F(r) is the size distribution function with
respect to size r. The function 1 − F(r) is 0 for r = 0
and monotonically increasing, and 1 for the maximum
size required to compose the texture by the
magnification of this size. Consequently, if the
integral of 1−F(r) is minimized as illustrated in Fig. 9,
the sizes of the employed magnifications concentrate at
relatively large sizes, and the structuring element in
this case expresses the texture using the largest
possible magnifications. We regard this structuring
element as the estimate of primitive.
We estimate the gray scale structuring element in
two steps: the shape of the structuring element is
estimated by the above method in the first step, and
the gray scale value at each pixel in the primitive
estimated in the first step is then estimated. However,
if the above method is applied to the gray scale
estimation, the estimate often has a small number of
high-value pixels, while the other pixels have values of
almost zero. This is because the umbra of any object
can be composed by arranging the umbra of a one-pixel
structuring element, as illustrated in Fig. 10. Such an
estimate is clearly not desirable. We therefore modify
the method and minimize 1 − F(1), i.e., the residual
area of X∘B. In
this case, the composition by this structuring element
and its magnification is the most admissible when the
residual area is the minimum, since the residual region
cannot be composed of even the smallest
magnification.
The exploration of the structuring element can be
performed by simulated annealing, which iteratively
modifies the structuring element to find the best
estimate minimizing the evaluation function described
above [21].
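A minimal generic annealing loop of this kind might look as follows. This is a sketch, not the authors' implementation: in the actual method the candidate would be a structuring-element shape (with connectivity and pixel-count constraints on the moves) and `cost` would be the evaluation function above.

```python
import math
import random

def anneal(initial, neighbor, cost, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Simulated annealing: repeatedly propose a modified candidate and
    accept it with the Metropolis probability exp(-increase / t)."""
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        c = cost(cand)
        if c <= current_cost or rng.random() < math.exp((current_cost - c) / t):
            current, current_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling
    return best, best_cost
```

For example, `anneal(0, lambda x, rng: x + rng.choice([-1, 1]), lambda x: abs(x - 37))` walks to the minimum at 37; with a shape-valued candidate and the size-distribution cost, the same loop performs the exploration described above.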
Fig. 9. Function 1 − F(r). Size r is actually discrete for digital
images. (a) Function and its integral. (b) Minimization of the integral.
Fig. 10. Any object can be composed by arranging a one-pixel
structuring element.
4.2.2. Experimental results: Figures 11 and 12
show example experimental results of noise
removal using the estimated primitives as the
structuring elements. All images contain 64×64 8-bit
gray scale pixels. In each example, the gray scale
primitive shown in (b) is estimated for the example
image (a). Each small square in (b) corresponds to one
pixel in the primitive, and the shape is expressed by
the arrangement of white squares. The primitive is
explored among connected figures of nine pixels within
a 5 × 5-pixel square. The gray scale value is explored by
setting the initial pixel value to 50 and modifying the
value in the range of 0 to 100.
The opening using the primitive (b) as the
structuring element is performed on the corrupted
image (c). This image is generated by adding a
uniformly distributed random value in the range
between 0 and 255 to 1000 randomly selected pixels
of an image extracted from a different realization of
the same texture as (a). Since opening eliminates
brighter peaks of small extent, this kind of noise is
employed for this experiment.
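The corruption-and-opening pipeline can be reproduced on a synthetic stand-in texture. Everything here is an illustrative assumption, not the paper's data: the blob texture, the flat 3 × 3 structuring element standing in for the estimated primitive, and the variable names.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic 64 x 64 8-bit stand-in texture: dark background with
# bright 3 x 3 grains.
clean = np.full((64, 64), 30.0)
for i, j in rng.integers(2, 60, size=(40, 2)):
    clean[i:i + 3, j:j + 3] = 180.0

# Corrupt 1000 randomly selected pixels by adding a uniformly
# distributed value in [0, 255], clipped to the 8-bit range.
corrupted = clean.copy()
idx = rng.choice(64 * 64, size=1000, replace=False)
corrupted.flat[idx] = np.clip(corrupted.flat[idx] + rng.uniform(0, 255, 1000), 0, 255)

# Opening with a flat 3 x 3 structuring element removes bright peaks
# of small extent while preserving the 3 x 3 grains.
opened = ndimage.grey_opening(corrupted, size=(3, 3))

def mse(a, b):
    """Sum of squared pixelwise differences divided by the pixel count."""
    return np.mean((a - b) ** 2)
```

Because every grain is itself a 3 × 3 square, the opening restores the grains exactly while stripping the isolated bright noise pixels, so the MSE against the clean image drops sharply.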
The result using the estimated primitive is shown in (d),
that using the flat structuring element whose shape is
the same as (b) is shown in (e), and that using the 3 ×
3-pixel square flat structuring element is shown in (f).
The “MSE” attached to each resultant image is
defined as the sum of squared pixelwise differences
between each image and the original uncorrupted
version of (c), which is not shown here, divided by the
number of pixels in the image.
Fig. 11. Experimental results (1).
The results in (d) show the high effectiveness of our
method in noise removal and detail preservation. The
results using the square structuring element contain
artifacts, since the square shape appears directly in the
results, and the results using the binary primitives
yield regions of unnaturally uniform pixel values. The
comparison of (d) and (e) indicates that the
optimization of binary structuring elements alone is
insufficient and that grayscale optimization is
necessary.
Fig. 12. Experimental results (2).
In these examples, the assumptions that the grains
are derived from one primitive by homothetic
magnification and the primitive is expressed by one
structuring element are not exactly satisfied. However,
the results indicate that our method is practically
applicable even in these cases.
5. CONCLUSIONS
This invited talk has explained the fundamental
concept of mathematical morphology, the filter
theorem and the relationship to image processing
filters, and the concept of size distribution and its
application to texture analysis, which is one of the
author’s research topics. The importance of
mathematical morphology is that it gives a
“mathematical” framework based on set operations to
operations on shapes and sizes of image objects.
Mathematical morphology has its origin in the
research of minerals. If the researchers of
mathematical morphology had concentrated only on
practical problems and had not made the effort of
mathematical formalization, mathematical
morphology could not have been extended to general
image processing or spatial statistics. This suggests
that research considering general frameworks is
always important, whatever the topic.
Acknowledgements
The author would like to thank Dr. Daniel Siahaan,
the Organizing Committee Chairman, and all the
Committee members, for this opportunity of the
invited talk in ICTS2005.
REFERENCES
[1] J. Serra, Image analysis and mathematical
morphology, Academic Press, 1982. ISBN
0-12-637242-X
[2] J. Serra, ed., Image analysis and mathematical
morphology Volume 2, Technical advances,
Academic Press, 1988. ISBN 0-12-637241-1
[3] P. Soille, Morphological Image Analysis, 2nd
Ed., Springer, 2003.
[4] P. Maragos, Tutorial on advances in
morphological image processing and analysis,
Optical Engineering, 26, 1987, 623–632.
[5] R. M. Haralick, S. R. Sternberg, and X. Zhuang,
Image Analysis Using Mathematical
Morphology, IEEE Trans. Pattern Anal.
Machine Intell., PAMI-9, 1987, 532–550.
[6] G. Matheron, J. Serra, The birth of
mathematical morphology, Proc. 6th
International Symposium on Mathematical
Morphology, 1–16, CSIRO Publishing, 2002.
ISBN 0-643-06804-X
[7] International Symposium on Mathematical
Morphology, 40 years on
(http://ismm05.esiee.fr).
[8] M. L. Corner and E. J. Delp, Morphological
operations for color image processing, Journal
of Electronic Imaging, 8(3), 1999, 279–289.
[9] G. Louverdis, M. I. Vardavoulia, I. Andreadis,
and Ph. Tsalides, A new approach to
morphological color image processing, Pattern
Recognition, 35, 2002, 1733–1741.
[10] H. J. A. M. Heijmans, Morphological Image
Operators, Academic Press, 1994. ISBN
0-12-014599-5
[11] P. Maragos and R. W. Schafer, Morphological
Filters, Part I: Their Set-Theoretic Analysis
and Relations to Linear Shift-Invariant Filters,
IEEE Trans. Acoust., Speech, Signal
Processing, ASSP-35(8), 1987, 1153–1169.
[12] P. Maragos and R. W. Schafer, Morphological
Filters, Part II: Their Relations to Median,
Order-Statistic, and Stack Filters, IEEE Trans.
Acoust., Speech, Signal Processing, ASSP-
35(8), 1987, 1170–1184.
[13] P. Maragos, Pattern spectrum and multiscale
shape representation, IEEE Trans. Pattern
Anal. Machine Intell., 11, 1989, 701–716.
[14] E. R. Dougherty, J. T. Newell, and J. B. Pelz,
Morphological texture-based maximum-
likelihood pixel classification based on local
granulometric moments, Pattern Recognition,
25, 1992, 1181–1198.
[15] F. Sand and E. R. Dougherty, Asymptotic
granulometric mixing theorem, morphological
estimation of sizing parameters and mixture
proportions, Pattern Recognition, 31, 1998,
53–61.
[16] F. Sand and E. R. Dougherty, Robustness of
granulometric moments, Pattern Recognition,
32, 1999, 1657–1665.
[17] A. Asano, Y. Kobayashi, C. Muraki, and M.
Muneyasu, Optimization of gray scale
morphological opening for noise removal in
texture images, Proc. 47th IEEE International
Midwest Symposium on Circuits and Systems,
1, 2004, 313–316.
[18] A. Asano, T. Yamashita, and S. Yokozeki,
Learning optimization of morphological filters
with grayscale structuring elements, Optical
Engineering, 35(8), 1996, 2203–2213.
[19] N. R. Harvey and S. Marshall, The use of
genetic algorithms in morphological filter
design, Signal Processing, Image
Communication, 8, 1996, 55–71.
[20] N. S. T. Hirata, E. R. Dougherty, and J.
Barrera, Iterative Design of Morphological
Binary Image Operators, Optical Engineering,
39(12), 2000, 3106–3123.
[21] A. Asano, T. Ohkubo, M. Muneyasu, and T.
Hinamoto, Primitive and Point Configuration
texture model and primitive estimation using
mathematical morphology, Proc. 13th
Scandinavian Conf. on Image Analysis,
Göteborg, Sweden; Springer LNCS 2749,
2003, 178–185.
MOLECULAR DYNAMICS SIMULATION ON A METALLIC
GLASS-SYSTEM: NON-ERGODICITY PARAMETER
Achmad Benny Mutiara
Dept. of Informatics Engineering, Faculty of Industrial Technology, Gunadarma University
Jl.Margonda Raya No.100, Depok 16424, West-Java Indonesia
E-mail: amutiara@staff.gunadarma.ac.id
ABSTRACT
In the present paper we compute the non-ergodicity
parameter from molecular dynamics (MD) simulation
data following the mode-coupling theory (MCT) of
the glass transition. The MCT of dense liquids marks
the dynamic glass transition through a critical
temperature Tc that is reflected in the temperature
dependence of various physical quantities.
Here, molecular dynamics simulation data of a
model adapted to Ni0.2Zr0.8 are analyzed to deduce
Tc from the temperature dependence of the
corresponding quantities and to check the consistency
of the statements. The diffusion coefficients are
analyzed. The resulting values agree well with the
critical temperature of the non-vanishing non-
ergodicity parameter determined from the structure
factors in the asymptotic solution of the mode-
coupling theory with memory kernels in one-loop
approximation.
Keywords: Glass Transition, Molecular Dynamics
Simulation, MCT
1. INTRODUCTION
The transition from a liquid to an amorphous solid
that sometimes occurs upon cooling remains one of
the largely unresolved problems of statistical physics
[1,2]. At the experimental level, the so-called glass
transition is generally associated with a sharp increase
in the characteristic relaxation times of the system,
and a concomitant departure of laboratory
measurements from equilibrium. At the theoretical
level, it has been proposed that the transition from a
liquid to a glassy state is triggered by an underlying
thermodynamic (equilibrium) transition [3]; in that
view, an “ideal” glass transition is believed to occur at
the so-called Kauzmann temperature, TK. At TK, it is
proposed that only one minimum-energy basin of
attraction is accessible to the system. One of the first
arguments of this type is due to Gibbs and DiMarzio
[4], but more recent studies using replica methods
have yielded evidence in support of such a transition
in Lennard-Jones glass formers [3,5,6]. These
observations have been called into question by
experimental data and recent results of simulations of
polydisperse hard-core disks, which have failed to
detect any evidence of a thermodynamic transition up
to extremely high packing fractions [7]. One of the
questions that arises is therefore whether the
discrepancies between the reported simulated behavior
of hard-disk and soft-sphere systems are due to
fundamental differences in the models, or whether
they are a consequence of inappropriate sampling at
low temperatures and high densities.
Different, alternative theoretical considerations
have attempted to establish a connection between
glass transition phenomena and the rapid increase in
relaxation times that arises in the vicinity of a
theoretical critical temperature (the so-called “mode-
coupling” temperature, Tc), thereby giving rise to a
“kinetic” or “dynamic” transition [8]. In recent years,
both viewpoints have received some support from
molecular simulations. Many of these simulations
have been conducted in the context of models
introduced by Stillinger and Weber and by Kob and
Andersen [9]; such models have been employed in a
number of studies that have helped shape our current
views about the glass transition [5,10–14].
In the full MCT, the remainders of the transition
and the value of Tc have to be evaluated, e.g., from the
approach of the undercooled melt towards the
idealized arrested state, either by analyzing the time
and temperature dependence in the β-regime of the
structural fluctuation dynamics [15–17] or by
evaluating the temperature dependence of the so-
called gm-parameter [18,19]. There are further
possibilities to estimate Tc, e.g., from the temperature
dependence of the diffusion coefficients or the
relaxation time of the final α-decay in the melt, as
these quantities for T > Tc display a critical behaviour
|T − Tc|^(±γ). However, only crude estimates of Tc can be
obtained from these quantities, since near Tc the
critical behaviour is masked by the effects of
transverse currents and thermally activated matter
transport, as mentioned above.
On the other hand, as emphasized and applied in
[20–22], the value of Tc predicted by the idealized
MCT can be calculated once the partial structure
factors of the system and their temperature
dependence are sufficiently well known. Besides
temperature and particle concentration, the partial
structure factors are the only significant quantities
which enter the equations of the so-called
nonergodicity parameters of the system. The latter
vanish identically for temperatures above Tc and their
calculation thus allows a rather precise determination
of the critical temperature predicted by the idealized
theory.
At this stage it is tempting to consider how well
the estimates of Tc from different approaches fit
together and whether the Tc estimate from the non-
ergodicity parameters of the idealized MCT compares
to the values from the full MCT. Regarding this, we
here investigate a molecular dynamics (MD)
simulation model adapted to the glass-forming
Ni0.2Zr0.8 transition metal system. The NixZr1−x-
system is well studied by experiments [23,24] and by
MD-simulations [25–29], as it is a rather interesting
system whose components are important constituents
of a number of multi-component ’massive’ metallic
glasses. In the present contribution we consider, in
particular, the x = 0.2 compositions and concentrate
on the determination of Tc from evaluating and
analyzing the non-ergodicity parameter and the
diffusion coefficients.
In the literature, similar comparisons of Tc estimates
already exist [20–22] for two systems. The studies
come, however, to rather different conclusions. From
MD simulations of a soft-sphere model, Barrat et al.
[20] find agreement between the different Tc
estimates within about 15%. On the other hand, for a
binary Lennard-Jones system, Nauroth and Kob [22]
obtain from their MD simulations a significant deviation
between the Tc estimates, by about a factor of 2.
Regarding this, the present investigation is aimed at
clarifying the situation for at least one of the important
metallic glass systems. Our paper is organized as
follows: In Section II, we present the model and give
some details of the computations.
Section III gives a brief discussion of some aspects
of the mode coupling theory as used here. Results of
our MD-simulations and their analysis are then
presented and discussed in Section IV.
2. SIMULATIONS
The present simulations are carried out as state-of-
the-art isothermal-isobaric (N, T, p) calculations. The
Newtonian equations of N = 648 atoms (130 Ni and
518 Zr) are numerically integrated by a fifth-order
predictor-corrector algorithm with time step ∆t =
2.5×10^−15 s in a cubic volume with periodic boundary
conditions and variable box length L. With regard to
the electron theoretical description of the interatomic
potentials in transition metal alloys by Hausleitner and
Hafner [30], we model the interatomic couplings as in
[26] by a volume dependent electron-gas term Evol(V )
and pair potentials φ(r) adapted to the equilibrium
distance, depth, width, and zero of the Hausleitner-
Hafner potentials [30] for Ni0.2Zr0.8 [31]. For this
model simulations were started through heating a
starting configuration up to 2000 K which leads to a
homogeneous liquid state. The system then is cooled
continuously to various annealing temperatures with
cooling rate −∂tT = 1.5×10^12 K/s. Afterwards the
configurations obtained at the various annealing
temperatures (here 1500–800 K) are relaxed by
carrying out additional isothermal annealing runs at the
selected temperature. Finally the time evolution of
these relaxed configurations is modelled and
analyzed. More details of the simulations are given in
[31].
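The fifth-order Gear predictor-corrector named above can be sketched in one dimension. This is a toy illustration with the standard textbook corrector coefficients, not the production code used for the alloy simulations; the function names are ours.

```python
import numpy as np

# Gear corrector coefficients for a second-order ODE, 6-value
# (fifth-order) scheme, as tabulated in standard MD texts.
ALPHA = np.array([3/20, 251/360, 1.0, 11/18, 1/6, 1/60])

def gear5_step(r, force, m, dt):
    """One predictor-corrector step. r[k] holds the scaled derivative
    (dt**k / k!) * d^k x / dt^k; r is modified in place."""
    # Predictor: Taylor expansion (Pascal-triangle update); each line
    # uses only the not-yet-updated higher derivatives.
    r[0] += r[1] + r[2] + r[3] + r[4] + r[5]
    r[1] += 2*r[2] + 3*r[3] + 4*r[4] + 5*r[5]
    r[2] += 3*r[3] + 6*r[4] + 10*r[5]
    r[3] += 4*r[4] + 10*r[5]
    r[4] += 5*r[5]
    # Corrector: difference between the second derivative implied by
    # the actual force and the predicted one, fed back to all orders.
    delta = force(r[0]) / m * dt**2 / 2.0 - r[2]
    r += ALPHA * delta
    return r
```

Driving a harmonic oscillator x'' = −x with dt = 0.01 conserves the energy 0.5·(v² + x²) to high accuracy over thousands of steps, which is the practical sanity check for such an integrator.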
3. THEORY
In this section we provide some basic formulae
that permit calculation of Tc and the non-ergodicity
parameters fij(q) for our system. A more detailed
presentation may be found in Refs. [20–22,32,33]. The
central objects of the MCT are the partial intermediate
scattering functions, which are defined for a binary
system by [34]

Fij(q, t) = (1/N) ⟨ρi(q, t) ρj(−q, 0)⟩ ,

where

ρi(q, t) = Σα exp(iq · rα(i)(t))

is a Fourier component of the microscopic density
of species i. The diagonal terms α = β are denoted as
the incoherent intermediate scattering function

Fsi(q, t) = (1/Ni) Σα ⟨exp(iq · [rα(i)(t) − rα(i)(0)])⟩ .

The normalized partial and incoherent intermediate
scattering functions are given by

Φij(q, t) = Fij(q, t)/Sij(q) ,  Φsi(q, t) = Fsi(q, t) ,

where the Sij(q) = Fij(q, t = 0) are the partial static
structure factors. The basic equations of the MCT are
the set of nonlinear matrix integrodifferential
equations given by

∂t²F(q, t) + Ω²(q) F(q, t) + ∫0t dτ M(q, t − τ) ∂τF(q, τ) = 0 ,   (6)

where F is the 2×2 matrix consisting of the partial
intermediate scattering functions Fij(q, t), and the
frequency matrix Ω² is given by

[Ω²(q)]ij = q² (kB T xi/mi) [S−1(q)]ij .

S(q) denotes the 2×2 matrix of the partial
structure factors Sij(q), xi = Ni/N, and mi is the
atomic mass of species i. The MCT for the
idealized glass transition predicts [8] that the memory
kernel M can be expressed at long times by
where ρ = N/V is the particle density and the
vertex Viαβ(q, k) is given by
and the matrix of the direct correlation function is
defined by
The equation of motion for Fsi(q, t) has a form
similar to eq. (6), but the memory function for the
incoherent intermediate scattering function is given
by:
In order to characterize the long-time behaviour of
the intermediate scattering functions, the non-
ergodicity parameters f(q) are introduced as

fij(q) = lim t→∞ Φij(q, t) .

These parameters are the solution of eqs. (6)-(10)
at long times. The meaning of these parameters is the
following: if fij(q) = 0, then the system is in a liquid
state, with density fluctuation correlations decaying at
long times; if fij(q) > 0, the system is in an arrested,
nonergodic state, where density fluctuation
correlations are stable for all times. In order to
compute fij(q), one can use the following iterative
procedure [22]:
where the matrices A(q), B(q), C(q), D(q), and N(q)
are given by
This iterative procedure has two types of
solutions: nontrivial ones with f(q) > 0 and trivial
solutions f(q) = 0. The incoherent non-ergodicity
parameter fsi(q) can be evaluated by the following
iterative procedure:
As indicated by eq. (20), computation of the
incoherent non-ergodicity parameter fsi(q) demands
that the coherent non-ergodicity parameters be
determined in advance.
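The full iteration requires the matrices above and the partial structure factors. The fixed-point structure itself, however, can be illustrated on the schematic one-component F2 model of MCT, with memory kernel m(f) = v2·f² and f/(1 − f) = m(f); this is an illustrative substitute for, not a reproduction of, the binary procedure of [22].

```python
def nonergodicity_parameter(v2, f0=0.99, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for the non-ergodicity parameter of the
    schematic F2 model: f = m(f) / (1 + m(f)) with m(f) = v2 * f**2.
    Starting near f = 1 converges to the largest solution."""
    f = f0
    for _ in range(max_iter):
        m = v2 * f * f
        f_next = m / (1.0 + m)
        if abs(f_next - f) < tol:
            return f_next
        f = f_next
    return f
```

Below the critical coupling v2 = 4 the iteration collapses to the trivial solution f = 0; above it a stable non-vanishing f survives, mirroring the liquid/arrested distinction described above.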
4. RESULTS AND DISCUSSIONS
4.1 Partial structure factors and
intermediate scattering functions
First we show the results of our simulations
concerning the static properties of the system in terms
of the partial structure factors Sij(q) and partial
correlation functions gij(r). To compute the partial
structure factors Sij(q) for a binary system we use the
following definition [35]

Sij(q) = xi δij + 4πρ xi xj ∫0∞ dr r² [gij(r) − 1] sin(qr)/(qr) ,   (21)

where the gij(r) are the partial pair correlation
functions. The MD
simulations yield a periodic repetition of the atomic
distributions with periodicity length L. Truncation of
the Fourier integral in Eq.(21) leads to an oscillatory
behavior of the partial structure factors at small q. In
order to reduce the effects of this truncation, we
compute from Eq.(22) the partial pair correlation
functions for distances r up to Rc = (3/2)L. For
numerical evaluation of eq. (21), a Gaussian-type
damping term is included
FIG. 1. Partial structure factors at T = 1400 K, 1300 K, 1200 K,
1100 K, 1000 K, 900 K and 800 K (from top to bottom); a) Ni-Ni-
part, the curves are vertically shifted by 0.05 relative to each other;
b) Ni-Zr-part, the curves are vertically shifted by 0.1 relative to each
other; and c) Zr-Zr-part, the curves are vertically shifted by 0.5
relative to each other.
FIG. 2. Comparison between our MD simulations and experimental
results [23] for the total Faber-Ziman structure factor SFZtot(q) and
the partial Faber-Ziman structure factors aij(q) for Ni0.2Zr0.8.
with R = Rc/3. Fig. 1 shows the partial structure
factors Sij(q) versus q for all temperatures
investigated. The figure indicates that the shape of
Sij(q) depends only weakly on temperature and that, in
particular, the positions of the first maximum and the
first minimum in Sij(q) are more or less temperature
independent. In order to compare our calculated
structure factors with experimental ones, we have
determined the Faber-Ziman partial structure factors
aij(q) [37] and the Faber-Ziman total structure factor
SFZtot(q) [36]. For a binary system with coherent
scattering lengths bi of species i the following
relationship holds:
In the evaluation of aij(q), we applied the same
algorithm as for Sij(q). By using aij(q) and with the aid
of the experimental data of the average scattering
FIG. 4. The same as Fig. 3 but for the incoherent intermediate
scattering functions; a) Ni part and b) Zr part.
For T > 1200 K we always obtain the trivial
solution fij(q) = 0, while at T = 1100 K and below we
obtain stable non-vanishing fij(q) > 0. The stability of
the non-vanishing solutions was tested for more than
3000 iteration steps. From these results we expect that
Tc for our system lies between 1100 and 1200 K. To
estimate Tc more precisely, we interpolated Sij(q) from
our MD data for temperatures between 1100 and 1200
K by use of the algorithm of Press et al. [39]. We
observe that at T = 1102 K a non-trivial solution of
fij(q) can be found, but not at T = 1105 K and above.
This means that the critical temperature Tc for our
system is around 1102 K. The non-trivial solutions fij(q) for this
temperature shall be denoted the critical non-
ergodicity parameters fcij(q). They are included in Fig.
3. As can be seen from Fig. 3, the absolute values and
the q-dependence of the calculated fcij(q) agree rather
well with the estimates from the scattering-function
master curves and, in particular, with the deduced
Kohlrausch-Williams-Watts amplitudes A(q) at 1100
K. By use of the critical non-ergodicity parameters
fcij(q), the computational procedure was run to
determine the critical non-ergodicity parameters
fsci(q) for the incoherent scattering functions at T =
1102 K. Fig. 4 presents our results for the so-calculated
fsci(q). Like Fig. 3 for the coherent non-ergodicity
parameters, Fig. 4 demonstrates that the fsci(q) agree
well with the estimates from the incoherent
scattering-function master curves and, in particular,
with the deduced Kohlrausch-Williams-Watts
amplitudes A(q) at 1100 K.
4.3 Diffusion coefficients
From the simulated atomic motions in the
computer experiments, the diffusion coefficients of the
Ni and Zr species can be determined from the slope of
the atomic mean square displacements in the
asymptotic long-time limit,

Di = lim t→∞ ⟨|ri(t) − ri(0)|²⟩ / (6t) .
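This slope extraction can be checked on a hypothetical stand-in trajectory, a 3D random walk whose true diffusion coefficient is known in advance; the names and numbers here are illustrative, not simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in trajectory: a 3D random walk whose per-coordinate step
# variance 2*D*dt makes the true diffusion coefficient D_true = 0.5
# (arbitrary units).
D_true, dt, n_steps = 0.5, 1.0, 20000
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 3))
traj = np.cumsum(steps, axis=0)

# Mean square displacement over a range of lag times, averaged over
# time origins.
lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                for lag in lags])

# D is the asymptotic slope of MSD(t) / (6 t).
D_est = np.polyfit(lags * dt, msd, 1)[0] / 6.0
```

For a true MD trajectory one would restrict the fit to the late, linear portion of the MSD, where ballistic and cage effects have died out.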
FIG. 5. Diffusion coefficients Di as a function of 1000/T. Symbols
are MD results for Ni (squares) and Zr (diamonds); the full lines are
power-law approximations for Ni and Zr, respectively.
Fig. 5 shows the thus-calculated diffusion
coefficients of our Ni0.2Zr0.8 model for the
temperature range between 800 and 2000 K. At
temperatures above approximately 1250 K, the
diffusion coefficients for both species run parallel
with temperature in the Arrhenius plot, indicating a
fixed ratio DNi/DZr ≈ 2.5 in this temperature regime.
At lower temperatures, the Zr atoms have a lower
mobility than the Ni atoms, yielding around 900 K a
value of about 10 for DNi/DZr. That means that here
the Ni atoms carry out a rather rapid motion within a
relatively immobile Zr matrix. According to the MCT,
above Tc the diffusion coefficients follow a critical
power law

Di(T) ∝ (T − Tc)^γ ,
with non-universal exponent γ [9,38]. In order to
estimate Tc from this relationship, we have adjusted the
critical power law by a least-mean-squares fit to the
simulated diffusion data for 1050 K and above. The
results of the fit are included in Fig. 5 as dashed lines.
According to this fit, the system has a critical
temperature of 950 K. The exponents γ turn out to be
1.8 for the Ni subsystem and 2.0 for the Zr subsystem.
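A least-squares power-law fit of this kind can be sketched as follows. This is a hedged illustration: the grid search over trial Tc values, the function name, and the synthetic test data are our assumptions, not the authors' fitting code.

```python
import numpy as np

def fit_power_law(T, D, Tc_grid):
    """For each trial Tc, fit log D = log A + gamma * log(T - Tc) by
    linear least squares and keep the Tc with the smallest residual."""
    best = None
    for Tc in Tc_grid:
        x = T - Tc
        if np.any(x <= 0):
            continue  # power law undefined at or below the trial Tc
        coef, res, *_ = np.polyfit(np.log(x), np.log(D), 1, full=True)
        r = res[0] if len(res) else 0.0
        if best is None or r < best[0]:
            best = (r, Tc, coef[0])
    _, Tc_best, gamma = best
    return Tc_best, gamma
```

On synthetic data generated with Tc = 950 K and γ = 1.8, the grid search recovers both parameters, since the log-log fit becomes exactly linear at the true Tc.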
5. CONCLUSION
The results of our MD simulations show that our
system behaves as predicted by MCT in the sense
that the diffusion coefficients follow the critical power
law. From the analysis of these coefficients we find
that the system has a critical temperature of 950 K for
the diffusion processes. Our analysis of the ergodic
region (T > Tc) and of the non-ergodic region (T < Tc)
leads to Tc estimates which agree with each other
within 10%. These Tc estimates are also in acceptable
agreement with the Tc estimate from the dynamic
phenomena. Within the scope of the precision of our
analysis, the critical temperature Tc of our system is
about 1000 K.
REFERENCES
[1] W. Götze and M. Sperl, J. Phys.: Condens.
Matter 16, 4807 (2004); W. Götze and M.
Sperl, Phys. Rev. Lett. 92, 105701 (2004)
[2] P. G. Debenedetti and F.H. Stillinger, Nature
410(6825), 259 (2001).
[3] M. Mezard and G. Parisi, Phys.Rev.Lett. 82(4),
747 (1999).
[4] J.H. Gibbs and E. A. DiMarzio, J.Chem.Phys.
28(3), 373 (1958).
[5] B. Coluzzi, G. Parisi, and P. Verrocchio,
Phys. Rev. Lett. 84(2), 306 (2000).
[6] T. S. Grigera and G. Parisi, Phys. Rev. E 63,
045102(R) (2001).
[7] L. Santen and W. Krauth, Nature 405(6786),
550 (2000).
[8] W. Götze and L. Sjögren, Rep. Prog. Phys.
55(3), 241 (1992)
[9] W. Kob and H.C. Andersen, Phys. Rev. E 51(5),
4626 (1995).
[10] S. Sastry, P. G. Debenedetti and F. H. Stillinger,
Nature 393(6685), 554 (1998).
[11] F. Sciortino, W. Kob, and P. Tartaglia, Phys.
Rev. Lett. 83(16), 3214 (1999).
[12] C. Donati, S. C. Glotzer, P. H. Poole, W. Kob,
and S. J. Plimpton, Phys.Rev. E 60(3), 3107
(1999)
[13] B. Coluzzi, G. Parisi, and P. Verrocchio, J.
Chem. Phys. 112(6), 2933 (2000).
[14] R. Yamamoto and W. Kob, Phys.Rev. E 61(5),
5473 (2000)
[15] T. Gleim and W. Kob, Eur. Phys. J. B 13, 83
(2000).
[16] A. Meyer, R. Busch, and H. Schober, Phys.
Rev. Lett. 83, 5027 (1999); A. Meyer, J.
Wuttke, W. Petry, O.G. Randl, and H.
Schober, Phys. Rev. Lett. 80, 4454 (1998).
[17] H.Z. Cummins, J. Phys.Cond.Mat. 11, A95
(1999).
[18] H. Teichler, Phys. Rev. Lett. 76, 62(1996).
[19] H. Teichler, Phys. Rev. E 53, 4287 (1996).
[20] J.L. Barrat and A. Latz, J. Phys. Cond. Matt. 2,
4289 (1990).
[21] M. Fuchs, Thesis, TU München (1993); M.
Fuchs and A. Latz, Physica A 201, 1 (1993).
[22] M. Nauroth and W. Kob, Phys. Rev. E 55, 657
(1997).
[23] M. Kuschke, Thesis, Universität Stuttgart
(1991).
[24] Yan Yu, W.B. Muir and Z. Altounian, Phys.
Rev. B 50, 9098 (1994).
[25] B. Böddekker, Thesis, Universität Göttingen
(1999); B. Böddekker and H. Teichler, Phys.
Rev. E 59, 1948 (1999)
[26] H. Teichler, phys. stat. sol. (b) 172, 325 (1992)
[27] H. Teichler, in: Defect and Diffusion Forum
143-147, 717 (1997)
[28] H. Teichler, in: Simulationstechniken in der
Materialwissenschaft, edited by P. Klimanek
and M. Seefeldt (TU Bergakadamie, Freiberg,
1999).
[29] H. Teichler, Phys. Rev. B 59, 8473 (1999).
[30] Ch. Hausleitner and Hafner, Phys. Rev. B 45,
128 (1992).
[31] A.B. Mutiara, Thesis, Universität Göttingen
(2000).
[32] W. Götze, Z. Phys. B 60, 195 (1985).
[33] J. Bosse and J.S. Thakur, Phys. Rev. Lett. 59,
998 (1987).
[34] B. Bernu, J.-P. Hansen, G. Pastore, and Y.
Hiwatari, Phys. Rev A 36, 4891 (1987); ibid.
38, 454 (1988).
[35] J.P. Hansen and I.R. McDonald, Theory of
Simple Liquids, 2nd Ed. ( Academic Press,
London, 1986).
[36] T.E. Faber and J.M. Ziman, Phil. Mag. 11, 153
(1965).
[37] Y. Waseda, The Structure of Non-Crystalline
Materials, (McGraw-Hill, New York, 1980).
[38] J.-P. Hansen and S. Yip, Transp. Theory Stat.
Phys. 24, 1149 (1995).
[39] W.H. Press, B.P. Flannery, S.A. Teukolsky,
and W.T. Vetterling, Numerical Recipes, 2nd
Edition (Cambridge University Press,
Cambridge, 1992)
TOMOGRAPHIC IMAGING USING INFRA RED SENSORS
Sallehuddin Ibrahim & Md. Amri Md. Yunus
Department of Control and Instrumentation Engineering
Faculty of Electrical Engineering, Universiti Teknologi Malaysia
81310 UTM Skudai, Johor, Malaysia
salleh@fke.utm.my
ABSTRACT
This paper is concerned with the development of
a tomographic imaging system in order to measure a
two-phase flow involving solid particles flowing in
air. The general principle underlying a tomography
system is to attach a number of non-intrusive
transducers in plane formation to the vessel to be
investigated and recover from those sensors an image
of the corresponding cross section through the vessel.
The method makes use of infra red sensors,
configured as a combination of two orthogonal and
two rectilinear light projection systems installed
around a vertical transparent flow pipe. The system
has the ability to accurately visualize concentration
profiles in a pipeline, and the imaging is carried out
on-line without invading the fluids. Several results
are presented in this paper showing the capability of
the system to visualize the concentration profiles of
solids flowing in air.
Keywords: Infra red, imaging, tomography, sensor.
1. INTRODUCTION
Flow imaging is gaining importance in process
industries, so suitable systems must be developed for
this purpose. Flow measurements are among the most
difficult ones because the medium being measured can
occur in various physical states, which complicates the
measuring procedure. The relevant factors are, namely,
temperature, density, viscosity, pressure, multi-
component media (liquid-gas, solid-gas), the
type of flow, etc. The choice of the method is further
directed by specific requirements for the flowmeter,
e.g. the measuring range, minimum loss of pressure,
the shortest possible recovery section, a sensor
without moving parts, continuous operation of the
sensor, etc.
Tomography methods have been developed rapidly
for visualizing two-phase flow of various industrial
processes, e.g. gas/oil flows in oil pipelines [1],
gas/solid flows in pneumatic conveyors [2], and
separation/mixing processes in chemical vessels [3].
Infra red tomography involves the measurement of
light attenuation detected by various infra red sensors
installed around a flow pipe, and the reconstruction of
cross-sectional images using the measured data and a
suitable algorithm.
Tomography began to be considered seriously because
it can directly analyze the internal characteristics of
process flow, so that resources are utilized more
efficiently and the demands and regulations for
product quality are met. The use of tomography to
analyze flow regimes began in the late 1980s. In
addition, concern about environmental pollution
enhanced the need to find alternative methods of
reducing industrial emissions and waste. In such
applications, the system must be robust, must not
disturb the flow in pipelines, and should be able to
operate in aggressive and fast moving fluids. This is
where tomography can play an important role, as it
can unravel the complexities of flow without
invading it.
Tomography can be combined with the
characteristics of infra red sensors to explore the
internal characteristics of a process flow. Infra red
tomography is conceptually straightforward and
inexpensive. It has a dynamic response and can be
more portable compared to other types of radiation-
based tomography systems. The image reconstructed
by an infra red tomography system should correspond
directly to the visual images observed in transparent
sections of the pipelines. It has other advantages such
as a negligible response time relative to process
variations, high resolution, and immunity to electrical
noise and interference. This paper will explain how
such a system can be designed and constructed to
measure the distribution of solids in air flowing in a
pipeline.
2. IMAGE RECONSTRUCTION
The projection of infra red beam from the emitter
towards the detector can be illustrated mathematically
in Figure 1. The coordinate system can be utilized to
describe line integrals and projections. The object of
interest is represented by a two-dimensional function
f(x, y) and each line integral is represented by the
(φ, x′) parameters. Line AB can be expressed as

x cos φ + y sin φ = x′    (1)
where

x′ = x cos φ + y sin φ
y′ = −x sin φ + y cos φ    (2)
which results in the following line integral
24. Information and Communication Technology Seminar, Vol. 1 No. 1, August 2005
ISSN 1858-1633 @2005 ICTS
18
p_φ(x′) = ∫_((φ, x′) line) f(x, y) ds    (3)
Using a delta function, this can be rewritten as

p_φ(x′) = Σ_(m=0)^(M−1) Σ_(n=0)^(N−1) f(x, y) δ(x cos φ + y sin φ − x′) Δx Δy    (4)

where
N = total number of horizontal cells/pixels
M = total number of vertical cells/pixels
The reconstruction algorithm approximates the density
at a point by summing the ray sums of all rays passing
through that point. This has been termed the discrete
back projection method and can be formulated
mathematically as

f̂(x, y) = (1/M) Σ_(m=0)^(M−1) Σ_(n=0)^(N−1) p(φ, x′) δ(x cos φ + y sin φ − x′) Δx′ Δφ    (5)
Figure 1. Infra red path from emitter to detector
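The discrete back projection of equations (4) and (5) can be sketched in a few lines of NumPy. This is an illustrative sketch only: the 32×32 grid, the detector binning, and the test projections below are assumptions, not parameters of the actual system.

```python
import numpy as np

def back_project(projections, grid=32):
    """Discrete linear back projection: approximate the density at each
    pixel by summing the ray sums of every ray passing through it.
    `projections` maps a projection angle phi (radians) to a 1-D array
    of ray sums p_phi(x')."""
    x, y = np.meshgrid(np.linspace(-1, 1, grid), np.linspace(-1, 1, grid))
    image = np.zeros((grid, grid))
    for phi, p in projections.items():
        # x' coordinate of the ray through each pixel at this angle
        xp = x * np.cos(phi) + y * np.sin(phi)
        # map x' in [-1, 1] onto the detector bins
        idx = ((xp + 1) / 2 * (len(p) - 1)).round().astype(int)
        image += p[np.clip(idx, 0, len(p) - 1)]
    return image / len(projections)

# Two orthogonal projections with a single occupied ray each: the rays
# reinforce where they cross, as with the orthogonal projection pair above.
p = np.zeros(32)
p[16] = 1.0
img = back_project({0.0: p, np.pi / 2: p})
```

With only four projection angles, as in the system described here, the reconstruction is coarse; smearing along each ray is the known trade-off of linear back projection.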
3. SYSTEM DESIGN
The measurement system is composed of four
subsystems: (1) sensors, (2) signal conditioning, (3) a
data acquisition system, and (4) image reconstruction
and display.
Figure 2 depicts the measurement section around
the flow pipe, which contains 64 pairs of infra red
sensors configured in a combination of orthogonal and
rectilinear projections. A similar number of sensors is
installed 100 mm downstream to measure velocity: the
output of the downstream sensors should be a replica
of the output from the upstream sensors but with a
time delay, and the two signals can be cross-correlated
to obtain the velocity of the flow. The flow pipe has an
external diameter of 80 mm and an internal diameter of
78 mm. Since the infra red sensors are the critical part,
the infra red transmitters and receivers were selected
carefully. The sensors should be arranged such that
they cover the whole pipe; in tomography, using more
sensors yields higher resolution. The sensors used must
be of high performance, compact, require minimum
maintenance or calibration, and be intrinsically safe.
For this purpose, the SFH 4510 emitter and SFH 2500
detector were chosen due to their low cost and fast
switching time.
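The cross-correlation velocity measurement described above can be sketched as follows. The 100 mm sensor spacing is taken from the text; the sampling rate and the synthetic signals are assumptions for illustration.

```python
import numpy as np

SPACING_M = 0.100      # upstream/downstream sensor plane spacing (100 mm)
SAMPLE_HZ = 1000.0     # assumed sampling rate of the acquisition card

def flow_velocity(upstream, downstream):
    """Estimate flow velocity by cross-correlating the upstream and
    downstream sensor signals: the lag of the correlation peak is the
    transit time of the solids between the two sensor planes."""
    up = upstream - np.mean(upstream)
    down = downstream - np.mean(downstream)
    corr = np.correlate(down, up, mode="full")
    lag = np.argmax(corr) - (len(up) - 1)   # delay in samples
    delay_s = lag / SAMPLE_HZ
    return SPACING_M / delay_s

# Synthetic check: the downstream signal is the upstream signal delayed
# by 50 samples (50 ms), so the velocity should be 0.1 m / 0.05 s = 2 m/s.
rng = np.random.default_rng(0)
up = rng.normal(size=2000)
down = np.roll(up, 50)
v = flow_velocity(up, down)
```

Dividing the plane spacing by the transit time at the correlation peak gives the solids velocity directly.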
The infra red emitters and detectors are arranged in
pairs and linked to the measurement section through
optical fibers. Each receiver circuit consists of a
photodiode, pre-amplification, amplification and a
filter. The receiver circuits are connected to a Keithley
DAS-1800 data acquisition card. Light generated by
the emitters passes through the flow pipe and is
attenuated if it hits an object. The light reaching the
receiver is converted into current by the photodiode in
the receiving circuit, and the signal is processed by a
signal conditioning circuit. The data then enter the data
acquisition system, where they are converted into
digital form prior to entering the computer. A linear
back projection algorithm was developed which
processes the digitized signal and displays the
concentration profile of the solids flowing in air. The
algorithm is programmed in Visual C++, a powerful
tool for this purpose.
Figure 2. Tomographic measurement section
The solid particles, consisting of plastic beads, were
dropped onto a gravity flow rig shown in Figure 3.
The rig, costing about RM100,000, was supplied by
Optosensor. The beads were filled into a hopper, and a
rotary valve controls the amount of beads flowing into
the rig, so the user can set various flow rates. The
measurement section was installed around the flow
rig.
Tomographic Imaging Using Infra Red Sensors – Sallehuddin Ibrahim & Md. Amri Md. Yunus
Figure 3. Gravity flow rig
Figure 4. The regression line of sensors output versus measured
mass flow rates
Figure 5(a). Concentration profile at a flow rate of 27 g/s
Figure 5(b). Concentration profile at a flow rate of 49 g/s
Figure 5(c). Concentration profile at a flow rate of 71 g/s
4. RESULTS
Various experiments were carried out using
various algorithms and at various flow rates. The
regression graph is shown in Figure 4, and the
concentration profiles at various flow rates are shown
in Figures 5(a) to 5(c).
Figure 4 shows that the outputs from the upstream
sensors have values similar to those of the downstream
sensors, and that the sensor output is proportional to
the mass flow rate. Figures 5(a) to 5(c) show that the
infra red tomographic system is able to locate the
position of the beads inside the flow pipe. At higher
flow rates, the flow rig released more plastic beads
than at lower flow rates; as such, more pixels are
occupied in Figures 5(b) and 5(c) compared to Figure
5(a).
5. CONCLUSION
A tomographic imaging system using infra red
sensors has been designed. The system is capable of
producing tomographic images for two-phase flow.
The spatial resolution and measurement accuracy can
be enhanced by adding more sensors. However, there
must be a compromise between the spatial resolution,
accuracy and real-time capability of the system. The
system has the potential of being applied in various
process industries.
REFERENCES
[1] S. Ibrahim and M. A. Md. Yunus, Preliminary
Result for Infrared Tomography, Elektrika,
6(1), 2004, 1-4.
[2] S. McKee, T. Dyakowski, R.A. Williams,
T.A. Bell and T. Allen, Solids flow imaging
and attrition studies in a pneumatic conveyor,
Powder Technology, 82, 1995, 105-113.
[3] F. Dickin, Electrical resistance tomography
for process applications, Meas. Sci. Technol.,
7(3), 1996, 247-260.
MAMMOGRAPHIC DENSITY CLASSIFICATION USING
MULTIRESOLUTION HISTOGRAM TECHNIQUE
Izzati Muhimmah1
, Erika R.E. Denton2
, Reyer Zwiggelaar3
1
Jurusan Teknik Informatika, FTI, Universitas Islam Indonesia, Yogyakarta 55584, Indonesia,
2
Department of Breast Imaging, Norwich and Norfolk University Hospital, Norwich NR2 7UY, UK,
3
Department of Computer Science, University of Wales Aberystwyth, Aberystwyth SY23 1LB, UK
ABSTRACT
Mammographic density is known to be an
important indicator of breast cancer risk. Several
quantitative estimation approaches have been
developed, some of them based on histogram
measurements. We implemented a multi-resolution
histogram technique to classify the density of
mammographic images. The results showed an
agreement of 38.33% (78.33% when minor
classification deviations were allowed) compared to an
expert radiologist's rating using the SCC given in the
EPIC database. This agreement is lower than that of
existing methods, and some limitations are discussed.
Keywords : density classification, mammographic,
breast cancer, medical imaging, multi-resolution
histogram
1. INTRODUCTION
Mammographic images are 2D projections of the
x-ray attenuation properties of the 3D breast tissues
along the path of the x-rays. The connective and
epithelial tissues attenuate x-rays more strongly than
fatty tissues and would therefore 'burn' more silver
grains on mammographic films. This means that
brighter areas on the mammogram films represent
glandular tissues, whereas the dark areas represent
fatty tissues.
Mammographic parenchymal patterns provide a
strong indicator of the risk of developing breast
cancer. The first attempt to relate them was introduced
by Wolfe [11]. This method subdivides breast
parenchymal patterns into four classes: N1, P1, P2,
and DY. The classification is based solely on visual
assessment of whether or not there are prominent duct
patterns, and their severity, as seen by mammography;
in other words, this approach gives considerable
heterogeneity in the reported risk estimates.
The study of Boyd et al. [1] gives an alternative
approach to estimate breast cancer risk for given
mammograms. It is based on the proportion of the
breast occupied by densities (i.e., relative areas),
subdividing mammograms into six class categories
(SCC), in which the highest class indicates the highest
risk; see Figure 1. This classification method can be
seen as a quantitative approach to estimating breast
cancer risk, so it can be implemented in
computer-assisted methods.
As mentioned earlier, mammographic density
correlates with a brighter appearance on mammograms.
From an image analysis point of view, this can be seen
as a correspondence between mammographic densities
and image intensities. Many studies have been done on
(semi-)automatic breast density segmentation and/or
classification, for example: Byng et al. proposed an
interactive threshold technique [2]. Karssemeijer
proposed to use the skewness, rather than the standard
deviation, of histograms of local breast areas [6].
Sivaramakrishna et al. introduced variation images to
enhance the contrast between dense and fatty areas, and
subsequently Kittler's optimal threshold was used to
segment the densities [9]. Zhou et al. suggested an
adaptive dynamic range compression technique [12].
Masek et al. presented results based on the comparison
of four histogram distance measures [7]. In general,
the described histogram-based approaches for
automatic density estimation produce robust and
reliable results.
Figure 1. Example mammograms, where the SCC classification is
(a) 0%, (b) 0-10%, (c) 11-25%, (d) 26-50%, (e) 51-75%, and (f)
>75%.
A histogram is a simple representation of an image.
Moreover, the study by Zhou et al. [12] showed that
there are typical histogram patterns for each density
class. Yet, because a histogram is a statistical rather
than a spatially related feature, they also pointed out
that similar histograms can represent different risks.
On the other hand, a recently published paper by
Hadjidemetriou et al. showed that different generic
texture images with similar histograms can be
discriminated by a multi-resolution approach [5].
Based on these findings, our aim was to investigate
whether it is possible to automatically classify
mammographic density using a multi-resolution
histogram technique.
The remainder of this paper is outlined as follows:
the proposed multi-resolution histogram features are
described in Section 2. Data to validate this
methodology are explained in Section 3 and its
statistical analysis method is described in Section 4.
Section 5 gives results of the proposed method and
discussion on our findings. Finally, conclusions
appear in Section 6.
2. MULTI-RESOLUTION HISTOGRAM
TECHNIQUE
The main aim is to obtain feature vectors which
can be used to discriminate between the various
mammographic density classes. A feature vector
representing a mammogram is derived from a set of
histograms {h0, h1, h2, h3}, see Figure 2(b). h0 is
obtained from the original mammogram, and
histograms h1, h2 and h3 are obtained after Gaussian
Filtering the mammogram by 5x5 kernels and scaling
in three stages. For all four histograms only grey level
information from the breast area (ignoring the pectoral
muscle and background areas) is used and the
histograms are normalized with respect to this area.
For increasing scales this shows the general shift to
lower grey-level values and the narrowing of the
peaks in the histogram data. It should be noted that
these histograms deviate significantly from those
described by Hadjidemetriou et al. [5] which start with
delta function peaks which broaden on smoothing.
Subsequently, the set of histograms are
transformed into a set of cumulative histograms {c0,
c1, c2, c3}, see Figure 2(c). The feature vector for each
mammogram is constructed from the difference
between subsequent cumulative histograms: {c0 – c1,
c1 - c2, c2- c3}. See Fig. 2(d) for an example. Between
scales this shows a shift to lower grey-level values,
but the overall shape of the data remains more or less
constant. The dimensionality of the resulting feature
space is equal to 768.
Figure 2. Illustration of the feature selection process: (a) example
ROI (m29592L), (b) histograms of (a) and its consecutive multi-
resolution images, (c) cumulative histograms of (b), and (d)
differences of consecutive cumulative histograms (c), which form the
classification features
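A minimal sketch of the feature extraction described in Section 2, assuming 256 grey levels, a separable 5×5 Gaussian kernel, and a boolean breast mask (implementation details not fixed by the text, such as the kernel sigma and padding, are assumptions):

```python
import numpy as np

def gaussian_smooth(img, sigma=1.0):
    """Separable 5x5 Gaussian filter with reflect padding at the borders."""
    r = np.arange(-2, 3)
    k = np.exp(-r**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, 2, mode="reflect")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, rows)

def density_features(img, mask, bins=256):
    """Feature vector from differences of cumulative histograms
    {c0-c1, c1-c2, c2-c3}, using grey levels from the breast area only."""
    cums = []
    for _ in range(4):                        # h0 plus three smoothed scales
        h, _ = np.histogram(img[mask], bins=bins, range=(0, 256))
        cums.append(np.cumsum(h / h.sum()))   # normalized cumulative histogram
        img = gaussian_smooth(img)
    return np.concatenate([cums[i] - cums[i + 1] for i in range(3)])

# A 64x64 synthetic "mammogram": feature dimensionality is 3 x 256 = 768,
# matching the dimensionality stated in the text.
rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (64, 64))
mask = np.ones_like(img, dtype=bool)
fv = density_features(img, mask)
```

Only the smoothing stage of the multi-scale pyramid is used here, mirroring the paper's choice to omit subsampling (and hence the second normalization).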
3. DATA
The above technique was evaluated on a dataset
comprising sixty post-1998 mammograms from the UK
NHS breast screening programme (EPIC database),
randomly selected to represent Boyd's SCC [1] as
classified by an expert radiologist. All mammograms
are Fuji UMMA film/screen combinations, medio-
lateral views, digitized using a mobile-phone-CCD
scanner with 8 bits per pixel accuracy. The breast areas
were segmented using thresholding and morphological
operations; see Figure 2(a) for an example. It should
be noted that these images are paired mammograms
(L/R) from thirty patients.
4. VALIDATION STRATEGY
For classification, an automatic method is built
based on the feature vectors in combination with a
k-nearest neighbor approach. Here we have used three
neighbors, a Euclidean distance (in [5] an L1 norm
was used) and Bayesian probability. However, it is
known that mammographic intensities vary with
exposure levels and film characteristics [3, 4], and
within an imaging session a woman's mammograms
were likely captured using similar films and/or
exposure levels. Hence, to minimize bias, we used a
leave-one-woman-out strategy in training. The results
are shown in the form of confusion matrices.
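The leave-one-woman-out k-NN scheme can be sketched as below; the toy features, labels, and woman identifiers are made-up placeholders (the paper uses 768-dimensional feature vectors and six SCC classes).

```python
import numpy as np
from collections import Counter

def leave_one_woman_out(features, labels, woman_ids, k=3):
    """Leave-one-woman-out k-NN: both mammograms (L/R) of the test
    woman are removed from the training set, so film/exposure
    similarities within a woman cannot bias the classification."""
    features, labels, woman_ids = map(np.asarray, (features, labels, woman_ids))
    predictions = np.empty_like(labels)
    for i in range(len(labels)):
        train = woman_ids != woman_ids[i]          # exclude the whole woman
        dists = np.linalg.norm(features[train] - features[i], axis=1)
        nearest = labels[train][np.argsort(dists)[:k]]
        predictions[i] = Counter(nearest).most_common(1)[0][0]
    return predictions

# Toy check: two well-separated classes, two images per "woman".
X = np.array([[0.0], [0.1], [0.2], [0.3], [5.0], [5.1], [5.2], [5.3]])
y = np.array([1, 1, 1, 1, 2, 2, 2, 2])
women = np.array([0, 0, 1, 1, 2, 2, 3, 3])
pred = leave_one_woman_out(X, y, women)
```

Excluding both of a woman's mammograms keeps within-woman film and exposure similarities from leaking into the training set, at the cost of slightly less training data per fold.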
5. RESULTS AND DISCUSSION
The results are presented in a confusion matrix in
Table 1. They show an agreement of 38.33% in
comparison with the expert assessment, and 78.33%
when minor classification deviations are allowed. The
low rate of agreement is below the reported state of
the art, which comes partially as a surprise as some of
the state-of-the-art work relies on information
extracted from single histograms.
Table 1. Comparison between automatic, histogram based, and
expert classification. Within the tables the proportion of dense tissue
is represented as 1: 0%, 2: 0-10%, 3: 11-25%, 4: 26-50%, 5: 51-
75%, and 6: 76-100%.
Instead of taking all six classes into account, for
mammographic risk assessment it might be more
appropriate to consider just high and low density
estimation classes, which means that the lower three
and higher three SCC classes are grouped together.
Using such an approach, the developed technique
shows an agreement of 80% with the expert
assessment.
We also applied this methodology to the MIAS
database [10], and an agreement of 55.17% for SCC
and 61.56% for the three MIAS categories was
achieved [8]. The latter is similar to the figure
reported by Masek et al. [7], i.e. 62.42% when using a
Euclidean distance. Their method is based on direct
distance measures of the average histogram of original
images for each density class. It should be noted that
we used less data for training due to the
leave-one-woman-out strategy. Moreover, this is in
line with our own single histogram (h0) results, which
were 61.99% for the triple MIAS classification and
57.14% for the SCC-based classification. These results
might indicate there is little benefit in using the
multi-resolution histogram approach.
It should be noted that our methodology deviated
slightly from Hadjidemetriou et al. [5]. Their
implementation of the multi-scale approach includes a
subsampling step which makes a second normalization
essential. In our case, we only use the smoothing stage
of the multi-scale approach, without the subsampling,
so the second normalization step is not needed.
Although the multi-resolution histogram technique
is claimed to robustly match synthetic, Brodatz, and
CUReT textures [5], our results could not confirm its
applicability to mammographic density classification.
We would like to investigate whether this is caused by
the nature of the mammographic texture patterns
and/or imaging system effects. Thus, additional
pre-processing to enhance the contrast between fatty
and dense tissue, or incorporating the X-ray imaging
protocol information, are areas of future research.
6. CONCLUSION
We have presented an approach to mammographic
density classification which uses multi-resolution
histogram information. It was shown that the approach
was insufficient when compared to the gold standard
provided by an expert radiologist, but when minor
classification errors are allowed it provides a
performance of 78.33%. Future work will include
texture information.
ACKNOWLEDGMENT
We gratefully acknowledge Dr. Lilian Blot of the
School of Computing, University of East Anglia, UK,
for the automatic breast boundary detection tools.
REFERENCES
[1] NF Boyd, JW Byng, RA Jong, EK Fishell, LE
Little, AB Miller, GA Lockwood, DL
Tritchler, and Martin J. Yaffe. Quantitative
classification of mammographic densities and
breast cancer risk: results from the Canadian
national breast screening study. Journal of the
National Cancer Institute, 87(9):670-675,
May 1995.
[2] JW Byng, NF Boyd, E Fishell, RA Jong, and
Martin J. Yaffe. The quantitative analysis of
mammographic densities. Physics in Medicine
and Biology, 39:1629-1638, 1994.
[3] FUJIFILM. The fundamentals of medical
radiography. In FUJIFILM Technical
Handbook: Medical X-ray System. Fuji Photo
Film Co., Ltd, 2003.
[4] FUJIFILM. Fundamentals of sensitized
materials for radiography. In FUJIFILM
Technical Handbook: Medical X-ray System.
Fuji Photo Film Co., Ltd, 2003.
[5] Efstathios Hadjidemetriou, Michael D.
Grossberg, and Shree K. Nayar.
Multiresolution histograms and their use for
recognition. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 26(7):831-
847, July 2004.
[6] Nico Karssemeijer. Automated classification
of parenchymal patterns in mammograms.
Physics in Medicine and Biology, 43:365-378,
1998.
[7] M Masek, SM Kwok, CJS deSilva, and Y
Attikiouzel. Classification of mammographic
density using histogram distance measures.
CD-ROM Proceedings of the World Congress
on Medical Physics and Biomedical
Engineering, page 1, August 2003.
[8] Izzati Muhimmah, Erika R.E Denton, and Reyer
Zwiggelaar. Histograms in breast density
classification: Are 4 better than 1? Seminar
Nasional Aplikasi Teknologi Informasi 2005,
pages D13-D15, June 2005.
[9] Radhika Sivaramakhrishna, N. A.
Obuchowski, W. A. Chilcote, and K. A.
Powell. Automatic segmentation of
mammographic density. Academic Radiology,
8(3):250-256, March 2001.
[10] J Suckling, J Parker, D Dance, S Astley, I
Hutt, C Boggis, I Ricketts, E Stamatakis, N
Cerneaz, S Kok, P Taylor, D Betal, and J
Savage. The Mammographic Image Analysis
Society digital mammogram database. Excerpta
Medica. International Congress Series,
1069:375-378, 1994.
[11] John N. Wolfe. Risk for breast cancer
development determined by mammographic
parenchymal pattern. Cancer, 37:2486-2492,
1976.
[12] Chuan Zhou, Heang-Ping Chan, Nicholas
Petrick, Mark A. Helvie, Mitchell M.
Goodsitt, Berkman Sahiner, and Lubomir M
Hadjiiski. Computerized image analysis:
Estimation of breast density on mammograms.
Med Phys, 28(6):1056-1069, June 2001.
ANN SOFT SENSOR TO PREDICT QUALITY OF PRODUCT
BASED ON TEMPERATURE OR FLOW RATE CORRELATION
Totok R. Biyanto
Engineering Physics Dept. - FTI - ITS Surabaya
Kampus ITS Keputih, Sukolilo, Surabaya 60111
Tel: 62 31 5947188 Fax: 62 31 5923626
Email: totokrb@ep.its.ac.id
ABSTRACT
Analyzers have a slow response, lack reliability,
and are expensive, so inferential measurement using
temperature measurements, reflux flow rate, and
reboiler steam flow rate is usually the best way to
predict product composition. This paper describes the
capability of an Artificial Neural Network (ANN) soft
sensor to predict the distillate mole fraction (Xd) and
the bottom product mole fraction (Xb) of a binary
distillation column.
Inferential measurement can be built using the
reflux flow rate and reboiler steam flow rate in an LV
structure, using a smart positioner feedback signal, or
by measuring tray temperatures.
Soft sensors using either tray temperature
correlation or flow rate correlation achieve a good
Root Mean Square Error (RMSE). The conclusion is
that an ANN soft sensor can be built using temperature
or flow rate correlation, depending on the control
strategy or sensor availability.
Key-words: ANN soft sensor, composition prediction,
temperature and flow rate.
1. INTRODUCTION
An ANN soft sensor was developed for a batch
distillation column in order to estimate product
compositions using available temperature
measurements [10], and a nonlinear soft sensor was
developed using a top-tray temperature correlation for
a ternary batch distillation column using a Hysys plant
model and Matlab [9]. Other research used several
flow rates and the fuel gas burner pressure to predict
the oxygen content at the stack of a boiler [3].
Composition measurement and control of a binary
distillation column often use inferential measurement
and control, because an analyzer has a slow response,
lacks reliability, and is expensive [4, 10].
The distillate and bottom product mole fractions
can be predicted using the correlation between
temperature and mole fraction at certain trays.
Inferential composition measurement using
temperature correlation usually uses single or multiple
thermocouples at certain places.
Another way to predict the distillate mole fraction
is to use the reflux flow rate, and to predict the bottom
product mole fraction, the steam flow rate at the
reboiler [1].
The relation between temperature and mole
fraction is nonlinear and influenced by the pressure of
the distillation column, the feed mole fraction, feed
flow, reboiler steam flow, condenser level, etc. A soft
sensor must therefore be able to predict the product
composition without being influenced by these other
factors, handle the nonlinearity, be easy to build, and
require no special instrumentation.
This paper will show that an ANN with the same
MLP structure, 6 history lengths, 13 hidden nodes, and
trained for 200 computer iterations, can be applied to
predict the distillate and bottom product mole fractions
of a methanol-water binary distillation column using
either temperature or flow rate correlation.
2. DISTILLATION COLUMN AND
ARTIFICIAL NEURAL NETWORK
A single feed stream is fed as saturated liquid
onto the feed tray NF. The feed flow rate is F
(mole/hour) with composition XF (mole fraction of the
more volatile component). The overhead vapor is
totally condensed in a condenser and flows into the
reflux drum, whose liquid holdup is MD (moles). The
content of the drum is at its bubble point. Reflux is
pumped back to the top tray (NT) of the column at a
rate L. Overhead distillate product is removed at a
rate D (Figure 1).
Fig. 1. Binary Distillation Column
At the base of the column, liquid bottoms product
is removed at a rate B with composition XB. Vapor
boil-up is generated in a thermosiphon reboiler at a
rate V. Liquid circulates from the bottom of the
column through the tubes of the vertical tube-in-shell
reboiler because of the smaller density of the vapor-
liquid mixture in the reboiler tubes. We will assume
that the liquids in the reboiler and in the base of the
column are perfectly mixed together and have the
same composition XB and total holdup MB (moles).
The composition of the vapor leaving the base of the
column and entering tray 1 is yB, which is in
equilibrium with the liquid of composition XB.
The column contains a total of NT theoretical trays.
The liquid holdup on each tray, including the
downcomer, is MN. The liquid on each tray is assumed
to be perfectly mixed with composition XN [3].
2.1 Rigorous Modeling of Distillation
Column
Condenser and reflux drum
Mass balance:
dM_D/dt = V_NT − L_(NT+1) − D    (1)
Component mass balance:
d(M_D x_D)/dt = V_NT y_NT − (L_(NT+1) + D) x_D    (2)
Energy balance:
d(M_D h_D)/dt = V_NT H_NT − L_(NT+1) h_(NT+1) − D h_D + Q_D    (3)
Reboiler and base of the column
Mass balance:
dM_B/dt = L_1 − V_R − B    (4)
Component mass balance:
d(M_B x_B)/dt = L_1 x_1 − V_R y_B − B x_B    (5)
Energy balance:
d(M_B h_B)/dt = L_1 h_1 − V_R H_B − B h_B + Q_b    (6)
Feed tray (n = NF)
Mass balance:
dM_NF/dt = L_(NF+1) − L_NF + F + V_(NF−1) − V_NF    (7)
Component mass balance:
d(M_NF x_NF)/dt = L_(NF+1) x_(NF+1) − L_NF x_NF + V_(NF−1) y_(NF−1) − V_NF y_NF + F z    (8)
Energy balance:
d(M_NF h_NF)/dt = L_(NF+1) h_(NF+1) − L_NF h_NF + V_(NF−1) H_(NF−1) − V_NF H_NF + F h_F    (9)
nth tray
Mass balance:
dM_n/dt = L_(n+1) − L_n + V_(n−1) − V_n    (10)
Component mass balance:
d(M_n x_n)/dt = L_(n+1) x_(n+1) − L_n x_n + V_(n−1) y_(n−1) − V_n y_n    (11)
Energy balance:
d(M_n h_n)/dt = L_(n+1) h_(n+1) − L_n h_n + V_(n−1) H_(n−1) − V_n H_n    (12)
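A minimal sketch of how a tray balance such as equation (11) can be stepped in time, assuming constant molar holdup M_n, constant internal flows L and V, and an illustrative linear equilibrium y_n = K·x_n (these simplifications and all numeric values are assumptions for illustration, not from the paper):

```python
# Euler integration of the nth-tray component balance (eq. 11),
# with constant molar holdup M and constant internal flows L, V.
def tray_step(x, L=100.0, V=120.0, M=10.0, K=1.5, dt=0.001):
    """Advance the liquid compositions x[0..N-1] on the trays by one step;
    the first and last entries are held fixed as boundary conditions."""
    y = [K * xn for xn in x]                 # assumed linear vapor equilibrium
    new = x[:]
    for n in range(1, len(x) - 1):
        # M * dx_n/dt = L*x_{n+1} - L*x_n + V*y_{n-1} - V*y_n
        dxdt = (L * x[n + 1] - L * x[n] + V * y[n - 1] - V * y[n]) / M
        new[n] = x[n] + dt * dxdt
    return new

x = [0.1, 0.2, 0.3, 0.4, 0.5]   # illustrative initial tray compositions
for _ in range(10):
    x = tray_step(x)
```

A full simulation would integrate all tray, feed-tray, reboiler, and condenser balances simultaneously with a proper vapor-liquid equilibrium model; this sketch only shows the structure of one interior tray equation.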
2.2 Artificial Neural Network
There are many reasons to apply an artificial neural
network, including:
• Self-learning ability
• Non-linear mapping
• Massively parallel distributed processing
Fig. 2. Neuron
Levenberg-Marquardt Learning Algorithm
The Levenberg-Marquardt algorithm can be described
as follows [6]:
1. Choose an initial weight vector w^(0) and an initial
value λ^(0), where w is the weight matrix and λ
controls the search direction.
2. Determine the search direction f^(i) from

[R(w^(i)) + λ^(i) I] f^(i) = −G(w^(i))    (13)

and evaluate the candidate w^(i) + f^(i) against the
criterion w = arg min_w V_N(w, Z^N). If
V_N(w^(i) + f^(i), Z^N) < V_N(w^(i), Z^N), accept
w^(i+1) = w^(i) + f^(i) as the new iterate and keep
λ^(i+1) = λ^(i). Otherwise, compute the gain ratio

r^(i) = [V_N(w^(i), Z^N) − V_N(w^(i) + f^(i), Z^N)] / [V_N(w^(i), Z^N) − L(w^(i) + f^(i))]    (14)

and adapt the damping factor:
If r^(i) > 0.75 then λ^(i) = λ^(i)/2.
If r^(i) < 0.25 then λ^(i) = 2λ^(i).
Here L is the local quadratic approximation of the
criterion, whose predicted decrease is

V_N(w^(i), Z^N) − L(w^(i) + f^(i)) = ½ (f^(i))^T [λ^(i) f^(i) − G(w^(i))]    (15)

3. If the stopping criterion is reached, stop the
calculation; otherwise return to step 2.
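The damping schedule in step 2 can be sketched on a toy line-fitting problem. This is an illustrative NumPy sketch, not the implementation used in the paper; the predicted decrease uses the standard quadratic-model expression corresponding to equation (15).

```python
import numpy as np

def levenberg_marquardt(residual, jac, w, lam=1.0, iters=50):
    """Sketch of the LM schedule: solve (R + lam*I) f = -G, accept the
    step only if the cost decreases, and adapt lam from the gain ratio r
    (halve if r > 0.75, double if r < 0.25)."""
    cost = lambda w: 0.5 * np.sum(residual(w) ** 2)
    for _ in range(iters):
        e, J = residual(w), jac(w)
        G = J.T @ e                       # gradient of the cost
        R = J.T @ J                       # Gauss-Newton Hessian approximation
        f = np.linalg.solve(R + lam * np.eye(len(w)), -G)
        predicted = 0.5 * f @ (lam * f - G)   # predicted decrease (cf. eq. 15)
        actual = cost(w) - cost(w + f)
        r = actual / predicted if predicted > 0 else -1.0
        if actual > 0:
            w = w + f                     # accept the step
        if r > 0.75:
            lam /= 2
        elif r < 0.25:
            lam *= 2
    return w

# Fit y = a*x + b to noiseless data; LM should recover a = 2, b = 1.
x = np.linspace(0, 1, 20)
y = 2 * x + 1
res = lambda w: (w[0] * x + w[1]) - y
jacf = lambda w: np.column_stack([x, np.ones_like(x)])
w = levenberg_marquardt(res, jacf, np.array([0.0, 0.0]))
```

On this linear problem the quadratic model is exact, so the gain ratio stays near 1 and λ is repeatedly halved, letting the iteration approach a pure Gauss-Newton step.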
[Figure 6 panels: stage 1 and stage 14 temperatures (temp [°C]) and the bottom (B) and distillate (D) methanol mole fractions, plotted against time (minutes).]
3. RESULTS
3.1 Soft sensor
The methodology for building the soft sensor starts
from designing an APRBS (Amplitude Pseudo
Random Binary Signal) [5], generating a data set for
training, training the ANN soft sensor, and
validating it.
The soft sensor is built with an ANN using a Multi-
Layer Perceptron (MLP) in a Neural Network
AutoRegressive with eXternal input (NNARX)
structure [2]; the forward model has 6 history lengths
and 13 hidden nodes and is trained with the
Levenberg-Marquardt learning algorithm for 200
computer iterations, which was the best ANN structure
for producing a good RMSE.
Fig. 3. ANN architecture based on temperature correlation
Fig. 4. ANN architecture based on flow rate correlation
(Both figures show an input layer of current and five delayed samples of the correlated variables, e.g. Xd(k)…Xd(k−5), L(k)…L(k−5), Qr(k)…Qr(k−5), a hidden layer of tangent-hyperbolic (tgh) nodes, and a linear (Lin) output layer producing Xd(k) and Xb(k).)
Figure 3 shows the ANN soft sensor structure based
on temperature correlation, and Figure 4 shows the
structure based on flow rate correlation.
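The NNARX regressor construction and the tanh/linear MLP of Figures 3 and 4 can be sketched as below. The 6 history lengths and 13 hidden nodes follow the structure described above; the input/output sequences here are random placeholders, and the weights are untrained.

```python
import numpy as np

def nnarx_regressors(u, y, na=6, nb=6):
    """Build NNARX regressor/target pairs: each regressor holds the last
    `na` outputs and `nb` external inputs (the 6 history lengths)."""
    X, T = [], []
    for k in range(max(na, nb), len(y)):
        X.append(np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]])
        T.append(y[k])
    return np.array(X), np.array(T)

def mlp_forward(X, W1, b1, W2, b2):
    """MLP with tanh hidden nodes and a linear output node (cf. Figs. 3-4)."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
u = rng.normal(size=200)                 # placeholder, e.g. tray-14 temperature
y = rng.normal(size=200)                 # placeholder, e.g. distillate Xd
X, T = nnarx_regressors(u, y)
W1, b1 = rng.normal(size=(12, 13)), np.zeros(13)   # 13 hidden nodes
W2, b2 = rng.normal(size=13), 0.0
pred = mlp_forward(X, W1, b1, W2, b2)
```

In practice the weights W1, b1, W2, b2 would be fitted with the Levenberg-Marquardt training loop described in Section 2.2, minimizing the squared error between `pred` and the targets `T`.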
3.2 Using Temperature Correlation
The strongest relation with Xd is the 14th tray
temperature, and the strongest relation with Xb is the
1st tray temperature. Therefore temperature sensors
need to be placed only on trays 1 and 14.
Fig. 5. Temperature Sensor of Distillation Column
In fact, a temperature sensor is usually available on
each tray, but only the sensors on trays 1 and 14 will
be used by the soft sensor (Figure 5).
Figure 6 shows the temperature and product
composition data set of the distillation column, which
is used for training.
Fig. 6. Data set for training the ANN soft sensor based on temperature
correlation
(Solid line: process output; dashed line: ANN model output.)
Fig. 7. Comparison of the Xd process output and the temperature-based ANN output
33. Ann Soft Sensor to Predict Quality of Product Based on Temperature or Flow Rate
Correlation – Totok R. Biyanto
ISSN 1858-1633 @2005 ICTS
27
(Solid line: process output; dashed line: ANN model output.)
Fig. 8. Comparison of the Xb process output and the temperature-based ANN output
The ANN soft sensor based on temperature correlation
has a Root Mean Square Error (RMSE) of
5.9908 × 10^-5 for the distillate mole fraction and
1.2686 × 10^-4 for the bottom mole fraction
(Figures 7 and 8).
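The RMSE figures quoted here are computed in the usual way; as a sketch (the sample values below are made up):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error used to score the soft sensor predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# e.g. comparing measured and predicted distillate mole fractions
err = rmse([0.990, 0.992, 0.995], [0.990, 0.991, 0.997])
```

Because the mole fractions themselves are of order one, RMSE values around 10^-5 to 10^-4 indicate a close fit between the soft sensor and the process output.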
3.3. Using Flow Rate Correlation
Fig. 9. LV control strategy
Fig. 10. Data set for training the ANN soft sensor based on flow rate
correlation
The reflux flow rate (L) and the steam flow rate at the
reboiler (Qr) are usually used for controlling the
distillate mole fraction (Xd) and the bottom product
mole fraction (Xb) of a binary distillation column; in
particular, the LV control strategy is the best pairing
for composition control (Figure 9). This means that
the reflux flow rate (L) has the strongest correlation
with Xd and the reboiler steam flow rate (Qr) has the
strongest correlation with Xb.
Based on this correlation, the ANN soft sensor was
built by training on the L-Xd and Qr-Xb data sets
(Figure 10).
Fig. 11. Comparison of the Xd process output and the flow-rate-based ANN output
Fig. 12. Comparison of the Xb process output and the flow-rate-based ANN output
The soft sensor using the correlation between the
reflux flow rate (L) and the distillate mole fraction
(Xd) has a Root Mean Square Error (RMSE) of
6.6589 × 10^-5, and the soft sensor using the reboiler
steam flow rate (Qr) has an RMSE of 1.98100 × 10^-4
for the bottom mole fraction (Xb) (Figures 11 and 12).
The soft sensor has a fast response and better
reliability, because temperature and flow rate
instrumentation is more reliable than an analyzer, and
it is cheaper, with low operational and maintenance
costs.
4. CONCLUSION
The ANN soft sensor is easy to build, requires no
special instrumentation, and achieves a good Root
Mean Square Error (RMSE). The conclusion is that an
ANN soft sensor can be built using either temperature
or flow rate correlation, depending on the control
strategy or sensor availability.
REFERENCES
[1] Biyanto, T.R., Santosa, H.H., Modeling of a
methanol-water binary distillation column using
a Neural Network, Journal Instrumentasi, Vol. 28,
No. 1, Instrumentation Society of Indonesia,
Jakarta, January-June 2004.
[2] Cybenko, G., Approximation by Superposition of
a Sigmoid Function, Mathematics of Control,
Signals, and Systems, Vol. 2(4), 303-314, 1989.
[3] Luyben, W.L., Process Modeling, Simulation,
and Control for Chemical Engineers, McGraw-
Hill Inc., Singapore, 1990.
[4] Luyben, W.L., Bjorn D. Tyreus, Michael L.
Luyben, Plantwide Process Control, McGraw-
Hill, New York, 1998.
[5] Nelles, O., Isermann, R., Basis Function Networks
for Interpolation of Local Linear Models,
Proc. of the 35th Conference on Decision and
Control, Kobe, Japan, December 1996.
[6] Norgaard, M., Ravn, O., Poulsen, N.K., and
Hansen, L.K., Neural Networks for Modelling
and Control of Dynamic Systems, Springer,
London, 1999.