Change Detection from Remotely Sensed Images Based on Stationary Wavelet Transform (IJECEIAES)
The major concern in the change detection process is the accuracy of the algorithm in recovering changed and unchanged pixels. The fusion rules presented in existing methods could not integrate the features accurately, which results in a larger number of false alarms and speckle noise in the output image. This paper proposes an algorithm that fuses two multi-temporal images through a proposed set of fusion rules in the stationary wavelet transform domain. In the first step, the source images obtained from the log-ratio and mean-ratio operators are decomposed into three high-frequency sub-bands and one low-frequency sub-band by the stationary wavelet transform. Then, the proposed fusion rules for the low- and high-frequency sub-bands are applied to the coefficient maps to obtain the fused wavelet coefficient map. The fused image is recovered by applying the inverse stationary wavelet transform (ISWT) to the fused coefficient map. Finally, the changed and unchanged areas are classified using fuzzy c-means clustering. The performance of the algorithm is measured in terms of percentage correct classification (PCC), overall error (OE) and Kappa coefficient (K). The qualitative and quantitative results show that the proposed method offers the least error and the highest accuracy and Kappa value compared with existing methods.
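As a rough illustration of the pipeline above, the sketch below computes the two ratio images and applies one plausible pair of fusion rules (average for the approximation band, larger-magnitude coefficient for the detail bands). It is a minimal interpretation, not the authors' code; all function names and the specific rules are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def log_ratio(img1, img2, eps=1e-6):
    # Log-ratio operator: |log(I2 / I1)|, which compresses speckle
    # while emphasising relative change between the two acquisitions.
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

def mean_ratio(img1, img2, win=3, eps=1e-6):
    # Mean-ratio operator on local means over a win x win neighbourhood:
    # 1 - min(m1, m2) / max(m1, m2), large where the local means differ.
    m1 = uniform_filter(img1.astype(float), win)
    m2 = uniform_filter(img2.astype(float), win)
    return 1.0 - np.minimum(m1, m2) / (np.maximum(m1, m2) + eps)

def fuse_subbands(low_a, low_b, highs_a, highs_b):
    # Illustrative rule set: average the low-frequency (approximation)
    # band; keep the larger-magnitude coefficient in each detail band.
    fused_low = 0.5 * (low_a + low_b)
    fused_highs = [np.where(np.abs(a) >= np.abs(b), a, b)
                   for a, b in zip(highs_a, highs_b)]
    return fused_low, fused_highs
```

In practice the sub-bands would come from a stationary wavelet decomposition (e.g. `pywt.swt2`) of the two ratio images, and the fused maps would pass through the inverse transform before clustering.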
Unsupervised Change Detection in Satellite Images Using MRFFCM Clustering (IJCATR)
This paper presents a new approach for change detection in synthetic aperture radar images by incorporating a Markov random field (MRF) within the framework of FCM. The objective is to partition the difference image, generated from multitemporal satellite images, into changed and unchanged regions. The difference image is generated from the log-ratio and mean-ratio images by an image fusion technique, and its quality depends on that technique. In the present work, we propose an image fusion method based on the stationary wavelet transform. The difference image is then processed to discriminate changed regions from unchanged regions using fuzzy clustering algorithms. The analysis of the difference image (DI) is carried out with a Markov random field (MRF) approach that exploits inter-pixel class dependency in the spatial domain to improve the accuracy of the final change-detection areas. Experimental results on real synthetic aperture radar images demonstrate that the change detection results obtained by MRFFCM exhibit less error than previous approaches. The quality of the proposed fusion algorithm is verified using well-known image fusion measures and the percentage of correct classifications.
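As background, plain fuzzy c-means (without the MRF spatial term this paper adds) alternates membership and centroid updates; the sketch below is a generic textbook version, not the authors' MRFFCM implementation, and the function name is an assumption.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    # Fuzzy c-means: soft memberships U (n x c) and cluster centers.
    # u_ik is proportional to d_ik^(-2/(m-1)); centers are U^m-weighted means.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = U ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

On a difference image, X would be the pixel intensities reshaped to (n, 1); MRFFCM additionally biases each pixel's membership toward the labels of its spatial neighbours.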
AUTOMATIC IDENTIFICATION OF CLOUD COVER REGIONS USING SURF (IJCSEIT)
Weather forecasting is an indispensable application that predicts the state of the atmosphere at a future time based on cloud cover identification, but it generally requires the experience of a well-trained meteorologist. In this paper, a novel method is proposed for automatic cloud cover estimation, tailored to the Indian territory. Speeded-Up Robust Features (SURF) matching is applied to the satellite images to obtain affine-corrected images. The cloud regions extracted from the affine-corrected images by Otsu thresholding are superimposed on synthetic grids representing latitude and longitude over India. The segmented cloud and grid composition drive a look-up table mechanism to identify the cloud cover regions. Owing to its simplicity, the proposed method processes the test images quickly and provides accurate segmentation of cloud cover regions.
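Otsu's threshold, used above to separate cloud from background, maximizes the between-class variance over the image histogram; a minimal NumPy version (illustrative, not the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    # gray: uint8 image. Returns the gray level maximizing the
    # between-class variance sigma_b^2(k) = (mu_T*omega(k) - mu(k))^2
    #                                       / (omega(k) * (1 - omega(k))).
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability up to k
    mu = np.cumsum(p * np.arange(256))       # first moment up to k
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # empty classes contribute 0
    return int(np.argmax(sigma_b))
```
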
Abstract
Terahertz sub-surface imaging offers an effective solution for surface and 3D imaging because of minimal sample preparation requirements and its ability to "see" below the surface. Another important property is the ability to inspect on a layer-by-layer basis via a non-contact, non-destructive route. A terahertz 3D imager designed at Applied Research and Photonics (Harrisburg, PA) has been used to demonstrate reconstructive imaging with a resolution of less than a nanometer. Gridding with inverse-distance-to-a-power equations is described for 3D image formation. A continuous-wave terahertz source derived from dendrimer dipole excitation has been used for reflection-mode scanning in the three orthogonal directions. Both 2D and 3D images are generated for the analysis of the size parameter of silver iodide quantum dots. Layer-by-layer image analysis is outlined, and graphical analysis was used for particle size and layer thickness determinations. The quantum dot particle sizes obtained check well with those determined by TEM micrographs and powder X-ray diffraction analysis. The reported non-contact measurement system is expected to be useful for characterizing 2D and 3D nanomaterials as well as for process development and/or quality inspection at the production line.
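Gridding by inverse distance to a power, as used above for 3D image formation, estimates each grid node as a distance-weighted mean of the scattered measurements; a generic sketch (function and parameter names are assumptions):

```python
import numpy as np

def idw_grid(sample_xy, values, grid_xy, power=2.0, eps=1e-12):
    # Inverse-distance-to-a-power gridding: weight each scattered sample
    # by 1 / d^power and normalise, so nearby samples dominate each node.
    d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)
```
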
Abstract: Non-destructive terahertz reflection interferometry offers many advantages for sub-surface inspection, such as interrogation of hidden defects and measurement of layer thicknesses. Here, we describe a terahertz reflection interferometry (TRI) technique for non-contact measurement of paint panels where the paint comprises different layers of primer, basecoat, topcoat and clearcoat. Terahertz interferograms were generated by reflection from the different paint layers on a metallic substrate. The interferogram peak spacings, arising from the delay-time response of the respective layers, allow one to model the thicknesses of the constituent layers. Interferograms generated at different incident angles show that the interferograms are more pronounced at certain angles than at others. This "optimum" angle is also a function of the paint and substrate combination. An automated angular scanning algorithm helps visualize the evolution of the interferograms as a function of incident angle and also enables identification of the optimum reflection angle for a given paint-substrate combination. Additionally, scanning at different points on a substrate reveals observable variations from one point to another over the entire surface area of the same sample. This ability may be used as a quality-control tool for in-situ inspection in a production line.
Performance Comparison of Noise Reduction in Mammogram Images (eSAT Journals)
Abstract
The noise level present in mammogram images strongly affects image analysis and classification accuracy, so removing noise from mammogram images is an important task. The noise present in medical images depends on the imaging modality; in mammogram images the dominant noise is quantum noise. The objective of this work is to study various filters, namely the mean, median and Wiener filters with different window sizes, using the standard benchmark Digital Database for Screening Mammography (DDSM) dataset. The higher the peak signal-to-noise ratio (PSNR), the better the image quality of the restored image. The restored image quality of the various filters was evaluated with the PSNR value. We found that the Wiener filter with a 3x3 window gives the best result for noise reduction in mammogram images.
Key Words: Mammogram, Quantum Noise, Mean Filter, Median Filter, Wiener Filter, DDSM and PSNR
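The PSNR criterion used above compares the restored image against the original on a log scale; a standard NumPy definition (illustrative):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```
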
A New Approach for Solving Inverse Scattering Problems with Overset Grid Generation (TELKOMNIKA Journal)
This paper presents a new approach to Forward-Backward Time-Stepping (FBTS) utilizing the Finite-Difference Time-Domain (FDTD) method with the Overset Grid Generation (OGG) method to solve inverse scattering problems for electromagnetic (EM) waves. The FDTD method is combined with the OGG method to reduce a geometrically complex problem to a simple set of grids. The grids can be modified easily without the need to regenerate the grid system, which provides an efficient approach to integrate with the FBTS technique. Here, the characteristics of the EM waves are analyzed. For the research reported in this paper, the 'measured' signals are synthetic data generated by FDTD simulations, while the 'simulated' signals are the calculated data. The accuracy of the proposed approach is validated, and good agreement is obtained between the simulated and measured data. The proposed approach has the potential to provide useful quantitative information about an unknown object, particularly for shape reconstruction, object detection and other applications.
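The FDTD core that generates such 'measured' signals can be illustrated by a minimal 1D Yee update in normalized units. This is a toy sketch, far simpler than the paper's solver with overset grids; the grid size, source position and pulse shape are arbitrary assumptions.

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, src=50):
    # 1D FDTD in normalized units at the magic time step (Courant = 1):
    # leapfrog updates of Ez and Hy, with a soft Gaussian source at `src`.
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells - 1)
    for t in range(n_steps):
        hy += ez[1:] - ez[:-1]            # H update (discrete curl of E)
        ez[1:-1] += hy[1:] - hy[:-1]      # E update (discrete curl of H)
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

The endpoints of `ez` are never updated, which acts as a perfect-electric-conductor boundary; the pulse therefore reflects and the total field energy stays bounded.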
Qualitative Analysis of Fruits and Vegetables Using Earth's Field Nuclear Magnetic Resonance (IJERA)
Among imaging techniques, magnetic resonance imaging (MRI) is a non-contact, non-invasive technique for obtaining images of objects rich in water content, and it provides an excellent tool to study the variation of contrast among soft tissues. It often utilizes a linear magnetic field gradient to obtain an image that combines the visualization of molecular structure and dynamics. It measures the characteristics of the hydrogen nuclei of water, and of nuclei with similar chemical shifts, as modified by the chemical environment across the object. In the present work, MRI of fresh tomatoes has been recorded using Terranova-MRI for qualitative analysis. The technique is effective, powerful and reliable as an investigative tool in the quality analysis and diagnosis of infections in fruits and vegetables.
Nanoparticle Tracking Analysis (particle by particle technique)Anson Ho
NanoSight visualizes, measures and characterizes virtually all nanoparticles. Please contact A&P Instrument Co. Ltd in Hong Kong for details. Email: anson@anp.com.hk
Monte Carlo Dose Algorithm Clinical White Paper (Brainlab)
Learn more: https://www.brainlab.com/iplan-rt
Conventional dose calculation algorithms, such as Pencil Beam, are proven effective for tumors located in homogeneous regions with similar tissue consistency, such as the brain. However, these algorithms tend to overestimate the dose distribution for tumors diagnosed in extracranial regions, such as the lung and the head and neck, where large inhomogeneities exist. Due to the inconsistencies seen in current calculation methods for extracranial treatments and the need for more precise radiation delivery, research has led to the creation and integration of improved calculation methods into treatment planning software.
Comparative Study of Evolutionary Algorithms for the Optimum Design of Thin Broadband Multilayer Microwave Absorbers (jmicro)
The almost exponentially increasing levels of electromagnetic pollution in this modern age of electronics, reported and highlighted by numerous studies carried out by scientists all over the world, inspire engineers to concentrate their research on the optimum design of multilayer microwave absorbers, considering various parameters that are inherently conflicting in nature. In this paper we focus on a comparative study of different evolutionary algorithms for the optimum design of a thin broadband (2-20 GHz) multilayer microwave absorber for oblique incidence (30°), considering arbitrary polarization of the electromagnetic waves. Different models are presented and synthesized using various evolutionary algorithms, namely the Firefly Algorithm (FA), Particle Swarm Optimization (PSO) and Artificial Bee Colony optimization (ABC), and the best simulated results are tabulated and compared with each other.
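For reference, a minimal particle swarm optimizer in its generic textbook form, with inertia plus cognitive and social pulls. The coefficients are common defaults, not the paper's tuned values, and the objective below is a stand-in, not the absorber reflectivity model.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    # Minimal PSO: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    # then x = clip(x + v); personal and global bests track the minimum.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved] = x[improved]
        pval[improved] = val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())
```

In an absorber design, each particle would encode the layer materials and thicknesses, and f would return the worst-case reflection coefficient over the band.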
A novel methodology for time-domain characterization of a full anechoic chamber (IJECEIAES)
In this paper we present a novel methodology for time-domain characterization of a full anechoic chamber using the finite integration method. This approach is fast, accurate and not intensive in computer resources. The validation of the approach is carried out in CST Microwave Studio for a full anechoic chamber intended for antenna measurement applications and for electromagnetic exposure evaluation for cellular networks. Low-, medium- and high-gain sources are used in this study. The simulations are run on a personal computer of medium performance (i7 CPU and 16 GB of RAM). The stability and convergence of our approach are obtained thanks to local meshing and auto-regressive linear filtering techniques. The minimization of the simulation time is based on the use of Huygens sources in place of the antennas. The maximum error of the chamber, as well as the wave depolarization inside the chamber, agree with previous work and with the catalogs of the principal chamber manufacturers for the tests proposed in this paper. The full simulation time is about 15 hours on average.
TOMOGRAPHY OF HUMAN BODY USING EXACT SIMULTANEOUS ITERATIVE RECONSTRUCTION ALGORITHM (cscpconf)
In this paper an Exact Simultaneous Iterative Reconstruction Algorithm is developed and applied to a large semi-human-size normal biological model and to a diseased model (liver region affected) to verify the efficiency of the algorithm. The algorithm successfully reconstructed the normal model with a 15%-20% perturbation, i.e., the change in permittivity during disease. In the diseased case, the reconstructed imaginary part of the complex permittivity clearly detects the affected zone, which may help medical diagnosis. Hence it may be a powerful tool for early detection of cancerous tumors, since the interrogating wave is non-invasive in the ultra-high-frequency range. The resolution of the system is increased by reducing the wavelength, achieved by immersing the antenna system and the model in a saline water region. The advantage of this algorithm is that the calculations of the cofactors are done offline to save computational time, and the cofactors are expressed as functions of distances irrespective of their positions.
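The simultaneous iterative reconstruction update underlying this family of algorithms can be sketched generically with dense matrices (purely illustrative; none of the paper's offline cofactor machinery is reproduced):

```python
import numpy as np

def sirt(A, b, iters=500, relax=1.0):
    # SIRT: x <- x + relax * C A^T R (b - A x), where R and C hold the
    # reciprocal row and column sums of A; all rays update x simultaneously.
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    R = np.divide(1.0, row, out=np.zeros_like(row), where=row != 0)
    C = np.divide(1.0, col, out=np.zeros_like(col), where=col != 0)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += relax * C * (A.T @ (R * (b - A @ x)))
    return x
```

Here A is the (nonnegative) system matrix mapping pixel values to measurements and b the measured data; convergence is guaranteed for relaxation factors in (0, 2) on consistent systems.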
In the present paper, the experimental study of nanotechnology is considered. Laboratory set-up for nanotechnology involves high cost, and the experimentation processes are also slow, so one cannot rely on experimental nanotechnology alone. An attempt has also been made to discuss the contributions towards societal change in the present convergence of nano-systems and information technologies. Computer simulations and modeling are one of the foundations of computational nanotechnology; such computer modeling and simulations are also referred to as computational experimentation. The accuracy of such computational-nanotechnology-based experiments generally depends on the accuracy of the intermolecular interactions, the numerical models and the simulation schemes used. The essence of nanotechnology is therefore size and control. Because of the diversity of applications, the plural term "nanotechnologies" is preferred by some; nevertheless, they all share the common feature of control at the nanometer scale, the latter focusing on the observation and study of phenomena at the nanometer scale. In this paper, a brief study of computer-simulation techniques as well as some experimental results is presented.
This is a PowerPoint for a basic understanding of molecular dynamics (MD) and NAMD simulation, providing basic information and schematic representations to explain the mechanism and process of molecular dynamics, with a brief discussion of NAMD simulation.
Application of thermal error in machine tools based on Dynamic Bayesian Network (IJRES Journal)
In recent years, the growing interest in complex manufacturing on machine tools and in machining accuracy has solicited new efforts in the area of modeling and analysis of machine-tool machining errors. The mathematical modeling of the relationship between the temperature field and the thermal error is therefore the core content: it can improve the precision of part processing and the thermal stability, and it can predict and compensate the machining errors of CNC machine tools. It is critical to obtain the thermal errors of precision machine tools in real time. In this paper, a pioneering modeling method for thermal error research based on the Dynamic Bayesian Network (DBN) is presented. The dependence between thermal error and temperature field is clearly described by graph theory, and a fuzzy classification method is proposed to reduce the computational complexity, forming a new method for thermal error modeling of machine tools.
Chaotic Secure Communication Using Iterated Filtering Method
P. Karthik, Assistant Professor;
D. Gokul Prashanth, UG Scholar;
T. Gokul, UG Scholar;
Department of Electronics and Communication Engineering,
SNS College of Engineering, Coimbatore, India.
Design of a Selective Filter based on 2D Photonic Crystal Materials (IJECEIAES)
Two-dimensional finite-difference time-domain (2D-FDTD) numerical simulations are performed in a Cartesian coordinate system to determine the transverse electric (TE) dispersion diagrams of a two-dimensional photonic crystal (PC) with a triangular lattice. The aim of this work is to design a filter with maximum spectral response close to the wavelength 1.55 μm. To achieve this, selective PC filters are formed by a combination of three W1KA waveguides wherein the air holes have different normalized radii, respectively r1/a = 0.44, r2/a = 0.288 and r3/a = 0.3292 (a is the lattice period, with value 0.48 μm). The best response is obtained when three small cylindrical cavities (with normalized radius 0.17) are inserted between the two half-planes of the photonic crystal, giving strong lateral confinement.
Ill-posedness formulation of the emission source localization in the radio-detection... (Ahmed Ammar Rebai, PhD)
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown the strong dependence of the solution of the radio-transient source localization problem on the radio-shower time of arrival at the antennas; such solutions can be purely numerical artifacts. Based on a detailed analysis of some already published results of radio-detection experiments, namely CODALEMA 3 in France, AERA in Argentina and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches have been used: the existence of solution degeneration and the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical studies. Many properties of the non-linear least-squares function are discussed, such as the configuration of the set of solutions and the bias.
Real-Time Simulation for Laser-Tissue Interaction Model
John von Neumann Institute for Computing
L.F. Romero, O. Trelles, M.A. Trelles
published in:
Parallel Computing: Current & Future Issues of High-End Computing,
Proceedings of the International Conference ParCo 2005,
G.R. Joubert, W.E. Nagel, F.J. Peters, O. Plata, P. Tirado, E. Zapata (Editors),
John von Neumann Institute for Computing, Jülich,
NIC Series, Vol. 33, ISBN 3-00-017352-8, pp. 415-422, 2006.
© 2006 by John von Neumann Institute for Computing
Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above.
http://www.fz-juelich.de/nic-series/volume33
Real-time simulation for laser-tissue interaction model
L.F. Romero (a), O. Trelles (a), M.A. Trelles (b)
(a) Dept. Computer Architecture, University of Malaga, Spain
(b) Medical Institute of Vilafortuny, Cambrils, Tarragona, Spain
The extensive use of the laser as a medical and surgical tool has led to a growing interest in modelling the interactions between laser irradiation and human tissues. This modelling has enormous computational needs, where the use of parallel computing can provide enough CPU power to obtain results in real time and with a proper representation. To this end we have developed a three-layer parallel architecture to simulate the laser-tissue interaction. The first layer obtains the irradiance distribution by Monte Carlo simulation under an optical model; then, in the second layer, the temperature change produced by the energy delivered by the laser device is obtained by means of a differential-equation-based model. Finally, the thermal damage is predicted from the spatial and temporal temperature distributions. To achieve high efficiency and real-time results, the complexity of the model requires a complex parallel implementation. In this work, an interface for a hybrid parallel communication model is presented. This interface makes the programming of high-efficiency hybrid codes easier, even with a reduced set of processors.
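The first layer's Monte Carlo step can be caricatured in one dimension: photons take exponentially distributed steps, deposit a fraction of their weight at each interaction, and occasionally reverse direction. This is a deliberately crude stand-in (no Henyey-Greenstein phase function, no 3D geometry) for the optical model used in the paper; all parameters are assumed values.

```python
import numpy as np

def mc_absorption_profile(n_photons=5000, mu_a=1.0, mu_s=10.0, g=0.9,
                          depth=1.0, bins=50, seed=0):
    # 1D photon random walk: step lengths ~ Exp(mu_t), mu_t = mu_a + mu_s;
    # a fraction mu_a/mu_t of the photon weight is absorbed per event,
    # and the direction flips with probability (1 - g)/2 (toy anisotropy).
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    deposited = np.zeros(bins)
    for _ in range(n_photons):
        z, direction, weight = 0.0, 1.0, 1.0
        while weight > 1e-4:
            z += direction * rng.exponential(1.0 / mu_t)
            if z < 0.0 or z >= depth:
                break                       # photon escapes the slab
            deposited[int(z / depth * bins)] += weight * mu_a / mu_t
            weight *= mu_s / mu_t           # remaining (scattered) weight
            if rng.random() < (1.0 - g) / 2.0:
                direction = -direction
    return deposited / n_photons
```

The returned profile is the absorbed energy per depth bin per launched photon, which would feed the second layer's heat source term.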
1. Introduction
Laser treatment based on controlled tissue elimination using selective photothermolysis is now well established as the treatment of first choice for various skin lesions, and especially for the treatment of pigment disorders and skin tumors. When choosing a proper set of irradiation parameters (wavelength, pulse length, beam size, etc.) for a pulsed laser beam applied to a given target zone, undesired kinds of tissue can be destroyed by inducing thermal damage in them, while the temperature of the surrounding tissues is kept below the threshold for damage.
However, the optimal choice of laser irradiation parameters and guidance of treatment is closely related to the prognosis of results. This is a very complex problem because it is strongly associated with the specific lesion characteristics, with an enormous variety in the histopathology of skin disorders, the surrounding tissue and its particular structural distribution in the various skin layers, and the type of laser device with a diversity of irradiation parameters, among others. The many issues involved make this a field of current interest that calls for in-depth research. Lately, this problem has led to a growing interest in modelling the interaction between laser irradiation and human tissues.
We have addressed this problem with a new laser-tissue interaction model based on three different layers: (a) first, the irradiance distribution (how light delivered by the laser device propagates through such tissue models) is determined by Monte Carlo simulation; next, (b) the temperature distribution in the tissue caused by laser energy deposition is estimated by solving the bioheat transfer equations; and lastly, (c) the thermal damage is predicted from the spatial and temporal temperature distributions, with the aid of the so-called damage-integral Arrhenius formulation, in which the thermal damage to tissue is described as a temperature-dependent rate process. The three-layer architecture, presented in Section 2, allows a close reproduction of the effect of the laser with a realistic tissue model. Figure 1 shows a schematic representation of the model, on the left, and the most important dependencies among the layers and their parameters, on the right.
Figure 1. A laser-tissue interaction scheme and a dependency diagram of the model.
Parallel computing with low-cost multiprocessor systems has been employed to achieve the real-time
simulations required for monitoring the laser-tissue interaction. While parallelizing the Monte Carlo
simulation is trivial, the major computational cost and complexity arise from the
integration of the equation describing the heat transfer. To obtain a faster computation with good
accuracy, a parallel finite-difference method is employed here, using a block distribution strategy
for the discretization grid. Finally, the Arrhenius formulation takes advantage of the grid distribution, and no
data interchange is necessary in this step. The parallelization of the model is presented in Section 3.
One of the major advantages of using non-deterministic techniques in the simulation process lies
in the fact that processor synchronization is not strictly necessary to obtain information. Different
parallelization techniques and communication paradigms have been combined to optimize response
time, producing high efficiency and near real-time results. In Section 4, an interface that eases
the programming of efficient hybrid codes is presented.
The final integrated application allows the full simulation to be conducted in real time (as shown in Section
5) once the configuration parameters are set. A multi-layer description of the tissue allows a very
close representation of the different tissue irregularities, with an easy definition of the diverse tissue
components such as chromophores, small veins, dermis, and epidermis; it is even possible to
incorporate external sources such as cryogen gels, or to modify the device parameters.
2. The model
The three layers of the computational scheme of the laser/tissue interaction model are strongly
related. As stated above, the photon energy of a pulsing laser in layer 1 increases the temperature in
layer 2, and the induced heat modifies the cells in layer 3. As the thermal and optical properties are
modified in a damaged tissue, there is a feedback of layer 3 over the first two layers of the proposed
model. In order to obtain a realistic model, a time-iterative implementation of the three layers is
required, in which the time step should be as short as possible. In practice, however, the cell degradation
caused by temperature, and hence the changes in the optical and thermal properties, are governed
by a time scale much larger than the precision required in the solution of the diffusion
equation for typical grid sizes. For this reason, a compromise solution with a relaxed interaction
(with two levels of time steps) among the layers has been considered, in order to minimize the
communication overhead of the model and to increase its temporal locality:
Algorithm 1:
  for i = 1, ..., number of global time steps
    compute photon/energy distribution (layer 1)
    for j = 1, ..., number of diffusion time steps
      energy absorption = f(energy distribution, pulsing laser time shape)
      compute thermal diffusion (layer 2)
      compute damage (layer 3)
2.1. Light distribution
Light distribution is estimated by launching photons governed by the optical properties of tissue
(scattering, absorption, etc) [3]. The number of photons must be large enough to obtain statistically
valid results. As it travels through the tissue, every photon deposits some of its energy in small
cells (voxels) resulting from a discretization of the tissue. The optical properties of each voxel
depend on the tissue layer to which it belongs¹. The process is represented by the following algorithm:
Algorithm 2:
  for i = 1, ..., number of photons
    launch photon
    while (photon in grid) and (photon alive)
      move photon
      compute scattering
      if (layer boundary) compute refraction and reflection
      compute energy deposition
      if (energy < threshold) photon not alive
The Monte Carlo method employed here requires a high computational power for statistically
valid simulations, but more accurate results can be obtained in comparison with other methods.
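The photon random walk of Algorithm 2 can be sketched as follows. This is a minimal, single-layer illustration only: it assumes a homogeneous cube of tissue with illustrative absorption and scattering coefficients, uses isotropic scattering instead of a realistic tissue phase function, and omits the refraction/reflection step at layer boundaries.

```python
import math
import random

def launch_photons(n_photons, mu_a=0.5, mu_s=5.0, grid_size=50, voxel=0.01, seed=0):
    """Minimal Monte Carlo photon transport in a homogeneous cube.

    mu_a, mu_s: absorption/scattering coefficients (1/cm), illustrative values.
    Returns a dict mapping voxel indices (i, j, k) to deposited energy.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    deposit = {}
    for _ in range(n_photons):
        # Photon enters at the centre of the top face, travelling in +z.
        x, y, z = grid_size * voxel / 2, grid_size * voxel / 2, 0.0
        ux, uy, uz = 0.0, 0.0, 1.0
        w = 1.0                                   # photon weight ("alive")
        while w > 1e-4:                           # energy < threshold -> dead
            s = -math.log(rng.random()) / mu_t    # sample free path length
            x, y, z = x + s * ux, y + s * uy, z + s * uz
            i, j, k = int(x / voxel), int(y / voxel), int(z / voxel)
            if not (0 <= i < grid_size and 0 <= j < grid_size and 0 <= k < grid_size):
                break                             # photon left the grid
            dw = w * mu_a / mu_t                  # energy deposition in this voxel
            deposit[(i, j, k)] = deposit.get((i, j, k), 0.0) + dw
            w -= dw
            # Isotropic scattering (the real model uses a tissue phase function).
            uz = 2.0 * rng.random() - 1.0
            phi = 2.0 * math.pi * rng.random()
            r = math.sqrt(1.0 - uz * uz)
            ux, uy = r * math.cos(phi), r * math.sin(phi)
    return deposit
```

Because trajectories are independent, this loop parallelizes trivially by distributing photons among processors, as the paper does.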
2.2. Thermal diffusion
The heat transfer is modelled in this work by the bioheat equation with advection (1), and the
corresponding boundary conditions [5,6]:
\[
\frac{\partial T}{\partial t} = \frac{k}{\rho \cdot C}\left( D\frac{\partial^2 T}{\partial x^2} + D\frac{\partial^2 T}{\partial y^2} + \frac{\partial}{\partial z}\left( D\frac{\partial T}{\partial z}\right)\right) + F_{met} + F_{circ} + A + E_{laser}, \qquad (1)
\]
where T is the temperature, ρ the density, C the specific heat, k the thermal conductivity, and D the
diffusion coefficient; F_met is a heat source from the cellular metabolism; F_circ a heat source from
the smaller blood vessels; A is the advection term (see Figure 2); and finally, E_laser is the energy carried
by the photons, computed in the previous layer. Due to the typical size of the photon sampling and the
dimensions of the integration grid for the partial differential equation system, our choice has been
to pre-compute the energy distribution matrix based on the light distribution in the previous step and on
the time shape of the pulsing laser.
Equation (1) has been integrated using a Crank-Nicolson finite-difference method. It is worth
noting that both the spatial and temporal discretizations in this step are the same as those used in the
Monte Carlo simulation. Once the energy distribution is known and the matrix coefficients have
¹ A typical simulation uses 50 × 50 × 50 voxels, each of size 100 µm × 100 µm × 100 µm.
Figure 2. A complex model with an air layer, cryogen gel, epidermis, dermis, subcutaneous tissue,
vessels, chromophores, and several advection terms (represented with arrows).
Figure 3. Exposure time required to damage 100% of dermis cells, as a function of temperature.
been computed, the resulting system of n = nx × ny × nz linear equations is solved using the
Preconditioned Conjugate Gradient (PCG) method.
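As an illustration of this step, a one-dimensional Crank-Nicolson update solved with a plain (non-preconditioned, serial) conjugate gradient can be sketched as follows. The paper's solver works on the full 3-D bioheat system with preconditioning and a parallel block distribution, so this is only a minimal stand-in with assumed names and parameters.

```python
import numpy as np

def crank_nicolson_step(T, alpha, dt, dx, tol=1e-10):
    """One Crank-Nicolson step of 1-D heat diffusion, solved with plain
    conjugate gradient. alpha = k / (rho * C); fixed-temperature (Dirichlet)
    boundaries are kept by treating the boundary rows as the identity.
    """
    n = len(T)
    r = alpha * dt / (2.0 * dx * dx)

    def apply_A(v):
        # (I - r*L) v, where L is the 1-D Laplacian stencil; boundary rows = I.
        out = v.copy()
        out[1:-1] -= r * (v[:-2] - 2.0 * v[1:-1] + v[2:])
        return out

    # Right-hand side: (I + r*L) T^n.
    b = T.copy()
    b[1:-1] += r * (T[:-2] - 2.0 * T[1:-1] + T[2:])

    # Conjugate gradient on the SPD system A x = b.
    x = T.copy()
    res = b - apply_A(x)
    p = res.copy()
    rs = res @ res
    for _ in range(10 * n):
        Ap = apply_A(p)
        a = rs / (p @ Ap)
        x += a * p
        res -= a * Ap
        rs_new = res @ res
        if rs_new < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x
```

In the paper's setting, each vector operation of this loop is distributed in blocks among the processors of the grid server.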
2.3. Arrhenius formulation for the tissue damage
The thermal increase modifies the physiological properties of a tissue. Normally, when the temperature
rises to about 50 °C, protein denaturation occurs, inducing the death of cells in a short
time. The Arrhenius formulation [1] is employed here to calculate the accumulated damage (ΔΩ), which
becomes irreversible when the total induced damage affects 100% of the cells (Ω = 1). The Arrhenius equation
(2) computes the accumulated damage in a tissue exposed to a given temperature for a certain time:
\[
\Delta\Omega(T, t) = A \int_{t_i}^{t_f} e^{-\frac{E_a}{R \cdot T}}\, dt \approx A\,(t_f - t_i)\, e^{-\frac{E_a}{R \cdot T}}, \qquad (2)
\]
where A is the frequency factor, t_f - t_i is the exposure time, R is the universal gas constant, and E_a is
the activation energy barrier. Figure 3 shows the time required for irreversible damage of dermis tissue
at different temperatures. Usually, the damage will be produced mainly in the epidermis
(due to its proximity to the laser source), and also in the haemoglobin (because of its high absorption
coefficient). The use of cryogen gels (below 0 °C) minimizes the damage in the epidermis [2].
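The closed form on the right of equation (2) can be evaluated directly. The sketch below uses Henriques-type constants for skin (A ≈ 3.1·10⁹⁸ s⁻¹, E_a ≈ 6.28·10⁵ J/mol), which are common literature values and not necessarily the ones used in the paper:

```python
import math

R = 8.314            # universal gas constant, J/(mol*K)
# Frequency factor and activation energy for skin (Henriques-type values,
# used purely for illustration; the paper does not list its constants).
A_FREQ = 3.1e98      # 1/s
E_A = 6.28e5         # J/mol

def damage(temp_c, seconds):
    """Accumulated Arrhenius damage for a constant-temperature exposure."""
    T = temp_c + 273.15
    return A_FREQ * seconds * math.exp(-E_A / (R * T))

def time_to_full_damage(temp_c):
    """Exposure time (s) until Omega = 1, i.e. 100% of cells damaged."""
    T = temp_c + 273.15
    return math.exp(E_A / (R * T)) / A_FREQ
```

With these constants the curve reproduces the qualitative behaviour of Figure 3: roughly one second of exposure suffices near 60 °C, while lower temperatures require much longer times.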
3. Parallel implementation
In order to achieve real time realistic simulations, a parallel implementation has been developed,
which can be properly scaled in order to get the maximum performance from the available resources.
Up to four levels of parallelism have been combined in a typical implementation of the model, as
described below (Figure 4):
1) A client-server model, usually using a PC as a front-end client that sends the control parameters
of the model (such as intensity, pulse shape, environmental conditions, etc.) to a simulator server.
The front-end client also renders any volume data received from the simulator server. The experiments
presented below have been obtained using a PC with a Pentium 4 as client,
and an Altix 3000 as server.
2) The Altix server launches two MPI processes. The first one computes the trajectory of photons
(the photon server), while the other one solves the bioheat equation (the grid server).
Figure 4. Gantt chart for a typical parallel implementation.
3) Both MPI processes have been parallelized using a shared-memory model with OpenMP. In
the first MPI process, the Monte Carlo simulation is parallelized by distributing the photons among the
different processors and collecting the results in a master thread, which communicates with the grid
server. Photon trajectories are independent, allowing an efficient parallelization. In the second process,
a block distribution strategy for the discretization grid along the z-direction is used, in such a
way that each processor computes the matrix coefficients and the right-hand sides for its assigned
nodes. This is equivalent to a block distribution of the system matrix if a natural ordering is used.
The resulting system is solved by means of the PCG method, in which each vector operation is
performed in parallel through a block distribution of the vector components among processors.
4) In addition, one of the MPI processes (usually the photon server) launches a POSIX thread,
which performs asynchronous communication with the PC client.
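The block distribution along the z-direction used by the grid server can be illustrated with a small helper that splits the nz planes of the grid into contiguous slabs. This is only a sketch; in the actual implementation the slabs are assigned to OpenMP threads.

```python
def z_block_ranges(nz, n_workers):
    """Block distribution of the discretization grid along z: one contiguous
    slab of planes per worker, with the remainder spread over the first
    workers so that slab sizes differ by at most one plane."""
    base, extra = divmod(nz, n_workers)
    ranges, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)
        ranges.append((start, start + size))   # half-open interval [start, end)
        start += size
    return ranges
```

With a natural ordering of the unknowns, each slab corresponds to a block of consecutive rows of the system matrix, which is what makes the PCG vector operations embarrassingly parallel within a slab.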
3.1. Communication pattern
Note that the communication between processes and threads may be synchronous or asynchronous,
depending on the requirements at each level. It has to be taken into account that, like reality itself,
this is a stochastic model, in which the computational rigour of the solution
can be partially sacrificed without a penalty in the realism of the results. Thus, the PC client and the
server machine only require asynchronous communication: the POSIX thread continuously sends
the data as found in memory, at any moment, and forwards the control commands
received from the PC to a memory location in one of the MPI processes. The volume data for visualization
is sent by the server using just one byte per voxel in a compressed package. Decompression
and rendering of the volume data are performed by an event-driven application using the fast VTK
visualization library [7]. The communication rate depends only on the LAN connection, and the
frequency of this asynchronous communication is high enough to avoid a visual mismatch.
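The one-byte-per-voxel packing can be sketched as a simple quantization step. The exact encoding and the temperature range are not given in the paper, so the function names and bounds below are assumptions:

```python
import numpy as np

def pack_volume(temps, t_min=0.0, t_max=255.0):
    """Quantize a float temperature volume to one byte per voxel for the
    asynchronous server-to-client transfer. Values outside [t_min, t_max]
    are clipped before scaling onto the 0..255 byte range."""
    clipped = np.clip(temps, t_min, t_max)
    scaled = (clipped - t_min) / (t_max - t_min) * 255.0
    return np.round(scaled).astype(np.uint8)

def unpack_volume(byte_vol, t_min=0.0, t_max=255.0):
    """Inverse mapping performed on the client before rendering."""
    return byte_vol.astype(np.float64) / 255.0 * (t_max - t_min) + t_min
```

The round-trip error is bounded by half a quantization step, which is more than enough for visualization purposes.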
The photon server and the grid server also communicate with each other by using an asynchronous
message-passing model (MPI_Isend, MPI_Irecv) at a frequency of about 10 Hz. This frequency is
high enough to eliminate numerical instability and to provide realistic simulations. Note that the
protein denaturation that modifies the optical parameters under a temperature increase usually
occurs at a rate of tens of seconds. On the other hand, the communications inside the grid server, required
for the solution of equation (1), which occur through memory, must be carefully synchronized.
3.2. Load balancing
An efficient parallelization strategy for the PCG solver has been used in this work, which
ensures a minimization of the communication cost and a good load balancing inside the grid server
[4]. As the integration of equation (1) establishes the temporal reference for the system, the
computational load of the photon server can be easily adjusted (above a minimum threshold, statistically
established), depending on the number of threads employed.
4. A communication model for hybrid systems
In this section, an interface that eases the hybrid programming of such a complex parallel
model is presented. The proposed idea is to make an abstraction of both the data interchange and
the synchronization requirements, by solving the underlying problem of any parallel programming
paradigm: the read-after-write (RAW) and write-after-read (WAR) hazards for any block of data
shared by two or more processors. Note that all parallel programming environments in
the literature share information using one of the following strategies: a) storing the information
in a shared memory location; b) copying remote information into local storage (get); c) copying local
information into remote storage (put); or, finally, using a symmetric send-receive communicator.
The interface proposed here is based on the substitution of the mentioned operations (put, get, send,
recv), and of any required synchronization (flags, semaphores, locks, etc.), by the use of only four
routines, which have to be carefully inserted before and after the use of shared data. These routines
may internally use any of the traditional strategies, depending on the requirements, as explained below:
• pre-write: Routine used to check for a WAR hazard just before any modification of local
data that is being shared. This call is only required in a shared-memory environment, or
when a get communicator is employed. Usually, it consists of checking whether a certain flag
variable, described below, is zero (WAR check).
• post-write: Routine used after modifying the data; it includes three stages. First, the
WAR hazard is evaluated, only if a put communicator is employed, by checking the availability
of the remote location. Second, the data is delivered, if required, by using send or put. And
third, a flag variable is raised, as the initial step of RAW hazard detection, if a symmetric
communicator is not being used (RAW release).
• pre-read: Routine used to receive data before any computation using it. It also includes three
stages. First, the flag variable is tested (if a symmetric communicator is not being used), as
the final step of RAW hazard detection. Second, the data is received, by using recv or get.
And third, the flag variable is set to zero, as the release phase of WAR hazard detection;
this third stage is only required when a get communicator is employed (RAW check).
• post-read: Routine used to release a WAR hazard just after using data produced in another
processor. The flag variable is set to zero. This call is only required in a shared-memory
environment, or when a put communicator is employed (WAR release).
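For the shared-memory communicator, the four routines above reduce to waiting on and toggling a single flag. A minimal sketch using a Python condition variable (the condition-variable internals are our own assumption; the paper only fixes the flag semantics) could look like:

```python
import threading

class SharedBlock:
    """Shared-memory variant of the four-routine interface: a single flag
    guards one block of data shared between a producer and a consumer."""

    def __init__(self):
        self.flag = 0                 # 0: consumer done (WAR clear), 1: data ready
        self.data = None
        self.cv = threading.Condition()

    def pre_write(self):
        # WAR check: wait until the previous value has been consumed.
        with self.cv:
            self.cv.wait_for(lambda: self.flag == 0)

    def post_write(self, value):
        # RAW release: publish the data and raise the flag.
        with self.cv:
            self.data = value
            self.flag = 1
            self.cv.notify_all()

    def pre_read(self):
        # RAW check: wait until data is ready, then return it.
        with self.cv:
            self.cv.wait_for(lambda: self.flag == 1)
            return self.data

    def post_read(self):
        # WAR release: drop the flag so the producer may overwrite the block.
        with self.cv:
            self.flag = 0
            self.cv.notify_all()
```

Wrapping every access to a shared block in these four calls makes the hazard handling independent of whether the block actually lives in shared memory or is moved with put/get or send/recv underneath.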
In Table 1, the four operations of the proposed interface are summarized. Note that these routines
should be applied to large blocks of data rather than to individual variables (in the same way that
messages should be aggregated, if possible, in a message-passing model). For example, a post-write call
should be invoked just after the last local modification preceding an access from a remote processor. In
general, the routines presented here should be placed immediately before or after the read or write
operations, to minimize the waits and to overlap communications and computations.
The synchronization flags described here can be reused by different communication operations
with some care. For example, in a global reduction, all the flags involved can be replaced by a single
counter. Also, if the same operation is performed inside a loop, an even-odd pair of flags should be
used alternately, to prevent hazards affecting the flag variable itself. Finally, when any of the
paired routines (either a post-read/pre-write pair for WAR hazards, or a post-write/pre-read pair for RAW
hazards) is dynamically separated by another pair involving at least the same PEs, then the
synchronization operations over the flags in the first pair can be eliminated, because the hazard has
already been solved by the second one. In this way, many of the WAR hazard detections can be eliminated.
By considering all these items together, the resulting code minimizes the communication
cost and significantly reduces the waits at the synchronization points due to load imbalance.
Communicator   pre-write      post-write                   pre-read                     post-read
shared         test flag==0   flag=1                       test flag==1                 flag=0
symm. MP       -              send                         recv                         -
put            -              test flag==0; put; flag=1    test flag==1                 flag=0
get            test flag==0   flag=1                       test flag==1; get; flag=0    -
Table 1
PEs   Task distribution                    Exec. time (grid server)   Launched photons   Speedup MC   Speedup ED
1     1 (photon server)                    -                          13050              1            -
2     1 (photon server), 1 (grid server)   4.88 s                     63650              -            1
4     1 (photon server), 3 (grid server)   0.98 s                     12950              0.99         4.98
8     2 (photon server), 6 (grid server)   0.50 s                     26550              2.02         9.76
Table 2
5. Results
In this section, we present some results obtained using an Altix 3000 system
as the simulator server. Several experiments have been performed, using a typical skin model with
3 vessels, several chromophores, and a discretization grid of 125000 voxels (see Figure 5). In all
cases, times are shown for a 1-second simulation (10 global time steps, 10×20 diffusion time steps).
The first experiment (using 1 PE as a photon server) has been used to determine how many photons
can be launched by 1 PE in one CPU second. In the second and third experiments, one of the Itanium
processors has been used as a photon server, while 1 and 3 CPUs, respectively, have been used for the
solution of the bioheat equation (the CPU usage of the Arrhenius computation is negligible). In the third
experiment, the parallel efficiency obtained (1.66) is very high, which is due not only to the cache
effect, but also to a total absence of waits at the synchronization points. This is thanks to the use of
the technique described in the previous section, which has allowed the elimination of all global barriers
and of any unnecessary hazard test. Real-time results have been obtained in this case (note that
many of the model properties were previously adjusted to reach a real-time simulation for this
configuration of the system). The last experiment indicates that the scalability of the problem is very
good for a larger number of processors, thus allowing a more complex modelling of the problem.
Figure 5. Simulation snapshots: a) photon energy deposition, b) tissue temperature, and c) induced
damage for a sample model with three vessels. Figure b) includes a 0 °C isosurface in the epidermis,
and an opacity render (centered at 90 °C) which is mainly located around the vessels.
6. Conclusions
In this work, an integrated model of the laser-tissue interaction has been presented. By means of
the three-layer architecture of the model, an accurate representation of the tissue response to the laser
stimulus can be achieved, thanks to the extended set of parameters considered. The three-layer
architecture of the model also incorporates a complex multi-level parallelism. Several communication
paradigms and synchronization techniques have been combined, according to the requirements of each
level. To ease the programming of the model, a new parallel programming interface for communications
and synchronizations has been proposed, which is independent of the parallelization paradigm.
With this interface, a robust, efficient and accurate parallel model has been obtained, which can
achieve real-time simulations of a complex model even on a small parallel architecture. These
results represent a step forward that can greatly assist diagnosis and treatment, and subsequently
help to assess the device parameters objectively at various stages of treatment.
7. Acknowledgements
The authors wish to thank Victor Espigares for his help in the integration of the model.
References
[1] J.K. Barton, A. Rollins, S. Yazdanfar, T.J. Pfefer, V. Westphal, J.A. Izatt, "Photothermal coagulation of
blood vessels", Physics in Medicine and Biology, vol. 46, pp. 1665-1678, 2001.
[2] T.J. Pfefer, D.J. Smithies, T.E. Milner, M.J. Van Gemert, J.S. Nelson, A.J. Welch, "Bioheat transfer
analysis of cryogen spray cooling during laser treatment of port wine stains", Lasers in Surgery and
Medicine, vol. 26, pp. 145-157, 2000.
[3] A. Roggan, G. Müller, Dosimetry and Computer Based Irradiation Planning for Laser-Induced
Interstitial Thermotherapy, SPIE Institute Series IS 13, Müller and Roggan (Eds.), 1995.
[4] L.F. Romero, E.M. Ortigosa, J.I. Ramos, "Parallel scheduling of the PCG method for banded matrices
rising from FDM/FEM", Journal of Parallel and Distributed Computing, vol. 63, pp. 1243-1256, 2003.
[5] M.J. Van Gemert, A.J. Welch, J.W. Pickering, Modelling Laser Treatment of Port Wine Stains, O.T. Tan
(Ed.), Amsterdam, 1992.
[6] M.J. Van Gemert, Optical-Thermal Response of Laser-Irradiated Tissue, Plenum, London, 1995.
[7] L. Avila et al., The VTK User's Guide, Kitware Inc., 2004.