This document summarizes a research paper that developed a GPU-based algorithm for CT image reconstruction from undersampled and noisy projection data. The algorithm uses an algebraic reconstruction method, implementing the LSQR iterative solver on a GPU with CUDA. By exploiting the GPU's massive parallelism, the algorithm reconstructs CT images at higher resolution than previous methods without sacrificing image quality or significantly increasing computation time. The paper presents the mathematical model, the steps of the reconstruction algorithm, and the GPU implementation details.
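The algebraic model behind such reconstructions writes the image as a vector x, the scanner geometry as a sparse system matrix A, and the measured projections as b, then solves Ax = b in the least-squares sense with LSQR. A minimal CPU sketch using SciPy follows; the problem sizes and the random sparse A are illustrative stand-ins for a real projection matrix, and the paper's contribution is running the same iteration on a GPU.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Toy algebraic model: each row of A stands in for the intersection lengths
# of one X-ray path with the image pixels; b holds the measured projections.
n_pixels, n_rays = 256, 512            # hypothetical problem size
A = sprandom(n_rays, n_pixels, density=0.05, random_state=0, format="csr")
x_true = rng.random(n_pixels)          # "ground truth" image (flattened)
b = A @ x_true + 0.01 * rng.standard_normal(n_rays)   # noisy projections

# LSQR iteratively minimises ||Ax - b||_2; on a GPU, the A @ x and A.T @ y
# products inside each iteration are what gets parallelised.
x_rec, istop, itn = lsqr(A, b, atol=1e-8, btol=1e-8, iter_lim=500)[:3]
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

Because LSQR only touches A through matrix-vector products, the same code structure maps directly onto a CUDA sparse-matrix kernel.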
This slide deck introduces CT and the basis and types of CT image reconstruction, with detailed explanations of interpolation, convolution, the Fourier slice theorem, and the Fourier transform, and a brief overview of the image domain, i.e. digital image processing.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data and on experimental μCT and electron tomography data sets show that TVR-DART provides more accurate reconstruction than existing algorithms under noisy conditions, from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning than the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.
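The core DART idea of steering a continuous reconstruction toward a few discrete gray values can be sketched with a smooth (logistic) thresholding step, which is what keeps the mapping differentiable and hence optimizable. The gray values, sharpness, and exact function form below are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

# Soft segmentation step in the spirit of (TVR-)DART: push a continuous
# reconstruction toward a small set of discrete gray values using smooth
# logistic step functions centred on midpoint thresholds.
def soft_segment(x, grays, sharpness=25.0):
    """Map x onto len(grays) levels via logistic step functions."""
    grays = np.sort(np.asarray(grays, dtype=float))
    thresholds = (grays[:-1] + grays[1:]) / 2.0     # midpoint thresholds
    out = np.full_like(x, grays[0], dtype=float)
    for t, lo, hi in zip(thresholds, grays[:-1], grays[1:]):
        out += (hi - lo) / (1.0 + np.exp(-sharpness * (x - t)))
    return out

x = np.array([0.02, 0.48, 0.51, 0.97])              # continuous estimates
seg = soft_segment(x, grays=[0.0, 0.5, 1.0])        # pushed toward 3 levels
print(seg.round(2))
```

As the sharpness grows, the mapping approaches hard thresholding; keeping it finite is what lets the gray values and thresholds themselves be updated during the iterations.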
Geospatial Data Acquisition Using Unmanned Aerial Systems (IEREK Press)
The Rivers State University campus in Port Harcourt, Nigeria, is one of the university campuses in the city, covering over 21 square kilometers and housing a variety of academic, residential, administrative and other support buildings. The campus has seen significant transformation in recent years, including the rehabilitation of old facilities, the construction of new academic facilities and, most recently, the creation of new colleges, faculties and departments. The current state of these transformations is missing from several available maps of the university: numerous facilities have been constructed on the campus that are not represented on these maps, along with the attributes associated with those facilities. Existing information on the various landscapes on the map is outdated and needs to be streamlined in light of recent changes to the University's facilities and departments. This research article aims to demonstrate the effectiveness of unmanned aerial systems (UAS) for geospatial data collection for physical planning and mapping of infrastructure at the Rivers State University Port Harcourt campus, by developing a UAS-based digital map and tour guide for RSU's main campus covering all colleges, faculties and departments, offering visitors, staff and students location and attribute information within the campus.
Methodologically, unmanned aerial vehicles were deployed to obtain current visible imagery of the campus following its growth and increasing infrastructural development. A DJI Phantom 4 Pro UAS equipped with a 20-megapixel visible-light camera was flown over the campus at a height of 76.2 m (250 ft), generating imagery with a ground sample distance of 1.69 cm per pixel. To enable 3D modeling, the imagery was acquired using the flight-planning software DroneDeploy at a near-nadir angle with 75 percent front and side overlap. Vertical positions were referenced to the World Geodetic System 1984 (WGS 84), and horizontal positions to the WGS 84 Universal Transverse Mercator (UTM) projection. To match the UAS data, ground control points (GCPs) were transformed to UTM zone 32 north.
Finally, the UAS-derived deliverables included dense point clouds, a digital surface model (DSM), and an orthomosaic: a geometrically corrected aerial image that accurately represents an area and can be used to measure true distances.
Chebyshev filter applied to an inversion technique for breast tumour detection (eSAT Journals)
Abstract Microwave imaging has been extensively studied in recent years as a new technique for early-stage breast cancer detection. The rationale for microwave imaging in breast tumour detection is the significant contrast between the dielectric properties of normal tissue and those of malignant tumours. In practice, however, environmental noise present during screening degrades image quality; an inaccurately reconstructed image causes false or misleading interpretation, which can lead to inappropriate diagnosis or treatment. In the simulation work, noise is added to imitate the actual environment, and a two-dimensional (2D) object resembling a breast is developed by numerical simulation. A filter is integrated with an iterative inversion technique for breast tumour detection to eliminate the noise. To assess the effectiveness of this approach, we consider the reconstruction of the electrical parameter profiles of 2D objects from measurements of transient total electromagnetic field data contaminated with noise; additive white Gaussian noise is used to mimic random processes that occur in nature. This paper presents the filter settings and characteristics that affect the reconstruction, in order to obtain the most reliable image, closest to the actual one. The selection of filter settings and design is important to achieve the desired signal, the most accurate image, and reliable information about the object. A Chebyshev low-pass filter is applied in the Forward-Backward Time-Stepping (FBTS) algorithm to filter the noisy data and improve the quality of the reconstructed image. Keywords: Chebyshev low-pass filter, microwave imaging, breast tumour detection
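As a rough illustration of the filtering step, the sketch below applies a Chebyshev type-I low-pass filter (via SciPy) to a signal corrupted with additive white Gaussian noise, as in the abstract. The sampling rate, filter order, ripple, and cutoff are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

fs = 1000.0                                       # sampling rate (Hz), hypothetical
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)                 # 5 Hz "signal" of interest
rng = np.random.default_rng(1)
noisy = clean + 0.5 * rng.standard_normal(t.size) # additive white Gaussian noise

# 4th-order Chebyshev type-I low-pass: 1 dB passband ripple, 20 Hz cutoff.
b, a = cheby1(N=4, rp=1, Wn=20, btype="low", fs=fs)
filtered = filtfilt(b, a, noisy)                  # zero-phase filtering

err_noisy = np.mean((noisy - clean) ** 2)
err_filt = np.mean((filtered - clean) ** 2)
print(err_filt < err_noisy)                       # filtering should reduce the error
```

The trade-off the abstract alludes to is visible in the parameters: a lower cutoff or higher order removes more noise but distorts more of the signal, which is why filter design matters for the reconstructed image.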
Chebyshev filter applied to an inversion technique for breast tumour detection (eSAT Journals)
Abstract Respiration rate (RR) is one of the vital signs that requires regular monitoring in patients. Among the many medical devices developed to monitor human health, the RR monitor is one: a device that measures the subject's respiration rate non-invasively. The objective of the proposed work is to design and develop a low-cost respiration rate monitor for clinical applications. The main parameter used is the temperature of respired air, i.e. both inspired and expired air; the device therefore uses a thermistor as the sensor, providing temperature feedback on the inspired and expired air. The proposed work uses the ATMEL AT89S52 microcontroller with an external ADC0809. The voltage magnitude during inhalation and exhalation is converted into a digital signal by the ADC, and a peak-detection step follows: the number of peaks obtained over one minute gives the respiration rate. The respiration rate is then sent to the concerned physician's cell phone through a GSM modem, and the device raises an alarm and sends a request via SMS in cases of tachypnea or bradypnea. Keywords: Respiratory Rate, Peak Detection, ADC, GSM, Threshold.
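The peak-counting step can be sketched on a synthetic thermistor trace: count the peaks in one minute of the sampled signal. The sampling rate, breathing waveform, threshold, and refractory distance below are illustrative assumptions, not the device's firmware logic.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 10.0                                  # samples per second, hypothetical
t = np.arange(0, 60, 1 / fs)               # one minute of data
breaths_per_min = 16
# Synthetic thermistor voltage: warmer expired air gives periodic peaks.
signal = np.sin(2 * np.pi * (breaths_per_min / 60.0) * t)
signal += 0.1 * np.random.default_rng(2).standard_normal(t.size)

# Count peaks over one minute: `height` plays the role of the threshold in
# the abstract, `distance` enforces a refractory gap between breaths.
peaks, _ = find_peaks(signal, height=0.5, distance=fs * 1.5)
rr = len(peaks)                            # peaks per minute = respiration rate
print(rr)
```

A real device would run this count on the ADC samples and compare rr against tachypnea/bradypnea limits before triggering the GSM alert.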
Real-Time Analysis of Streaming Synchrotron Data: SCinet SC19 Technology Chall... (Globus)
This project, which involved streaming light source data from the SC19 show floor to Argonne’s Leadership Computing Facility (ALCF) outside Chicago, won the top prize at the inaugural SCinet Technology Challenge at SC19 in Denver, CO.
Damage detection in CFRP plates by means of numerical modeling of Lamb waves ... (eSAT Journals)
Abstract
The paper presents an application of modeling acoustic wave propagation in carbon fiber reinforced plastic (CFRP) plates for damage detection. This task is part of non-destructive testing (NDT), a family of methods that is very important in many industry branches. Propagation of Lamb waves is modeled using the three-dimensional finite element method in commercial software. Three different cases of plate structures, with and without flaws, are considered in order to review selected methods for detecting defects in the time and frequency domains: comparisons of A-scans, B-scans, dispersion curves, spectrograms, scalograms and energy plots. The developed numerical model was first validated against an analytical solution for an isotropic plate.
Keywords: Lamb waves, non-destructive testing, finite element method, damage detection
Automatic Segmentation of Brachial Artery based on Fuzzy C-Means Pixel Clust... (IJECEIAES)
Automatic extraction of the brachial artery and measurement of associated indices such as flow-mediated dilatation and intima-media thickness are important for early detection of cardiovascular disease and other vascular endothelial malfunctions. In this paper, we propose a basic but important component of such decision-assisting medical software: noise-tolerant, fully automatic segmentation of the brachial artery from ultrasound images. Pixel clustering with the Fuzzy C-Means algorithm in the quantization step is the key component of that segmentation, with various image processing algorithms involved. This algorithm can serve as an alternative segmentation process, replacing edge-detection procedures that suffer from speckle noise in this application domain.
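The Fuzzy C-Means step can be sketched on 1-D pixel intensities: each pixel gets a fuzzy membership in every cluster, and centers are updated as membership-weighted means. The two synthetic intensity populations and all parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means on 1-D pixel intensities x (flattened image)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                          # memberships: columns sum to 1
    for _ in range(n_iter):
        um = u ** m                             # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1)     # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))          # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Two intensity populations (e.g. vessel lumen vs. background), hypothetical.
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
centers, u = fuzzy_c_means(pixels, c=2)
labels = u.argmax(axis=0)                       # hard assignment for segmentation
```

Unlike edge detection, the cluster assignment depends on global intensity statistics, which is why it tolerates speckle noise better.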
The Forward-Backward Time-Stepping (FBTS) method has proven its potential to reconstruct images of buried objects in an inhomogeneous medium with useful quantitative information about their size, shape, and location. The Total Variation (TV) regularization method was incorporated into the FBTS algorithm to deal with the ill-posedness, or ill-conditioning, of the inverse problem. The effectiveness of the proposed technique is confirmed by numerical simulations: the numerical method was applied to simple object detection through FBTS with and without TV regularization. The detection and reconstruction of the relative permittivity and conductivity of the simple object improved when TV regularization was applied, as it smoothed oscillations in the images and gave a better estimate of the object's boundaries.
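The effect of TV regularization on oscillatory reconstructions can be sketched in 1-D: minimize a data-fidelity term plus a smoothed total-variation penalty by gradient descent. This is a generic TV-denoising sketch under assumed parameters, not the FBTS formulation itself.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.05, n_iter=1000, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt((dx)^2 + eps),
    a smoothed total-variation penalty that favours piecewise-constant x."""
    x = y.copy()
    for _ in range(n_iter):
        dx = np.diff(x)
        w = dx / np.sqrt(dx ** 2 + eps)          # derivative of smoothed |dx|
        grad_tv = np.concatenate([[-w[0]], -np.diff(w), [w[-1]]])
        x -= step * ((x - y) + lam * grad_tv)
    return x

# Piecewise-constant profile (like a permittivity step) plus noise.
rng = np.random.default_rng(4)
truth = np.concatenate([np.zeros(50), np.ones(50)])
noisy = truth + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

The TV term suppresses spurious oscillations while preserving the jump at the boundary, which is exactly the behaviour the abstract reports for the permittivity and conductivity profiles.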
FusIon - On-Field Security and Privacy Preservation for IoT Edge Devices: Con... (jamesinniss)
FusIon - On-Field Security and Privacy Preservation for IoT Edge Devices: Concurrent Defense Against Multiple types of Hardware Trojan Attacks
A new Compton scattered tomography modality and its application to material n... (irjes)
Imaging modalities exploiting Compton scattering are currently under active investigation. Despite many innovative contributions, however, this topic still poses a formidable mathematical and technical challenge: due to the very particular nature of the Compton effect, the main problem is reconstructing the object's electron density. Compton scatter imaging of biological tissues, organs and the like has been studied widely over the years, but in materials science, in particular in non-destructive evaluation and control, this type of imaging is just at its beginning. In this paper, we present a new scanning process that collects scattered radiation to reconstruct the internal electron distribution of industrial materials. As an illustration, we consider one of the most widely used construction materials in civil engineering: concrete and its variants. The Compton scattered radiation approach is particularly efficient at imaging steel frames and voids embedded in bulk concrete objects. We present numerical simulation results to demonstrate the viability and performance of this imaging modality.
Keywords: Compton scattering, gamma-ray imaging, non-destructive testing/evaluation (NDT/NDE), concrete structure and defects, Radon transform
3-D WAVELET CODEC (COMPRESSION/DECOMPRESSION) FOR 3-D MEDICAL IMAGES (ijitcs)
Compression is an important part of image processing: it saves memory space and reduces the bandwidth needed for transmission. The main purpose of this paper is to analyse the performance of 3-D wavelet encoders on 3-D medical images. Four wavelet transforms, namely Daubechies 4, Daubechies 6, Cohen-Daubechies-Feauveau 9/7 and Cohen-Daubechies-Feauveau 5/3, are used in the first stage, with encoders such as 3-D SPIHT, 3-D SPECK and 3-D BISK used in the second stage for the compression. Experiments are performed on medical test images such as magnetic resonance images (MRI) and X-ray angiograms (XA). The XA and MR image slices are grouped into 4, 8 and 16 slices, and the wavelet transforms and encoding schemes are applied to identify the best wavelet-encoder combination. The performance of the proposed scheme is evaluated in terms of peak signal-to-noise ratio and bit rate.
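The wavelet-compression principle behind such codecs can be sketched with the much simpler Haar transform: transform, drop the detail bands, invert, and measure PSNR. The paper uses Daubechies/CDF transforms with SPIHT/SPECK/BISK encoders; the Haar transform, the synthetic image, and the plain coefficient zeroing here are simplifying assumptions.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (average/detail form)."""
    a = (img[0::2] + img[1::2]) / 2.0          # rows: average
    d = (img[0::2] - img[1::2]) / 2.0          # rows: detail
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.zeros((ll.shape[0], 2 * ll.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.zeros((2 * a.shape[0], a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

# Smooth synthetic "slice": most energy lands in the LL band, so zeroing
# the three detail bands compresses 4:1 with little error.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)
ll, lh, hl, hh = haar2d(img)
rec = ihaar2d(ll, np.zeros_like(lh), np.zeros_like(hl), np.zeros_like(hh))
mse = np.mean((rec - img) ** 2)
psnr = 10 * np.log10(img.max() ** 2 / mse)
print(round(psnr, 1))
```

Real encoders such as SPIHT replace the crude "zero the detail bands" step with bit-plane coding of significant coefficients, trading bit rate against PSNR exactly as the abstract's evaluation measures.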
Hyperspectral Data Compression Using Spatial-Spectral Lossless Coding Technique (CSCJournals)
Hyperspectral imaging is widely used in many applications, especially in vegetation, climate change, and desert studies. This kind of imaging produces a huge amount of data, which demands transmission, processing, and storage resources, especially for spaceborne imaging; compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually yields a low compression ratio, which may not meet the available resources; lossy compression, on the other hand, may give the desired ratio but significantly degrades object-identification performance on the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we analyze the spectral cross-correlation between bands for Hyperion hyperspectral data: the spectral cross-correlation matrix is calculated and its strength assessed, and we propose a new technique to find highly correlated groups of bands in the hyperspectral data cube based on the "inter-band correlation square". From the resulting groups of bands, we propose a new predictor that can efficiently predict all bands within the data cube based on a weighted combination of spectral and spatial prediction. The results are evaluated against other state-of-the-art predictors for lossless compression.
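The band-grouping idea can be sketched in a few lines: compute the cross-correlation matrix between flattened bands and threshold its square. The tiny synthetic cube and the 0.9 threshold below are illustrative assumptions, not Hyperion data or the paper's grouping criterion.

```python
import numpy as np

# Synthetic "data cube": 4 bands x 32 x 32 pixels; bands 0-1 and 2-3 are
# made highly correlated, mimicking groups of adjacent spectral bands.
rng = np.random.default_rng(5)
base_a, base_b = rng.random((32, 32)), rng.random((32, 32))
cube = np.stack([base_a, base_a + 0.01 * rng.random((32, 32)),
                 base_b, base_b + 0.01 * rng.random((32, 32))])

# Spectral cross-correlation matrix: correlate each band's flattened pixels.
flat = cube.reshape(cube.shape[0], -1)
corr = np.corrcoef(flat)

# Group bands whose squared inter-band correlation exceeds a threshold;
# each True off-diagonal block marks a candidate group for joint prediction.
groups = corr ** 2 > 0.9
print(groups)
```

Within a detected group, a spectral predictor from a neighbouring band is accurate; across groups, a spatial predictor is the better choice, which motivates the weighted combination the abstract describes.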
The Fourier transform for satellite image compression (csandit)
The need to transmit and store satellite images is growing rapidly with the development of modern communications and new imaging systems. The goal of compression is to facilitate the storage and transmission of large images on the ground with high compression ratios and minimum distortion. In this work, we present a new coding scheme for satellite images. First, the image is loaded and a fast Fourier transform (FFT) is applied; the result of the FFT then undergoes scalar quantization (SQ), and the quantized results are encoded using entropy coding. This approach has been tested on a satellite image and on the Lena picture. After decompression, the images were reconstructed faithfully, and the memory space required for storage was reduced by more than 80%.
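The FFT stage of such a chain can be sketched by transforming an image, keeping only the largest coefficients (a crude stand-in for scalar quantization), and inverting; the entropy-coding stage is omitted. The smooth synthetic image and the 5% retention rate are illustrative assumptions.

```python
import numpy as np

# FFT -> keep the largest coefficients -> inverse FFT: the core of the
# coding chain, with quantization reduced to a keep/drop mask for brevity.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 128, endpoint=False),
                   np.linspace(0, 4 * np.pi, 128, endpoint=False))
img = np.sin(x) + 0.5 * np.cos(2 * y)            # smooth synthetic image

F = np.fft.fft2(img)
mag = np.abs(F)
keep = mag >= np.quantile(mag, 0.95)             # keep top 5% of coefficients
rec = np.real(np.fft.ifft2(F * keep))

compression = 1 - keep.mean()                    # fraction of coefficients dropped
mse = np.mean((rec - img) ** 2)
```

For smooth images the spectrum is concentrated in few coefficients, so dropping 95% of them costs almost nothing; this concentration is what the quantizer and entropy coder then exploit to cut storage by the reported margin.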
Effect of grid adaptive interpolation over depth images (csandit)
A suitable interpolation method is essential to keep the noise level at a minimum along with the time delay. In recent years, many different interpolation filters have been developed, for instance the H.264 6-tap filter and the AVS 4-tap filter. This work demonstrates the effects of a four-tap low-pass filter (the Grid-adaptive filter) on a hole-filled depth image. The paper provides (i) a general form of uniform interpolation for both integer and sub-pixel locations in terms of the sampling interval and filter length, and (ii) a comparison of the effects of different finite impulse response filters on a depth image. Furthermore, the author proposes and investigates an integrated Grid-adaptive filter that performs hole-filling and interpolation concurrently, noticeably reducing the time delay while maintaining high PSNR.
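Four-tap sub-pixel interpolation can be sketched on one row of a depth image: each half-pel value is a weighted sum of its four nearest integer samples. The (-1, 9, 9, -1)/16 kernel is a common symmetric 4-tap choice used here for illustration; the exact AVS coefficients and the Grid-adaptive weights are not reproduced.

```python
import numpy as np

def halfpel_4tap(row):
    """Values at positions i + 0.5 from a symmetric 4-tap low-pass filter."""
    p = np.pad(row.astype(float), 2, mode="edge")   # replicate borders
    out = np.empty(len(row) - 1)
    for i in range(len(row) - 1):                   # half-pel between i and i+1
        w = p[i + 1 : i + 5]                        # taps at i-1, i, i+1, i+2
        out[i] = (-w[0] + 9 * w[1] + 9 * w[2] - w[3]) / 16.0
    return out

depth = np.array([10.0, 10.0, 10.0, 20.0, 20.0, 20.0])  # a depth step
half = halfpel_4tap(depth)
print(half)
```

The weights sum to 16, so flat regions are reproduced exactly and the step's midpoint lands halfway between the two depths; the small over/undershoot next to the edge is the filter-length trade-off the paper compares across FIR filters.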
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
2. 1413
Liubov A. Flores et al. / Procedia Computer Science 18 (2013) 1412 – 1420
The most important application of CT is in diagnostic medicine, where it allows radiologists to view internal organs with precision and safety for the patient. More recently, numerous non-medical imaging applications have arisen.
It is well known that the intensity of an X-ray is attenuated when it goes through a material, following a decreasing exponential law. The attenuation depends on the thickness and on the linear attenuation coefficient of the material. If the material is not uniform, the X-ray passes through zones with different attenuation coefficients. Physically, the projection measured by the scanner is the line or path integral of the attenuation coefficient. Since the attenuation coefficient depends on the density and characteristics of the material, reconstructing the attenuation coefficient means obtaining information about the different characteristics of the objects in the X-ray line. Therefore, it is possible to obtain an image of the body's slice by reconstructing the attenuation coefficient function from the projections and then imaging the attenuation coefficient.
Assume that an X-ray beam passes through a slice of the body and that f(x,y) is the two-dimensional distribution of the attenuation coefficient. Each single X-ray goes through the body's section, producing a measurement in the detector. The projection is defined as the line integral of the attenuation coefficient, where the integration path is the line followed by the X-ray from the source to the detector. By moving the source and detector, it is possible to obtain a set of projections. Thus, a single k-th projection at angle r is defined by the integral along a line l:

    P_rk = ∫_l f(x,y) dl ,    (1)
The reconstruction problem consists of determining the values of the function f(x,y) from the set of the
experimental projection data. Then, imaging the attenuation coefficient function f(x,y), an image of the planar
section of the body is obtained. From now on, let us identify the function f(x,y) with the grayscale intensity
values at pixel (x,y) of the image grid of the planar section of the body.
Mathematically, the reconstruction problem corresponds to obtaining a function from its line integrals at different angles. The first mathematical foundations of this problem are based on Radon's work of 1917. However, since the introduction of the first clinical scanner, tremendous advances have been made in CT technology and in reconstruction techniques. Reconstruction needs to manage a very large amount of information, so advances in computer science have become a keystone for the development of reconstruction techniques that could be implemented successfully in clinical scanners. There are many books about the principles of computed tomography (see, for instance, [1-3]) and a very large number of research works devoted to particular advances or applications of CT technology (see [4] for an outlook on CT developments). In this work, we are interested in implementing the algorithm in a very efficient way, to reduce the computation time as much as possible. Previous work in this direction can be found in [5-6].
There exist many different reconstruction algorithms. Most of them can be subdivided into two categories: filtered back-projection (FBP) and algebraic iterative methods. The selection of one of these algorithms represents a compromise between accuracy and the computation time required. Filtered back-projection demands fewer computational resources, while iterative methods generally produce fewer errors in the reconstruction, at a higher computational cost.
Filtered back-projection was the algorithm first used by almost all commercially available CT scanners [3]; however, advances in new computer architectures are making algebraic methods viable as reconstruction methods in commercial scanners. Filtered back-projection methods are based on analytical algorithms that use the inverse Fourier transform, and they need a set of equally spaced projections (see [2] for a review of the FBP method).
However, in CT it is common to find undersampled sets of unequally spaced projections. In these cases, images reconstructed with the conventional FBP algorithm are highly degraded due to insufficient and noisy projections. On the other hand, iterative methods do not require complete data collection and do provide the optimal reconstruction under noisy conditions in the image. These methods allow reconstructing images with higher contrast and precision under noisy conditions, and from a smaller number of projections, than the methods based on the Fourier transform [7-9].
Although widely used in nuclear medicine (gamma cameras, single photon emission computed tomography (SPECT), positron emission tomography (PET)), iterative reconstruction has not yet penetrated CT. The main reason for this is that data sets in CT are much larger than in nuclear medicine, and iterative reconstruction then becomes computationally very intensive. Acceleration of iterative reconstruction is an active area of research. Stone et al. [5] describe an accelerated reconstruction algorithm on graphics processing units (GPUs) for advanced magnetic resonance imaging (MRI). They reconstruct images of 128³ voxels in over one minute. Johnson and Sofer [10] propose a parallel computational method for emission tomography applications that is capable of exploiting the sparsity and symmetries of the model, and demonstrate that such a parallelization scheme is applicable to the majority of iterative reconstruction algorithms. The time needed for the reconstruction of thick-slice images (128x128x23 voxels) is over 3 minutes. Pratx et al. [11] show results of iterative reconstruction using a GPU in PET. The time required on a single GPU to reconstruct an image of 160³ voxels is 8.8 seconds. A multi-GPU implementation of tomographic reconstruction reconstructs 350x350x9 images in 67 seconds on a single GPU and 32 seconds on four GPUs [6].
It seems that the resolution of the image to be reconstructed remains a problem. In our previous work we reported results on using the Portable, Extensible Toolkit for Scientific Computation (PETSc) and a binary format of the input data to facilitate the programming task and accelerate the whole reconstruction process [12-13]. In this research, our aim is to take advantage of the massive computing power of the GPU in order to reconstruct CT images with higher resolution without losing quality. We present a description and validation of our algorithm.
The rest of the paper is organized as follows: the mathematical aspects of the problem are described in section 2 and the reconstruction algorithm in section 3; the GPU implementation of this algorithm is presented in section 4. The results obtained in the experiment are shown in section 5 and, finally, we summarize the conclusions.
2. Algebraic reconstruction method
In this work we use an algebraic method to reconstruct the function f(x,y) in equation (1) from the set of projections. This approach to tomographic imaging assumes that the cross section is a discrete square grid and that in each cell (image pixel) the function f(x,y) is constant. Reconstruction then means solving a set of algebraic equations for the unknowns (the value of f(x,y) at each pixel) in terms of the measured projection data.
Let xij denote the constant value of f(x,y) in the ijth pixel, with i,j = 1,2,...,n. Fundamentally, the algebraic methods of image reconstruction from projections are schemes for solving a linear equation system:

    Ax = P ,    (2)

where the system matrix A simulates the functioning of the computed tomograph, its elements Wij depend on the projection number and the angle, and it may not be square; x is a column matrix whose values represent the intensities of the image, and the column matrix P represents the projections collected by the scanner.
Assume that, for a given angle, the number of projections ranges from 1 to m. If there are k different angles, then in (2) P is a column matrix with m×k elements, x is a column matrix with n² elements, and A is an mk×n² rectangular matrix, as shown in (3). In (3), Wij(rq) denotes the entries of the matrix A; the sub-indices ij in Wij(rq) refer to the pixel location, and the indices r,q refer to the projection and angle numbers respectively. Wij(rq) is the weighting factor that represents the contribution of the ijth pixel to the rqth X-ray line integral (1).
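In this discretization, one entry of P is simply the weighted sum of the pixel values along the corresponding ray; a small sketch (function and variable names are ours):

```c
/* One entry of P in Ax = P: the projection for ray (r,q) is the dot
   product of the corresponding row of A (the weights W_ij(rq)) with the
   pixel intensities x. Illustrative sketch only. */
double projection_entry(const double *w_row, const double *x, int npix) {
    double p = 0.0;
    for (int j = 0; j < npix; ++j)
        p += w_row[j] * x[j];   /* W_ij(rq) * pixel value, summed over pixels */
    return p;
}
```

Because most weights in a row are zero, in practice this sum runs only over the stored nonzeros of the sparse row.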
Many properties of the reconstructed image depend on the approximations made when calculating the system matrix:

        | W11(11)  W12(11)  ...  Wnn(11) |          | p11 |          | x11 |
        |   ...      ...    ...    ...   |          | p12 |          | x12 |
    A = | W11(m1)  W12(m1)  ...  Wnn(m1) | ,    P = | ... | ,    x = | ... | .    (3)
        |   ...      ...    ...    ...   |          | pmk |          | xnn |
        | W11(mk)  W12(mk)  ...  Wnn(mk) |

In this approximation, Wij(rq) is equal to the length of the segment of the rqth X-ray going through the ijth image pixel. Note that most of the Wij(rq) elements are zero, since only a small number of cells contribute to any individual X-ray. It has been found that Siddon's algorithm [14] gives a good approximation of the system matrix [15].
3. Reconstruction algorithm
Many different algorithms have been proposed in the literature to solve the reconstruction problem (2) (see
[1] for a review). The properties of the system matrix A are very important in order to propose an algorithm to
solve the system (2). Main characteristics of the matrices used in the experiment are summarized in Table 1.
Figure 1 shows the sparsity pattern of such matrices (a part of the matrix with 3500 rows and 8000 columns is shown). In practice, A is a rectangular, non-symmetric sparse matrix, and it is therefore advisable to store only its nonzero elements. The appropriate storage formats for such matrices are the Compressed Sparse Row (CSR) and Compressed Sparse Column (CSC) formats. The system (2) may be overdetermined or underdetermined. Overdetermined systems contain more information on the image and, consequently, the reconstructed image is less noisy. The dimensions of A grow proportionally to the resolution of the image to be reconstructed and to the number of projections, increasing the computational cost accordingly. In the experiment, the input matrix A and the right-hand side vector P were generated in advance; they can be stored in two formats: as plain text (ASCII format) or in binary format. We use the input data in binary format, which reduces the memory storage and the loading time of the input data (by up to 10 times).
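The CSR layout mentioned above stores the nonzeros row by row, and the solver then only needs a sparse matrix-vector product over the stored entries. A minimal sketch with our own naming:

```c
/* Compressed Sparse Row (CSR) matrix-vector product y = A x.
   Only the nonzeros are stored: val holds them row by row, col_ind
   gives their column indices, and row_ptr[i]..row_ptr[i+1] delimits
   the entries of row i. Illustrative sketch. */
void csr_matvec(int nrow, const int *row_ptr, const int *col_ind,
                const double *val, const double *x, double *y) {
    for (int i = 0; i < nrow; ++i) {
        double s = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            s += val[k] * x[col_ind[k]];   /* only stored nonzeros contribute */
        y[i] = s;
    }
}
```

For CSC storage the same loop runs over columns instead, which is convenient for products with the transpose of A.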
Table 1. The main characteristics of the system matrix

Matrix size (pixels)       Generation time (sec)   Size in ASCII format (MB)   Size in binary format (MB)
(256x100) x (256x256)      11.3                    236                         91
(256x200) x (256x256)      22.4                    475                         181
(256x400) x (256x256)      45.4                    954                         361
(512x100) x (512x512)      72.5                    973                         361
(512x200) x (512x512)      203.2                   2047                        721
(512x400) x (512x512)      446.4                   2148                        755
We implemented the Least Squares QR method (LSQR) [16] to solve the system (2) by minimizing ||Ax - P||_2. The matrix A is normally large and sparse and is used only to compute products of the form Av and A^T u for various vectors v and u. The input data is stored in binary format. Figure 2 illustrates the following main steps of the reconstruction process:
Fig. 1. Sparsity pattern of the system matrix. A part of the matrix data structure corresponding to 3500 rows and 8000 columns is shown. The matrix of size (512x400) x (512x512) has 125,986,768 nonzero elements.
- CT projections are collected by a scanner.
- The system matrix that simulates the scanning process is generated in advance by Siddon's algorithm.
- These data, in binary format, are used by the LSQR solver to find the solution of system (2), which represents the reconstructed image.
- The LSQR solver is implemented in the CUDA parallel programming model.
Fig. 2. a) LSQR solver uses input data in binary format to reconstruct an image; b) Libraries used for the implementation of the
algorithm and their relationship.
In our previous work, we analyzed the efficiency of the LSQR solver in parallel image reconstruction on a CPU [13]. A speedup of 1.8 was achieved in reconstructing images of 512x512 pixels. In this paper, we develop an algorithm suitable for GPU parallelization in order to take advantage of the massive computing power of GPUs.
4. GPU implementation
Computer graphic cards, such as the NVIDIA GeForce series and the GTX series, are conventionally used
for display purpose on desktop computers. Special GPUs card dedicated for scientific computing, like the
NVIDIA Tesla M2050 card is used in this paper to carry out the experiment. Such a GPU card has a total
number of 448 cuda cores with 3GB ECC memory, shared by all processor cores. Utilizing such a GPU card
with tremendous parallel computing ability can considerably elevate the computation efficiency of our
algorithm.
NVIDIA also introduced CUDA [17], a general-purpose parallel computing architecture with a new parallel programming model and instruction set architecture that leverages the parallel compute engine in NVIDIA GPUs to solve many complex computational problems more efficiently than on a CPU. CUDA comes with a software environment that allows developers to use C or C++ as high-level programming languages and to develop application software that transparently scales its parallelism to leverage the increasing number of processor cores.
We also use the CUBLAS [18] and CUSPARSE [19] libraries, which allow the user to access the computational resources of the NVIDIA GPU. The CUBLAS library is an implementation of BLAS (Basic Linear Algebra Subprograms) on top of the NVIDIA CUDA runtime. To use the CUBLAS library, the application must allocate the required matrices and vectors in the GPU memory space, fill them with data, call the sequence of desired CUBLAS functions, and then transfer the results from the GPU memory space back to the host. The CUBLAS library also provides helper functions for writing data to and retrieving data from the GPU.
The NVIDIA CUDA CUSPARSE library contains a set of basic linear algebra subroutines for handling sparse matrices and is designed to be called from C or C++. These subroutines include operations between vectors and matrices in sparse and dense formats, as well as conversion routines between different matrix formats. Fig. 2b shows the libraries used for the implementation of the algorithm and their relationship.
The following piece of code shows the library functions used to compute the norm of a vector:

    cublasCreate ( &handle_b );                              // create CUBLAS context
    cublasSetVector ( nrow, sizeof(float), h_U, 1, d_U, 1 ); // copy h_U to the device
    cublasSnrm2 ( handle_b, nrow, d_U, 1, &beta );           // beta = ||d_U||_2
    cublasSscal ( handle_b, nrow, &beta1, d_U, 1 );          // scale d_U by beta1
    cublasGetVector ( nrow, sizeof(float), d_U, 1, h_U, 1 ); // copy d_U back to the host

and the matrix-vector product:

    cusparseCreate (&handle_s);
    cusparseCreateMatDescr(&descra);
    cusparseSetMatType(descra, CUSPARSE_MATRIX_TYPE_GENERAL);
    cusparseSetMatIndexBase(descra, CUSPARSE_INDEX_BASE_ZERO);
    cusparseScsrmv ( handle_s, CUSPARSE_OPERATION_NON_TRANSPOSE, ncol, nrow,
                     1.0, descra, csc_values, cscColPtr, cscRowInd, d_U, 0.0, d_V );
    cublasGetVector ( ncol, sizeof(float), d_V, 1, h_V, 1 );
CUBLAS and CUSPARSE are written using the CUDA parallel programming model and take advantage of the
computational resources of the NVIDIA graphics processor (GPU).
5. Experimental results
For experimental purposes we used real projections and reference images acquired from the Hospital Clinico Universitario in Valencia. We worked with fan-beam projections collected by a scanner with 512 sensors in the range 0-180 degrees with 0.9-degree spacing. To be able to reconstruct the image with the iterative method, we completed the given set up to 360 degrees using the symmetry of the system matrix. We wanted to analyze the capacity of iterative algorithms for parallel reconstruction of images from a smaller number of projections. For this purpose, three sets of equally spaced projections (with angle steps of 0.9, 1.8, and 3.6 degrees) were derived from the initial set.
The results were measured on one GPU card of the Euler cluster system, which belongs to the University of Alicante in Spain. The GPU computing node consists of 2 Intel Xeon X5660 CPUs, each with 6 cores at 2.80 GHz, and 3 NVIDIA Tesla M2050 GPUs with 448 cores and 3 GB of memory each. Euler uses Grid Engine, a general-purpose Distributed Resource Management (DRM) tool. The scheduler component in Grid Engine supports a wide range of different compute scenarios. Jobs are queued and executed remotely according to defined policies.
For images of 256x256 and 512x512 pixels, the solving time of system (2) on a single-core CPU and on one GPU card is given in Table 2. The execution time on the CPU was measured with the gettimeofday() function. On the GPU card, we used the cudaEventRecord() function, which measures only the execution time of operations on the device, not taking into account time spent in queues. The standard deviation of the results after running the application ten times is 2.9e-4. In the system matrix, the number of rows is obtained by multiplying the number of sensors and angles used, and corresponds to the number of projections used to reconstruct the image; the number of columns corresponds to the size of the reconstructed image (256x256 or 512x512 pixels).
Table 2. The reconstruction time of images on the CPU and using one GPU card on the Euler cluster

System matrix (rows x columns)   One-core CPU (seconds)   One GPU card (seconds)
M1 = (256x100) x (256x256)       2.7                      0.1569
M2 = (256x200) x (256x256)       5.3                      0.3056
M3 = (256x400) x (256x256)       10.5                     0.6127
M4 = (512x100) x (512x512)       12.3                     0.6584
M5 = (512x200) x (512x512)       24.4                     1.2741
The results show the efficiency of the algorithm based on the GPU's parallel computing ability. A speedup of 19.2 was achieved when reconstructing images of 512x512 pixels. It can also be seen that the use of GPUs becomes more efficient for large-scale problems.
Finally, Figure 3 shows the images reconstructed in parallel from different numbers of equally spaced projections. It should be mentioned that a post-processing procedure (such as filtering) is usually applied to the reconstructed image in order to improve its quality. In this work we present the images right after the reconstruction stage, without any filtering.
Fig. 3. Reconstructed images (512x512 pixels): a) reference images; b), c), d) iterative reconstruction from 400, 200, and 100 angles at iteration 12, when the given tolerance is achieved.
6. Conclusions
The GPU-based iterative algorithm of image reconstruction presented in this paper shows that algebraic iterative methods are capable of reconstructing images at low computational cost.
The CUDA parallel programming model, together with the CUBLAS and CUSPARSE libraries, makes it possible to solve complex computational problems and take advantage of the computational resources of the NVIDIA GPU. We expect more significant results in ongoing work on 3D image reconstruction, where a huge amount of computing is involved.
Acknowledgements
This research was supported by the ANITRAN Project PROMETEO/2010/039.
We wish to thank Dr. Sergio Díez, Head of the Radiology and Radiophysics Protection Service of the Hospital Clinico Universitario, for the collaboration in carrying out this work.
We are also grateful to the University of Alicante for allowing us to test our algorithms on the Euler cluster system.
References
[1] Herman G. T. Fundamentals of computerized tomography: Image reconstruction from projections. 2nd ed. Springer; 2009.
[2] A. C. Kak and M. Slaney. Principles of Computerized Tomographic Imaging. IEEE Press, 1988. Republished by SIAM, electronic
copy, 1999.
[3] Jiang Hsieh. Computed Tomography: Principles, Design, Artifacts, and Recent Advances. SPIE Press; 2003.
[4] Wang G.,Yu H. and De Man B. An outlook on X-ray CT research and development. Medical Physics 2008; 35(3): p. 1051-1064.
[5] Stone S. S., Haldar J. P., Tsao S.C., Hwu W.-m W., Sutton B. P., Liang Z. P., 2008. Accelerating advanced MRI reconstructions on
GPUs. Journal of Parallel and Distributed Computing, vol. 68, issue 10, 1307-1318.
[6] Jang B, Kaeli D., Do S., Pien H., 2009. Multi GPU implementation of iterative tomographic reconstruction algorithms. Biomedical
Imaging: From Nano to Macro, pp. 185-188.
[7] Crawford B. M. and Herman G. T. Low-dose, large-angled cone-beam helical CT data reconstruction using algebraic reconstruction
techniques. Image and Vision Comp. 2007; 25: p. 78-94.
[8] Nuyts J, De Man B., Dupont, M. P., Defrise, Suetens P., and Mortelmans L. Iterative reconstruction for helical CT: A simulation study.
Phys. Med. Biol 1998; 43: p. 729-737.
[9] Wells R. G., King M. A., Simkin P. H., Judy P. F., Brill A. B, Licho R., Pretorius P. H., Schneider P. B., and Seldin D. W. Comparing
Filtered back projection and ordered-subsets expectation maximization for small-lesion detection and localization in 67Ga SPECT. J.
Nucl. Med. 2000; 41: p. 1391-1399.
[10] Johnson C.A., Sofer. A., 1999. A data-parallel algorithm for iterative tomographic image reconstruction. Frontiers of Massively
Parallel Computation, pp. 126-137.
[11] Pratx G., Chinn G., Olcott P.D., Levin C. S., 2009. Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction
Using the GPU. IEEE Transactions on Medical Imaging, 28(3), pp. 435-445.
[12] Flores L., Vidal V., Mayo P., Rodenas F., Verdú G. Iterative reconstruction of CT images with PETSc. BMEI 2011; vol. 1 p. 343-346.
[13] Flores L., Vidal V., Mayo P., Rodenas F., Verdú G. Fast parallel algorithm for CT image reconstruction. Proceedings of the IEEE
EMBC 2012; p. 4374-4377. ISBN: 978-1-4577-1787.
[14] Siddon R. Fast calculation of the exact radiological path length for a three-dimensional CT array. Med. Phys. 1985; 12: p. 252-255.
[15] Cibeles Mora Mora M.T., 2008. PhD thesis. Métodos de Reconstrucción Volumétrica Algebraica de Imágenes Tomográficas (Algebraic volumetric reconstruction methods for tomographic images).
[16] Paige C. C. and Saunders M. A., 1982. LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM Trans.
Math. Sof., 8, 1, p. 43-71.
[17] CUDA C Programming Guide. Downloaded in Oct. 2012 from URL http://docs.nvidia.com/cuda/index.html.
[18] CUBLAS Library. Downloaded in Oct. 2012 from URL http://docs.nvidia.com/cuda/index.html.
[19] CUSPARSE Library. Downloaded in Oct. 2012 from URL http://docs.nvidia.com/cuda/index.html.