This document summarizes a research paper that proposes a modified version of Steering Kernel Regression called Median Based Parallel Steering Kernel Regression for improving image reconstruction. The key points are:
1. The proposed algorithm addresses two drawbacks of the original Steering Kernel Regression technique by implementing it in parallel on GPUs and multi-cores to improve computational efficiency, and using a median filter to suppress spurious edges in the output.
2. Experimental results show the proposed algorithm achieves a speedup of 21x using GPUs and 6x using multi-cores compared to serial implementation, while maintaining comparable reconstruction quality as measured by RMSE.
3. The algorithm is implemented iteratively, applying the median filter after each iteration.
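The iterative scheme summarized above can be sketched as follows. This is a minimal illustration, not the paper's parallel implementation: a plain Gaussian smoothing pass stands in for the full data-adaptive steering kernel, and all parameter values (iteration count, kernel width, median window) are illustrative assumptions.

```python
# Simplified sketch: each iteration applies a kernel-regression-style
# smoothing pass, then a median filter to suppress spurious edges.
# The Gaussian kernel is a stand-in for the steering (data-adaptive) kernel.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def median_based_kernel_regression(img, iterations=3, sigma=1.0, median_size=3):
    out = img.astype(float)
    for _ in range(iterations):
        out = gaussian_filter(out, sigma=sigma)    # stand-in for steering kernel pass
        out = median_filter(out, size=median_size) # suppress spurious edges
    return out
```

In the paper, each iteration's regression pass is what gets parallelized on the GPU or multi-core; the median filter is inserted between iterations exactly as in this loop.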
FPGA Implementation of 2-D DCT & DWT Engines for Vision Based Tracking of Dyn... (IJERA Editor)
Real time motion estimation for tracking is a challenging task. Several techniques can transform an image into the frequency domain, such as the DCT, DFT and wavelet transform. Direct implementation of the 2-D DCT takes N^4 multiplications for an N x N image, which is impractical. The proposed architecture for the 2-D DCT uses look-up tables to store pre-computed vector products, completely eliminating the multiplier. This makes the architecture highly time efficient, and the routing delay and power consumption are also reduced significantly. Another approach, 2-D discrete wavelet transform based motion estimation (DWT-ME), provides substantial improvements in quality and area. The proposed architecture uses the Haar wavelet transform for motion estimation. In this paper, we present a comparison of the performance of the discrete cosine transform and discrete wavelet transform for implementation in motion estimation.
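The look-up-table idea can be shown in miniature: for a fixed 1-D DCT basis and 8-bit pixels, every product basis[k, n] * pixel_value can be precomputed, so the transform needs only table reads and additions. This is an illustrative software model, not the paper's FPGA architecture; the transform size and table layout are assumptions.

```python
# LUT-based 1-D DCT-II: precomputed products replace all multiplications.
import numpy as np

N = 8
n = np.arange(N)
basis = np.cos((2 * n[None, :] + 1) * n[:, None] * np.pi / (2 * N))  # rows k, cols n (unscaled DCT-II)

# lut[k, n, v] = basis[k, n] * v for every 8-bit pixel value v
lut = basis[:, :, None] * np.arange(256)[None, None, :]

def dct_1d_lut(pixels):
    """1-D DCT-II of 8 pixels using only table look-ups and additions."""
    return np.array([sum(lut[k, i, p] for i, p in enumerate(pixels)) for k in range(N)])
```

A 2-D DCT would apply this row-wise and then column-wise, which is where the separable architecture avoids the N^4 cost of the direct form.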
Cell Charge Approximation for Accelerating Molecular Simulation on CUDA-Enabl... (ijcax)
Methods for Molecular Dynamics (MD) simulations are investigated. MD simulation is a widely used computer simulation approach for studying the properties of molecular systems. Force calculation in MD is computationally intensive, and parallel programming techniques can be applied to accelerate it. The major aim of this paper is to speed up MD simulation calculations using the General Purpose Graphics Processing Unit (GPGPU) computing paradigm, an efficient and economical way to do parallel computing. To that end, we propose a method called cell charge approximation, which treats the electrostatic interactions in MD simulations and reduces the complexity of force calculations.
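A minimal 1-D sketch of the cell-charge idea as described: nearby charges interact pairwise, while each distant cell is replaced by its total charge placed at the cell's centre of charge. The cell size and near/far cutoff here are illustrative assumptions; the paper's CUDA kernels and exact grouping rules are not reproduced.

```python
# Cell charge approximation in 1-D: exact pairwise sums for near cells,
# one aggregated charge per far cell.
import numpy as np

def approx_potential(positions, charges, cell=1.0, near=2.0):
    cells = {}
    for x, q in zip(positions, charges):
        idx = int(np.floor(x / cell))
        tot_q, moment = cells.get(idx, (0.0, 0.0))
        cells[idx] = (tot_q + q, moment + q * x)   # accumulate cell charge and centre
    pot = np.zeros(len(positions))
    for i, xi in enumerate(positions):
        for idx, (tot_q, moment) in cells.items():
            centre = moment / tot_q if tot_q else (idx + 0.5) * cell
            if abs(centre - xi) <= near:           # near cell: exact pairwise sum
                for xj, qj in zip(positions, charges):
                    if int(np.floor(xj / cell)) == idx and xj != xi:
                        pot[i] += qj / abs(xj - xi)
            else:                                  # far cell: single charge at centre
                pot[i] += tot_q / abs(centre - xi)
    return pot
```

The complexity reduction comes from the far branch: instead of one term per distant particle, the loop does one term per distant cell.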
Performance analysis of real-time and general-purpose operating systems for p... (IJECEIAES)
In general, modern operating systems can be divided into two essential classes, real-time operating systems (RTOS) and general-purpose operating systems (GPOS). The main difference between a GPOS and an RTOS is whether the system is time-critical: in a GPOS, a high-priority thread cannot preempt a kernel call, but in an RTOS, a low-priority task is preempted by a high-priority task when necessary, even if it is executing a kernel call. Most Linux distributions can be used as both a GPOS and an RTOS with kernel modifications. In this study, two Linux distributions, Ubuntu and Pardus, were analyzed and their performance compared as both GPOS and RTOS for path planning of multi-robot systems. Robot groups with different numbers of members performed path tracking tasks using both Ubuntu and Pardus, each as GPOS and as RTOS. In this way, the performance of the two Linux distributions in robotic applications was observed and compared in both forms, GPOS and RTOS.
NEW IMPROVED 2D SVD BASED ALGORITHM FOR VIDEO CODING (cscpconf)
Video compression is one of the most important blocks of an image acquisition system.
Compression of video results in reduction of transmission bandwidth. In real time video
compression the incoming video data is directly compressed without being stored first.
Therefore real time video compression system operates under stringent timing constraints.
Current video compression standards like MPEG, H.26x series, involve motion estimation and
compensation blocks which are highly computationally expensive and hence they are not
suitable for real time applications on resource scarce systems. Current applications like video
calling, video conferencing require low complexity video compression algorithms so that they
can be implemented in environments that have scarce computational resources (like mobile
phones). A low complexity video compression algorithm based on 2D SVD exists. In this paper, a modification to that algorithm which provides higher PSNR at the same bit rate is presented.
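The 2D SVD compression idea can be shown in miniature: keeping only the k largest singular triplets of a frame stores an m-by-n block as k*(m + n + 1) values instead of m*n. This is a generic truncated-SVD sketch under assumed sizes, not the paper's specific modification (its rate control and coding of the factors are not shown).

```python
# Low-rank frame compression via truncated SVD.
import numpy as np

def svd_compress(frame, k):
    u, s, vt = np.linalg.svd(frame, full_matrices=False)
    return u[:, :k], s[:k], vt[:k, :]   # k*(m + n + 1) stored values

def svd_decompress(u, s, vt):
    return (u * s) @ vt                 # rank-k reconstruction
```

For video, consecutive frames are highly correlated, so small k often suffices, which is what makes the approach attractive for low-complexity real-time coding.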
Real-time traffic sign detection and recognition using Raspberry Pi (IJECEIAES)
This document presents a real-time traffic sign detection and recognition system developed using a Raspberry Pi 3 processor. The system uses a Raspberry Pi camera to record real-time video and the TensorFlow machine learning algorithm to detect and identify traffic signs based on a dataset of 500 labeled images across 5 sign classes. The system's accuracy, delay, and reliability were evaluated during testbed implementation considering different environmental and sign conditions. Results showed the system achieved over 90% accuracy on average with a maximum detection delay of 3.44 seconds, demonstrating reliable performance even for broken, faded, or low-light signs. This real-time traffic sign recognition system developed with affordable hardware has potential to increase road safety.
This document discusses single-GPU and multi-GPU implementations of the MAD IQA algorithm to improve its computational performance. A single-GPU implementation achieved a 24x speedup over the CPU version, bringing the runtime down to 40 ms. A multi-GPU implementation using 3 GPUs achieved a 33x speedup over the CPU version, bringing the runtime to 28.9 ms, but required many more data transfers between the GPUs and system memory. While parallelizing tasks across GPUs improved performance, the latency of data transfers between devices limited the gains.
This document presents a proposed method for efficient image compression using integer wavelet transform (IWT) employing the lifting scheme (LS). The lifting scheme allows for faster computation compared to conventional discrete wavelet transform by exploiting both high-pass and low-pass filter values. Simulation results using MATLAB show the proposed method achieves superior compression performance in terms of encoding time, decoding time, peak signal-to-noise ratio, and compression ratio for various test images. The integer wavelet transform with lifting scheme provides a robust algorithm for truly lossless image compression by decomposing both approximation and detailed image contents.
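The lifting scheme's integer-to-integer invertibility can be shown with the simplest case, the Haar (S-) transform: a predict step and an update step, each exactly undone in reverse. This is a generic one-level lifting sketch, not the specific filter pair evaluated in the paper.

```python
# One-level integer Haar transform via lifting: exactly invertible
# integer-to-integer, which is what enables truly lossless compression.
def haar_lift_forward(x):
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]          # predict step: detail
    s = [e + (di >> 1) for e, di in zip(even, d)]   # update step: approximation
    return s, d

def haar_lift_inverse(s, d):
    even = [si - (di >> 1) for si, di in zip(s, d)] # undo update
    odd = [di + e for di, e in zip(d, even)]        # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Because each lifting step only adds a function of the other channel, every step is trivially reversible even after integer rounding, unlike a conventional filter-bank DWT.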
Comparative study to realize an automatic speaker recognition system (IJECEIAES)
This document presents a comparative study between an adaptive orthogonal transform method and mel-frequency cepstral coefficients (MFCCs) for automatic speaker recognition. The adaptive orthogonal transform method uses an adaptive operator to extract informative features from input speech signals with minimum dimensions. Experimental results show the adaptive orthogonal transform method achieved 96.8% accuracy using Fourier transform and 98.1% accuracy using correlation, outperforming MFCCs which achieved 49.3% and 53.1% accuracy respectively. The proposed method successfully identified speakers with a recognition rate of 98.1% compared to 53.1% for MFCCs, demonstrating the efficiency of the adaptive orthogonal transform approach.
A REGULARIZED ROBUST SUPER-RESOLUTION APPROACH FOR ALIASED IMAGES AND LOW RESO... (cscpconf)
The document summarizes a hybrid approach for image and video super-resolution. It uses motion estimation between low resolution frames to determine shift and rotation parameters. A robust super-resolution algorithm is then applied to the frames to generate a high resolution image. Finally, anisotropic diffusion regularization is performed on the high resolution image to reduce noise and preserve edges, producing the final super-resolution result. The approach is applied to both still images and video by estimating motion between consecutive video frames, applying super-resolution, and regularization to obtain the high resolution video. Experimental results showed the hybrid approach produced better quality super-resolution images and videos compared to other techniques in terms of metrics like PSNR.
This document summarizes a research paper that implemented Levenberg-Marquardt artificial neural network training using graphics processing unit (GPU) hardware acceleration. The key points are:
1) This appears to be the first description of implementing artificial neural networks using the Levenberg-Marquardt training method on a GPU.
2) The paper describes their approach for implementing the Levenberg-Marquardt algorithm on a GPU, which involves solving the matrix inversion operation that is typically computationally expensive.
3) Results show that training networks using the GPU implementation can be up to 10 times faster than using a CPU-only implementation on the same hardware.
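A minimal CPU sketch of the Levenberg-Marquardt update the paper accelerates: each step solves (JᵀJ + μI)δ = Jᵀr, and that linear solve is the expensive operation that benefits from the GPU. Fitting y = a·exp(b·t) is purely an illustrative problem, not the paper's network-training setup.

```python
# Levenberg-Marquardt iteration with simple damping adaptation.
import numpy as np

def lm_fit(t, y, p0, mu=1e-2, steps=50):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        a, b = p
        r = y - a * np.exp(b * t)
        J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])  # Jacobian of the model
        delta = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ r)   # the costly solve
        new_p = p + delta
        new_r = y - new_p[0] * np.exp(new_p[1] * t)
        if new_r @ new_r < r @ r:
            p, mu = new_p, mu * 0.5   # accept step, trust the Gauss-Newton model more
        else:
            mu *= 2.0                 # reject step, move toward gradient descent
    return p
```

For a neural network the Jacobian has one row per training sample and one column per weight, so JᵀJ grows with the square of the weight count; that is why the matrix solve dominates and parallelizes well.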
Design and development of DrawBot using image processing (IJECEIAES)
Extracting text from an image and reproducing it can often be a laborious task. We took it upon ourselves to solve the problem. Our work is aimed at designing a robot which can perceive an image shown to it and reproduce it on any given area as directed. The robot first takes an input image and performs image processing operations on it to improve its readability. The text in the image is then recognized by the program. Points are taken for each letter, inverse kinematics is performed for each point in MATLAB/Simulink, and the angles through which the servo motors should move are determined and stored on the Arduino. Using these angles, the control algorithm is generated on the Arduino and the letters are drawn.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
AN ENHANCEMENT FOR THE CONSISTENT DEPTH ESTIMATION OF MONOCULAR VIDEOS USING ... (mlaij)
Depth estimation has made great progress in the last few years due to its applications in robotics and computer vision. Various methods have been implemented and enhanced to estimate depth without flickers and missing holes. Despite this progress, it remains one of the main challenges for researchers, especially for video applications, where the complexity of the neural network affects the run time. Moreover, using an input such as monocular video for depth estimation is an attractive idea, particularly for hand-held devices such as mobile phones, which are very popular for capturing pictures and videos yet have a limited amount of RAM. In this work, we focus on enhancing the existing consistent depth estimation approach for monocular videos so that it uses less RAM and fewer parameters without a significant reduction in the quality of the depth estimation.
A novel approach for satellite imagery storage by classifying the non duplica... (IAEME Publication)
This document presents a novel approach for classifying and storing satellite imagery by detecting and storing only non-duplicate regions. It uses kernel principal component analysis to reduce the dimensionality and extract features of satellite images. Fuzzy N-means clustering is then used to segment the images into blocks. A duplication detection algorithm compares blocks to identify duplicate and non-duplicate regions. Only the non-duplicate regions are stored in the database, improving storage efficiency and updating speed compared to completely replacing existing images. Support vector machines are used to categorize the non-duplicate blocks into the appropriate classes in the existing images.
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR... (ijassn)
Macro-programming is a new-generation method of using Wireless Sensor Networks (WSNs), where application developers can extract data from sensor nodes through a high-level abstraction of the system. Instead of developing the entire application, a task graph representation of the WSN model presents a simplified approach to data collection. However, mapping tasks onto sensor nodes raises several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid approach to task mapping for WSNs, a Hybrid Genetic Algorithm, considering multiple optimization objectives: energy consumption, routing delay and soft real-time requirements. We also present a method to configure the algorithm to the user's needs by changing the heuristics used for optimization. A trade-off analysis between energy consumption and delivery delay was performed and simulation results are presented. The algorithm is applicable during macro-programming, enabling developers to choose a better mapping according to their application requirements.
The document discusses grid computing in remote sensing data processing. It describes how grid computing can help process huge amounts of remote sensing data in real-time by distributing processing across networked computers. Key requirements for a grid environment for remote sensing include sharing computational and software resources, managing resources, and supporting different data formats. Case studies demonstrate how grid middleware can improve the efficiency of tasks like image deblurring and generating lookup tables for aerosol analysis.
STUDY OF TASK SCHEDULING STRATEGY BASED ON TRUSTWORTHINESS (ijdpsjournal)
MapReduce is a distributed computing model for cloud computing used to process massive data. It simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism: it evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the simulation platform CloudSim, the stability of the task scheduling algorithm and scheduling model is verified.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
An FPGA implementation of the LMS adaptive filter (eSAT Journals)
This document describes an FPGA implementation of the Least Mean Square (LMS) adaptive filter algorithm for active vibration control. It compares fixed-point and floating-point implementations in terms of area usage and performance. The LMS algorithm is implemented using a finite state machine model with separate modules for operations like filtering, error estimation, and weight adaptation. Both implementations utilize this structural model. The fixed-point version uses 16-bit integers and fractions, while the floating-point version leverages IP cores. Results show the floating-point implementation has better accuracy and resource utilization than the fixed-point version for active vibration control applications on FPGAs.
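A behavioural sketch of the LMS loop that the paper realises in hardware: one filtering, error-estimation, and weight-update operation per sample, mirroring the separate FSM modules described. The tap count and step size here are illustrative assumptions, not the paper's fixed-point or floating-point parameters.

```python
# LMS adaptive filter: per-sample filter / error / weight-update loop.
import numpy as np

def lms(x, desired, taps=4, mu=0.05):
    w = np.zeros(taps)      # adaptive weights
    buf = np.zeros(taps)    # shift register of recent inputs
    errors = []
    for xn, dn in zip(x, desired):
        buf = np.roll(buf, 1)
        buf[0] = xn
        y = w @ buf             # filtering module
        e = dn - y              # error estimation module
        w = w + mu * e * buf    # weight adaptation module
        errors.append(e)
    return w, np.array(errors)
```

In a hardware realisation these three lines become the three pipeline stages, and the choice between fixed-point and floating-point arithmetic (the paper's comparison) shows up in how `w` and `buf` are represented.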
Hybrid compression based on stationary wavelet transforms (Omar Ghazi)
This document presents a hybrid compression approach for images that uses Stationary Wavelet Transforms (SWT), a Back Propagation Neural Network (BPNN), and Lempel-Ziv-Welch (LZW) compression. The approach involves: 1) preprocessing the image, 2) applying the SWT, 3) converting to a 1D vector using a zigzag scan, and 4) hybrid compression using BPNN vector quantization and LZW lossless compression. Experimental results show that SWT with BPNN and LZW achieves the highest compression ratios but the longest processing time, while SWT with run-length encoding has a lower ratio but shorter time. The hybrid approach combines lossy and lossless compression techniques.
FPGA implementation of fusion technique for fingerprint application (IAEME Publication)
Image fusion is a process of combining relevant information from a set of images into a single image, wherein the resultant fused image is more informative and complete than any of the input images. This paper discusses Laplacian Pyramid (LP) based image fusion techniques for fingerprint applications. The technique is implemented in MATLAB, and the evaluation parameters Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and matching score are discussed. The same is also implemented on a Virtex-5 FPGA development board using Verilog HDL. The LP based technique provides better results for image fusion than other techniques.
IRJET: Handwritten Decimal Image Compression using Deep Stacked Autoencoder (IRJET Journal)
This document proposes using a deep stacked autoencoder neural network for compressing handwritten decimal image data. It involves training multiple autoencoders in sequence to form a deep network that can compress the high-dimensional input images into lower-dimensional encoded representations while minimizing information loss. The autoencoders are trained one layer at a time using scaled conjugate gradient descent. Testing on the MNIST handwritten digits dataset showed the deep stacked autoencoder achieved compression by encoding the 400-dimensional input images down to a 25-dimensional representation while maintaining good reconstruction accuracy, as measured by minimizing the mean squared error at each layer.
Image reconstruction through compressive sampling matching pursuit and curvel... (IJECEIAES)
An interesting area of research is image reconstruction, which uses algorithms and techniques to transform a degraded image into a good one. The quality of the reconstructed image plays a vital role in the field of image processing. Compressive sampling is an innovative and rapidly growing method for reconstructing signals and is extensively used in image reconstruction. The literature uses a variety of matching pursuits for image reconstruction. In this paper, we propose a modified method named compressive sampling matching pursuit (CoSaMP) for image reconstruction that promises to recover sparse signals from far fewer observations than the signal's dimension. The main advantage of CoSaMP is that it has an excellent theoretical guarantee of convergence. The proposed technique combines CoSaMP with the curvelet transform for better reconstruction of the image. Experiments are carried out to evaluate the proposed technique on different test images. The results indicate that its qualitative and quantitative performance is better than that of existing methods.
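A compact reference version of the CoSaMP iteration the paper builds on: correlate the residual, take the 2s strongest atoms, merge with the current support, solve a least-squares problem on the merged support, and prune back to s terms. The curvelet-domain variant proposed in the paper is not reproduced here; matrix sizes and the iteration cap are illustrative.

```python
# Standard CoSaMP recovery of an s-sparse vector from y = Phi @ x.
import numpy as np

def cosamp(Phi, y, s, iters=20):
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(iters):
        proxy = Phi.T @ (y - Phi @ x)                 # correlate residual with atoms
        omega = np.argsort(np.abs(proxy))[-2 * s:]    # 2s strongest correlations
        support = np.union1d(omega, np.flatnonzero(x))
        b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]             # prune back to s terms
        x[support[keep]] = b[keep]
        if np.linalg.norm(y - Phi @ x) < 1e-10:
            break
    return x
```

For images, the sparse vector would be the curvelet (or wavelet) coefficients of the image, which is where the paper's combination with the curvelet transform enters.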
Neural network based image compression with lifting scheme and RLC (eSAT Journals)
This document summarizes a research paper on neural network based image compression using wavelet transforms and lifting schemes. It discusses how wavelet transforms decompose images into frequency bands and how lifting schemes can reduce computational complexity. It also describes multilayer feedforward neural networks and how they can be used for image compression. Specifically, it proposes using a neural network with 64 input nodes, 8 hidden nodes, and 64 output nodes to compress 8x8 pixel blocks of an image by encoding the hidden node weights for transmission instead of the full image pixels. Simulation results showed this approach achieved better compression ratios and image quality than other existing methods.
IMPROVEMENT IN IMAGE DENOISING OF HANDWRITTEN DIGITS USING AUTOENCODERS IN DE... (IRJET Journal)
This document proposes a deep learning model called Stacked Sparse Denoising Autoencoder (SSDAE) to improve compressed sensing image reconstruction. It combines a denoising autoencoder and sparse autoencoder. The model uses non-linear measurements in the encoder and a decoder network for reconstruction, unlike traditional compressed sensing algorithms. It is trained on corrupted versions of images with added noise. Experimental results on the MNIST handwritten digit dataset show the model achieves better reconstruction quality and peak signal-to-noise ratio than other algorithms, with good denoising ability and robustness to different noise levels.
COMPOSITE IMAGELET IDENTIFIER FOR ML PROCESSORS (IRJET Journal)
The document proposes a composite imagelet identifier technique for machine learning processors that uses seam carving to manipulate edges and pixilation in images for processing. It discusses using existing algorithms like SSIM and Dijkstra's algorithm to calculate image energy and identify optimal seam locations for manipulation. The technique is evaluated using test images in MATLAB and is presented as having potential applications in areas like forestry, animal husbandry and safety monitoring.
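A minimal version of the seam step the identifier relies on: a gradient energy map and the standard dynamic programme that finds the vertical seam of least cumulative energy. The SSIM scoring and the Dijkstra-based variant mentioned above are not shown; the energy function here is an illustrative choice.

```python
# Vertical seam of minimum cumulative gradient energy via dynamic programming.
import numpy as np

def vertical_seam(img):
    f = img.astype(float)
    energy = np.abs(np.gradient(f, axis=0)) + np.abs(np.gradient(f, axis=1))
    h, w = energy.shape
    cost = energy.copy()
    for r in range(1, h):   # accumulate: each pixel extends its cheapest upper neighbour
        left = np.r_[np.inf, cost[r - 1, :-1]]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):   # backtrack within the connected +/-1 window
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam
```

Removing or duplicating the pixels along this seam is what lets the technique manipulate edges and pixilation with minimal visual disruption.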
A REGULARIZED ROBUST SUPER-RESOLUTION APPROACH FORALIASED IMAGES AND LOW RESO...cscpconf
The document summarizes a hybrid approach for image and video super-resolution. It uses motion estimation between low resolution frames to determine shift and rotation parameters. A robust super-resolution algorithm is then applied to the frames to generate a high resolution image. Finally, anisotropic diffusion regularization is performed on the high resolution image to reduce noise and preserve edges, producing the final super-resolution result. The approach is applied to both still images and video by estimating motion between consecutive video frames, applying super-resolution, and regularization to obtain the high resolution video. Experimental results showed the hybrid approach produced better quality super-resolution images and videos compared to other techniques in terms of metrics like PSNR.
This document summarizes a research paper that implemented Levenberg-Marquardt artificial neural network training using graphics processing unit (GPU) hardware acceleration. The key points are:
1) This appears to be the first description of implementing artificial neural networks using the Levenberg-Marquardt training method on a GPU.
2) The paper describes their approach for implementing the Levenberg-Marquardt algorithm on a GPU, which involves solving the matrix inversion operation that is typically computationally expensive.
3) Results show that training networks using the GPU implementation can be up to 10 times faster than using a CPU-only implementation on the same hardware.
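The costly step mentioned in point 2 is solving the normal equations of the Levenberg-Marquardt update. As a rough illustration (this is not the paper's GPU code; the function names and the toy fitting problem are invented), one LM step in NumPy looks like:

```python
import numpy as np

def lm_step(params, residual_fn, jacobian_fn, damping=1e-2):
    """One Levenberg-Marquardt update: solve (J^T J + damping*I) d = -J^T r.
    The linear solve below is the expensive operation a GPU can accelerate."""
    r = residual_fn(params)
    J = jacobian_fn(params)
    A = J.T @ J + damping * np.eye(J.shape[1])
    d = np.linalg.solve(A, -J.T @ r)
    return params + d

# toy problem: fit y = a*x, residuals r_i = a*x_i - y_i
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
p = np.array([0.0])
for _ in range(20):
    p = lm_step(p, lambda q: q[0] * x - y, lambda q: x.reshape(-1, 1))
```

In a neural-network context the Jacobian holds the derivatives of each training error with respect to each weight, so the solve grows with the square of the weight count, which is why offloading it pays off.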
Design and development of DrawBot using image processing (IJECEIAES)
Extracting text from an image and reproducing it can often be a laborious task, and our work aims to solve that problem. We designed a robot which can perceive an image shown to it and reproduce it on any given area as directed. It does so by first taking an input image and performing image processing operations on it to improve readability. The text in the image is then recognized by the program. Points for each letter are taken, inverse kinematics is solved for each point with MATLAB/Simulink, and the angles through which the servo motors should move are computed and stored on the Arduino. Using these angles, the control algorithm runs on the Arduino and the letters are drawn.
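The inverse-kinematics step described above can be illustrated for a planar two-link arm; this is a generic closed-form sketch in Python rather than the authors' MATLAB/Simulink model, and the link lengths and function names are not from the paper:

```python
import math

def ik_2link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm.
    Returns (shoulder, elbow) joint angles in radians for target (x, y)."""
    d2 = x * x + y * y
    # elbow angle from the law of cosines (clamped for numerical safety)
    cos_e = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_e)))
    # shoulder angle: target direction minus the offset introduced by the elbow
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

Each sampled letter point would be passed through such a solver, and the resulting joint angles streamed to the servos.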
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
AN ENHANCEMENT FOR THE CONSISTENT DEPTH ESTIMATION OF MONOCULAR VIDEOS USING ... (mlaij)
Depth estimation has made great progress in the last few years due to its applications in robotics and computer vision. Various methods have been implemented and enhanced to estimate depth without flickers and missing holes. Despite this progress, it remains a major challenge for researchers, especially in video applications, where the complexity of the neural network affects run time. Using monocular video as input for depth estimation is particularly attractive for hand-held devices such as mobile phones, which are very popular for capturing pictures and videos but have a limited amount of RAM. In this work, we focus on enhancing the existing consistent depth estimation approach for monocular videos so that it uses less RAM and fewer parameters without a significant reduction in the quality of the depth estimation.
A novel approach for satellite imagery storage by classifying the non duplica... (IAEME Publication)
This document presents a novel approach for classifying and storing satellite imagery by detecting and storing only non-duplicate regions. It uses kernel principal component analysis to reduce the dimensionality and extract features of satellite images. Fuzzy N-means clustering is then used to segment the images into blocks. A duplication detection algorithm compares blocks to identify duplicate and non-duplicate regions. Only the non-duplicate regions are stored in the database, improving storage efficiency and updating speed compared to completely replacing existing images. Support vector machines are used to categorize the non-duplicate blocks into the appropriate classes in the existing images.
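The duplication-detection idea (store only the blocks that changed) can be sketched as follows. The paper's actual pipeline uses KPCA features, fuzzy clustering and an SVM classifier; this minimal version, with invented names and a raw pixel comparison, only illustrates the block-level duplicate test:

```python
import numpy as np

def non_duplicate_blocks(old_img, new_img, block=8, tol=1e-6):
    """Return (row, col) origins of blocks in new_img that differ from old_img.
    Only these non-duplicate blocks would be written to storage."""
    h, w = old_img.shape
    changed = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            a = old_img[r:r + block, c:c + block]
            b = new_img[r:r + block, c:c + block]
            if np.abs(a - b).mean() > tol:   # block changed -> non-duplicate
                changed.append((r, c))
    return changed
```

When a fresh satellite pass mostly repeats the stored scene, the returned list is short, which is where the claimed storage and update savings come from.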
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR... (ijassn)
Macro-programming is a new-generation method of using Wireless Sensor Networks (WSNs), where application developers can extract data from sensor nodes through a high-level abstraction of the system. Instead of developing the entire application, a task graph representation of the WSN model presents a simplified approach to data collection. However, mapping tasks onto sensor nodes raises several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid approach to task mapping for WSNs – a Hybrid Genetic Algorithm – considering multiple optimization objectives: energy consumption, routing delay and soft real-time requirements. We also present a method to configure the algorithm to the user's needs by changing the heuristics used for optimization. A trade-off analysis between energy consumption and delivery delay was performed and simulation results are presented. The algorithm is applicable during macro-programming, enabling developers to choose a better mapping according to their application requirements.
The document discusses grid computing in remote sensing data processing. It describes how grid computing can help process huge amounts of remote sensing data in real-time by distributing processing across networked computers. Key requirements for a grid environment for remote sensing include sharing computational and software resources, managing resources, and supporting different data formats. Case studies demonstrate how grid middleware can improve the efficiency of tasks like image deblurring and generating lookup tables for aerosol analysis.
STUDY OF TASK SCHEDULING STRATEGY BASED ON TRUSTWORTHINESS (ijdpsjournal)
MapReduce is a distributed computing model for cloud computing to process massive data. It simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism, evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the simulation platform CloudSim, the stability of the task scheduling algorithm and scheduling model are verified in this paper.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
An FPGA implementation of the LMS adaptive filter (eSAT Journals)
This document describes an FPGA implementation of the Least Mean Square (LMS) adaptive filter algorithm for active vibration control. It compares fixed-point and floating-point implementations in terms of area usage and performance. The LMS algorithm is implemented using a finite state machine model with separate modules for operations like filtering, error estimation, and weight adaptation. Both implementations utilize this structural model. The fixed-point version uses 16-bit integers and fractions, while the floating-point version leverages IP cores. Results show the floating-point implementation has better accuracy and resource utilization than the fixed-point version for active vibration control applications on FPGAs.
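For reference, the weight-adaptation loop that the FPGA modules realize in hardware (filtering, error estimation, weight update) can be sketched in a few lines of Python; the signal names and step size here are illustrative, not taken from the paper:

```python
import numpy as np

def lms_filter(x, d, taps=4, mu=0.05):
    """Least Mean Square adaptive filter: adapt weights w so the filter
    output w . x_n tracks the desired signal d_n.
    Returns the final weights and the error history."""
    w = np.zeros(taps)
    errors = []
    for n in range(taps, len(x)):
        x_n = x[n - taps:n][::-1]       # most recent sample first
        y = w @ x_n                     # filtering module
        e = d[n] - y                    # error estimation module
        w = w + mu * e * x_n            # weight adaptation (LMS update)
        errors.append(e)
    return w, errors
```

The fixed-point/floating-point trade-off in the paper is about how these multiplies and accumulations are represented on the FPGA, not about the algorithm itself.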
Hybrid compression based stationary wavelet transforms (Omar Ghazi)
This document presents a hybrid compression approach for images that uses Stationary Wavelet Transforms (SWT), a Back Propagation Neural Network (BPNN), and Lempel-Ziv-Welch (LZW) compression. The approach involves: 1) preprocessing the image, 2) applying the SWT, 3) converting to a 1D vector using a zigzag scan, and 4) hybrid compression using BPNN vector quantization and LZW lossless compression. Experimental results show that SWT with BPNN and LZW achieves the highest compression ratios but the longest processing time, while SWT with run-length encoding has a lower ratio but a shorter time. The hybrid approach combines lossy and lossless compression techniques to obtain a higher overall compression ratio.
FPGA implementation of fusion technique for fingerprint application (IAEME Publication)
Image fusion is a process of combining relevant information from a set of images into a single image, wherein the resultant fused image is more informative and complete than any of the input images. This paper discusses Laplacian Pyramid (LP) based image fusion techniques for fingerprint application. The technique is implemented in MATLAB, and the evaluation parameters Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and matching score are discussed. The same technique is also implemented on a Virtex-5 FPGA development board using Verilog HDL. The LP based technique provides better results for image fusion than other techniques.
IRJET- Handwritten Decimal Image Compression using Deep Stacked Autoencoder (IRJET Journal)
This document proposes using a deep stacked autoencoder neural network for compressing handwritten decimal image data. It involves training multiple autoencoders in sequence to form a deep network that can compress the high-dimensional input images into lower-dimensional encoded representations while minimizing information loss. The autoencoders are trained one layer at a time using scaled conjugate gradient descent. Testing on the MNIST handwritten digits dataset showed the deep stacked autoencoder achieved compression by encoding the 400-dimensional input images down to a 25-dimensional representation while maintaining good reconstruction accuracy, as measured by minimizing the mean squared error at each layer.
Image reconstruction through compressive sampling matching pursuit and curvel... (IJECEIAES)
An interesting area of research is image reconstruction, which uses algorithms and techniques to transform a degraded image into a good one. The quality of the reconstructed image plays a vital role in the field of image processing. Compressive sampling is an innovative and rapidly growing method for reconstructing signals and is extensively used in image reconstruction. The literature uses a variety of matching pursuits for image reconstruction. In this paper, we propose a modified method named compressive sampling matching pursuit (CoSaMP) for image reconstruction that promises to sample sparse signals from far fewer observations than the signal's dimension. The main advantage of CoSaMP is that it has an excellent theoretical guarantee of convergence. The proposed technique combines CoSaMP with the curvelet transform for better reconstruction of the image. Experiments are carried out to evaluate the proposed technique on different test images. The results indicate that qualitative and quantitative performance is better compared to existing methods.
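A minimal sketch of the standard CoSaMP iteration (identify the strongest proxy entries, merge with the current support, least squares, prune) is shown below. It omits the curvelet transform the paper combines it with, and the parameter choices are illustrative:

```python
import numpy as np

def cosamp(Phi, y, s, iters=20):
    """CoSaMP: recover an s-sparse x from measurements y = Phi @ x.
    Sketch of the standard algorithm, not the paper's curvelet-domain variant."""
    n = Phi.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = Phi.T @ r
        # merge the 2s strongest proxy entries with the current support
        omega = np.argsort(np.abs(proxy))[-2 * s:]
        support = np.union1d(omega, np.flatnonzero(x))
        # least squares restricted to the merged support, then prune to s terms
        b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        keep = np.argsort(np.abs(b))[-s:]
        x = np.zeros(n)
        x[support[keep]] = b[keep]
        r = y - Phi @ x                 # update the residual
        if np.linalg.norm(r) < 1e-10:   # exact recovery reached
            break
    return x
```

With a well-conditioned random measurement matrix and enough measurements, the iteration typically recovers the exact sparse signal, which reflects the convergence guarantee mentioned above.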
Neural network based image compression with lifting scheme and rlc (eSAT Journals)
This document summarizes a research paper on neural network based image compression using wavelet transforms and lifting schemes. It discusses how wavelet transforms decompose images into frequency bands and how lifting schemes can reduce computational complexity. It also describes multilayer feedforward neural networks and how they can be used for image compression. Specifically, it proposes using a neural network with 64 input nodes, 8 hidden nodes, and 64 output nodes to compress 8x8 pixel blocks of an image by encoding the hidden node weights for transmission instead of the full image pixels. Simulation results showed this approach achieved better compression ratios and image quality than other existing methods.
IMPROVEMENT IN IMAGE DENOISING OF HANDWRITTEN DIGITS USING AUTOENCODERS IN DE... (IRJET Journal)
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
This document summarizes and compares various algorithms used to implement video surveillance systems, including pixel matching, image matching, and clustering algorithms. It first provides background on video surveillance systems and their need for automatic abnormal motion detection. It then reviews several specific algorithms: pixel matching, agglomerative clustering, reciprocal nearest neighbor pairing, sub-pixel mapping, patch matching, tone mapping, and k-means clustering. For each algorithm, it provides a brief overview of the approach and complexity. The document also discusses image matching algorithms like classic image checking, pixel-based identity checking, and pixel-based similarity checking. Overall, the document analyzes algorithms that can be used to detect and classify motion in video surveillance systems.
A Comparative Case Study on Compression Algorithm for Remote Sensing Images (DR.P.S.JAGADEESH KUMAR)
This document summarizes research on compression algorithms for remote sensing images. It begins with an abstract describing the challenges of transmitting large remote sensing images from sensors to networks. The document then reviews 18 different research papers on various compression algorithms for remote sensing images, including wavelet-based algorithms, fractal coding methods, and region-based approaches. It evaluates each algorithm's performance in compressing remote sensing images while maintaining quality. The document aims to perform a comparative case study of these different compression algorithms.
Car Steering Angle Prediction Using Deep Learning (IRJET Journal)
This document discusses two deep learning models for predicting steering angles of a self-driving car from camera images. The first model uses 3D convolutional layers followed by LSTM layers to incorporate temporal information. The second model uses transfer learning with pre-trained CNN models. Both models were tested in simulation and showed potential for real-world autonomous driving applications by accurately predicting steering angles from camera images alone.
Cell Charge Approximation for Accelerating Molecular Simulation on CUDA-Enabl... (ijcax)
This document summarizes a research paper that proposes a method called cell charge approximation to accelerate molecular dynamics simulations using GPUs. The method reduces the complexity of calculating long-range electrostatic forces. The document provides background on molecular simulation, the molecular dynamics algorithm, and GPU programming with CUDA. It describes how bonded interactions, van der Waals forces, and neighbor list construction can be parallelized to improve simulation performance.
A Review on Color Recognition using Deep Learning and Different Image Segment... (IRJET Journal)
The document discusses color recognition using deep learning and different image segmentation methods. It reviews research on using convolutional neural networks (CNN) for color recognition and how different segmentation techniques like Otsu's method and watershed segmentation can impact recognition accuracy. The review finds that choosing an appropriate segmentation method before feeding images to a CNN model can reduce time complexity and improve accuracy. For example, one study achieved 96.8% accuracy in disease detection in rice plants by combining CNN with support vector machines (SVM) and using mean shift segmentation. Another used a deep neural network with ResNet architecture to classify radioactive waste, achieving high accuracy by addressing issues like vanishing gradients through techniques like dropout and batch normalization.
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM (csandit)
Computer vision approaches are increasingly used in mobile robotic systems, since they make it possible to obtain a very good representation of the environment using low-power and cheap sensors. In particular, it has been shown that they can compete with standard solutions based on laser range scanners when dealing with the problem of simultaneous localization and mapping (SLAM), where the robot has to explore an unknown environment while building a map of it and localizing itself in that map. We present a package for simultaneous localization and mapping in ROS (Robot Operating System) using only a monocular camera sensor. Experimental results in real scenarios as well as on standard datasets show that the algorithm is able to track the trajectory of the robot and build a consistent map of small environments, while running in near real-time on a standard PC.
IRJET- Identification of Scene Images using Convolutional Neural Networks - A... (IRJET Journal)
This document summarizes research on using convolutional neural networks (CNNs) for scene image identification. It first discusses traditional object detection methods and their limitations. CNNs are presented as an improved approach, with convolutional, pooling and fully connected layers to extract features and classify images. Several popular CNN-based object detection algorithms are then summarized, including R-CNN, Fast R-CNN, Faster R-CNN and YOLO. The document concludes that CNN methods provide more accurate object identification compared to traditional algorithms due to their ability to learn from large datasets.
Survey paper on image compression techniques (IRJET Journal)
This document summarizes and compares several popular image compression techniques: wavelet compression, JPEG/DCT compression, vector quantization (VQ), fractal compression, and genetic algorithm compression. It finds that all techniques perform satisfactorily at 0.5 bits per pixel, but for very low bit rates like 0.25 bpp, wavelet compression techniques like EZW perform best in terms of compression ratio and quality. Specifically, EZW and JPEG are more practical than others at low bit rates. The document also notes advantages and disadvantages of each technique and concludes hybrid approaches may achieve even higher compression ratios while maintaining image quality.
Translation Invariance (TI) based Novel Approach for better De-noising of Dig... (IRJET Journal)
1. The document discusses a novel Translation Invariance (TI) approach for improving the performance of various digital image processing filters for image denoising.
2. It describes applying filters such as convolution, Wiener and Gaussian filters both without TI (directly on the noisy image) and with TI (by shifting the image and averaging the results) to denoise images.
3. The results found that the TI approach, where the filters are applied after shifting the image and the outputs are averaged, produced better performance and noise removal than applying the filters directly without translation invariance. This was also verified using edge detection tests.
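The shift-filter-unshift-average scheme in point 2 (often called cycle spinning) can be sketched as below. Note that for a purely linear, shift-invariant base filter the averaging changes nothing; the gains reported in the document come from non-linear filters. The base filter and shift count here are illustrative, not the document's:

```python
import numpy as np

def ti_denoise(img, base_filter, shifts=4):
    """Translation-invariant (cycle-spinning) denoising: shift the image,
    apply the base filter, undo the shift, and average over all shifts."""
    acc = np.zeros(img.shape, dtype=float)
    for s in range(shifts):
        shifted = np.roll(img, s, axis=0)
        acc += np.roll(base_filter(shifted), -s, axis=0)  # undo the shift
    return acc / shifts

def mean3(a):
    """A simple base filter: 3-point circular moving average along rows."""
    return (np.roll(a, 1, axis=0) + a + np.roll(a, -1, axis=0)) / 3.0
```

In practice the base filter would be one of the non-linear denoisers listed above, and the averaging suppresses the shift-dependent artifacts each single application leaves behind.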
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M... (IJTET Journal)
The segmentation of blood vessels within the retina is an essential step in the diagnosis of diabetic retinopathy. In this paper, we present a new method for automatically segmenting blood vessels in retinal images. Two techniques for segmenting retinal blood vessels, based on different image processing techniques, are described and their strengths and weaknesses are compared. The method uses a neural network (NN) scheme for pixel classification, with gray-level and moment invariants-based features for pixel representation. The performance of each algorithm was tested on the STARE and DRIVE datasets, which are widely used for this purpose since they contain retinal images together with their vascular structures. Performance on both sets of test images is better than that of other existing methods, and the method proves especially accurate for vessel detection in STARE images. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make it suitable for early detection of diabetic retinopathy (DR).
Stereo matching based on absolute differences for multiple objects detection (TELKOMNIKA JOURNAL)
This article presents a new algorithm for object detection using a stereo camera system. The problem in getting accurate object detection with a stereo camera is the imprecision of the matching process between two scenes with the same viewpoint. Hence, this article aims to reduce incorrect pixel matches with four stages. The new algorithm is a continuous pipeline of matching cost computation, aggregation, optimization and filtering. The first stage, matching cost computation, acquires a preliminary result using an absolute differences method. The second stage, aggregation, uses a guided filter with a fixed window support size. After that, the optimization stage uses a winner-takes-all (WTA) approach, which selects the smallest matching difference value and normalizes it to the disparity level. The last stage in the framework uses a bilateral filter, which effectively further decreases the error on the disparity map containing the object detection and location information. The proposed work produces low errors (i.e., 12.11% and 14.01% non-occluded and all errors) on the KITTI dataset, performs much better than before the proposed framework and is competitive with some newly available methods.
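A minimal version of the first and third stages (absolute-difference matching cost aggregated over a window, then winner-takes-all) might look like this; it leaves out the guided-filter and bilateral-filter stages, and the window sizes are illustrative:

```python
import numpy as np

def disparity_wta(left, right, max_disp, window=3):
    """Block matching on rectified rows: absolute-difference cost aggregated
    over a square window, then winner-takes-all over candidate disparities."""
    h, w = left.shape
    half = window // 2
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # pixel-wise matching cost between left column x and right column x-d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        for y in range(half, h - half):
            for x in range(half, diff.shape[1] - half):
                # aggregate the cost over the support window
                cost[d, y, x + d] = diff[y - half:y + half + 1,
                                         x - half:x + half + 1].sum()
    # winner-takes-all: pick the disparity with the smallest aggregated cost
    return cost.argmin(axis=0)
```

The resulting disparity map encodes object depth; the filtering stages described above then clean up the mismatches this raw WTA output still contains.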
IRJET- Efficient JPEG Reconstruction using Bayesian MAP and BFMT (IRJET Journal)
This document discusses efficient JPEG reconstruction using Bayesian MAP and BFMT. It proposes using a Bayesian maximum a posteriori probability approach with an alternating direction method of multipliers iterative optimization algorithm. Specifically, it uses a learned frame prior and models the quantization noise as Gaussian. It also proposes using bilateral filter and its method noise thresholding using wavelets for image denoising as part of the JPEG reconstruction process. Experimental results show this approach improves reconstruction quality both visually and in terms of signal-to-noise ratio compared to other existing methods.
Similar to MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR (cscpconf)
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different applications of geoscience. Detection and monitoring of surface deformations produced by various phenomena have benefited from this evolution and have been realized with interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelations of the interferometric pairs used strongly limit the precision of the analysis results obtained with these techniques. In this context, we propose a methodological approach to surface deformation detection and analysis by differential interferograms, showing the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged into image pairs from the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION (cscpconf)
A novel trajectory-guided, concatenative approach for synthesizing high-quality video from real image samples is proposed. The automated lip-reading system seeks, in a library built from the video data, the real image sample sequence closest to the HMM-predicted trajectory. The object trajectory is obtained by projecting the face patterns into a KDA feature space. The approach identifies a speaker's face by synthesizing the identity surface of a subject's face from a small sample of patterns that sparsely cover the view sphere. A KDA algorithm is used for lip-reading image discrimination, after which the fundamental lip feature vector is reduced to a low-dimensional representation using the 2D-DCT. The dimensionality of the mouth-area set is further reduced with PCA to obtain the eigen-lips approach proposed by [33]. The subjective performance results of the cost function under the automatic lip-reading model did not demonstrate superior performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE... (cscpconf)
Universities offer a software engineering capstone course to simulate a real-world working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from the Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey in two different sections taught by two different instructors to evaluate students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands-on experience that simulates a real-world working environment. The results also show that the Agile approach helped students to produce an overall better design and avoid mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team capabilities and training needs, and thus learn the required technologies earlier, which was reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES (cscpconf)
This document discusses using social media technologies to promote student engagement in a software project management course. It describes the course and objectives of enhancing communication. It discusses using Facebook for 4 years, then switching to WhatsApp based on student feedback, and finally introducing Slack to enable personalized team communication. Surveys found students engaged and satisfied with all three tools, though less familiar with Slack. The conclusion is that social media promotes engagement but familiarity with the tool also impacts satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC (cscpconf)
Using a computer to answer questions has been a human dream since the beginning of the digital era. Question-answering systems are referred to as intelligent systems that can provide responses to the questions asked by a user based on facts or rules stored in a knowledge base, generating answers to questions asked in natural language. One of the first main ideas of fuzzy logic was to work on the problem of computer understanding of natural language. This survey paper therefore provides an overview of what question answering is, its system architecture, and its possible relationship and differences with fuzzy logic, as well as previous related research with respect to the approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS (cscpconf)
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words which facilitate preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming technique for global alignment and shortest distance measurements. The DPW algorithm can be used to enhance the pronunciation dictionaries of the well-known languages like English or to build pronunciation dictionaries to the less known sparse languages. The precision measurement experiments show 88.9% accuracy.
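The summary does not give DPW's exact cost scheme, but a dynamic-programming global alignment between two phone sequences can be sketched generically (the costs and function name below are placeholders, not the paper's):

```python
def phone_distance(seq_a, seq_b, sub_cost=1.0, gap_cost=1.0):
    """Global alignment distance between two phone sequences via dynamic
    programming (Needleman-Wunsch style), in the spirit of DPW."""
    n, m = len(seq_a), len(seq_b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap_cost          # deletions against an empty sequence
    for j in range(1, m + 1):
        D[0][j] = j * gap_cost          # insertions against an empty sequence
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0.0 if seq_a[i - 1] == seq_b[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j - 1] + match,    # match / substitute
                          D[i - 1][j] + gap_cost,     # delete a phone
                          D[i][j - 1] + gap_cost)     # insert a phone
    return D[n][m]
```

Such a distance lets pronunciation variants of the same word be clustered, which is the dictionary-building use case described above.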
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS (cscpconf)
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is computed based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
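The matching module's keyword ratio can be illustrated with a crude lexical version; the paper uses semantic and document similarity, which this sketch (with an invented function name) does not attempt:

```python
def matching_ratio(instructor_answer, student_answer):
    """Fraction of instructor keywords found in the student answer;
    a purely lexical stand-in for the paper's semantic matching."""
    instr = set(instructor_answer.lower().split())
    stud = set(student_answer.lower().split())
    return len(instr & stud) / len(instr) if instr else 0.0
```

In the full system, the keyword-expansion module would add synonyms to both sets before this intersection, so semantically equivalent answers are not penalized for different wording.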
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
African Buffalo Optimization (ABO) is one of the most recent swarms intelligence based metaheuristics. ABO algorithm is inspired by the buffalo’s behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and probability model to generate binary solutions. In the second version (called LBABO) they use some logical operator to operate the binary solutions. Computational results on two knapsack problems (KP and MKP) instances show the effectiveness of the proposed algorithm and their ability to achieve good and promising solutions.
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistence presence on a compromised host. Amongst the various DDNS techniques, Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach’s feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmicallygenerated domain names. Findings from this study show that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The document proposes a blockchain-based digital currency and streaming platform called GoMAA to address issues of piracy in the online music streaming industry. Key points:
- GoMAA would use a digital token on the iMediaStreams blockchain to enable secure dissemination and tracking of streamed content. Content owners could control access and track consumption of released content.
- Original media files would be converted to a Secure Portable Streaming (SPS) format, embedding watermarks and smart contract data to indicate ownership and enable validation on the blockchain.
- A browser plugin would provide wallets for fans to collect GoMAA tokens as rewards for consuming content, incentivizing participation and addressing royalty discrepancies by recording
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This document discusses the importance of verb suffix mapping in discourse translation from English to Telugu. It explains that after anaphora resolution, the verbs must be changed to agree with the gender, number, and person features of the subject or anaphoric pronoun. Verbs in Telugu inflect based on these features, while verbs in English only inflect based on number and person. Several examples are provided that demonstrate how the Telugu verb changes based on whether the subject or pronoun is masculine, feminine, neuter, singular or plural. Proper verb suffix mapping is essential for generating natural and coherent translations while preserving the context and meaning of the original discourse.
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The document discusses automated penetration testing and provides an overview. It compares manual and automated penetration testing, noting that automated testing allows for faster, more standardized and repeatable tests but has limitations in developing new exploits. It also reviews some current automated penetration testing methodologies and tools, including those using HTTP/TCP/IP attacks, linking common scanning tools, a Python-based tool targeting databases, and one using POMDPs for multi-step penetration test planning under uncertainty. The document concludes that automated testing is more efficient than manual for known vulnerabilities but cannot replace manual testing for discovering new exploits.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional network of human brain with relatively high resolution. BOLD technique provides
almost accurate state of brain. Past researches prove that neuro diseases damage the brain
network interaction, protein- protein interaction and gene-gene interaction. A number of
neurological research paper also analyse the relationship among damaged part. By
computational method especially machine learning technique we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In future,
we will other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
The document proposes a new validation method for fuzzy association rules based on three steps: (1) applying the EFAR-PN algorithm to extract a generic base of non-redundant fuzzy association rules using fuzzy formal concept analysis, (2) categorizing the extracted rules into groups, and (3) evaluating the relevance of the rules using structural equation modeling, specifically partial least squares. The method aims to address issues with existing fuzzy association rule extraction algorithms such as large numbers of extracted rules, redundancy, and difficulties with manual validation.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within class imbalance where classes are composed of
multiple sub-concepts with different number of examples also affect the performance of
classifier. In this paper, we propose an oversampling technique that handles between class and
within class imbalance simultaneously and also takes into consideration the generalization
ability in data space. The proposed method is based on two steps- performing Model Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for imageprocessing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy were achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets so large and complex
are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in three largest cities in the UAE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes it attractive for hate speech to mask their criminal
activities online posing a challenge to the world and in particular Ethiopia. With this everincreasing
volume of social media data, hate speech identification becomes a challenge in
aggravating conflict between citizens of nations. The high rate of production, has become
difficult to collect, store and analyze such big data using traditional detection methods. This
paper proposed the application of apache spark in hate speech detection to reduce the
challenges. Authors developed an apache spark based model to classify Amharic Facebook
posts and comments into hate and not hate. Authors employed Random forest and Naïve Bayes
for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold crossvalidation,
the model based on word2vec embedding performed best with 79.83%accuracy. The
proposed method achieve a promising result with unique feature of spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
Computer Science & Information Technology (CS & IT)

Parallel implementations of image processing applications have been proposed by many authors [5][6][7][8]. Like many image processing applications, Steering Kernel Regression also has inherent data parallelism. Taking advantage of this inherent parallelism, we have identified and implemented the three most time-consuming parts of the algorithm on both multi-cores and GPUs. We have addressed the second problem by suppressing the spurious edges. These edges are produced by noise and grow stronger with the number of iterations of the algorithm. In the proposed technique we apply a median filter at the end of each iteration to remove the spurious edges.
2. DATA ADAPTIVE KERNEL REGRESSION
Regression is the process of recovering the underlying signal from given data in which the original signal is corrupted by noise. Many image regression techniques, such as edge-directed interpolation [2] and moving least squares [3], have been proposed. Classical parametric regression methods assume a specific model for the underlying signal and estimate the parameters of that model. Parametric estimation has been used in major image processing techniques. The model generated from the estimated parameters is given as the best possible estimate of the underlying signal.

In contrast, non-parametric regression methods do not assume any underlying model, but depend on the data itself to arrive at the original signal. The regression function is the implicit model that has to be estimated. Takeda et al. [1] introduced steering kernel regression for image processing and reconstruction and have shown that it out-performs existing regression techniques. A brief introduction to the algorithm is given as follows:
The measured data is given by $y_i = z(x_i) + \varepsilon_i$, $i = 1, \ldots, P$, where $z(\cdot)$ is the regression function acting on the pixel coordinates $x_i$, the $\varepsilon_i$ are independent and identically distributed zero-mean noise values, and $P$ is the total number of pixels. Assuming that the regression function is smooth to a certain order $N$, the objective functional estimation of $z(x)$ can be deduced (the detailed derivation is given in [1]) by minimization of the following functional:

$$\min_{\{\beta_n\}} \sum_{i=1}^{P} \left[\, y_i - \beta_0 - \beta_1^{T}(x_i - x) - \beta_2^{T}(x_i - x)^{2} - \cdots - \beta_N^{T}(x_i - x)^{N} \,\right]^2 K_H(x_i - x) \qquad (1)$$

where $\beta_n$ is the $n$-th derivative of $z(x)$, $h$ is the global smoothing parameter, and $K_H(x_i - x)$ is the kernel weight assigned to the samples, defined such that nearby samples are given higher weight than farther ones.
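As a concrete illustration of the kernel weighting, the zeroth-order (N = 0) case of Eq. (1) reduces to a weighted average of the samples. The following is a minimal Python sketch, assuming a Gaussian kernel; it is illustrative only and does not reproduce the paper's actual kernel choice or code:

```python
import math

def gaussian_kernel_weight(xi, x, h):
    """K_H(xi - x): Gaussian kernel weight, larger for nearby samples."""
    d2 = (xi[0] - x[0]) ** 2 + (xi[1] - x[1]) ** 2
    return math.exp(-d2 / (2.0 * h * h))

def zeroth_order_estimate(samples, x, h):
    """Zeroth-order kernel regression: a weighted average of the y_i
    with weights K_H(x_i - x); this is the N = 0 case of Eq. (1)."""
    num = den = 0.0
    for xi, yi in samples:
        w = gaussian_kernel_weight(xi, x, h)
        num += w * yi
        den += w
    return num / den

# Noisy samples on a 3x3 neighbourhood around (1, 1):
samples = [((i, j), 10.0 + (i + j) % 2) for i in range(3) for j in range(3)]
est = zeroth_order_estimate(samples, (1, 1), h=1.0)
```

The estimate stays between the smallest and largest sample value, with nearby samples dominating; higher-order terms add polynomial corrections on top of this average.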
Takeda et al. [1] have proposed to use a non-linear combination of the data. In other words, data-adaptive kernel regression depends not only on the sample location and density but also on the radiometric properties.

These radiometric properties are incorporated by adapting the regression kernel locally, so that the kernel aligns itself along features such as edges rather than across them. This enables the features to be captured in better detail. The major contribution of Takeda et al. [1] is the use of adaptive kernel regression for image processing and reconstruction. They call this adaptive technique steering kernel regression.
The central notion of steering kernel regression is to estimate the local gradients. The gradient information captures the features of the image and is in turn used to find the weights to be assigned to the neighboring samples. A pixel near an edge is influenced mainly by pixels on the same side of the edge. With this intuition in mind, the dominant orientation of the local gradients is measured, and the kernel is then effectively steered locally based on this dominant orientation.
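The dominant orientation can be estimated from the 2 × 2 covariance matrix of the local gradients (the structure tensor). The sketch below is illustrative Python, not the paper's implementation; it computes the orientation for a single patch from central-difference gradients:

```python
import math

def dominant_orientation(patch):
    """Estimate the dominant gradient orientation of a 2-D patch
    (list of rows) from the 2x2 gradient covariance matrix
    C = [[sum gx*gx, sum gx*gy], [sum gx*gy, sum gy*gy]]."""
    h, w = len(patch), len(patch[0])
    cxx = cxy = cyy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (patch[y][x + 1] - patch[y][x - 1]) / 2.0  # central differences
            gy = (patch[y + 1][x] - patch[y - 1][x]) / 2.0
            cxx += gx * gx
            cxy += gx * gy
            cyy += gy * gy
    # Angle of the dominant eigenvector of C:
    return 0.5 * math.atan2(2.0 * cxy, cxx - cyy)

# A vertical edge: intensity jumps along x, so gradients point along x
# and the dominant orientation angle is ~0.
patch = [[0, 0, 100, 100]] * 4
```

A steering kernel built from this tensor elongates along the edge direction, so that smoothing stays on one side of the edge.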
Our Median Based Parallel Steering Kernel Regression technique is iterative: the output of each iteration is passed as input to the next. The output of classical kernel regression serves as the input to the first iteration. A brief description of the algorithm is as follows:
1. Find the local gradients at each image point.
2. Based on the gradient values at each pixel, estimate the scaling, elongation and rotation parameters. These are used to construct the steering matrix.
3. Apply the steering kernel regression algorithm.
4. Post-process the resultant image to suppress spurious edges.
5. Repeat steps 2 to 4 until the noise level in the output image is below a predefined threshold.
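The steps above can be sketched as the following driver loop. This is a hypothetical Python outline, not the authors' Matlab/CUDA code; every helper name (`classical_kernel_regression`, `local_gradients`, `estimate_steering_matrices`, `steering_kernel_regression`, `median_suppress`, `noise_level`) is a placeholder for the actual routine, stubbed out trivially here so the skeleton runs:

```python
# Trivial stand-ins for the real routines (the paper's Matlab/CUDA code);
# each is a placeholder so that the control flow below is runnable.
def classical_kernel_regression(img): return img
def local_gradients(img): return img
def estimate_steering_matrices(grads): return grads
def steering_kernel_regression(img, steering): return img
def median_suppress(img, grads): return img
def noise_level(img): return 0.0  # pretend noise is already below threshold

def run_median_based_skr(image, threshold, max_iters=12):
    """Iterative Median Based Parallel Steering Kernel Regression, in
    outline: the classical kernel regression output seeds the first
    iteration; each pass refreshes gradients and steering matrices from
    the current estimate, applies steering kernel regression, then
    median-filters spurious edges before the next pass."""
    estimate = classical_kernel_regression(image)  # initial estimate
    for _ in range(max_iters):
        grads = local_gradients(estimate)              # step 1
        steering = estimate_steering_matrices(grads)   # step 2
        estimate = steering_kernel_regression(estimate, steering)  # step 3
        estimate = median_suppress(estimate, grads)    # step 4
        if noise_level(estimate) < threshold:          # step 5 stopping rule
            break
    return estimate

result = run_median_based_skr([[1, 2], [3, 4]], threshold=0.1)
```

The `max_iters` cap reflects the iteration limit deduced experimentally in Section 4.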
3. IMPLEMENTATION
The serial implementation of steps 1 to 3 have been implemented by Takeda et.al. We have used
their code, which is available at http://www.soe.ucsc.edu/ htakeda/MatlabApp, for the
improvement. The parallel implementation of the algorithm is discussed in section 3.1. In section
3.2 we dwell on the method to suppress spurious edges.
3.1 PARALLEL IMPLEMENTATION
The serial code was programmed in Matlab and is available at http://www.soe.ucsc.edu/~htakeda/MatlabApp. The code was profiled using the Matlab Profiling Tool, and it was observed that steps 1 and 3 of steering kernel regression are the most time consuming. On careful analysis, we discovered that these steps have a lot of scope for parallelism. The serial implementation of step 3 consists of the following steps:

1. For all pixels, in each iteration (running till the square of the upscale factor), determine the feature matrix.
2. For each pixel:
• Obtain the weight matrix using the neighboring samples of the covariance matrix.
• Compute the equivalent kernel, which involves the inverse of a matrix resulting from the product of the feature and weight matrices.
• Estimate the pixel values and gradient structure values for the output image.

In step 1, the classic regression method computes the feature matrix, weight matrix and equivalent kernel. The most time-consuming action in this method is estimating the target values and the local gradient structure values along the axes directions.
Steps 1 and 3 of steering kernel regression involve a lot of data parallelism, and CUDA is used to parallelize these steps. The Mex interface feature integrates CUDA into Matlab. We shall now discuss the parallel implementation of step 3:

• Copy the original image, covariance matrix and feature matrix from the CPU host memory to GPU global memory.
• The kernel is launched with the total number of threads equaling the total number of pixels in the source image.
• A thread is assigned to each pixel and performs the above-mentioned steps of determining the weight matrix and equivalent kernel.
• Each thread contributes R² pixels in the estimated output image, where R is the upscaling factor.
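The thread-to-pixel assignment above can be illustrated with a small Python stand-in for the CUDA indexing (in the actual kernel each thread would derive its pixel coordinates from `blockIdx` and `threadIdx`; the function below is illustrative only):

```python
def thread_pixel_mapping(width, height, R):
    """One thread per source pixel; each thread writes an R x R block
    (R^2 pixels) of the upscaled output image."""
    mapping = {}
    for tid in range(width * height):     # flat thread index
        x, y = tid % width, tid // width  # source-pixel coordinates
        # Output pixels contributed by this thread (upscale factor R):
        mapping[tid] = [(R * x + dx, R * y + dy)
                        for dy in range(R) for dx in range(R)]
    return mapping

m = thread_pixel_mapping(4, 3, R=2)
```

Because consecutive thread indices map to consecutive `x` coordinates, neighbouring threads touch neighbouring memory, which is what makes the coalesced access pattern discussed below possible.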
Computing the steering matrix (step 2) is done on multi-core to speed up the whole process.
OPTIMIZATIONS
Since shared memory is on chip, accesses to it are significantly faster than accesses to global memory. The feature matrix is copied from host memory to the shared memory of the GPU; only one thread in a block fetches the data from global to shared memory. Shared memory could only fit the feature matrix, as the image size exceeds its space limitations.
For the remaining data, the global memory request for a warp is split into two memory requests, one for each half-warp, which are issued independently. The GPU hardware can combine all memory requests into a single memory transaction if the threads in a warp access consecutive memory locations [4]. Our model is designed so that the memory access pattern of the threads is coalesced, thereby improving performance and ensuring low latency. The multi-core code is implemented using the Parallel Computing Toolbox of the Matlab environment. The performance comparison of these two implementations is given in the following section.
3.2 SPURIOUS EDGE SUPPRESSION
Spurious edges produced by a single iteration of the steering kernel algorithm are not as strong as the original edges in the image, so the gradient strength of the spurious edges is much lower than that of the original edges. Using this observation, we can suppress edges whose gradient strength is less than a given threshold (set based on the gradient values of edges in the image). The value of the threshold depends upon the application: it can be static or given by a heuristic. We have used thresholds derived from the maximum gradient values. To suppress the spurious edge pixels, we apply a median filter over each such pixel's neighbourhood.
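A minimal sketch of this post-processing step, in illustrative Python rather than the paper's Matlab; the fraction-of-maximum threshold (`frac`) is an assumed heuristic standing in for whatever threshold rule an application chooses:

```python
def suppress_spurious_edges(image, grad_mag, frac=0.2):
    """Median-filter pixels whose gradient magnitude is positive but
    below a threshold derived from the maximum gradient (here an
    assumed fraction `frac` of it); strong original edges are kept."""
    h, w = len(image), len(image[0])
    gmax = max(max(row) for row in grad_mag)
    thresh = frac * gmax
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if 0 < grad_mag[y][x] < thresh:
                # 3x3 neighbourhood median replaces the spurious-edge pixel.
                neigh = sorted(image[y + dy][x + dx]
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
                out[y][x] = neigh[4]
    return out

img = [[5, 5, 5], [5, 50, 5], [5, 5, 5]]
weak = [[10, 10, 10], [10, 1, 10], [10, 10, 10]]  # centre gradient is weak
cleaned = suppress_spurious_edges(img, weak)       # centre pixel is filtered
```

A pixel on a strong edge (gradient at or above the threshold) is left untouched, so genuine image structure survives the filtering.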
4. RESULTS
4.1 COMPUTATIONAL EFFICIENCY
In this section we show the results of using GPUs and multi-cores to reduce the computational cost of the first three steps of our algorithm (steering kernel regression). We observe that the computational efficiency of the algorithm has been improved while maintaining the quality of the output images. The GPU and multi-core implementations are consistent with the results of the original serial implementation. In the next section, we present the results of our algorithm (which incorporates the post-processing module for suppressing spurious edges) in comparison with steering kernel regression.
EXPERIMENTAL SETUP
For all experiments, the parallelized version of steering kernel regression was run on a Tesla T20-based Fermi GPU. The serial and multi-core implementations were run in Matlab R2012a on a node with an Intel(R) Xeon(R) 2.13 GHz processor and 24 GB RAM. CUDA timers, which have millisecond resolution, were employed to time the CUDA kernels; more importantly, they are less subject to perturbations from other events such as page faults and interrupts from the disks. Also, cudaEventRecord is asynchronous, so there is less of a Heisenberg effect when the timed interval is short, which is exactly suited to GPU-intensive operations such as these. For the multi-core code, the timers provided by Matlab were used.
EXPERIMENTS
Experiments and the performance results of the GPU and multi-core implementations on simulated and real data are presented. These experiments were conducted on diverse applications and attest to the claims made in previous sections. In all experiments, we considered regularly sampled data.

An important consideration is the determination of the maximum speedup achieved by an application. Understanding the type of scaling applicable to an application is vital in estimating the speedup. All the applications mentioned here exhibit scaling; an instance of this phenomenon is shown in Figure 1. One can observe that, for a fixed problem size, the execution time decreases as the number of processing elements increases.
Fig. 1. Plot of execution time against number of processing elements.
Fig. 2. RMSE (root mean square error) of the resulting image with respect to the original image, against number of iterations.
Quality Test.
We assessed image quality by visual inspection and by the RMSE (root mean square error) values obtained from all the implementations. We compared the serial, multi-core and CUDA codes on a CT scan image with Gaussian noise of standard deviation σ = 35. The RMSE values of the images produced by the multi-core and GPU implementations are 13.2, close to the serial result. We observed that quality was maintained without any visually perceptible differences. For all the experiments, the initial estimates are given by the classical kernel regression method. The different experiments performed were:
Denoising Experiment.
The well-known Lena image of size 512 × 512 and a picture of a pirate of size 1024 × 1024 were considered for testing. A controlled simulated experiment was set up by adding white Gaussian noise with standard deviation σ = 25 to both images. We set the global smoothing parameter h = 2.4. The number of iterations was chosen after careful analysis of the behaviour of the application. The graph in Figure 2 indicates that the RMSE (root mean square error) of the resulting image drops to a minimum and then rises; this minimum was observed at iteration 12 for this application. In this way, a limit on the number of iterations was deduced for all the mentioned images. The RMSE values of the resulting images are 6.6426 and 9.2299 respectively.
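The RMSE used throughout these comparisons can be computed directly from the pixel values; a minimal sketch (representing images as flat intensity lists, an assumption for brevity):

```python
import math

def rmse(original, estimate):
    """Root mean square error between two equal-sized grayscale
    images, given as flat lists of pixel intensities."""
    n = len(original)
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(original, estimate)) / n)

# Identical images give 0; larger deviations give larger RMSE.
print(rmse([0, 0], [3, 4]))
```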
The dominance of GPU performance over multi-core is evident in Figure 3(a). The multi-core code was run with 12 Matlab workers, and as expected a near-10x speedup was achieved for the 1024 × 1024 image. The slack is likely due to the communication overhead of the function calls performed by Matlab.
Fig. 3. Comparison of various algorithms: (a) Speedup factors against image sizes for the denoise
application. (b) Speedup factors against image sizes for the compression artifact removal. (c)
Speedup factors against image sizes for the upscaling.
Compression artifact removal.
The pepper image was considered for this experiment and compressed by the Matlab JPEG routine with a quality parameter of 10, giving an RMSE of 9.76. The number of iterations and h were set to 10 and 2.4 respectively. The RMSE of the single-core version is 8.5765, of the multi-core version 8.5759, and of the GPU version 8.5762. The performance is shown in Figure 3(b). The multi-core code has an average speedup factor of 6x over the serial implementation, whereas the GPU has a speedup of 20x.
Upscaling.
We performed an upscale operation on the Lena image and on the pirate picture. The performance is noted in Figure 3(c). In this case, GPU performance is significantly higher than that of the multi-core. The arithmetic intensity is high for this application, as each thread needs to estimate more pixels in the target image, precisely 4 (the upsampling factor).
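The thread-to-pixel mapping behind this higher arithmetic intensity can be sketched as follows. This is an illustration under the assumption that the 4 output pixels per thread correspond to a 2 × 2 output block (i.e. a 2x upscale in each dimension); the function name is hypothetical:

```python
def upscale_block_coords(x, y, factor=2):
    """Output-pixel coordinates that the thread responsible for input
    pixel (x, y) must estimate, for an upscale by `factor` in each
    dimension: a factor x factor block, so factor**2 pixels per thread."""
    return [(factor * x + dx, factor * y + dy)
            for dy in range(factor) for dx in range(factor)]

# The thread for input pixel (3, 5) estimates a 2x2 block of outputs.
print(upscale_block_coords(3, 5))
```

Each thread thus performs several kernel-regression estimates instead of one, which keeps the GPU's arithmetic units busy relative to memory traffic.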
Kernel Timing Results.
Table 1 presents the runtime measurements of the kernels on single core, multi-core and GPU for different image sizes. The timings given for the GPU code also include the data transfer time from host to GPU memory and vice versa. The GPU code maintains its efficiency even as the image size increases. The GPU version of the classic kernel achieved a speedup factor of 175x over the serial implementation; an improvement factor of 75x was observed for the steering kernel.
Table 1. Kernel execution timings (in seconds)

Image size    Regression method   Single core   Multi-core   GPU
512 × 512     Classic kernel      11.765        1.765        0.067
512 × 512     Steering kernel     105.140       18.200       1.470
1024 × 1024   Classic kernel      47.741        6.930        0.270
1024 × 1024   Steering kernel     427.701       102.234      5.819
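The headline speedup factors follow directly from the ratios of these timings; for example, for the classic kernel on the 512 × 512 image:

```python
# Timings from Table 1: classic kernel, 512 x 512 image, in seconds.
timings = {"single_core": 11.765, "multi_core": 1.765, "gpu": 0.067}

# Speedup = serial time / parallel time.
gpu_speedup = timings["single_core"] / timings["gpu"]
multi_speedup = timings["single_core"] / timings["multi_core"]
print(round(gpu_speedup, 1), round(multi_speedup, 1))
```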
4.2 QUALITATIVE IMPROVEMENT
The proposed algorithm has been compared with the algorithm proposed by Takeda et al. We found that incorporating an additional post-processing module based on a median filter (which suppresses spurious edges) into Takeda's algorithm gives better results than the original algorithm. Both algorithms were applied to the Lena grayscale image, corrupted by Gaussian noise with standard deviations of 15, 25, 35 and 45, and then used to denoise it. As shown in the figure, the error obtained with the proposed algorithm is lower than that of Takeda's algorithm. In our algorithm, we used a threshold value of 1% of the maximum edge strength, and the median over a neighbourhood of 1 pixel distance to fix the value corresponding to spurious edges.
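The post-processing step can be sketched as below. This is a minimal illustration, not the paper's exact implementation: it assumes a pixel is flagged as a spurious edge when its edge strength falls below 1% of the maximum edge strength, and replaces that pixel by the median over its 3 × 3 neighbourhood (all pixels within 1 pixel distance); the function and its interface are hypothetical.

```python
import statistics

def suppress_spurious_edges(image, edge_strength, threshold_frac=0.01):
    """Median-filter pixels flagged as spurious edges.

    `image` and `edge_strength` are equal-sized 2-D lists. A pixel is
    treated as spurious here when its edge strength is below
    `threshold_frac` of the maximum edge strength (an assumed rule;
    the paper only states the 1% threshold). Its value is replaced by
    the median over the 3x3 neighbourhood, clipped at the borders.
    """
    h, w = len(image), len(image[0])
    peak = max(max(row) for row in edge_strength)
    out = [row[:] for row in image]  # leave the input untouched
    for y in range(h):
        for x in range(w):
            if edge_strength[y][x] < threshold_frac * peak:
                neigh = [image[j][i]
                         for j in range(max(0, y - 1), min(h, y + 2))
                         for i in range(max(0, x - 1), min(w, x + 2))]
                out[y][x] = statistics.median(neigh)
    return out
```

Because the median is taken over the original (unmodified) neighbourhood, a single bright outlier surrounded by a flat region is pulled back to the local level rather than smearing into its neighbours.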
The RMSE value reaches a minimum after some iterations. Graphs corresponding to different images after different iterations are presented in Figure 4.
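When a reference image is available, this behaviour (RMSE falling to a minimum, then rising) can be used as a stopping rule. The sketch below is a hypothetical interface for illustration; in this work the iteration counts were fixed offline from the curves in Figure 4:

```python
def iterate_until_rmse_minimum(step, estimate, reference, rmse,
                               max_iters=50):
    """Apply one reconstruction iteration `step` repeatedly, tracking
    RMSE against `reference`, and stop as soon as the error stops
    decreasing. Returns the best estimate and its iteration index."""
    best, best_err, best_iter = estimate, rmse(reference, estimate), 0
    for k in range(1, max_iters + 1):
        estimate = step(estimate)
        err = rmse(reference, estimate)
        if err >= best_err:
            break  # past the minimum: keep the previous best
        best, best_err, best_iter = estimate, err, k
    return best, best_iter

# Toy 1-D example: the "estimate" walks toward 0 and then overshoots.
best, it = iterate_until_rmse_minimum(lambda x: x - 3, 10, 0,
                                      lambda a, b: abs(a - b))
print(best, it)
```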
In denoising the Lena image with Gaussian noise of 15, Takeda's algorithm reached an RMSE of 6.3917 while our algorithm reached 6.386. Both reach their minimum RMSE at the 8th iteration, as shown in Figure 4(a).
In denoising the Lena image with noise of 25, Takeda's algorithm reached 7.8935 while our algorithm reached 7.8778. Both reach their minimum RMSE at the 9th iteration, as shown in Figure 4(b).
In denoising the Lena image with noise of 35, Takeda's algorithm reached 9.2702 while our algorithm reached 9.2059. Both reach their minimum RMSE at the 18th iteration, as shown in Figure 4(c).
Finally, in denoising the Lena image with noise of 45, Takeda's algorithm reached 10.823 while our algorithm reached 10.822. Both reach their minimum RMSE at the 18th iteration, as shown in Figure 4(d).
The results of denoising the Lena image with noise of 35 are shown in Figure 5. Figure 5(a) is the original Lena image, Figure 5(b) is the Lena image with Gaussian noise of 35, and Figure 5(c) shows the spurious edges suppressed by the proposed algorithm.
Fig. 4. Comparison of our Median Based Parallel Steering Kernel Regression Technique with Takeda's original algorithm on the Lena image with Gaussian noise of (a) 15 (b) 25 (c) 35 (d) 45.

Fig. 5. Comparison of our algorithm to Steering Kernel Regression. (a) Original Lena image. (b) Lena image with Gaussian noise of 35. (c) Suppressed spurious edges.
5. CONCLUSIONS AND FUTURE WORK
Steering kernel regression is an innovative approach to image reconstruction. The algorithm was successfully parallelized on both multi-core and GPU platforms using Matlab and CUDA, and implementations for different applications with varying parameters were evaluated. The observations clearly show that the running time of the steering kernel has been reduced: on average, speedups of 6x and 21x were achieved on the multi-core and GPU platforms respectively. An additional post-processing module, based on a median filter, was also introduced to suppress spurious edges. This improvement gives better results than the original algorithm.
In future, this work could be extended to video data. All the image datasets considered here fit within the limits of GPU memory, but computing steering kernel regression on a single GPU may not scale to larger images, as the on-board memory of the GPU is a major constraint. Parallelizing the algorithm across multiple GPUs may therefore yield better results.
Acknowledgment
We would like to dedicate our efforts to our Divine Founder Chancellor Bhagawan Sri Sathya Sai Baba, without whom this work would not have been possible. We would like to acknowledge Takeda et al. for making the Kernel Regression Toolbox software available online. This work was partially supported by an nVIDIA, Pune grant under the Professor Partnership Program and a DRDO grant under Extramural Research and Intellectual Property Rights.
REFERENCES
[1] Takeda, Hiroyuki, Sina Farsiu, and Peyman Milanfar. "Kernel regression for image processing and
reconstruction." Image Processing, IEEE Transactions on 16.2 (2007): 349-366.
[2] Li, Xin, and Michael T. Orchard. "New edge-directed interpolation." Image Processing, IEEE
Transactions on 10.10 (2001): 1521-1527.
[3] Bose, N. K., and Nilesh A. Ahuja. "Superresolution and noise filtering using moving least squares."
Image Processing, IEEE Transactions on 15.8 (2006): 2239-2248.
[4] NVIDIA CUDA Programming Guide, [Online], Available at : http://developer.download.nvidia.com/.
[5] Kraus, Martin, Mike Eissele, and Magnus Strengert. "GPU-based edge-directed image interpolation."
Image Analysis. Springer Berlin Heidelberg, 2007. 532-541.
[6] Tenllado, Christian, Javier Setoain, Manuel Prieto, Luis Piñuel, and Francisco Tirado. "Parallel
implementation of the 2D discrete wavelet transform on graphics processing units: filter bank versus
lifting." Parallel and Distributed Systems, IEEE Transactions on 19, no. 3 (2008): 299-310.
[7] Allusse, Yannick, Patrick Horain, Ankit Agarwal, and Cindula Saipriyadarshan. "GpuCV: A GPU-
accelerated framework for image processing and computer vision." In Advances in Visual
Computing, pp. 430-439. Springer Berlin Heidelberg, 2008.
[8] Risojević, Vladimir, Zdenka Babić, Tomaž Dobravec, and Patricio Bulić. "A GPU implementation of a structural-similarity-based aerial-image classification." The Journal of Supercomputing: 1-19.