This document proposes a novel video scaling algorithm based on linear interpolation with quality enhancement. The algorithm generates two reference frames from the original video frame at different resolutions using Lanczos3 filtering. It then interpolates two intermediate frames from the reference frames. The scaled output frame is obtained through linear interpolation of the intermediate frames. Finally, sharpening is applied as an enhancement step to improve the quality of the scaled frame. Experimental results on various test images show the proposed algorithm achieves PSNR values over 45 dB, outperforming other interpolation techniques such as bilinear and B-spline interpolation in terms of the image quality of the scaled frames.
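The final stage, linear interpolation of the two intermediate frames followed by sharpening, can be sketched in numpy. The blend weight, sharpening amount, and 3×3 box low-pass below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def blend_and_sharpen(frame_a, frame_b, alpha=0.5, amount=0.5):
    """Linearly interpolate two intermediate frames, then apply a simple
    unsharp-mask sharpening step. Parameter names are hypothetical."""
    blended = (1.0 - alpha) * frame_a + alpha * frame_b
    # 3x3 box blur used as the low-pass for unsharp masking
    padded = np.pad(blended, 1, mode="edge")
    blurred = sum(padded[i:i + blended.shape[0], j:j + blended.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    sharpened = blended + amount * (blended - blurred)
    return np.clip(sharpened, 0.0, 255.0)

a = np.full((4, 4), 100.0)
b = np.full((4, 4), 200.0)
out = blend_and_sharpen(a, b)
print(out[0, 0])  # 150.0: flat regions are unchanged by sharpening
```

On flat input the unsharp mask contributes nothing, so the output is the pure blend; edges would be boosted by the `amount` term.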
Performance analysis of high resolution images using interpolation techniques...sipij
This paper presents various interpolation techniques for obtaining a high-quality image. The difference between the proposed algorithm and conventional algorithms, in estimating a missing pixel value, is that the proposed method uses the standard deviation of the image to calculate the pixel value rather than the value of the nearest neighbor, which yields a better result. The proposed method demonstrated higher performance in terms of PSNR and SSIM than the conventional interpolation algorithms considered.
This document discusses fractal image compression based on joint and separate partitioning schemes. It proposes partitioning RGB images into range blocks in two ways: 1) jointly, where the red, green, and blue channels are partitioned together into blocks with the same sizes and coordinates; and 2) separately, where each channel is partitioned independently, resulting in different block sizes and coordinates for each channel. The document provides background on fractal image compression and the encoding/decoding processes. It analyzes the two partitioning schemes and argues that the separate scheme is more effective for encoding because it allows each channel a customized partitioning.
A Novel Super Resolution Algorithm Using Interpolation and LWT Based Denoisin...CSCJournals
Image capture techniques have limitations, and as a result we often obtain low-resolution (LR) images. Super-resolution (SR) is a process for generating a high-resolution (HR) image from one or more LR images. Here we propose an SR algorithm that takes three shifted, noisy LR images and generates an HR image using a Lifting Wavelet Transform (LWT) based denoising method and an edge-guided interpolation algorithm based on directional filtering and data fusion.
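At a high level, such a pipeline fuses the registered LR frames and then interpolates up. In the sketch below, plain averaging stands in for the LWT-based denoising and pixel replication for the edge-guided interpolation; both substitutions are simplifications, not the paper's method:

```python
import numpy as np

def fuse_and_upscale(lr_frames, scale=2):
    """Average co-registered noisy LR frames (a stand-in for wavelet
    denoising), then upscale by pixel replication (a stand-in for
    edge-guided interpolation)."""
    fused = np.mean(lr_frames, axis=0)  # noise averages out across frames
    return np.repeat(np.repeat(fused, scale, axis=0), scale, axis=1)

# three hypothetical co-registered LR frames with slightly different noise
frames = [np.full((2, 2), v, dtype=float) for v in (9.0, 10.0, 11.0)]
hr = fuse_and_upscale(frames)
print(hr.shape, hr[0, 0])  # (4, 4) 10.0
```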
Comparison of various Image Registration Techniques with the Proposed Hybrid ...idescitation
Image registration is the process of transforming different sets of image data into one coordinate system. Registration is an important part of image processing, used to match pictures obtained at different times or from different sensors. A broad range of registration techniques has been developed for the various types of image data; these techniques have been studied independently for many applications, resulting in a large body of results. Vision is the most advanced of the human senses, so images naturally play one of the most important roles in human perception. Image registration is one of the branches of the diverse field of digital image processing. Because of its importance in many application areas and its complicated nature, image registration is the topic of much recent research. Registration algorithms compute transformations to establish correspondence between two images. This paper surveys various image registration techniques and compares them with the system proposed in the project.
This document proposes an algorithm for efficiently computing 2D spatial convolution through image partitioning and short convolution. The algorithm partitions an input image into overlapping 6x6 blocks, which are then further partitioned into non-overlapping 3x3 sub-images. Convolution is computed for each sub-image independently using a variable-length filter, reducing computational complexity compared to FFT-based techniques. The outputs from each sub-image convolution are combined to reconstruct the original block. Simulation results demonstrate the effectiveness of the algorithm for tasks like edge detection and noise reduction through local image filtering.
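As a sanity check of the partitioning idea, the following numpy sketch reproduces a full same-size convolution from the valid outputs of overlapping 6×6 blocks. It is not the paper's implementation: a fixed 3×3 kernel and a stride-4 tiling are assumed, and each 6×6 block is convolved whole rather than as four 3×3 sub-images:

```python
import numpy as np

def conv2_valid(block, kernel):
    """Direct 'short' 2-D convolution (kernel flipped), valid region only."""
    kf = kernel[::-1, ::-1]
    h = block.shape[0] - 2
    w = block.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(block[i:i + 3, j:j + 3] * kf)
    return out

def blocked_conv(image, kernel):
    """Same-size convolution built from overlapping 6x6 blocks whose 4x4
    valid outputs tile the result. Image sides must be multiples of 4."""
    p = np.pad(image, 1)  # zero border so 'valid' outputs align with 'same'
    out = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], 4):
        for c in range(0, image.shape[1], 4):
            out[r:r + 4, c:c + 4] = conv2_valid(p[r:r + 6, c:c + 6], kernel)
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
box = np.ones((3, 3)) / 9.0
ref = conv2_valid(np.pad(img, 1), box)  # whole-image reference convolution
print(np.allclose(blocked_conv(img, box), ref))  # True
```

The overlap of two rows/columns between neighboring blocks is exactly what lets each block's 4×4 core be computed independently.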
A Trained CNN Based Resolution Enhancement of Digital ImagesIJMTST Journal
Image Resolution Enhancement (RE) is a technique for estimating or synthesizing a high-resolution (HR) image from one or several low-resolution (LR) images; it reconstructs a higher-resolution image or sequence from the observed LR images. This project presents the methods used in resolution enhancement and the advances taking place in the area, which has many applications across various fields. Most resolution enhancement techniques are based on the same idea: using information from several different images to create one upsized image. The algorithms try to extract details from every image in a sequence in order to reconstruct other frames.
A comparative analysis of retrieval techniques in content based image retrievalcsandit
Content-Based Image Retrieval (CBIR) uses basic visual features such as color, shape, and texture to match a query image, or a sub-region of an image, against similar images in an image database. Relevance feedback is often used in CBIR to let users express their preferences and improve query results. In this paper, a new approach to image retrieval is proposed based on features such as the color histogram, eigenvalues, and match points. Images from various databases are first identified using edge detection techniques; once an image is identified, it is searched for in the corresponding database and all related images are displayed, which saves retrieval time. To retrieve the precise query image, any of the three techniques can be used, and they are compared with respect to average retrieval time. The eigenvalue technique was found to be the best of the three.
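The color-histogram feature can be sketched as a histogram-intersection similarity. The 8-bin grayscale version below is an illustrative simplification of per-channel color histograms:

```python
import numpy as np

def hist_intersection(img_a, img_b, bins=8):
    """Similarity of two grayscale images via normalized histogram
    intersection: 1.0 = identical distributions, 0.0 = disjoint."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return np.minimum(ha, hb).sum()

a = np.zeros((4, 4))           # all-dark image
b = np.full((4, 4), 255.0)     # all-bright image
print(hist_intersection(a, a))  # 1.0
print(hist_intersection(a, b))  # 0.0
```

Histogram intersection is robust to small spatial changes, which is why it is a common first-pass CBIR measure before more expensive features are compared.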
Two-dimensional Block of Spatial Convolution Algorithm and SimulationCSCJournals
This paper proposes an algorithm based on a sub-image segmentation strategy. The proposed scheme divides a grayscale image into overlapped 6×6 blocks, each of which is segmented into four small non-overlapped 3×3 sub-images. A new spatial approach is presented for efficiently computing 2-dimensional linear convolution or cross-correlation between suitably flipped, fixed filter coefficients (a sub-image, in the cross-correlation case) and the corresponding input sub-image. The convolution computation is iterated vertically and horizontally for each of the four input sub-images. The convolution outputs of the four sub-images are then converted from 6×6 arrays to 4×4 arrays so that the core of the original image is reproduced. The algorithm uses a simplified processing technique based on a particular arrangement of the input samples, spatial filtering, and small sub-images, which reduces computational complexity compared with well-known FFT-based techniques. The algorithm lends itself to partitioned small sub-images, local spatial filtering, and noise reduction. Its effectiveness is demonstrated through simulation examples.
Multi Resolution features of Content Based Image RetrievalIDES Editor
Many content-based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform, and Complex Wavelet Transform techniques for efficient image retrieval from a large database. The Color Histogram technique is based on exact matching between the histograms of the query image and the database images. The Discrete Wavelet Transform technique retrieves images based on the computed wavelet coefficients of the subbands. The Complex Wavelet Transform technique computes real and imaginary parts to extract texture details. The proposed method is tested on the COREL1000 database, and the retrieval results demonstrate a significant improvement in precision and recall.
EFFICIENT IMAGE RETRIEVAL USING REGION BASED IMAGE RETRIEVALsipij
1) The document describes an efficient region-based image retrieval system that uses discrete wavelet transform and k-means clustering. It segments images into regions, each characterized by features like size, mean, and covariance.
2) The system pre-processes images by resizing, converting to HSV color space, performing DWT, and using k-means clustering on DWT coefficients to generate regions. It extracts features for each region and stores them in a database.
3) For retrieval, it pre-processes the query image similarly and calculates similarities between the query regions and database regions based on their features, returning similar images.
A Low Hardware Complex Bilinear Interpolation Algorithm of Image Scaling for ...arpublication
In this brief, a low-complexity, low-memory-requirement, high-quality algorithm is proposed for VLSI implementation of an image scaling processor. The proposed image scaling algorithm consists of a sharpening spatial filter, a clamp filter, and bilinear interpolation. To reduce the blurring and aliasing artifacts produced by bilinear interpolation, the sharpening spatial and clamp filters are added as pre-filters. To minimize the memory buffers and computing resources in the processor design, T-model and inversed T-model convolution kernels are created for realizing the sharpening spatial and clamp filters. Two T-model or inversed T-model filters are then combined into a single filter that requires only a one-line-buffer memory, and a reconfigurable calculation unit is introduced to decrease the hardware cost of the combined filter. The computing resources and hardware cost of the bilinear interpolator are further reduced by algebraic manipulation and hardware-sharing techniques. The VLSI architecture achieves 280 MHz with 6.08K gate counts, and its core area is 30,378 μm² synthesized in a 0.13-μm CMOS process. Compared with previous low-complexity techniques, this work reduces gate counts by more than 34.4% and requires only a one-line-buffer memory.
IRJET- A Review on Plant Disease Detection using Image ProcessingIRJET Journal
This document summarizes a research paper on detecting plant diseases from images using digital image processing techniques. The main steps discussed are: 1) Acquiring digital images of plant leaves, 2) Pre-processing the images by cropping, converting to grayscale, and enhancing, 3) Segmenting the images using k-means clustering to identify infected regions, 4) Extracting color, texture, and shape features from the segmented images, and 5) Classifying the images using a support vector machine to identify the type of disease. The proposed method was tested on images of citrus leaves to detect different diseases and future work aims to improve classification accuracy for other plant species.
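The k-means segmentation step can be sketched directly on pixel color vectors. The cluster count, iteration budget, and the example "healthy"/"lesion" colors below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=10, seed=0):
    """Plain k-means on pixel feature vectors (e.g. RGB), the kind of
    clustering used to separate leaf images into regions."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute centers
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# two well-separated pixel groups (hypothetical "healthy" vs "lesion" colors)
px = np.array([[10.0, 200.0, 10.0]] * 5 + [[150.0, 80.0, 30.0]] * 5)
labels, _ = kmeans_pixels(px, k=2)
print(labels[:5].std(), labels[5:].std())  # 0.0 0.0: each group is uniform
```

In the paper's pipeline the labels would then mask the infected regions for feature extraction and SVM classification.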
Extended Fuzzy C-Means with Random Sampling Techniques for Clustering Large DataAM Publications
Big data are any data that cannot be loaded into a computer's primary memory. Clustering is a primary task in pattern recognition and data mining, and we need algorithms that scale well with data size. The literal Fuzzy C-Means implementation is serial: the FCM algorithm attempts to partition a finite collection of n elements into c fuzzy clusters, so that given a finite data set it returns a list of c cluster centers. However, it does not scale well and slows down as the data size grows, which makes it impractical and sometimes undesirable. In this paper, we propose an extended version of the fuzzy c-means clustering algorithm using various random sampling techniques, in order to study which method scales well for large or very large data.
This document discusses using image processing in MATLAB to quantify vascular data from microscopy images. It describes converting a microscopy image of a mouse ear to binary, then using distance transformation to extract the main vascular structures. Distance transformation assigns values to pixels based on their distance to nearest non-zero pixels, resulting in an image highlighting vasculature. The two main vessels were detected but more work is needed to extract smaller vessels. MATLAB provides tools for image processing, storing images as matrices, and computing distance transforms to automatically quantify vascular networks from images.
Bangla Optical Digits Recognition using Edge Detection MethodIOSR Journals
Abstract: This paper addresses Bangla Optical Digit Recognition (ODR) using an edge detection technique. In this method, the Bangla digit image is converted to gray-scale and stored as an M-by-N array. The input data are off-line printed digit images collected from computer-generated images, scanned documents, or printed text. In the gray-scale array, a pointer value of 255 denotes pure white space, 0 denotes pure dark space, and values between 0 and 255 denote mixtures of white and dark in the image. In the next step, four edge-touch points, together with each touch point's ratio, are used as parameters to identify each Bangla digit uniquely. Keywords: edge, image, gray-scale, matrix, ODR.
Template matching is a technique used in computer vision to find sub-images in a target image that match a template image. It involves moving the template over the target image and calculating a measure of similarity at each position. This is computationally expensive. Template matching can be done at the pixel level or on higher-level features and regions. Various measures are used to quantify the similarity or dissimilarity between images during the matching process. Template matching has applications in areas like object detection but faces challenges with noise, occlusions, and variations in scale and rotation.
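The sliding-window matching described above, using sum of squared differences as the dissimilarity measure, can be sketched as:

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the (row, col) of the
    best match under the sum-of-squared-differences (SSD) measure."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

img = np.zeros((6, 6))
img[2:4, 3:5] = 7.0              # a bright 2x2 patch hidden in the image
tmpl = np.full((2, 2), 7.0)
print(match_template(img, tmpl))  # (2, 3)
```

The double loop makes the cost O(H·W·th·tw), which illustrates why exhaustive pixel-level matching is expensive and why feature-level or frequency-domain variants exist.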
User Interactive Color Transformation between ImagesIJMER
Abstract: In this paper we present a process called color transfer, which borrows one image's color characteristics from another. Most current colorization algorithms either require significant user effort or have a large computational cost. The focus here is on an orthogonal color space, lαβ, which has no correlation between the axes. We have implemented two global color transfer algorithms in lαβ space using simple color statistics such as the mean, standard deviation, and covariance of the image pixels; our approach is an extension of Reinhard's. Our local color transfer algorithm uses simple color statistical analysis to recolor the target image according to a color range selected in the source image. A color influence mask is prepared for the target image, specifying which parts of it will be affected by the selected color range; the target image is then recolored in lαβ space according to this map. Because luminance and chrominance information is separate in lαβ space, recoloring can be made optional. The basic color transformation uses the stored color statistics of the source and target images. All the algorithms are implemented in the object-oriented language Java. The main advantage of the proposed method over existing ones is that it lets the user recolor part of an image in a simple and intuitive way, leaving other colors intact and achieving a natural look.
Index Terms: color transfer, local color statistics, color characteristics, orthogonal color space, color influence map.
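Reinhard-style global transfer reduces to matching per-channel means and standard deviations. The sketch below applies it directly to RGB channels for brevity, whereas the paper works in the decorrelated lαβ space:

```python
import numpy as np

def global_color_transfer(source, target):
    """Per-channel mean/std transfer: reshape the target's channel
    statistics to match the source's. Illustrated on RGB; the full method
    converts to l-alpha-beta first so the axes are decorrelated."""
    out = np.empty_like(target, dtype=float)
    for ch in range(3):
        s, t = source[..., ch], target[..., ch]
        t_std = t.std() if t.std() > 0 else 1.0  # guard flat channels
        out[..., ch] = (t - t.mean()) / t_std * s.std() + s.mean()
    return out

rng = np.random.default_rng(1)
src = rng.uniform(0, 255, (8, 8, 3))   # hypothetical source image
tgt = rng.uniform(0, 100, (8, 8, 3))   # hypothetical target image
res = global_color_transfer(src, tgt)
print(np.allclose(res[..., 0].mean(), src[..., 0].mean()))  # True
```

After the transfer, each channel of the result has exactly the source's mean and standard deviation, which is the "borrowing of color characteristics" the abstract describes.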
The document describes a novel FPGA implementation of an image scaling processor using bilinear interpolation. The proposed method uses sharpening and clamp filters as pre-filters to the bilinear interpolator in order to improve image quality during scaling. The bilinear interpolation algorithm was chosen for its lower complexity compared to other methods. The design was implemented in Verilog and synthesized for an FPGA, achieving a maximum frequency of 215.64 MHz for 64x64 grayscale images. The hardware resources required were moderately lower than those of other algorithms.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi...IJERA Editor
This document proposes and evaluates a near lossless image compression algorithm that divides color images into red, green, and blue channels. It classifies pixels in each channel row-by-row and records the results in mask images. The image data is then decomposed into sequences based on the classification and the mask images are hidden in the least significant bits of the sequences. Different encoding schemes like LZW, Huffman, and RLE are applied and compared. Experimental results on test images show the proposed algorithm achieves smaller bits per pixel than simple encoding schemes. PSNR values also indicate very little difference between original and reconstructed images.
This is a paper I wrote as part of my seminar "Inverse Problems in Computer Vision" while pursuing my M.Sc. in Medical Engineering at FAU, Erlangen, Germany.
The paper details a state-of-the-art method for single image super resolution using deep convolutional networks, and possible extensions to the original approach that account for compression and noise artifacts.
This document summarizes a research paper that proposes a method to enhance the resolution of satellite images using discrete wavelet transform (DWT), interpolation, and inverse discrete wavelet transform (IDWT). Low resolution satellite images are decomposed into subbands using DWT. Bilinear interpolation is applied to each subband to increase resolution. IDWT is then used to combine the subbands into the enhanced, higher resolution output image. The method is tested on LANDSAT 8 images and evaluated using metrics like PSNR, MSE, and entropy. Results show the proposed method improves these metrics over other interpolation techniques, enhancing image quality and resolution.
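The DWT → interpolate-subbands → IDWT pipeline can be sketched with a one-level Haar transform. Haar (instead of the paper's wavelet) and pixel replication (instead of bilinear interpolation) are simplifying assumptions made here for brevity:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar DWT returning (LL, LH, HL, HH) subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2
    hi = (x[:, 0::2] - x[:, 1::2]) / 2
    ll = (lo[0::2] + lo[1::2]) / 2; lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2; hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    lo = np.zeros((2 * h, w)); hi = np.zeros((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def upscale2(sub):
    """Pixel replication stands in for the paper's bilinear interpolation."""
    return np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)

img = np.arange(16, dtype=float).reshape(4, 4)
assert np.allclose(ihaar2(*haar2(img)), img)   # perfect reconstruction
subbands = [upscale2(s) for s in haar2(img)]   # enlarge every subband
enhanced = ihaar2(*subbands)                   # recombine at 2x resolution
print(enhanced.shape)  # (8, 8)
```

Interpolating the high-frequency subbands as well as LL is what preserves edge detail relative to interpolating the image directly.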
A fast fpga based architecture for measuring the distance betweenIAEME Publication
This paper presents an FPGA-based architecture for measuring the Manhattan distance between two RGB color images. The architecture takes RGB pixel values from each image as input and calculates the absolute difference between corresponding pixel values. It sums all the absolute differences and divides by the total number of pixels to obtain the normalized Manhattan distance. The architecture was implemented on a Xilinx Spartan 3 FPGA and can operate at 171.585 MHz, faster than software solutions. Experimental results demonstrating distance calculations on sample image pairs are presented. The FPGA implementation allows real-time Manhattan distance measurement for applications like image retrieval.
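The quantity the datapath computes, the normalized Manhattan distance, is simply the mean absolute per-pixel difference (summed over the RGB channels of each pixel); a software sketch of the same computation:

```python
import numpy as np

def normalized_manhattan(img_a, img_b):
    """Sum of absolute per-channel differences, divided by the number of
    pixels, mirroring the accumulate-and-divide datapath described above."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    n_pixels = img_a[..., 0].size  # H * W, not counting channels
    return diff.sum() / n_pixels

a = np.zeros((2, 2, 3))
b = np.full((2, 2, 3), 10.0)
print(normalized_manhattan(a, b))  # 30.0 (|10| summed over 3 channels per pixel)
```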
A lossless color image compression using an improved reversible color transfo...eSAT Journals
Abstract: In conventional lossless color image compression methods, the pixels of each color component are interleaved, predicted, and finally encoded. In this paper, we propose a lossless color image compression method using hierarchical prediction of the chrominance-channel pixels and modified Huffman coding. The R, G, and B channels of an input image are transformed into the YCuCv color space using an improved reversible color transform. A conventional lossless image coder such as CALIC is then used to compress the luminance channel Y, while the chrominance channels Cu and Cv are encoded with hierarchical decomposition and directional prediction. Finally, effective context modeling of the prediction residual is adopted. The experimental results show that the proposed method improves compression performance over the existing method. Keywords: lossless color image compression, hierarchical prediction, reversible color transform, modified Huffman coding.
An efficient hardware logarithm generator with modified quasi-symmetrical app...IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce the approximation error and hardware area compared with traditional methods.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document summarizes a research paper that proposes a new image interpolation technique to reconstruct high-resolution images from low-resolution counterparts while preserving edge structures. The technique estimates each pixel to be interpolated in two orthogonal directions and fuses the estimates using linear minimum mean square error estimation. This adaptive fusion approach can better discriminate edge directions in the local window compared to interpolating in a single direction. The technique aims to improve on traditional linear interpolation methods by adapting to local image gradients to reduce artifacts while preserving sharp edges. A simplified version is also presented to reduce computational costs with minimal impact on performance. Experiments showed the new technique can better preserve edges and reduce artifacts compared to other methods.
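The fusion step can be illustrated with inverse-variance weighting of the two directional estimates, a simplified scalar stand-in for the paper's LMMSE estimator (which also accounts for cross-correlations):

```python
def fuse_estimates(e1, v1, e2, v2):
    """Combine two directional estimates of a pixel by inverse-variance
    weighting: the lower-variance direction (e.g. along an edge) dominates."""
    w1 = v2 / (v1 + v2)  # weight on e1 grows as v1 shrinks
    return w1 * e1 + (1.0 - w1) * e2

# estimate along a strong edge (low variance) vs. across it (high variance)
fused = fuse_estimates(100.0, 1.0, 140.0, 9.0)
print(fused)  # ~104.0, pulled toward the low-variance estimate
```

This is why adaptive fusion outperforms interpolating in a single fixed direction: near an edge, the along-edge estimate is trusted almost exclusively.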
This document discusses image compression algorithms using the Lapped Orthogonal Transform (LOT) and Discrete Cosine Transform (DCT) under the JPEG standard. It begins with an introduction to image compression and classification of compression schemes. It then describes LOT and DCT in detail and proposes a hybrid algorithm using both transforms simultaneously. The algorithm is tested on an image and achieves a peak signal-to-noise ratio of 36.76 decibels at a bit rate of 0.6 bits per pixel, providing higher quality than DCT alone. The document concludes the hybrid approach offers better energy compaction and quality at low bit rates than DCT.
Analysis of low pdp using SPST in bilateral filterIJTET Journal
An FPGA implementation of a bilateral filter for image processing is presented that performs spatial averaging without smoothing edges. Kernel-based processing is supported, meaning the entire filter window is processed in one pixel clock cycle; this is enabled by arranging the input data into groups and applying a single clock cycle to a group of pixels. Building on these features, the Spurious Power Suppression Technique (SPST) is implemented in the bilateral filter to minimize the power-delay product (PDP). SPST can dramatically reduce power by turning off the most significant part (MSP) of the datapath without compromising the computational results. Furthermore, a glitch-diminishing technique is proposed to filter out useless switching power by asserting signals only after the data transient period. SPST can be expanded to a fine-grain scheme in which the combinational circuits are divided into more than two parts. In the bilateral filter, kernels of different sizes can be implemented using SPST while still achieving good performance.
This document provides a summary of the main renewable and non-renewable energy sources, including solar, wind, hydroelectric, geothermal, nuclear, oil, natural gas, and coal. It discusses the basic principles of each type of energy, its advantages and disadvantages, and current applications.
Multi Resolution features of Content Based Image RetrievalIDES Editor
Many content-based retrieval systems have been proposed to manage and retrieve images on the basis of their content. In this paper we propose Color Histogram, Discrete Wavelet Transform, and Complex Wavelet Transform techniques for efficient image retrieval from a huge database. The Color Histogram technique is based on matching the histogram of the query image against those of the database images. The Discrete Wavelet Transform technique retrieves images based on the computation of wavelet coefficients of subbands. The Complex Wavelet Transform technique computes real and imaginary parts to extract details from texture. The proposed method is tested on the COREL 1000 database, and retrieval results demonstrate a significant improvement in precision and recall.
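The histogram comparison step can be illustrated with a minimal sketch (assumed details: 8 bins per channel and histogram intersection as the similarity score; the paper's exact matching criterion may differ):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel histogram of an RGB image, normalised to sum to 1."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical normalised histograms,
    smaller values for images with different colour distributions."""
    return np.minimum(h1, h2).sum()
```

Ranking the database by `intersection(query_hist, db_hist)` in descending order gives the retrieval result.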
EFFICIENT IMAGE RETRIEVAL USING REGION BASED IMAGE RETRIEVALsipij
1) The document describes an efficient region-based image retrieval system that uses discrete wavelet transform and k-means clustering. It segments images into regions, each characterized by features like size, mean, and covariance.
2) The system pre-processes images by resizing, converting to HSV color space, performing DWT, and using k-means clustering on DWT coefficients to generate regions. It extracts features for each region and stores them in a database.
3) For retrieval, it pre-processes the query image similarly and calculates similarities between the query regions and database regions based on their features, returning similar images.
A Low Hardware Complex Bilinear Interpolation Algorithm of Image Scaling for ...arpublication
In this brief, a low-complexity, low-memory-requirement, and high-quality algorithm is proposed for VLSI implementation of an image scaling processor. The proposed image scaling algorithm consists of a sharpening spatial filter, a clamp filter, and a bilinear interpolation. To reduce the blurring and aliasing artifacts produced by the bilinear interpolation, the sharpening spatial and clamp filters are added as pre-filters. To minimize the memory buffers and computing resources for the proposed image processor design, T-model and inversed T-model convolution kernels are created for realizing the sharpening spatial and clamp filters. Furthermore, two T-model or inversed T-model filters are combined into a combined filter which requires only a one-line-buffer memory. Moreover, a reconfigurable calculation unit is invented for decreasing the hardware cost of the combined filter, and the computing resource and hardware cost of the bilinear interpolator are efficiently reduced by algebraic manipulation and hardware-sharing techniques. The VLSI architecture in this work can achieve 280 MHz with 6.08-K gate counts, and its core area is 30,378 μm² synthesized by a 0.13-μm CMOS process. Compared with previous low-complexity techniques, this work reduces gate counts by more than 34.4% and requires only a one-line-buffer memory.
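The bilinear interpolation at the core of such a processor can be sketched in software as a reference model (this is only the interpolation step, not the VLSI datapath or the T-model pre-filters):

```python
import numpy as np

def bilinear_scale(img, out_h, out_w):
    """Scale a grayscale image with bilinear interpolation: each output
    pixel is a distance-weighted average of its four nearest inputs."""
    in_h, in_w = img.shape
    img = img.astype(np.float64)
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Map the output coordinate back into the input grid.
            y = i * (in_h - 1) / max(out_h - 1, 1)
            x = j * (in_w - 1) / max(out_w - 1, 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            # Weighted sum of the four neighbouring input pixels.
            out[i, j] = (img[y0, x0] * (1-dy)*(1-dx) + img[y0, x1] * (1-dy)*dx
                         + img[y1, x0] * dy*(1-dx) + img[y1, x1] * dy*dx)
    return out
```

The hardware version reduces these multiplications through algebraic manipulation and sharing, but the arithmetic result is the same.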
IRJET- A Review on Plant Disease Detection using Image ProcessingIRJET Journal
This document summarizes a research paper on detecting plant diseases from images using digital image processing techniques. The main steps discussed are: 1) Acquiring digital images of plant leaves, 2) Pre-processing the images by cropping, converting to grayscale, and enhancing, 3) Segmenting the images using k-means clustering to identify infected regions, 4) Extracting color, texture, and shape features from the segmented images, and 5) Classifying the images using a support vector machine to identify the type of disease. The proposed method was tested on images of citrus leaves to detect different diseases and future work aims to improve classification accuracy for other plant species.
Extended Fuzzy C-Means with Random Sampling Techniques for Clustering Large DataAM Publications
Big data is any data set too large to fit in a computer's primary memory. Clustering is a primary task in pattern recognition and data mining, and we need algorithms that scale well with data size. The standard implementation, literal fuzzy c-means (FCM), is serial: the algorithm partitions a finite collection of n elements into c fuzzy clusters and, given a finite data set, returns a list of c cluster centers. However, it does not scale well — it slows down as the data grows, becoming impractical and sometimes unusable. In this paper, we propose an extended version of the fuzzy c-means clustering algorithm that uses various random sampling techniques, and study which method scales best for large and very large data.
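A minimal software sketch of the idea — plain FCM plus a variant that fits the cluster centers on a random sample and then assigns the full data set — might look like this (illustrative only; the paper evaluates several sampling schemes, and the iteration count and fuzzifier m=2 here are conventional defaults):

```python
import numpy as np

def fcm(data, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means on an (n, d) array; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((c, n))
    u /= u.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ data / um.sum(axis=1, keepdims=True)
        # Distance from every point to every center.
        d = np.linalg.norm(data[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)
        u = 1.0 / (d ** (2 / (m - 1)))      # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

def sampled_fcm(data, c, sample_frac=0.1, seed=0):
    """Fit the centers on a random sample only -- the scaling idea from the
    abstract -- then compute memberships for the full data set once."""
    rng = np.random.default_rng(seed)
    k = max(c, int(sample_frac * len(data)))
    idx = rng.choice(len(data), k, replace=False)
    centers, _ = fcm(data[idx], c, seed=seed)
    d = np.fmax(np.linalg.norm(data[None] - centers[:, None], axis=2), 1e-12)
    u = 1.0 / d ** 2
    return centers, u / u.sum(axis=0)
```

The sampled variant runs the iterative update on a fraction of the points, so its cost per iteration no longer grows with the full data size.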
This document discusses using image processing in MATLAB to quantify vascular data from microscopy images. It describes converting a microscopy image of a mouse ear to binary, then using distance transformation to extract the main vascular structures. Distance transformation assigns values to pixels based on their distance to nearest non-zero pixels, resulting in an image highlighting vasculature. The two main vessels were detected but more work is needed to extract smaller vessels. MATLAB provides tools for image processing, storing images as matrices, and computing distance transforms to automatically quantify vascular networks from images.
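The distance transform step can be sketched with a brute-force version (MATLAB's `bwdist` computes this far more efficiently; this NumPy sketch just shows the definition and assumes the image contains at least one non-zero pixel):

```python
import numpy as np

def distance_transform(binary):
    """Brute-force Euclidean distance transform: each background pixel
    gets the distance to the nearest foreground (non-zero) pixel."""
    fg = np.argwhere(binary != 0)           # coordinates of foreground pixels
    out = np.zeros(binary.shape)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] == 0:
                # Minimum Euclidean distance to any foreground pixel.
                out[i, j] = np.sqrt(((fg - (i, j)) ** 2).sum(axis=1)).min()
    return out
```

Thresholding or ridge-following on the resulting distance map is one way to highlight vessel centerlines, as the document describes for the mouse-ear image.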
Bangla Optical Digits Recognition using Edge Detection MethodIOSR Journals
Abstract: This paper addresses Bangla Optical Digit Recognition (ODR) using an edge detection technique. A Bangla digit image is converted to grayscale and stored as an M-by-N array. The input data are off-line printed digit images collected from computer-generated images, scanned documents, or printed text. In the M-by-N array, a value of 255 denotes pure white, 0 denotes pure black, and values between 0 and 255 denote mixed white and dark regions of the image. In the next stage, four edge touch points, together with each touch point's ratio, are used as parameters to identify each Bangla digit uniquely. Keywords: edge, image, gray-scale, matrix, ODR.
Template matching is a technique used in computer vision to find sub-images in a target image that match a template image. It involves moving the template over the target image and calculating a measure of similarity at each position. This is computationally expensive. Template matching can be done at the pixel level or on higher-level features and regions. Various measures are used to quantify the similarity or dissimilarity between images during the matching process. Template matching has applications in areas like object detection but faces challenges with noise, occlusions, and variations in scale and rotation.
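A minimal sum-of-squared-differences (SSD) template matcher illustrates the sliding-window idea (SSD is one of several similarity measures in use; normalized cross-correlation is another common choice):

```python
import numpy as np

def match_template(target, template):
    """Slide the template over the target and score each position with the
    sum of squared differences (SSD); lower means more similar."""
    th, tw = template.shape
    H, W = target.shape
    scores = np.empty((H - th + 1, W - tw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = target[i:i+th, j:j+tw]
            scores[i, j] = np.sum((patch.astype(float) - template) ** 2)
    # Best match is the position with the smallest SSD.
    return np.unravel_index(np.argmin(scores), scores.shape)
```

The nested loops make the cost explicit — (H−th+1)·(W−tw+1) window comparisons — which is why the text calls template matching computationally expensive.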
User Interactive Color Transformation between ImagesIJMER
Abstract: In this paper we present a process called color transfer, which can borrow one image's color characteristics from another. Most current colorization algorithms either require significant user effort or have large computational time. The focus here is on an orthogonal color space, the lαβ color space, which has no correlation between its axes. We have implemented two global color transfer algorithms in lαβ space using simple color statistics such as the mean, standard deviation, and covariance between the pixels of an image; our approach is an extension of Reinhard's. Our local color transfer algorithm uses simple color statistical analysis to recolor the target image according to a selected color range in the source image. A color influence mask for the target image is prepared — a mask specifying which parts of the target image will be affected according to the selected color range — after which the target image is recolored in lαβ space according to the prepared color influence map. Because luminance and chrominance information are separate in lαβ space, image recoloring can be made optional. The basic color transformation uses stored color statistics of the source and target images. All the algorithms are implemented in the Java object-oriented language. The main advantage of the proposed method over existing ones is that it allows the user to recolor a part of the image in a simple and intuitive way, preserving the other colors intact and achieving a natural look.
Index Terms: color transfer, local color statistics, color characteristics, orthogonal color space, color influence map.
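The global statistics-matching step can be sketched as follows (a simplified Reinhard-style version that matches per-channel mean and standard deviation directly on whatever 3-channel array it is given, rather than converting to lαβ first as the paper does):

```python
import numpy as np

def global_color_transfer(source, target):
    """Reinhard-style global transfer: rescale each channel of the target
    so its mean and standard deviation match the source's."""
    src = source.reshape(-1, 3).astype(np.float64)
    tgt = target.reshape(-1, 3).astype(np.float64)
    s_mu, s_sd = src.mean(axis=0), src.std(axis=0)
    t_mu, t_sd = tgt.mean(axis=0), tgt.std(axis=0)
    # Shift to zero mean, scale to the source spread, shift to source mean.
    out = (tgt - t_mu) / np.where(t_sd == 0, 1, t_sd) * s_sd + s_mu
    return out.reshape(target.shape)
```

Performing the same statistics matching in lαβ, as the paper does, avoids cross-channel artifacts because the axes of that space are decorrelated.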
The document describes a novel FPGA implementation of an image scaling processor using bilinear interpolation. The proposed method uses sharpening and clamp filters as pre-filters to the bilinear interpolator in order to improve image quality during scaling. The bilinear interpolation algorithm was chosen due to its lower complexity compared to other methods. The design was implemented in Verilog and synthesized for an FPGA, achieving a maximum frequency of 215.64MHz for 64x64 grayscale images. The hardware resources required were moderately lower than other algorithms.
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi...IJERA Editor
This document proposes and evaluates a near lossless image compression algorithm that divides color images into red, green, and blue channels. It classifies pixels in each channel row-by-row and records the results in mask images. The image data is then decomposed into sequences based on the classification and the mask images are hidden in the least significant bits of the sequences. Different encoding schemes like LZW, Huffman, and RLE are applied and compared. Experimental results on test images show the proposed algorithm achieves smaller bits per pixel than simple encoding schemes. PSNR values also indicate very little difference between original and reconstructed images.
This is a paper I wrote as part of my seminar "Inverse Problems in Computer Vision" while pursuing my M.Sc Medical Engineering at FAU, Erlangen, Germany.
The paper details a state-of-the-art method used for Single Image Super Resolution using Deep Convolutional Networks and the possible extensions to the original approach by considering compression and noise artifacts.
This document summarizes a research paper that proposes a method to enhance the resolution of satellite images using discrete wavelet transform (DWT), interpolation, and inverse discrete wavelet transform (IDWT). Low resolution satellite images are decomposed into subbands using DWT. Bilinear interpolation is applied to each subband to increase resolution. IDWT is then used to combine the subbands into the enhanced, higher resolution output image. The method is tested on LANDSAT 8 images and evaluated using metrics like PSNR, MSE, and entropy. Results show the proposed method improves these metrics over other interpolation techniques, enhancing image quality and resolution.
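The decompose/recombine step can be illustrated with a one-level Haar DWT/IDWT pair (the paper applies bilinear interpolation to each subband between these two steps, and its wavelet choice may well differ from Haar; this sketch only shows the subband decomposition and its exact inverse):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands.
    Assumes even height and width."""
    a = img.astype(np.float64)
    # Filter rows: lowpass = pairwise average, highpass = pairwise difference.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Then filter columns of each result.
    ll = (lo[0::2] + lo[1::2]) / 2; lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2; hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: perfectly reconstructs the input image."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```

In the enhancement method, each of the four subbands would be upscaled (e.g. with bilinear interpolation) before the inverse transform recombines them into the higher-resolution output.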
A fast fpga based architecture for measuring the distance betweenIAEME Publication
This paper presents an FPGA-based architecture for measuring the Manhattan distance between two RGB color images. The architecture takes RGB pixel values from each image as input and calculates the absolute difference between corresponding pixel values. It sums all the absolute differences and divides by the total number of pixels to obtain the normalized Manhattan distance. The architecture was implemented on a Xilinx Spartan 3 FPGA and can operate at 171.585 MHz, faster than software solutions. Experimental results demonstrating distance calculations on sample image pairs are presented. The FPGA implementation allows real-time Manhattan distance measurement for applications like image retrieval.
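The quantity the architecture computes can be written in a few lines of software for reference (here normalized by the total number of color components rather than by pixel count, which differs from the paper's normalization only by a constant factor of 3):

```python
import numpy as np

def normalized_manhattan(img1, img2):
    """Normalised Manhattan (L1) distance between two same-sized RGB
    images: mean absolute difference over all pixel components."""
    a = img1.astype(np.int64)   # widen before subtracting to avoid
    b = img2.astype(np.int64)   # uint8 wrap-around
    return np.abs(a - b).mean()
```

The FPGA pipeline performs exactly these absolute-difference and accumulation steps in parallel on the pixel stream, which is what makes it faster than a software loop.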
A lossless color image compression using an improved reversible color transfo...eSAT Journals
Abstract: In conventional lossless color image compression methods, the pixels from each color component are interleaved, predicted, and finally encoded. In this paper, we propose a lossless color image compression method that uses hierarchical prediction of chrominance-channel pixels, encoded with modified Huffman coding. The R, G, and B channels of an input image are transformed into the YCuCv color space using an improved reversible color transform. A conventional lossless image coder such as CALIC then compresses the luminance channel Y. The chrominance channels Cu and Cv are encoded with hierarchical decomposition and directional prediction, and effective context modeling of the prediction residual is finally adopted. Experimental results show that the proposed method improves compression performance over the existing method. Keywords: lossless color image compression, hierarchical prediction, reversible color transform, modified Huffman coding.
An efficient hardware logarithm generator with modified quasi-symmetrical app...IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce the approximation error and hardware area compared with traditional methods.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document summarizes a research paper that proposes a new image interpolation technique to reconstruct high-resolution images from low-resolution counterparts while preserving edge structures. The technique estimates each pixel to be interpolated in two orthogonal directions and fuses the estimates using linear minimum mean square error estimation. This adaptive fusion approach can better discriminate edge directions in the local window compared to interpolating in a single direction. The technique aims to improve on traditional linear interpolation methods by adapting to local image gradients to reduce artifacts while preserving sharp edges. A simplified version is also presented to reduce computational costs with minimal impact on performance. Experiments showed the new technique can better preserve edges and reduce artifacts compared to other methods.
This document discusses image compression algorithms using the Lapped Orthogonal Transform (LOT) and Discrete Cosine Transform (DCT) under the JPEG standard. It begins with an introduction to image compression and classification of compression schemes. It then describes LOT and DCT in detail and proposes a hybrid algorithm using both transforms simultaneously. The algorithm is tested on an image and achieves a peak signal-to-noise ratio of 36.76 decibels at a bit rate of 0.6 bits per pixel, providing higher quality than DCT alone. The document concludes the hybrid approach offers better energy compaction and quality at low bit rates than DCT.
This document discusses informality in Brazil and efforts to make legalizing small businesses easier. It notes that there are 10 million informal businesses in the country and describes initiatives such as REDEsim, the Centrais de Atendimento Empresarial, and the Cadastro Sincronizado to simplify company registration. It also mentions tax incentives under Simples Nacional and MEI benefits to encourage formalization.
This document covers several topics related to world population growth and its effects, including the rapid growth of the population since 1960, increasing urbanization and industrialization, and impacts on climate change such as rising global temperatures, sea level, and pollution.
This document discusses the need for respect for differences and freedom of choice in love and relationships. It argues that prejudice against homosexuality and other issues must be reexamined in light of human rights and democracy, and holds that love between people, regardless of gender or race, should be accepted.
This document describes a workplace yoga and meditation program aimed at improving employees' concentration, creativity, and cooperation by reducing stress, anxiety, and fatigue. The program includes relaxation exercises, yoga postures, meditation, and healthy eating to promote participants' physical and mental well-being.
5 Ways Nonprofits Can Improve Mobile EngagementBen Stroup
The most important shifts in nonprofit communications in the next five years will center around mobile technology. Are you ready? Stay current on all tips, trends, and suggestions by subscribing to www.pursuant.com/blog.
This document argues that Spiritism should be updated in a permanent and democratic way, recognizing different interpretations of the doctrine while keeping the basic foundations established by Kardec. It holds that updating does not mean altering the original texts, but rather revisiting the ideas in light of new knowledge, as Kardec himself proposed.
This document lists several things that could make someone happy, such as spending Christmas with family, vacationing in Hawaii, eating chocolate, being a teacher, parents' affection, having no exams, and money falling from the sky.
This letter congratulates the mother C.P. Laviada on her first anniversary, wishing her health and happiness and hoping she continues to be an example of dedication to her family.
The document presents a method for solving nonlinear time delay control systems using Fourier series. The key steps are:
1) Time functions in the system are expanded as truncated Fourier series with unknown coefficients.
2) Operational matrices of integration, delay, and product are presented and used to evaluate the Fourier coefficients for the solution.
3) This reduces solving the time-delay control system to solving a set of algebraic equations for the Fourier coefficients.
This document describes how secondary-school students learn about the fundamental learning goal of fully exercising citizenship. It explains that competencies and capacities related to this topic will be developed and learning situations will be examined. It also provides guidance on designing these learning situations using the learning routes and their matrices of competencies, capacities, and indicators.
The company announced a new product that combines hardware and software to provide customers with a complete solution. The product offers advanced artificial intelligence and machine learning features to help users automate complex tasks. Analysts believe the product could be a commercial success if it is easy to use and affordably priced.
2014 06-11-marcello-siqueira-certics-overview-assespro-espirito-santoRoberto C. Mayer
The CERTICS certification identifies and accredits software resulting from development and technological innovation carried out in Brazil, granting it preference in public procurement. Its goal is to strengthen Brazilian technological autonomy and innovation capacity through the verification of 16 expected results related to the software's development.
Novel algorithm for color image demosaikcing using laplacian maskeSAT Journals
Abstract: Images in a digital camera are formed with the help of a monochrome sensor, either a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor. Interpolation is the basis of any demosaicking process. The input for interpolation is the output of the Bayer color filter array (CFA), a mosaic-like lattice structure. The Bayer CFA samples the R, G, and B channel information separately, assigning only one channel component per pixel. To generate a complete color image, all three channel values are required, and the missing samples are found by interpolation — a technique for estimating missing values from discrete observed samples scattered over the space. Demosaicking, or de-Bayering, is thus an algorithm for recovering the missing values from the mosaic-patterned output of the Bayer CFA. Interpolation introduces artifacts such as the zippering effect at edges. This paper introduces a demosaicking algorithm that outperforms existing demosaicking algorithms; its main aim is to accurately estimate the green component. Performance is compared using PSNR (peak signal-to-noise ratio) on the Kodak image dataset, and the algorithm was implemented in Matlab 2009b. Keywords: demosaicking, interpolation, Bayer CFA, Laplacian mask, correlation.
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...IJTET Journal
The segmentation of blood vessels within the retina is an essential step in the diagnosis of diabetic retinopathy. This paper presents a new method for automatically segmenting blood vessels in retinal images. Two techniques for segmenting retinal blood vessels, based on different image processing techniques, are described, and their strengths and weaknesses are compared. The method uses a neural network (NN) scheme for pixel classification, with gray-level and moment-invariant-based features for pixel representation. The performance of each algorithm was tested on the STARE and DRIVE datasets, which are widely used for this purpose since they contain retinal images together with their vascular structures. Performance on both sets of test images is better than that of other existing methods, and the method proves especially accurate for vessel detection in STARE images. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make it suitable for early detection of diabetic retinopathy (DR).
Spectral approach to image projection with cubic b spline interpolationiaemedu
This document summarizes a research paper that proposes a new method for image projection using spectral interpolation with cubic B-spline interpolation. The key points are:
1) Existing super resolution methods based on Fourier transforms have limitations in providing high visual quality when scaling images.
2) The proposed method first transforms image frames into the frequency domain using FFT. It then interpolates in the spectral domain using cubic B-spline interpolation to project the images onto a higher resolution grid.
3) Experimental results show the proposed method provides higher visual quality projections compared to conventional Fourier-based approaches, while also having faster computation times.
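The spectral-domain upscaling idea can be illustrated with the classic zero-padding variant (the paper interpolates the spectrum with cubic B-splines instead; zero-padding is shown here only because it is the simplest form of spectral interpolation):

```python
import numpy as np

def fft_upscale(img, factor=2):
    """Spectral-domain upscaling: transform to the frequency domain,
    zero-pad the centred spectrum, and inverse-transform at the larger
    size. Padding the spectrum interpolates in the spatial domain."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    H, W = h * factor, w * factor
    P = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    P[top:top + h, left:left + w] = F
    # Scale so mean intensity is preserved by the larger inverse FFT.
    return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * factor**2
```

Replacing the zero-padding step with cubic B-spline interpolation of the spectral coefficients, as the paper proposes, trades this hard frequency cutoff for a smoother spectral estimate.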
IRJET- Image based Approach for Indian Fake Note Detection by Dark Channe...IRJET Journal
This document presents a proposed method for detecting fake Indian currency notes using image processing techniques. The proposed system takes an image of a currency note as input and performs pre-processing including resizing, restoration, and enhancement. It then applies "X-ray vision" using dark channel prior to extract inner and outer edges of patterns in the image. The extracted patterns are labeled and classified using a fuzzy classifier. The system is able to classify images as real or fake currency with 90-95% accuracy. The document reviews related work on currency detection and provides details on the proposed methodology, which includes image acquisition, pre-processing, enhancement, dark channel prior, labeling, and fuzzy classification. Results are presented showing the output of each step.
Abstract: Many applications such as robot navigation, defense, medical imaging, and remote sensing perform various processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations — large and small scale — from the source images, using guided filtering for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the resulting layers. Experimental results demonstrate that the proposed method obtains better fusion performance across all sets of images, and clearly indicate the feasibility of the approach.
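The pyramid fusion step can be sketched minimally as follows (a two-level Laplacian pyramid with max-magnitude coefficient selection and nearest-neighbour up/downsampling; the guided-filter layer extraction and Gaussian smoothing kernels of the paper are omitted for brevity):

```python
import numpy as np

def down(a):
    """Downsample by averaging each 2x2 block (a crude Gaussian step)."""
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4

def up(a):
    """Nearest-neighbour upsample back to double size."""
    return a.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_fuse(a, b, levels=2):
    """Fuse two same-sized grayscale images: at each Laplacian level keep
    the coefficient with the larger magnitude (the stronger detail), then
    reconstruct. Sizes must be divisible by 2**levels."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    fused_details = []
    for _ in range(levels):
        da, db = down(a), down(b)
        la, lb = a - up(da), b - up(db)        # Laplacian (detail) layers
        fused_details.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
        a, b = da, db
    out = (a + b) / 2                          # average the coarse residual
    for d in reversed(fused_details):
        out = up(out) + d
    return out
```

Selecting the stronger detail coefficient at each scale is what lets the fused image keep the in-focus or better-exposed structures from each source.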
IRJET - Change Detection in Satellite Images using Convolutional Neural N...IRJET Journal
The document describes a method for detecting changes in satellite images using convolutional neural networks. It discusses how existing methods have limitations in terms of accuracy and speed. The proposed method uses preprocessing techniques like median filtering and non-local means filtering. It then applies convolutional neural networks to extract compressed image features and classify detected changes. The method forms a difference image without explicitly training on change images, making it unsupervised. Testing achieved 91.63% accuracy in change detection, showing the effectiveness of the proposed convolutional neural network approach.
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...IRJET Journal
This document presents research on detecting license plates in foggy conditions using an enhanced OTSU technique. The researchers tested their technique on a large database of license plate images taken under different conditions, including clear and foggy images. They evaluated the technique using various performance parameters such as MSE, PSNR, SSIM, and aspect ratio. When compared to a base technique, the enhanced OTSU technique showed improvements in these parameters of 14.93%, 14.12%, 39.21%, and 40%, respectively. The technique aims to better handle hazardous imaging conditions, like foggy weather, that existing techniques often struggle with. It uses steps such as image denoising, threshold segmentation, and character extraction to read license plates in low-visibility situations.
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...ZaidHussein6
This document summarizes a research paper that proposes a stereo vision algorithm called the Canny Block Matching Algorithm (CBMA) to estimate distance from stereo images. CBMA uses the Canny edge detector to extract edges from images and block matching with Sum of Absolute Difference (SAD) to determine disparity maps and reduce processing time. The algorithm was tested on stereo image pairs and achieved an error reduction of about 2% and processing time reduction compared to other methods. Interpolation techniques including bilinear, 1st order polynomial and 2nd order polynomial were also evaluated to enhance the output images and further reduce errors.
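The SAD block-matching core of such an algorithm can be sketched as follows (a naive version without the Canny edge step; block size and disparity search range are illustrative parameters):

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=8):
    """Block matching with Sum of Absolute Differences: for each pixel in
    the left image, find the horizontal shift whose block in the right
    image has the smallest SAD; that shift is the disparity."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    L = left.astype(np.int64)
    R = right.astype(np.int64)
    for i in range(r, h - r):
        for j in range(r, w - r):
            best, best_d = None, 0
            # A scene point at column j in the left image appears at
            # column j - d in the right image.
            for d in range(min(max_disp, j - r) + 1):
                sad = np.abs(L[i-r:i+r+1, j-r:j+r+1]
                             - R[i-r:i+r+1, j-d-r:j-d+r+1]).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[i, j] = best_d
    return disp
```

Distance then follows from the standard relation depth = focal_length × baseline / disparity; restricting the search to Canny edges, as CBMA does, cuts the number of candidate blocks and hence the processing time.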
Fpga implementation of fusion technique for fingerprint applicationIAEME Publication
Image fusion is the process of combining relevant information from a set of images into a single image, wherein the resultant fused image is more informative and complete than any of the input images. This paper discusses Laplacian Pyramid (LP) based image fusion techniques for a fingerprint application. The technique is implemented in MATLAB, and the evaluation parameters Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and matching score are discussed. The same technique is also implemented on a Virtex-5 FPGA development board using Verilog HDL. The LP-based technique provides better results for image fusion than other techniques.
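The MSE and PSNR evaluation parameters mentioned above are standard metrics and can be computed as:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two same-sized images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak**2 / m)
```

Higher PSNR (lower MSE) against a reference indicates a better fusion result; the matching score, being specific to the fingerprint matcher used, is not reproduced here.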
Fpga implementation of fusion technique for fingerprint applicationIAEME Publication
This document discusses the implementation of Laplacian Pyramid (LP) based image fusion techniques for fingerprint applications. LP fusion provides better results than other techniques like PCA and DCT. The technique is implemented in MATLAB and on an FPGA development board using Verilog HDL. Performance is evaluated using mean square error, peak signal to noise ratio, and matching score. LP fusion captures image details at multiple scales and is well-suited for fusing fingerprint images.
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN...IJCSEIT Journal
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate recognition. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images. Images are partitioned into sub-images to detect local features, and the sub-partitions are rearranged into vertical and horizontal matrices. Eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices, and a global feature vector is generated for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance in terms of average recognition rate and retrieval time compared to existing methods.
Mr image compression based on selection of mother wavelet and lifting based w...ijma
Magnetic Resonance (MR) image is a medical image technique required enormous data to be stored and
transmitted for high quality diagnostic application. Various algorithms have been proposed to improve the
performance of the compression scheme. In this paper we extended the commonly used algorithms to image
compression and compared its performance. For an image compression technique, we have linked different
wavelet techniques using traditional mother wavelets and lifting based Cohen-Daubechies-Feauveau
wavelets with the low-pass filters of the length 9 and 7 (CDF 9/7) wavelet transform with Set Partition in
Hierarchical Trees (SPIHT) algorithm. A novel image quality index with highlighting shape of histogram
of the image targeted is introduced to assess image compression quality. The index will be used in place of
existing traditional Universal Image Quality Index (UIQI) “in one go”. It offers extra information about
the distortion between an original image and a compressed image in comparisons with UIQI. The proposed
index is designed based on modelling image compression as combinations of four major factors: loss of
correlation, luminance distortion, contrast distortion and shape distortion. This index is easy to calculate
and applicable in various image processing applications. One of our contributions is to demonstrate the
choice of mother wavelet is very important for achieving superior wavelet compression performances based
on proposed image quality indexes. Experimental results show that the proposed image quality index plays
a significantly role in the quality evaluation of image compression on the open sources “BrainWeb:
Simulated Brain Database (SBD) ”.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
This document summarizes and compares various algorithms used to implement video surveillance systems, including pixel matching, image matching, and clustering algorithms. It first provides background on video surveillance systems and their need for automatic abnormal motion detection. It then reviews several specific algorithms: pixel matching, agglomerative clustering, reciprocal nearest neighbor pairing, sub-pixel mapping, patch matching, tone mapping, and k-means clustering. For each algorithm, it provides a brief overview of the approach and complexity. The document also discusses image matching algorithms like classic image checking, pixel-based identity checking, and pixel-based similarity checking. Overall, the document analyzes algorithms that can be used to detect and classify motion in video surveillance systems.
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMSIJNSA Journal
This paper proposed a novel color image encryption scheme based on multiple chaotic systems. The ergodicity property of chaotic system is utilized to perform the permutation process; a substitution operation is applied to achieve the diffusion effect. In permutation stage, the 3D color plain-image matrix is converted to a 2D image matrix, then two generalized Arnold maps are employed to generate hybrid chaotic sequences which are dependent on the plain-image’s content. The generated chaotic sequences are then applied to perform the permutation process. The encryption’s key streams not only depend on the cipher keys but also depend on plain-image and therefore can resist chosen-plaintext attack as well as
known-plaintext attack. In the diffusion stage, four pseudo-random gray value sequences are generated by
another generalized Arnold map. The gray value sequences are applied to perform the diffusion process by bitxoring operation with the permuted image row-by-row or column-by-column to improve the encryption rate. The security and performance analysis have been performed, including key space analysis, histogram analysis, correlation analysis, information entropy analysis, key sensitivity analysis, differential analysis
etc. The experimental results show that the proposed image encryption scheme is highly secure thanks to its
large key space and efficient permutation-substitution operation, and therefore it is suitable for practical image and video encryption.
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMSIJNSA Journal
This document proposes a novel color image encryption scheme based on multiple chaotic systems. The scheme utilizes the ergodic properties of chaotic systems to perform pixel permutation and applies a substitution operation to achieve diffusion. In the permutation stage, two generalized Arnold maps are used to generate hybrid chaotic sequences to permute pixel positions. In the diffusion stage, four pseudo-random gray value sequences generated by another generalized Arnold map are used to diffuse the permuted image via bitwise XOR operations. Security analysis shows the scheme has a large key space and is highly secure against statistical attacks, differential attacks, and chosen/known plaintext attacks.
COLOR IMAGE ENCRYPTION BASED ON MULTIPLE CHAOTIC SYSTEMSIJNSA Journal
This paper proposed a novel color image encryption scheme based on multiple chaotic systems. The ergodicity property of chaotic system is utilized to perform the permutation process; a substitution
operation is applied to achieve the diffusion effect. In permutation stage, the 3D color plain-image matrix
is converted to a 2D image matrix, then two generalized Arnold maps are employed to generate hybrid chaotic sequences which are dependent on the plain-image’s content. The generated chaotic sequences are then applied to perform the permutation process. The encryption’s key streams not only depend on the
cipher keys but also depend on plain-image and therefore can resist chosen-plaintext attack as well as
known-plaintext attack. In the diffusion stage, four pseudo-random gray value sequences are generated by another generalized Arnold map. The gray value sequences are applied to perform the diffusion process by bitxoring operation with the permuted image row-by-row or column-by-column to improve the encryption rate. The security and performance analysis have been performed, including key space analysis, histogram analysis, correlation analysis, information entropy analysis, key sensitivity analysis, differential analysis etc. The experimental results show that the proposed image encryption scheme is highly secure thanks to its large key space and efficient permutation-substitution operation, and therefore it is suitable for practical image and video encryption.
IRJET- A Comparative Review of Satellite Image Super Resolution TechniquesIRJET Journal
This document presents a comparative review of satellite image super resolution techniques in the frequency domain. It discusses several common interpolation and wavelet transform based methods for enhancing the resolution of satellite images, including nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, 2D wavelet transformation with and without wavelet zero padding and cycle spinning, complex wavelet transform, and discrete wavelet transform. The review finds that while interpolation alone is insufficient, combining interpolation like bicubic interpolation with wavelet transforms provides better performance for enhancing satellite image resolution.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Programming Foundation Models with DSPy - Meetup Slides
E046012533
Safinaz S et al., Int. Journal of Engineering Research and Applications (IJERA), www.ijera.com
ISSN: 2248-9622, Vol. 4, Issue 6 (Version 1), June 2014, pp. 25-33
A Novel Video Scaling Algorithm Based on Linear Interpolation with Quality Enhancement
Safinaz S¹, S. Ramachandran²
¹Sir MVIT, Bangalore
²SJBIT, Bangalore
Abstract
A non-adaptive image interpolation algorithm is proposed to scale images by any given scaling factor, with an enhancement scheme to ensure a better picture. In the proposed algorithm, two reference frames, one with higher resolution and another with lower resolution, are generated from an original video frame for the user-defined scaling factor. Two intermediate frames are then generated from these reference frames using the filtering-based re-sampling Lanczos3 kernel. Finally, the desired scaled frame is obtained by linearly interpolating the two intermediate frames. The scaled frame is then passed through an enhancement phase, where a sharpening filter is employed, to obtain an enhanced scaled frame. The algorithm compares favorably with other interpolation techniques such as bilinear interpolation and B-spline interpolation in terms of image quality. The quality of the scaled frames, expressed as PSNR, is better than 45 dB.
Keywords: Image scaling, Video scaling, Filtering, Sampling, Interpolation
I. INTRODUCTION
Scaling a video can be effective once an efficient image scaling algorithm is determined. Digital image scaling is the process of resizing a digital image, involving a trade-off among efficiency, smoothness and sharpness. An image interpolation algorithm is used to convert an image from one resolution (dimension) to another without losing the visual content in the picture. Image interpolation algorithms can be grouped into two categories, non-adaptive and adaptive, Ref. [1-4]. In non-adaptive algorithms, the computational logic is fixed irrespective of the input image features, whereas in adaptive algorithms the computational logic depends upon the intrinsic features and contents of the input image. When an image is interpolated from a higher resolution to a lower resolution, it is called image down-scaling. On the other hand, when an image is interpolated from a lower resolution to a higher resolution, it is referred to as up-scaling. Image interpolation has a variety of applications in the areas of computer graphics, editing, medical image reconstruction, enlarging images for HDTV, shrinking images to fit mini-size LCD panels in portable instruments, and so on. It is also part of many commercial image processing tools and freeware graphic viewers such as Adobe Photoshop CS2, IrfanView [5], FastStone Photo Resizer, Photo Pos Pro, XnConvert, etc.
Numerous digital image scaling techniques have been presented [6-10], of which the most popular methods are: the pixel replication based nearest neighbor replacement algorithm; the pixel interpolation based bilinear method; and the filter/kernel based cubic, bi-cubic, B-spline, box, triangle and Lanczos methods.
In this paper, a non-adaptive interpolation algorithm is proposed with an enhancement scheme to ensure a better quality of the scaled image. The proposed algorithm can scale images by any given scaling ratio, be it up-scaling or down-scaling. It is compared with various interpolation techniques such as bilinear, B-spline and Lanczos. Various test images of different sizes, ranging from 150x250 pixels to 600x912 pixels, are scaled by a scaling factor of 150%. Simulation results show the impact of this algorithm in terms of image quality metrics.
In the proposed algorithm, two reference frames, one with higher resolution and another with lower resolution, are generated using the original input video frame and the scaling factor. These reference frames are used to interpolate two intermediate frames, using one of the filtering based re-sampling techniques such as the Lanczos3 kernel. Finally, the scaled frame is obtained by linearly interpolating the two intermediate frames. Here the simple method of linear interpolation can be replaced by the more efficient architecture of the Extended Linear Interpolator described in Ref. [11]. The scaled picture is then passed through an enhancement phase, where a sharpening filter is employed to obtain the enhanced scaled picture.
The rest of the paper is organized as follows.
Section II describes the various Interpolation
Techniques followed by the description of the
Proposed Algorithm in Section III. Section IV
presents the experimental results. The conclusion is
presented in Section V.
II. INTERPOLATION TECHNIQUES
The basic principle of image scaling is to have a reference image as the base image in order to construct a new scaled image. The reconstructed image can be smaller, larger, or equal in size to the original image, depending on the scaling factor. When enlarging an image, we are actually introducing empty spaces into the original base picture; this is the process of up-sampling. From this image we need to interpolate meaningful pixel values to fill the empty spaces, through any of the non-adaptive, adaptive or filter based interpolation techniques [12-14].
Linear interpolation: This is a basic form of interpolation. Here we interpolate or estimate the pixel value at an arbitrary point between two or more given points. Mathematically, linear interpolation interpolates functions of one variable (either 'x' or 'y') on a regular 1D grid, as shown in Fig. 1. Herein, P is the arbitrary point between two known pixel values P1 and P2, where the pixel value has to be interpolated or estimated using Eq. (1).
Figure 1 Linear Interpolation
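As an illustration (the paper's own implementation is in MATLAB; the function name below is ours), Eq. (1) can be sketched in Python as:

```python
def lerp(p1, p2, x, x1, x2):
    """Estimate the pixel value P at position x between two known
    samples P1 (at x1) and P2 (at x2), per Eq. (1)."""
    return p1 + (p2 - p1) * (x - x1) / (x2 - x1)

# Midway between P1 = 10 and P2 = 20 the estimate is 15.
print(lerp(10.0, 20.0, 0.5, 0.0, 1.0))  # 15.0
```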
Bilinear image interpolation: This is an extension of linear interpolation for interpolating functions of two variables ('x' and 'y') on a regular 2D grid. The algorithm is a combination of two linear interpolations. The key idea is to perform linear interpolation first in one direction, and then again in the other direction, as shown in Fig. 2. Here, P is the arbitrary point between four known pixel values P1, P2, P3 and P4. We first apply linear interpolation between P1 and P2 to obtain the pixel value P12 using Eq. (1). Similarly, P34 is interpolated between P3 and P4. Then the pixel value P is obtained using Eq. (2).
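The two-pass scheme just described can be sketched as follows (an illustrative Python fragment, not the paper's MATLAB code); corner values and fractional offsets tx, ty in [0, 1] are assumed:

```python
def lerp(p1, p2, t):
    # One-dimensional linear interpolation with fractional offset t in [0, 1].
    return p1 + (p2 - p1) * t

def bilinear(p1, p2, p3, p4, tx, ty):
    """Interpolate along x between (P1, P2) and (P3, P4) to obtain P12 and
    P34, then along y between P12 and P34, as in Eq. (2)."""
    p12 = lerp(p1, p2, tx)
    p34 = lerp(p3, p4, tx)
    return lerp(p12, p34, ty)

# Centre of a cell with corner values 0, 10, 20, 30:
print(bilinear(0.0, 10.0, 20.0, 30.0, 0.5, 0.5))  # 15.0
```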
Filter Based Interpolation: The filtering-based methods are also known as re-sampling methods [15-17]. As shown in Fig. 3, the re-sampling from one discrete signal x[n] to a re-sampled signal y[n'] of a different resolution is computed as

y[n'] = Σ_{t=-w}^{+w} h[t] x[n' - t],

where h[t] is the interpolating function and w is the desired filtering window.
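The re-sampling sum can be sketched directly; the 3-tap tent (triangle) filter used as h[t] below and the window w = 1 are illustrative choices of ours, not ones the paper prescribes:

```python
def resample_point(x, n_prime, h, w):
    """Evaluate y[n'] = sum over t in [-w, w] of h[t] * x[n' - t].
    Samples outside the signal are treated as zero."""
    total = 0.0
    for t in range(-w, w + 1):
        idx = n_prime - t
        if 0 <= idx < len(x):
            total += h(t) * x[idx]
    return total

# A 3-tap tent filter as the interpolating function h[t].
tent = lambda t: max(0.0, 1.0 - abs(t) / 2.0)
signal = [0.0, 1.0, 2.0, 3.0]
print(resample_point(signal, 2, tent, 1))  # 4.0
```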
Figure 2 Bilinear Interpolation
In this paper, two different 2-D separable filters are selected, the cubic B-spline kernel and the Lanczos3 kernel, for simulation and comparison of experimental results. The kernel functions of these filters [18-21] are as follows.
Figure 3 Re-sampling / Filter based Interpolation
Cubic B-Spline: This is a form of interpolation where the interpolant is a special type of piecewise polynomial, called a spline [15-17]. The kernel function of the cubic B-spline is described by Eq. (3).

P = P1 + (P2 - P1)(x - x1)/(x2 - x1) ………. (1)

P = P12 + (P34 - P12)(y - y1)/(y2 - y1) ………. (2)

h(t) = (1/6)(3|t|^3 - 6|t|^2 + 4), for 0 ≤ |t| < 1
     = (1/6)(2 - |t|)^3,           for 1 ≤ |t| < 2
     = 0,                          otherwise ………. (3)
Lanczos re-sampling/filter: This is a mathematical formula used to smoothly interpolate the value of a digital signal between its samples. It is a dilated sinc function windowed by the central hump of a second, wider sinc function. The sum of these translated and scaled kernels is then evaluated at the desired points. The Lanczos kernel function is described by Eq. (4). The parameter 'a' is a positive integer, typically 2 or 3, which determines the size of the kernel. As examples, the kernels are presented in Fig. 4 for a = 2 and a = 3.
Figure 4 Lanczos Kernels
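A direct transcription of the standard Lanczos kernel, sinc(x)·sinc(x/a) for |x| < a and 0 otherwise, with the normalized sinc(x) = sin(πx)/(πx) (a sketch; the paper itself works from Eq. (4)):

```python
import math

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0,
    written in the closed form a*sin(pi x)*sin(pi x/a) / (pi x)^2."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# The kernel is 1 at the origin and (up to floating-point error)
# 0 at the other integer sample points.
print(lanczos(0.0), lanczos(1.0), lanczos(2.0))
```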
III. PROPOSED ALGORITHM
The block diagram of the proposed algorithm is
presented in Fig. 5.
Figure 5 Block Diagram of the Proposed Algorithm
The input image or frame X of M x N pixels resolution is scaled by a factor S in order to obtain a scaled image or frame Y of a different resolution, say, I x J pixels. The following is a representation of the images produced after every stage of the proposed algorithm:
X (M x N): Input Frame, S: Scaling Factor
X1 (M1 x N1): Larger Reference Frame
X2 (M2 x N2): Smaller Reference Frame
Y1 (I1 x J1): Intermediate Frame 1
Y2 (I2 x J2): Intermediate Frame 2
Y (I x J): Scaled output Frame
The flow chart of the proposed algorithm is presented in Fig. 6. The algorithm makes use of some of the previously mentioned scaling methods. Of the various methods, the Lanczos3 kernel is selected for simulation in the proposed algorithm. Fig. 7 and Fig. 8 show the first three stages. The two reference frames, a larger resolution reference frame and a smaller reference frame, are generated from the input image for a specific scaling factor using the Lanczos3 re-sampling method. The input frame is scaled by twice the scaling factor (2*S) in order to obtain the larger frame and by half (0.5*S) to obtain the smaller frame.
L(x) = sinc(x) sinc(x/a) for -a < x < a, and L(x) = 0 otherwise,
where sinc(x) = sin(πx)/(πx) ....... (4)
Figure 6 Flow Chart of Proposed Algorithm
This initial step of generating the reference frames is important to ensure that the interpolated frame is within the range of pixel values of the input frame.
These reference frames are used to obtain the intermediate frames having the same resolution. These are further used to linearly interpolate the required scaled output image Y (I x J) using the interpolation equation described in Eq. (5), where:
W1: Width of the larger reference frame
W2: Width of the smaller reference frame
Width: Width of the scaled output frame
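The pipeline of Figs. 6-8 can be sketched as follows. This is an assumption-laden illustration, not code from the paper: nearest-neighbour resizing stands in for the Lanczos3 kernel, and the width-based weighting is our reading of the W1/W2/Width definitions above:

```python
import numpy as np

def resize_nn(frame, out_h, out_w):
    # Nearest-neighbour resize: a stand-in for the Lanczos3 re-sampling
    # kernel that the paper actually uses.
    h, w = frame.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return frame[np.ix_(rows, cols)]

def scale_frame(frame, s):
    """Sketch of the proposed pipeline: reference frames at 2*S and 0.5*S,
    intermediate frames at the target resolution, then linear interpolation
    of the intermediates weighted by the reference-frame widths."""
    h, w = frame.shape
    out_h, out_w = int(round(h * s)), int(round(w * s))
    x1 = resize_nn(frame, int(round(h * 2 * s)), int(round(w * 2 * s)))    # larger reference frame
    x2 = resize_nn(frame, int(round(h * 0.5 * s)), int(round(w * 0.5 * s)))  # smaller reference frame
    y1 = resize_nn(x1, out_h, out_w)   # intermediate frame 1
    y2 = resize_nn(x2, out_h, out_w)   # intermediate frame 2
    w1, w2 = x1.shape[1], x2.shape[1]
    alpha = (out_w - w2) / (w1 - w2)   # width-based weight (our reading of Eq. (5))
    return (1 - alpha) * y2 + alpha * y1

frame = np.arange(16, dtype=float).reshape(4, 4)
scaled = scale_frame(frame, 1.5)
print(scaled.shape)  # (6, 6)
```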
Figure 7 Functional Blocks to Generate Reference Frames
Figure 8 Functional Blocks of the Proposed Algorithm to Scale a Video Frame
Figure 9 Enhancement Using Sharpening Filter
The last stage, enhancement, involves a sharpening filter as shown in Fig. 9 [22-29]. The scaled video frame is filtered through a low-pass filter to obtain an averaged or smoothened video frame. A sharpened frame is then obtained by taking the difference between the scaled and smoothened frames. Finally, an enhanced frame is obtained by adding the scaled frame and the sharpened frame. Through this, the quality of the scaled output frame is significantly enhanced.
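The enhancement stage of Fig. 9 can be sketched as an unsharp-masking step; the 3x3 box filter below is an assumed choice of low-pass filter, as the paper does not specify one:

```python
import numpy as np

def sharpen(frame):
    """Enhancement as described: low-pass (3x3 box) smoothing,
    sharpened = scaled - smoothed, enhanced = scaled + sharpened."""
    padded = np.pad(frame, 1, mode='edge')
    smoothed = np.zeros_like(frame, dtype=float)
    for di in range(3):
        for dj in range(3):
            smoothed += padded[di:di + frame.shape[0], dj:dj + frame.shape[1]]
    smoothed /= 9.0
    sharpened = frame - smoothed   # high-frequency detail
    return frame + sharpened       # enhanced frame

frame = np.zeros((5, 5))
frame[2, 2] = 9.0
enhanced = sharpen(frame)
print(enhanced[2, 2])  # 17.0 -- the centre pixel is boosted above its original value
```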
IV. EXPERIMENTAL RESULTS
The performance of the proposed algorithm and
the quality metric of the enhanced scaled image is
measured using a Mean Square Error Extractor. The
……. (5)
extractor requires two images of same size to
evaluate the error between them. In order to evaluate
the quality of the scaled image, PSNR is computed
between theoriginal image scaled using Image
Processing freeware such as Irfan View orFastStone
Photo Resizer, and the scaled image by the proposed
algorithm. The PSNR is expressed in decibel scale
(dB).High value of PSNR indicates a high quality of
image. It is defined using Mean Square Error (MSE).
Lower value of MSE results in High value of PSNR
[12]. The Extractor uses the following relationships
to evaluate the PSNR:
where YIP: Scaled frame using image-processing
freeware
YPA: Scaled frame using the proposed
algorithm
i × j: Total number of pixels in the scaled
image.
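The PSNR/MSE relationships themselves did not survive extraction; the following Python sketch uses the standard definitions for 8-bit frames with the YIP/YPA notation above (function names are illustrative):

```python
import math

def mse(y_ip, y_pa):
    """Mean Square Error between two same-size frames (i x j pixels)."""
    i, j = len(y_ip), len(y_ip[0])
    total = sum((a - b) ** 2
                for ra, rb in zip(y_ip, y_pa)
                for a, b in zip(ra, rb))
    return total / (i * j)

def psnr(y_ip, y_pa, peak=255.0):
    """PSNR in dB; a lower MSE yields a higher PSNR."""
    e = mse(y_ip, y_pa)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)
```

Identical frames give infinite PSNR; in practice values above roughly 40 dB, as reported in Table 1, indicate very close agreement between the two scaled images.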
The proposed algorithm is implemented in
MATLAB. Video sequences of various resolutions
are selected for test purposes as shown in Fig. 10.
Table 1 presents the PSNR values for these video
frames/images scaled by the proposed algorithm. It
may be seen that the quality obtained by the proposed
algorithm is significantly better than that of
other methods such as Bilinear and B-Spline. As an
example, the popular image “Lena” has been scaled
using Bi-linear, B-Spline and the proposed algorithm.
Different scaling factors have been used. The
reconstructed images are presented in Fig. 11. In
addition, various images as well as video sequences
have been scaled by 150% and the results are
presented in Fig. 12. These demonstrate that the
proposed algorithm is better than other methods in
terms of reconstructed image quality.
Figure 10 Original Test Images
Table 1 Comparison of Quality for Images Scaled by a Scaling Factor S = 1.5 (150%)
Using Different Interpolation Methods and the Proposed Algorithm
Figure 11 Comparison of Various Scaling Algorithms and the Proposed Algorithm for the Lena Image (Original Size: 220x220 pixels)
(a)-(c) Bilinear scaling: S=2, up-scaled size 440x440 pixels, PSNR=44 dB; S=1.5, up-scaled size 330x330 pixels, PSNR=47 dB; S=0.5, down-scaled size 110x110 pixels, PSNR=40 dB.
(d)-(f) B-Spline scaling: S=2, up-scaled size 440x440 pixels, PSNR=40 dB; S=1.5, up-scaled size 330x330 pixels, PSNR=39 dB; S=0.5 (50%), down-scaled size 110x110 pixels, PSNR=45 dB.
(g)-(i) Proposed algorithm: S=2, up-scaled size 440x440 pixels, PSNR=45 dB; S=1.5, up-scaled size 330x330 pixels, PSNR=50 dB; S=0.5, down-scaled size 110x110 pixels, PSNR=49 dB.
Figure 12 Scaled Frames Using the Proposed Algorithm for Scaling Factor S=1.5
(a) Original Lena image (220x220 pixels); (b) Scaled Lena image (330x330 pixels), PSNR=50 dB
(c) Original Titanic frame (150x200 pixels); (d) Scaled Titanic frame (225x300 pixels), PSNR=48 dB
(e) Original Flower1 image (152x160 pixels); (f) Scaled Flower1 image (228x240 pixels), PSNR=47 dB
(g) Original E. Tower frame (800x600 pixels); (h) Scaled E. Tower frame (1200x900 pixels), PSNR=52 dB
(i) Original Tennis frame (300x400 pixels); (j) Scaled Tennis frame (450x600 pixels), PSNR=45 dB
(k) Original Geneva image (600x800 pixels); (l) Scaled Geneva image (900x1200 pixels), PSNR=46 dB.
V. CONCLUSION
An efficient interpolation algorithm for image
scaling, combined with an enhancement scheme, has
been proposed. The algorithm yields high image
quality in comparison with the Bilinear and B-Spline
interpolation methods. The proposed algorithm can
be effectively applied to images and video frames to
achieve efficient image/video scaling.
REFERENCES
[1] T. Lehmann, C. Gonner, and K. Spitzer,
“Survey: Interpolation Methods in Medical
Image Processing”, IEEE Trans. Med.
Imaging, Vol. 18, pp. 1049-1067, 1999.
[2] Y. Wang and S. Mitra, “Motion/Pattern
Adaptive Interpolation of Interlaced Video
Sequences”, Proceedings of International
Conference on Acoustics, Speech and Signal
Processing, ICASSP-91, Vol. 4, pp. 2829-
2832, Toronto, Ont., Canada, April 1991.
[3] George Wolberg, “Digital Image Warping”,
IEEE Computer Society Press, Los Alamitos,
CA, USA, 1994.
[4] J.W. Hwang and H.S. Lee, “Adaptive Image
Interpolation Based on Local Gradient
Features”, IEEE Signal Processing Letters,
Vol. 11, No. 3, March 2004.
[5] Irfan View graphic viewer
(http://www.irfanview.com/).
[6] Niruban R. T., SreeRenga Raja T.,
SreeSharmila, “Novel Color Filter Array
Demosaicing in Frequency Domain with
Spatial Refinement”, Journal of Computer
Science, 10 (9): pp. 1591-1599, ISSN:
1549-3636, 2014.
[7] Seung Hoon Jee, Moon Gi Kang, “Improved
Multichannel Up-sampling Method for
Reconstruction-based Super-resolution”, Proc.
SPIE 8655, Image Processing: Algorithms and
Systems XI, 86550S, February 19, 2013.
[8] Leonid Bilevich, Leonid Yaroslavsky, “Fast
DCT-based Algorithm for Signal and Image
Accurate Scaling”, Proc. SPIE 8655, Image
Processing: Algorithms and Systems XI,
86550W, February 19, 2013.
[9] Shu-Mei Guo, Chia-Wei Chen, Chih-Yuan
Hsu, Guo-Ching Shih, “Fast Pixel-Size-Based
Large-Scale Enlargement and Reduction of
Image: Adaptive Combination of Bilinear
Interpolation and Discrete Cosine Transform”,
Journal of Electronic Imaging, 20(3),
033005, doi: 10.1117/1.3603937, August
2012.
[10] Damber Thapa, Kaamran Raahemifar, William
R. Bobier, Vasudevan Lakshminarayanan,
“Comparison of Super-Resolution Algorithms
Applied to Retinal Images”, Journal of
Biomedical Optics, 19(5), 056002, doi:
10.1117/1.JBO.19.5.056002, May 01, 2014.
[11] Chung-Chi Lin, Ming-Hwa Sheu, Huann-
Keng Chiang, Chishyan Liaw, Zeng-Chuan Wu
and Wen-Kai Tsai, “An Efficient Architecture of
Extended Linear Interpolation for Image
Processing”, Journal of Information Science
and Engineering, 26, pp. 631-648, 2010.
[12] E. Maeland, “On the Comparison of
Interpolation Methods”, IEEE Transactions on
Medical Imaging, Vol. 7, pp. 213-217,
September 1988.
[13] M.R. Smith and S.T. Nichols, “Efficient
Algorithms for Generating Interpolated
(Zoomed) MR Images”, Magnetic Resonance in
Medicine, Vol. 7, pp. 156-171, 1988.
[14] J. A. Parker, R. V. Kenyon and D. E. Troxel,
“Comparison of Interpolating Methods for
Image Resampling”, IEEE Trans. on Medical
Imaging, Vol. MI-2, pp. 31-39, 1983.
[15] K. Turkowski, “Filters for Common
Resampling Tasks”, Graphics Gems I,
Academic Press, pp. 147-165, 1990.
[16] D.F. Watson, “Contouring: A Guide to the
Analysis and Display of Spatial Data”, New
York: Pergamon Press, 1992.
[17] P. Thevenaz, T. Blu and M. Unser, “Image
Interpolation and Resampling”, in Handbook of
Medical Imaging, Processing and Analysis,
pp. 393-420, 2000.
[18] M. Unser, A. Aldroubi and M. Eden, “B-Spline
Signal Processing: Part I—Theory”,
IEEE Transactions on Signal Processing, Vol.
41, pp. 821-832, February 1993.
[19] M. Unser, A. Aldroubi and M. Eden, “B-Spline
Signal Processing: Part II—Efficient Design
and Applications”, IEEE Transactions on
Signal Processing, Vol. 41, pp. 834-848,
February 1993.
[20] R.G. Keys, “Cubic Convolution Interpolation
for Digital Image Processing”, IEEE
Transactions on Acoustics, Speech, and Signal
Processing, Vol. ASSP-29, pp. 1153-1160, 1981.
[21] T. Narsimulu, B. Rama Raj, Ch. Rajitha Laxmi,
“High-Resolution Video Scaling using Cubic
B-Spline Approach”, International Journal of
Computer Applications (0975-8887), Vol. 17,
pp. 8-16, March 2011.
[22] Guan-Hao Chen, Chun-Ling Yang and Sheng-
Li Xie, “Gradient-Based Structural Similarity
for Image Quality Assessment”, IEEE
International Conference on Image Processing,
ISSN: 1522-4880, pp. 2929-2932, Oct 2006.
[23] Du Sic Yoo, Joonyoung Chang, ChulHee Park
and Moon Gi Kang, “Video Resampling
Algorithm for Simultaneous Deinterlacing and
Image Upscaling with Reduced Jagged Edge
Artifacts”, EURASIP Journal on Advances in
Signal Processing, doi: 10.1186/1687-6180-
2013-118, 2013.
[24] X. Li and M. T. Orchard, “New Edge-Directed
Interpolation”, IEEE Trans. on Image
Processing, Vol. 10, No. 10, pp. 1521-1527,
October 2001.
[25] Y. Wang and S. Mitra, “Edge Preserved
Image Zooming”, Proc. of European Signal
Processing Conference, EUSIPCO-88, pp.
1445-1448, Grenoble, France, 1988.
[26] S. Thurnhofer and S. Mitra, “Edge-Enhanced
Image Zooming”, Optical Engineering, 35(7),
pp. 1862-1869, 1996.
[27] K. P. Hong, J. K. Paik, H. J. Kim, and C. H.
Lee, “An Edge-Preserving Image Interpolation
System for a Digital Camcorder”, IEEE Trans.
on Consumer Electronics, Vol. 42, No. 3, pp.
279-284, August 1996.
[28] K. Jensen and D. Anastassiou, “Spatial
Resolution Enhancement of Images Using
Nonlinear Interpolation”, Proceedings of
International Conference on Acoustics, Speech
and Signal Processing, ICASSP-90, pp. 2045-
2048, Albuquerque, NM, 1990.
[29] C. H. Park, J. Chang, M. G. Kang, “Kernel-
based Image Upscaling Method with Shooting
Artifact Reduction”, Proc. SPIE 8655, Image
Processing: Algorithms and Systems XI, doi:
10.1117/12.2003326, February 19, 2013.