Abstract
In conventional lossless color image compression methods, the pixels of each color component are interleaved, predicted, and finally encoded. In this paper, we propose a lossless color image compression method that uses hierarchical prediction of the chrominance-channel pixels and encodes them with modified Huffman coding. The R, G, and B channels of the input image are transformed into the YCuCv color space using an improved reversible color transform. A conventional lossless image coder such as CALIC then compresses the luminance channel Y, while the chrominance channels Cu and Cv are encoded with hierarchical decomposition and directional prediction. Finally, effective context modeling of the prediction residuals is adopted. Experimental results show that the proposed method improves compression performance over existing methods.
Keywords: Lossless color image compression, hierarchical prediction, reversible color transform, modified Huffman coding.
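The paper's improved reversible color transform is not spelled out above, so as a hedged illustration the sketch below implements the widely used JPEG2000-style reversible transform (Y = floor((R + 2G + B)/4), Cu = B - G, Cv = R - G), which the YCuCv notation suggests as a plausible baseline; the function names and the round-trip test are ours, not the paper's.

```python
import numpy as np

def rct_forward(rgb):
    """JPEG2000-style reversible color transform, a stand-in for the
    paper's improved RCT. rgb: (H, W, 3) integer array."""
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    y = (r + 2 * g + b) >> 2      # floor((R + 2G + B) / 4)
    cu = b - g                    # chrominance differences
    cv = r - g
    return y, cu, cv

def rct_inverse(y, cu, cv):
    """Exact inverse: G = Y - floor((Cu + Cv) / 4), then R = Cv + G, B = Cu + G."""
    g = y - ((cu + cv) >> 2)      # >> floors toward -inf, matching the forward pass
    r = cv + g
    b = cu + g
    return np.stack([r, g, b], axis=-1)

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.int32)
assert np.array_equal(rct_inverse(*rct_forward(rgb)), rgb)   # lossless round trip
```

Because every step is integer arithmetic and G is recovered exactly from Y, Cu, and Cv, the round trip is lossless, which is the defining property any reversible color transform in this pipeline must have.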
Hierarchical Approach for Total Variation Digital Image Inpainting (IJCSEA Journal)
The art of restoring a damaged image so that the repair is undetectable is known as inpainting. Manual inpainting is most often a very time-consuming process; digitizing the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions using the information surrounding them. Existing methods perform very well when the region to be reconstructed is very small, but fail to reconstruct properly as the area increases. This paper describes a hierarchical method in which the area to be inpainted is reduced over multiple levels and the Total Variation (TV) method is used to inpaint at each level. The algorithm performs better than existing algorithms such as nearest-neighbor interpolation, inpainting through blurring, and Sobolev inpainting.
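As a rough illustration of the TV step applied at each level (not the authors' code), the sketch below runs a few iterations of total-variation flow restricted to a masked region; `mask` marks the pixels to reconstruct, and the iteration count and step size are arbitrary choices.

```python
import numpy as np

def tv_inpaint(img, mask, iters=500, dt=0.1, eps=1e-6):
    """Minimal TV-flow inpainting sketch: evolve masked pixels along the
    curvature div(grad u / |grad u|) while leaving known pixels fixed."""
    u = img.astype(float).copy()
    u[mask] = u[~mask].mean()                 # crude initialization of the hole
    for _ in range(iters):
        uy, ux = np.gradient(u)               # gradients along rows, columns
        mag = np.sqrt(ux ** 2 + uy ** 2) + eps
        div = np.gradient(uy / mag, axis=0) + np.gradient(ux / mag, axis=1)
        u[mask] += dt * div[mask]             # update only the unknown pixels
    return u
```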
Interpolation Technique using Non Linear Partial Differential Equation with E... (CSCJournals)
With the widespread use of images for communication, image zooming plays an important role. Image zooming is the process of enlarging an image by some magnification factor, which can be integer or non-integer. Applying a zooming algorithm to an image generally produces aliasing, edge blurring, and other artifacts. This paper focuses on reducing these artifacts and presents an image zooming algorithm that combines a non-linear fourth-order PDE method with an edge-directed bi-cubic algorithm. The proposed method uses the high-resolution image obtained from edge-directed bi-cubic interpolation to construct the zoomed image. The technique preserves edges and minimizes blurring and staircase effects in the zoomed image. Image quality after zooming is evaluated with an objective assessment.
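The PDE stage is specific to the paper, but the interpolation stage it refines is standard; a minimal baseline (plain cubic-spline upscaling, not the edge-directed variant) using SciPy might look like this.

```python
import numpy as np
from scipy.ndimage import zoom

img = np.random.rand(64, 64)          # stand-in grayscale image
upscaled = zoom(img, 2.0, order=3)    # order=3: cubic-spline (bicubic-like) baseline
print(upscaled.shape)                 # (128, 128)
```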
Use of horizontal and vertical edge processing technique to improve number pl... (eSAT Journals)
Abstract
Automatic Number Plate Recognition (ANPR) is a mass-surveillance system that captures images of vehicles and identifies their license numbers. ANPR can support the detection of stolen vehicles, which can be carried out effectively by ANPR systems situated on highways. This paper proposes a recognition technique in which the vehicle plate image is captured by digital cameras and processed to acquire the number plate data. A rear image of a vehicle is captured and processed using several algorithms.
Keywords: Image Processing, Histogram, Skew Correction, Segmentation
MONOGENIC SCALE SPACE BASED REGION COVARIANCE MATRIX DESCRIPTOR FOR FACE RECO... (cscpconf)
In this paper, we present a new face recognition algorithm based on a region covariance matrix (RCM) descriptor computed in monogenic scale space. In the proposed model, energy information obtained using a monogenic filter represents each pixel at different scales to form a region covariance matrix descriptor for each face image during the training phase. An eigenvalue-based distance measure computes the similarity between face images. Extensive experiments on the AT&T and YALE face databases reveal the performance of the monogenic scale space based region covariance matrix method, and a comparative analysis with the basic RCM method and the Gabor-based region covariance matrix method exhibits the superiority of the proposed technique.
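The monogenic filtering itself is paper-specific, but the core RCM machinery is standard; a sketch using simple intensity/gradient features (the paper uses monogenic energies instead) and the usual log-generalized-eigenvalue covariance distance, with our own naming, follows.

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(patch):
    """Covariance of per-pixel feature vectors [x, y, I, |Ix|, |Iy|] over a
    patch. (The paper uses monogenic-scale-space energies as features.)"""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(ix).ravel(), np.abs(iy).ravel()])
    return np.cov(feats)                      # 5x5 region covariance matrix

def rcm_distance(c1, c2):
    """Foerstner metric: sqrt of the summed squared log generalized eigenvalues,
    a common eigenvalue-based distance between covariance descriptors."""
    lam = eigh(c1, c2, eigvals_only=True)     # generalized eigenvalues of (C1, C2)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```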
An efficient VLSI architecture for barrel distortion correction in surveillance camera images is presented. The distortion correction model is based on the least-squares estimation method. To reduce computational complexity, an odd-order polynomial is used to approximate the back-mapping expansion polynomial. Through algebraic transformation, the approximated polynomial becomes a monomial form that can be evaluated with Horner's algorithm. The proposed VLSI architecture achieves a frequency of 218 MHz with 1490 logic elements in 0.18 µm technology. Compared with previous techniques, the circuit reduces hardware cost and memory usage.
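A sketch of the back-mapping idea in our notation, with illustrative coefficients (not the paper's): for an odd-order polynomial r_u = k1·r + k3·r³ + k5·r⁵, substituting s = r² turns the bracketed factor into a polynomial in s that Horner's rule evaluates with a short multiply-add chain, which is what keeps the hardware cheap.

```python
def undistorted_radius(r, coeffs=(1.0, 0.08, 0.002)):
    """Evaluate r_u = r * (k1 + k3*s + k5*s^2) with s = r^2 via Horner's rule.
    Coefficients are illustrative placeholders, not taken from the paper."""
    k1, k3, k5 = coeffs
    s = r * r
    return r * (k1 + s * (k3 + s * k5))   # two fused multiply-adds per radius
```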
A digital image forensic approach to detect whether an image has been seam carved is investigated herein. Seam carving is a content-aware image retargeting technique that preserves the semantically important content of an image while resizing it. The same technique, however, can be used for malicious tampering of an image. Eighteen energy, seam, and noise related features defined by Ryu [1] are produced using Sobel's [2] gradient filter and Rubinstein's [3] forward energy criterion enhanced with image gradients. An extreme gradient boosting classifier [4] is trained to make the final decision. Experimental results show that the proposed approach improves detection accuracy by 5 to 10% for seam-carved images with different scaling ratios when compared with other state-of-the-art methods.
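The Ryu feature set itself is involved, but the energy map the features are derived from is just a gradient magnitude; a hedged sketch of the basic Sobel energy used throughout seam-carving pipelines:

```python
import numpy as np
from scipy.ndimage import sobel

def energy_map(gray):
    """e(x, y) = |dI/dy| + |dI/dx| via Sobel filters: the basic seam-carving
    energy from which forensic features like those in [1] are computed."""
    gray = gray.astype(float)
    return np.abs(sobel(gray, axis=0)) + np.abs(sobel(gray, axis=1))
```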
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of Computer Vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and making a perspective correction that is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image, and a method based on plane homography and transformation is used to make the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters; the frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
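The geometric-context segmentation is the paper's contribution, but the final correction step is the standard plane homography; a minimal OpenCV sketch (the image and corner coordinates are made up) is below.

```python
import cv2
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)             # stand-in for the input photo
src = np.float32([[105, 80], [520, 60], [560, 420], [90, 450]])  # detected corners (made up)
w, h = 450, 380                                     # chosen output rectangle
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
H = cv2.getPerspectiveTransform(src, dst)           # 3x3 plane homography
rectified = cv2.warpPerspective(img, H, (w, h))     # perspective-corrected view
```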
Abstract: Many applications such as robot navigation, defense, medical imaging, and remote sensing perform various processing tasks that can be carried out more easily when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, both large and small scale, from the source images, using guided filtering for the extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance across all sets of images, and the results clearly indicate the feasibility of the proposed approach.
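A compressed sketch of the two-scale idea, with a box filter standing in for the guided filter and a simple max-abs rule standing in for the pyramidal fusion (both substitutions are ours):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_two_scale(a, b, size=15):
    """Split each source into base (large-scale) and detail (small-scale)
    layers, then fuse: average the bases, keep the stronger detail."""
    base_a, base_b = uniform_filter(a, size), uniform_filter(b, size)
    det_a, det_b = a - base_a, b - base_b
    fused_detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return (base_a + base_b) / 2 + fused_detail

a, b = np.random.rand(64, 64), np.random.rand(64, 64)   # stand-in source images
print(fuse_two_scale(a, b).shape)
```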
Contour Line Tracing Algorithm for Digital Topographic Maps (CSCJournals)
Topographic maps contain information on roads, contours, landmarks, land covers, rivers, etc. For any remote sensing and GIS based project, creating a database using digitization techniques is a tedious and time-consuming process, especially for contour tracing. Contour lines are very important information that these maps provide: they are mainly used for determining the slope of landforms or rivers, and also for generating the Digital Elevation Model (DEM) for 3D surface generation from satellite imagery or aerial photographs. This paper suggests an algorithm for tracing contour lines automatically from contour maps extracted from topographical sheets and creating a database. In our approach, we propose a modified Moore's Neighbor contour tracing algorithm to trace all contours in the given topographic maps. The proposed approach has been tested on several topographic maps, provides satisfactory results, and takes less time to trace contour lines compared with other existing algorithms.
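For orientation, here is a bare-bones version of classical Moore-neighbor tracing (our implementation with a simplified stop criterion; the paper's modification is not reproduced): walk clockwise around the current boundary pixel's 8-neighborhood, starting from the pixel you backtracked from, until the next foreground pixel is found.

```python
import numpy as np

# 8-neighborhood (row, col) offsets in clockwise order, starting from west
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def moore_trace(img):
    """Trace the outer boundary of the first foreground blob in a 2-D bool
    image. Uses a simplified stop criterion (return to the start pixel);
    the classical algorithm uses Jacob's criterion."""
    rows, cols = np.nonzero(img)
    if rows.size == 0:
        return []
    start = (int(rows[0]), int(cols[0]))     # first foreground pixel, raster order
    prev = (start[0], start[1] - 1)          # entered from the west (background)
    cur, boundary = start, [start]
    while True:
        k = OFFSETS.index((prev[0] - cur[0], prev[1] - cur[1]))
        for i in range(1, 9):                # walk clockwise from the backtrack pixel
            dr, dc = OFFSETS[(k + i) % 8]
            cand = (cur[0] + dr, cur[1] + dc)
            inside = 0 <= cand[0] < img.shape[0] and 0 <= cand[1] < img.shape[1]
            if inside and img[cand]:
                pr, pc = OFFSETS[(k + i - 1) % 8]
                prev, cur = (cur[0] + pr, cur[1] + pc), cand
                break
        else:
            return boundary                  # isolated pixel, nothing to walk
        if cur == start:
            return boundary
        boundary.append(cur)

blob = np.zeros((8, 8), bool); blob[2:6, 2:7] = True
print(moore_trace(blob))                     # clockwise boundary of the rectangle
```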
CONTRAST ENHANCEMENT TECHNIQUES USING HISTOGRAM EQUALIZATION METHODS ON COLOR... (IJCSEA Journal)
Histogram equalization (HE) is a simple and widely used image contrast enhancement technique. Its basic disadvantage is that it changes the brightness of the image, and various HE methods have been proposed to overcome this drawback; these methods preserve the brightness of the output image but do not produce a natural look. To overcome this problem, the present paper uses Multi-HE methods, which decompose the image into several sub-images and apply the classical HE method to each sub-image. The algorithm is applied to various images and analysed using both objective and subjective assessment.
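For reference, classical HE, the building block the Multi-HE methods apply per sub-image, is a one-lookup-table operation (our minimal version):

```python
import numpy as np

def equalize(img):
    """Classical histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

img = np.random.randint(40, 120, (64, 64), dtype=np.uint8)   # low-contrast input
print(equalize(img).min(), equalize(img).max())              # spread toward 0..255
```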
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi... (IJERA Editor)
Lossless image compression is needed in many fields like medical imaging, telemetry, geophysics, remote sensing, and other applications that require an exact replica of the original image and cannot tolerate loss of information. In this paper, a near-lossless image compression algorithm for color images is proposed, based on a row-by-row classifier with encoding schemes such as Lempel-Ziv-Welch (LZW), Huffman, and Run Length Encoding (RLE). The algorithm divides the image into three parts, R, G, and B, applies row-by-row classification to each part, and records the result of this classification in a mask image. After classification, the image data is decomposed into two sequences each for R, G, and B, and the mask image is hidden in them. These sequences are encoded using different encoding schemes like LZW, Huffman, and RLE. An exhaustive comparative analysis is performed to evaluate these techniques, which reveals that the pro…
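Of the three entropy coders compared, RLE is the simplest to show; a minimal sketch of a run-length stage with a lossless round-trip check (LZW and Huffman are longer and omitted):

```python
def rle_encode(data):
    """Run-length encode a byte sequence into [value, count] pairs
    (counts capped at 255 so each pair fits in two bytes)."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b and runs[-1][1] < 255:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs):
    return bytes(b for b, n in runs for _ in range(n))

row = b"\x07\x07\x07\x07\x01\x01\x05"
assert rle_decode(rle_encode(row)) == row   # lossless round trip
```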
Face Alignment Using Active Shape Model And Support Vector Machine (CSCJournals)
The Active Shape Model (ASM) is one of the most popular local texture models for face alignment. It is applied in many fields, such as locating facial features in images and face synthesis. However, experimental results show that the accuracy of the classical ASM is not high for some applications. This paper suggests several improvements to the classical ASM to increase its performance in face alignment. Our four major improvements are: i) building a model that combines the Sobel filter with the 2-D profile when searching for the face in an image; ii) applying the Canny algorithm for edge enhancement; iii) using a Support Vector Machine (SVM) to classify facial landmarks, in order to determine their exact locations in support of ASM; and iv) automatically adjusting the 2-D profile in the multi-level model based on the size of the input image. Experimental results on the CalTech face database and the Technical University of Denmark database (imm_face) show that the proposed improvements lead to far better performance.
Unconstrained Optimization Method to Design Two Channel Quadrature Mirror Fil... (CSCJournals)
This paper proposes an efficient method for the design of a two-channel quadrature mirror filter (QMF) bank for subband image coding. The choice of filter bank is important as it affects image quality as well as system design complexity. The design problem is formulated as a weighted sum of the reconstruction error in the time domain and the passband and stopband energies of the low-pass analysis filter of the filter bank. The objective function is minimized directly using a nonlinear unconstrained method. Experimental results on images show that the proposed method performs better than existing methods. The impact of filter characteristics such as stopband attenuation, stopband edge, and filter length on the quality of the reconstructed images is also investigated.
Image Enhancement Using Filter To Adjust Dynamic Range of Pixels (IJERA Editor)
In this paper, we propose a novel algorithm for image enhancement in the compressed (DCT) domain. Although a few algorithms have been reported for enhancing images in the DCT domain, the proposed algorithm differs from previous ones in that it enhances both dark and bright regions of an image equally well. In addition, it excels at enhancing the chromatic as well as the luminance components. Since the algorithm works in the DCT domain, computational complexity is reduced considerably.
Effect of Similarity Measures for CBIR using Bins Approach (CSCJournals)
This paper elaborates on the selection of a suitable similarity measure for content-based image retrieval. It contains the analysis done after applying the Minkowski distance of orders one through five, and it also explains the effective use of the correlation distance, expressed as the angle cos θ between two vectors. The feature vector database prepared for this experimentation is based on extracting the first four moments into 27 bins formed by partitioning the equalized histograms of the R, G, and B planes of an image into three parts, giving a feature vector of dimension 27. The image database used in this work includes 2000 BMP images from 20 different classes. Three feature vector databases of the four moments, namely mean, standard deviation, skewness, and kurtosis, are prepared for the three color intensities (R, G, and B) separately. The system then enters the second phase of comparing the query image and database images, which makes use of the set of similarity measures mentioned above. Results obtained using all distance measures are evaluated using three parameters: PRCP, LSRR, and Longest String. The results are then refined and narrowed by combining the three different results for the three colors R, G, and B using criterion 3. Analysis of these results with respect to the similarity measures shows the effectiveness of lower orders of the Minkowski distance compared to higher orders, and the correlation distance also proved its worth for these CBIR results.
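The two distance families compared are standard; a short sketch with stand-in 27-dimensional feature vectors:

```python
import numpy as np

def minkowski(q, d, p):
    """Minkowski distance of order p between feature vectors q and d."""
    return np.sum(np.abs(q - d) ** p) ** (1.0 / p)

def correlation_angle(q, d):
    """Cosine of the angle between two feature vectors (the 'cos theta' measure)."""
    return np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))

q, d = np.random.rand(27), np.random.rand(27)   # stand-in 27-bin feature vectors
for p in range(1, 6):                           # orders one to five, as in the paper
    print(p, minkowski(q, d, p))
print("cos theta:", correlation_angle(q, d))
```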
Content Based Image Retrieval Using Full Haar Sectorization (CSCJournals)
Content based image retrieval (CBIR) deals with the retrieval of relevant images from a large image database and works on features extracted from the images. In this paper we use the idea of sectorizing Full Haar Wavelet transformed images to extract features into 4, 8, 12, and 16 sectors. The paper proposes two planes to be sectored, the forward (even) plane and the backward (odd) plane. The similarity measure is also an essential part of CBIR, since it determines the closeness of the query image to the database images; we use two similarity measures, Euclidean distance (ED) and sum of absolute differences (AD). The overall retrieval performance of the algorithm is measured by the average precision-recall crossover point and by LIRS and LSRR. The paper compares the performance of the methods with respect to the type of plane, number of sectors, type of similarity measure, and the values of LIRS and LSRR.
Image Inpainting Using Cloning Algorithms (CSCJournals)
Image inpainting has become an essential and crucial topic in image recovery research. The objective is to restore an image using the surrounding information, or to modify an image in a way that looks natural to the viewer; the process involves transporting and diffusing image information. In this paper, a cloning concept is used to inpaint an image, with a multiscale transformation method used for the cloning process. Results are compared with conventional methods, namely the Taylor expansion method, Poisson editing, and Shepard's method. Experimental analysis verifies the better results and shows that Shepard's method with multiscale transformation not only restores small-scale damage but also large damaged areas, and is useful for duplicating image information within an image.
Novel algorithm for color image demosaicking using Laplacian mask (eSAT Journals)
Abstract
Images in any digital camera are formed with the help of a monochrome sensor, which can be either a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS). Interpolation is the basis of any demosaicking process. The input for interpolation is the output of the Bayer Color Filter Array (CFA), a mosaic-like lattice structure: the Bayer CFA samples the channel information of the R, G, and B values separately, assigning only one channel component per pixel. To generate a complete color image, three channel values are required per pixel, and the missing samples are found by interpolation, a technique for estimating missing values from discrete observed samples scattered over the space. Demosaicking, or de-Bayering, is thus an algorithm for finding the missing values from the mosaic-patterned output of the Bayer CFA. Interpolation introduces a few artifacts, such as a zippering effect at the edges. This paper introduces a demosaicking algorithm that outperforms existing demosaicking algorithms; its main aim is to accurately estimate the green component. The standard mechanism used to compare performance is PSNR (Peak Signal to Noise Ratio), the image dataset for comparison was the Kodak image dataset, and the algorithm was implemented in MATLAB 2009b.
Keywords: Demosaicking, Interpolation, Bayer CFA, Laplacian Mask, Correlation.
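The paper's Laplacian-mask estimator is not reproduced here; as the baseline it improves on, classic bilinear interpolation of the green channel on a Bayer mosaic (green sites assumed on a checkerboard) can be sketched as:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_green(mosaic):
    """Fill missing green samples of a Bayer CFA by averaging the four
    green neighbors. Assumes green sites on the (r + c) even checkerboard."""
    h, w = mosaic.shape
    green_mask = (np.indices((h, w)).sum(axis=0) % 2 == 0)   # checkerboard
    green = np.where(green_mask, mosaic.astype(float), 0.0)
    kernel = np.array([[0, 1, 0],
                       [1, 4, 1],
                       [0, 1, 0]]) / 4.0
    # At green sites the kernel returns the sample itself (4/4 times it,
    # neighbors are zero); elsewhere it averages the four green neighbors.
    return convolve(green, kernel, mode="mirror")
```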
ICVG: Practical Constructive Volume Geometry for Indirect Visualization (ijcga)
The task of creating detailed three-dimensional virtual worlds for interactive entertainment software can be simplified by using Constructive Solid Geometry (CSG) techniques. CSG allows artists to combine primitive shapes, visualized through polygons, into complex and believable scenery. Constructive Volume Geometry (CVG) is a superset of CSG that operates on volumetric data, which consists of values recorded at constant intervals in three dimensions of space. To allow volumetric data to be integrated into existing frameworks, indirect visualization is performed by constructing and visualizing polygon meshes corresponding to the implicit surfaces in the volumetric data. The Indirect CVG (ICVG) algebra, which provides constructive volume geometry operators appropriate to volumetric data that will be indirectly visualized, is introduced. ICVG includes operations analogous to the union, difference, and intersection operators in the standard CVG algebra, as well as new oper…
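The ICVG operators are the paper's contribution, but the standard CVG union/intersection/difference they are analogous to reduce, on scalar opacity fields, to pointwise max/min; a tiny sketch on a voxel grid:

```python
import numpy as np

# Scalar volumes: values in [0, 1] record opacity at regular grid points.
def cvg_union(a, b):        return np.maximum(a, b)
def cvg_intersection(a, b): return np.minimum(a, b)
def cvg_difference(a, b):   return np.minimum(a, 1.0 - b)

def sphere(center, radius, grid):
    """Binary opacity field of a sphere sampled on a (N, N, N, 3) grid."""
    return (np.linalg.norm(grid - np.array(center), axis=-1) <= radius).astype(float)

grid = np.moveaxis(np.indices((32, 32, 32)), 0, -1).astype(float)
vol = cvg_difference(sphere((16, 16, 16), 12, grid),   # big sphere minus
                     sphere((20, 16, 16), 8, grid))    # a smaller one
```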
Lossless Image Compression Using Data Folding Followed By Arithmetic Coding (iosrjce)
Hierarchical prediction and context adaptive coding for lossless color image compression (LeMeniz Infotech)
A new lossless color image compression algorithm is proposed, based on hierarchical prediction and context-adaptive arithmetic coding. For lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method.
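A hedged sketch of the hierarchical decomposition described for the chrominance planes: split the plane into even and odd rows, encode the even-row subimage first, then predict each odd-row pixel from its vertical neighbors. The published scheme also switches adaptively between horizontal and vertical predictors; that selection logic is omitted here.

```python
import numpy as np

def hierarchical_residuals(chroma):
    """Split a chrominance plane into even/odd rows and return the even-row
    subimage plus the odd-row prediction residuals (vertical-average
    predictor only; the coder's adaptive directional selection is omitted)."""
    even = chroma[0::2, :].astype(int)       # encoded first, known to both sides
    odd = chroma[1::2, :].astype(int)
    upper = even[:odd.shape[0], :]           # even row just above each odd row
    lower = np.vstack([even[1:], even[-1:]])[:odd.shape[0], :]  # row below, edge-clamped
    pred = (upper + lower) // 2
    return even, odd - pred                  # subimage + residuals to entropy-code
```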
Images may contain different types of noise. Removing noise from an image is often the first step in image processing, and it remains a challenging problem despite the sophistication of recent research. This presentation describes an efficient image denoising scheme and the corresponding reconstruction, based on the Discrete Wavelet Transform (DWT) and the Inverse Discrete Wavelet Transform (IDWT).
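A minimal sketch of the DWT, threshold, IDWT pipeline using PyWavelets; the wavelet choice, level, and threshold value are illustrative, not taken from the presentation.

```python
import numpy as np
import pywt

def dwt_denoise(img, wavelet="db4", level=2, thresh=20.0):
    """Soft-threshold the detail coefficients of a 2-D DWT, then reconstruct."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]                          # keep the approximation band as-is
    for details in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in details))
    return pywt.waverec2(out, wavelet)
```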
Fourier Transform: its power and limitations – Short Time Fourier Transform – The Gabor Transform – Discrete Time Fourier Transform and filter banks – Continuous Wavelet Transform – Wavelet Transform: ideal case – Perfect Reconstruction Filter Banks and wavelets – Recursive multi-resolution decomposition – Haar Wavelet – Daubechies Wavelet.
Retinal blood vessel extraction and optical disc removal (eSAT Journals)
Abstract
Retinal image processing is an important process by which blood vessels can be detected, helping to identify diabetic retinopathy at an early stage; this is valuable because the symptoms go unnoticed until eyesight blurs or blindness sets in, and the condition mainly occurs in people suffering from severe diabetes. By extracting the blood vessels with the algorithm, we can see which blood vessels are actually damaged, continuously monitor the situation, and protect eyesight.
Keywords: field of view, retinopathy, thresholding, morphology, Otsu's algorithm, MATLAB.
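Since Otsu's algorithm is named in the keywords, here is a compact version of its between-class-variance threshold search (our implementation, not the paper's MATLAB code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                         # class-0 probability
    mu = np.cumsum(p * np.arange(256))           # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))            # ignore the 0/0 ends
```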
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS... (sipij)
This paper focuses on an improved edge model based on the analysis of Curvelet coefficients. The Curvelet transform is a powerful tool for multiresolution representation of objects with anisotropic edges. Curvelet coefficient contributions have been analyzed using the Scale Invariant Feature Transform (SIFT), commonly used to study local structure in images. The permutation of Curvelet coefficients from the original image and the edge image obtained from a gradient operator is used to improve the original edges. Experimental results show that this method brings out details on edges as the decomposition scale increases.
An Efficient Image Encomp Process Using LFSR (IJTET Journal)
A lossless color image compression algorithm based on hierarchical prediction and Context Adaptive Lossless Image Compression (CALIC) is presented. The RGB image is decorrelated by a Reversible Color Transformation (RCT), whose output is a luminance component Y and chrominance components (Cu, Cv). The Y component is encoded by a conventional technique such as a raster-scan prediction method, while the chrominance images (Cu, Cv) are encoded by hierarchical prediction, in which row-wise and column-wise decompositions are performed. Arithmetic coding is applied to the predicted values to obtain the compressed image, and security is then added by encrypting the images with an LFSR.
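A sketch of an LFSR encryption step under assumed parameters (a 16-bit Fibonacci LFSR with taps and seed we chose; the abstract does not give its configuration): the register generates a keystream that is XORed with the image bytes, and applying the same call again decrypts.

```python
import numpy as np

def lfsr_keystream(seed, nbytes, taps=(15, 13, 12, 10), width=16):
    """Fibonacci LFSR keystream; taps/width/seed are illustrative choices."""
    state, out = seed, []
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            fb = 0
            for t in taps:                   # XOR of the tapped bits
                fb ^= (state >> t) & 1
            byte = (byte << 1) | (state & 1)
            state = (state >> 1) | (fb << (width - 1))
        out.append(byte)
    return np.array(out, dtype=np.uint8)

def lfsr_xor(img_bytes, seed=0xACE1):
    """XOR image bytes with the keystream; applying twice restores the input."""
    return img_bytes ^ lfsr_keystream(seed, img_bytes.size)

data = np.random.randint(0, 256, 1000, dtype=np.uint8)
assert np.array_equal(lfsr_xor(lfsr_xor(data)), data)
```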
Dynamic texture based traffic vehicle monitoring system (eSAT Journals)
Abstract
Dynamic textures are functions of both space and time that exhibit spatio-temporal stationarity; they have spatially repetitive patterns that vary with time. In this paper, the importance of the phase spectrum of signals is exploited and a novel method for vehicle monitoring is proposed with the help of Fourier analysis, synthesis, and image processing. Methods like Doppler radar or GPS navigation are commonly used for tracking; the proposed image-based approach has the added advantage that a clear image of the object (vehicle) can be used for future reference, such as proof of incidence or identification of the owner and registration number.
Keywords: Fourier Transform, dynamic texture, phase spectrum
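The pipeline is built on Fourier analysis and synthesis; a standard way to expose the structure that phase carries, reconstructing a frame from its phase alone with unit magnitude, can be sketched as:

```python
import numpy as np

def phase_only(frame):
    """Keep only the Fourier phase of a frame and synthesize it back."""
    spectrum = np.fft.fft2(frame.astype(float))
    phase = np.exp(1j * np.angle(spectrum))   # unit magnitude, original phase
    return np.real(np.fft.ifft2(phase))
```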
Contrast enhancement using various statistical operations and neighborhood pr... (sipij)
Histogram Equalization is a simple and effective contrast enhancement technique. In spite of its popularity, Histogram Equalization still has some limitations: it produces artifacts and unnatural images, and local details are not considered. Due to these limitations, many other equalization techniques have been derived from it with various upgrades. In the proposed method, statistics play an important role in image processing: statistical operations are applied to the image to obtain the desired result, such as manipulation of brightness and contrast. Thus, a novel algorithm using statistical operations and neighborhood processing is proposed in this paper, and both theory and experiment show the algorithm to be effective for contrast enhancement.
Hybrid medical image compression method using quincunx wavelet and geometric ... (journalBEEI)
The purpose of this article is to find an efficient and optimal compression method that reduces file size while retaining the information needed for good-quality processing and credible pathological reports, based on extracting the characteristic information contained in medical images. We propose a novel medical image compression method that combines a geometric active contour model with the quincunx wavelet transform. The method first localizes the region of interest, using level sets to isolate the parts containing the pathology for optimal reduction, and then applies the quincunx wavelet coupled with the set partitioning in hierarchical trees (SPIHT) algorithm. After testing several algorithms, we observed that the proposed method gives satisfactory results. The comparison of experimental results is based on standard evaluation parameters.
A Novel Super Resolution Algorithm Using Interpolation and LWT Based Denoisin... (CSCJournals)
Image capture techniques have limitations, and because of that we often get low-resolution (LR) images. Super Resolution (SR) is a process by which a high-resolution (HR) image can be generated from one or more LR images. Here we propose an SR algorithm that takes three shifted, noisy LR images and generates an HR image using a Lifting Wavelet Transform (LWT) based denoising method and an edge-guided interpolation algorithm based on directional filtering and data fusion.
Similar to: A lossless color image compression using an improved reversible color transform with hierarchical prediction and modified Huffman coding
Mechanical properties of hybrid fiber reinforced concrete for pavements (eSAT Journals)
Abstract
The effect of adding mono fibers and hybrid fibers on the mechanical properties of a concrete mixture is studied in the present investigation. Steel fibers at 1% and polypropylene fibers at 0.036% were added individually to the concrete mixture as mono fibers, and then together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile, and flexural strength were determined. The results show that hybrid fibers improve the compressive strength only marginally compared to mono fibers, whereas hybridization improves split tensile strength and flexural strength noticeably.
Keywords: Hybridization, mono fibers, steel fiber, polypropylene fiber, improvement in mechanical properties
Material management in construction – a case study (eSAT Journals)
Abstract
The objective of the present study is to understand the problems occurring in a company because of improper application of material management. In construction project operation, there is often a project cost variance in terms of material, equipment, manpower, subcontractors, overhead cost, and general conditions. Material is the main component in construction projects; therefore, if material management is not properly handled it will create a project cost variance. Project cost can be controlled by taking corrective actions against the cost variance. A methodology to diagnose and evaluate the procurement process involved in material management and to launch continuous improvement was therefore developed and applied. A thorough study was carried out, along with case studies, surveys, and interviews with professionals involved in this area. As a result, a methodology for diagnosis and improvement was proposed and tested on selected projects. The results obtained show that the main procurement problems relate to schedule delays and lack of the quality specified for the project. Preventing this situation often requires dedicating important resources, like money, personnel, and time, to monitoring and controlling the process. A great potential for improvement was detected when state-of-the-art technologies such as electronic mail, electronic data interchange (EDI), and analysis were applied to the procurement process; these helped eliminate the root causes of many of the problems detected.
Managing drought short term strategies in semi arid regions a case study (eSAT Journals)
Abstract
Drought management needs multidisciplinary action: interdisciplinary efforts among experts in various fields in drought-prone areas help achieve a tangible and permanent solution to this recurring problem. The Gulbarga district has a total area of around 16,240 sq. km and accounts for 8.45 per cent of the Karnataka state area. The district is situated at latitude 17º 19' 60" North and longitude 76º 49' 60" East, entirely on the Deccan plateau at a height of 300 to 750 m above MSL. With a sub-tropical, semi-arid climate, it is one of the drought-prone districts of Karnataka State, so drought management is very important for a district like Gulbarga. In this paper, various short-term strategies to mitigate the drought condition in the district are discussed.
Keywords: Drought, South-West monsoon, Semi-Arid, Rainfall, Strategies
Life cycle cost analysis of overlay for an urban road in Bangalore (eSAT Journals)
Abstract
Pavements are subjected to severe stresses and weathering effects from the day they are constructed and opened to traffic, mainly due to fatigue behavior and environmental effects. Pavement rehabilitation is therefore one of the most important components of an entire road system. This paper highlights the design of a concrete pavement with added mono fibers (polypropylene and steel) and hybrid fibres for a widened portion of an existing concrete pavement, along with various overlay alternatives for an existing bituminous pavement on an urban road in Bangalore. In addition, life cycle cost analyses of these sections are done by the Net Present Value (NPV) method to identify the most feasible option. The results show that, although the initial cost of constructing a concrete overlay is high, over a period of time it proves better than the bituminous overlay when the whole life cycle cost is considered. The economic analysis also indicates that, of the three fibre options, hybrid reinforced concrete would be economical without compromising pavement performance.
Keywords: Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
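For reference, the Net Present Value comparison used above reduces to discounting each year's cost; a one-function sketch with placeholder rates and costs (not the study's figures):

```python
def npv(costs, rate=0.08):
    """Net present value of yearly costs; costs[t] is the cost in year t."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))

# Placeholder comparison: high initial cost with low maintenance versus
# low initial cost with recurring resurfacing.
print(npv([100, 2, 2, 2, 2, 2]), npv([60, 10, 10, 10, 10, 10]))
```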
Laboratory studies of dense bituminous mixes ii with reclaimed asphalt materials (eSAT Journals)
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to provide a safe, efficient, and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate existing pavements and to build sustainable road infrastructure in India. These needs are today's burning issues and are the purpose of this study.
In the present study, samples of existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The mixtures were designed by the Marshall method as per the Asphalt Institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP). RAP material was blended with virgin aggregate such that all specimens tested met the Dense Bituminous Macadam-II (DBM-II) gradation as per the Ministry of Roads, Transport, and Highways (MoRT&H), and a cost analysis was carried out to assess the economics. Laboratory results and analysis showed that the use of recycled materials produced significant variability in Marshall stability, and the variability increased with RAP content. Savings can be realized from utilizing recycled materials: per the methodology, the reduction in total cost is 19% and 30% compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ... (eSAT Journals)
Abstract
Soil stabilization has proven to be one of the oldest techniques for improving soil properties. A literature review revealed that natural inorganic stabilizers are among the best options for soil stabilization. In this regard, an attempt has been made to evaluate the influence of the RBI-81 stabilizer on the properties of black cotton soil through laboratory investigations. Black cotton soil with varying percentages of RBI-81 (0, 0.5, 1, 1.5, 2, and 2.5 percent) was studied for moisture-density relationships and strength behaviour. The effect of curing period was also evaluated, as the literature clearly emphasizes the strength gain of soils stabilized with RBI-81 over time. The results show that the unconfined compressive strength of specimens treated with RBI-81 increased by approximately 250% for a curing period of 28 days compared to virgin soil, and the CBR value improved by approximately 400%. The studies indicated an increasing trend in soil strength with increasing percentage of RBI-81, suggesting its potential applications in soil stabilization.
Influence of reinforcement on the behavior of hollow concrete block masonry p... (eSAT Journals)
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to overcome its lack of tensile strength. Experimental and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block masonry prisms under compression and to predict the ultimate compressive failure strength. In the numerical program, a three-dimensional non-linear finite element (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behavior of the hollow concrete block, mortar, and grout, and Willam-Warnke's five-parameter failure theory is adopted to model the failure of the masonry materials. The comparison of numerical and experimental results indicates that the FE models can successfully capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, Grout, Compression failure, Finite element method, Numerical modeling
Influence of compaction energy on soil stabilized with chemical stabilizer (eSAT Journals)
Abstract
Increasing traffic and heavier wheel loads cause rapid deterioration of pavements, so there is a need to improve the density and strength of the soil subgrade and other pavement layers. In this study an attempt is made to improve the properties of locally available loamy soil using two approaches: i) increasing the compaction of the soil, and ii) treating the soil with a chemical stabilizer. Laboratory studies are carried out on both untreated and treated soil samples compacted with different compaction efforts. The studies show that an increase in compaction effort increases soil density; however, in soil treated with the chemical stabilizer, the rate of increase in density is not significant. The treated soil exhibits improvement in both strength and performance properties.
Keywords: compaction, density, subgrade stabilization, resilient modulus
Geographical information system (gis) for water resources managementeSAT Journals
Abstract
Water resources projects are inherited with overlapping and at times conflicting objectives. These projects are often of varied sizes
ranging from major projects with command areas of millions of hectares to very small projects implemented at the local level. Thus,
in all these projects there is seldom proper coordination which is essential for ensuring collective sustainability.
Integrated watershed development and management is the accepted answer but in turn requires a comprehensive framework that can
enable planning process involving all the stakeholders at different levels and scales is compulsory. Such a unified hydrological
framework is essential to evaluate the cause and effect of all the proposed actions within the drainage basins.
The present paper describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) which is
intended to meet the specific information needs of the various line departments of a typical State connected with water related aspects.
The HIS consist of a hydrologic information database coupled with tools for collating primary and secondary data and tools for
analyzing and visualizing the data and information. The HIS also incorporates hydrological model base for indirect assessment of
various entities of water balance in space and time. The framework would be maintained and updated to reflect fully the most
accurate ground truth data and the infrastructure requirements for planning and management.
Keywords: Hydrological Information System (HIS); WebGIS; Data Model; Web Mapping Services
Forest type mapping of bidar forest division, karnataka using geoinformatics ...eSAT Journals
Abstract
The study demonstrate the potentiality of satellite remote sensing technique for the generation of baseline information on forest types
including tree plantation details in Bidar forest division, Karnataka covering an area of 5814.60Sq.Kms. The Total Area of Bidar
forest division is 5814Sq.Kms analysis of the satellite data in the study area reveals that about 84% of the total area is Covered by
crop land, 1.778% of the area is covered by dry deciduous forest, 1.38 % of mixed plantation, which is very threatening to the
environmental stability of the forest, future plantation site has been mapped. With the use of latest Geo-informatics technology proper
and exact condition of the trees can be observed and necessary precautions can be taken for future plantation works in an appropriate
manner
Keywords:-RS, GIS, GPS, Forest Type, Tree Plantation
Factors influencing compressive strength of geopolymer concreteeSAT Journals
Abstract
To study effects of several factors on the properties of fly ash based geopolymer concrete on the compressive strength and also the
cost comparison with the normal concrete. The test variables were molarities of sodium hydroxide(NaOH) 8M,14M and 16M, ratio of
NaOH to sodium silicate (Na2SiO3) 1, 1.5, 2 and 2.5, alkaline liquid to fly ash ratio 0.35 and 0.40 and replacement of water in
Na2SiO3 solution by 10%, 20% and 30% were used in the present study. The test results indicated that the highest compressive
strength 54 MPa was observed for 16M of NaOH, ratio of NaOH to Na2SiO3 2.5 and alkaline liquid to fly ash ratio of 0.35. Lowest
compressive strength of 27 MPa was observed for 8M of NaOH, ratio of NaOH to Na2SiO3 is 1 and alkaline liquid to fly ash ratio of
0.40. Alkaline liquid to fly ash ratio of 0.35, water replacement of 10% and 30% for 8 and 16 molarity of NaOH and has resulted in
compressive strength of 36 MPa and 20 MPa respectively. Superplasticiser dosage of 2 % by weight of fly ash has given higher
strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li...eSAT Journals
Abstract
Composite Circular hollow Steel tubes with and without GFRP infill for three different grades of Light weight concrete are tested for
ultimate load capacity and axial shortening , under Cyclic loading. Steel tubes are compared for different lengths, cross sections and
thickness. Specimens were tested separately after adopting Taguchi’s L9 (Latin Squares) Orthogonal array in order to save the initial
experimental cost on number of specimens and experimental duration. Analysis was carried out using ANN (Artificial Neural
Network) technique with the assistance of Mini Tab- a statistical soft tool. Comparison for predicted, experimental & ANN output is
obtained from linear regression plots. From this research study, it can be concluded that *Cross sectional area of steel tube has most
significant effect on ultimate load carrying capacity, *as length of steel tube increased- load carrying capacity decreased & *ANN
modeling predicted acceptable results. Thus ANN tool can be utilized for predicting ultimate load carrying capacity for composite
columns.
Keywords: Light weight concrete, GFRP, Artificial Neural Network, Linear Regression, Back propagation, orthogonal
Array, Latin Squares
Experimental behavior of circular hsscfrc filled steel tubular columns under ...eSAT Journals
Abstract
This paper presents an outlook on experimental behavior and a comparison with predicted formula on the behaviour of circular
concentrically loaded self-consolidating fibre reinforced concrete filled steel tube columns (HSSCFRC). Forty-five specimens were
tested. The main parameters varied in the tests are: (1) percentage of fiber (2) tube diameter or width to wall thickness ratio (D/t
from 15 to 25) (3) L/d ratio from 2.97 to 7.04 the results from these predictions were compared with the experimental data. The
experimental results) were also validated in this study.
Keywords: Self-compacting concrete; Concrete-filled steel tube; axial load behavior; Ultimate capacity.
Evaluation of punching shear in flat slabseSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction which is critical. An attempt has been made to generate generalized design sheets which accounts both
punching shear due to gravity loads and unbalanced moments for cases (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in indiaeSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures and form entrance to reservoir outlet works. A parametric
study on dynamic behavior of circular cylindrical towers can be carried out to study the effect of depth of submergence, wall thickness
and slenderness ratio, and also effect on tower considering dynamic analysis for time history function of different soil condition and
by Goyal and Chopra accounting interaction effects of added hydrodynamic mass of surrounding and inside water in intake tower of
dam
Key words: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis,
Evaluation of operational efficiency of urban road network using travel time ...eSAT Journals
Abstract
Efficiency of the road network system is analyzed by travel time reliability measures. The study overlooks on an important measure of
travel time reliability and prioritizing Tiruchirappalli road network. Traffic volume and travel time were collected using license plate
matching method. Travel time measures were estimated from average travel time and 95th travel time. Effect of non-motorized vehicle
on efficiency of road system was evaluated. Relation between buffer time index and traffic volume was created. Travel time model has
been developed and travel time measure was validated. Then service quality of road sections in network were graded based on
travel time reliability measures.
Keywords: Buffer Time Index (BTI); Average Travel Time (ATT); Travel Time Reliability (TTR); Buffer Time (BT).
Estimation of surface runoff in nallur amanikere watershed using scs cn methodeSAT Journals
Abstract
The development of watershed aims at productive utilization of all the available natural resources in the entire area extending from
ridge line to stream outlet. The per capita availability of land for cultivation has been decreasing over the years. Therefore, water and
the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and
GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, Nallur
Amanikere watershed geographically lies between 110 38’ and 110 52’ N latitude and 760 30’ and 760 50’ E longitude with an area of
415.68 Sq. km. The thematic layers such as land use/land cover and soil maps were derived from remotely sensed data and overlayed
through ArcGIS software to assign the curve number on polygon wise. The daily rainfall data of six rain gauge stations in and around
the watershed (2001-2011) was used to estimate the daily runoff from the watershed using Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to know the variation of runoff potential with different
land use/land cover and with different soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
Estimation of morphometric parameters and runoff using rs & gis techniqueseSAT Journals
Abstract
Land and water are the two vital natural resources, the optimal management of these resources with minimum adverse environmental
impact are essential not only for sustainable development but also for human survival. Satellite remote sensing with geographic
information system has a pragmatic approach to map and generate spatial input layers of predicting response behavior and yield of
watershed. Hence, in the present study an attempt has been made to understand the hydrological process of the catchment at the
watershed level by drawing the inferences from moprhometric analysis and runoff. The study area chosen for the present study is
Yagachi catchment situated in Chickamaglur and Hassan district lies geographically at a longitude 75⁰52’08.77”E and
13⁰10’50.77”N latitude. It covers an area of 559.493 Sq.km. Morphometric analysis is carried out to estimate morphometric
parameters at Micro-watershed to understand the hydrological response of the catchment at the Micro-watershed level. Daily runoff
is estimated using USDA SCS curve number model for a period of 10 years from 2001 to 2010. The rainfall runoff relationship of the
study shows there is a positive correlation.
Keywords: morphometric analysis, runoff, remote sensing and GIS, SCS - method
-
Effect of variation of plastic hinge length on the results of non linear anal...eSAT Journals
Abstract The nonlinear Static procedure also well known as pushover analysis is method where in monotonically increasing loads are applied to the structure till the structure is unable to resist any further load. It is a popular tool for seismic performance evaluation of existing and new structures. In literature lot of research has been carried out on conventional pushover analysis and after knowing deficiency efforts have been made to improve it. But actual test results to verify the analytically obtained pushover results are rarely available. It has been found that some amount of variation is always expected to exist in seismic demand prediction of pushover analysis. Initial study is carried out by considering user defined hinge properties and default hinge length. Attempt is being made to assess the variation of pushover analysis results by considering user defined hinge properties and various hinge length formulations available in literature and results compared with experimentally obtained results based on test carried out on a G+2 storied RCC framed structure. For the present study two geometric models viz bare frame and rigid frame model is considered and it is found that the results of pushover analysis are very sensitive to geometric model and hinge length adopted. Keywords: Pushover analysis, Base shear, Displacement, hinge length, moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for the road construction is a serious problem to procure materials. Hence
recycling or reuse of material is beneficial. On emphasizing development in sustainable construction in the present era, recycling of
asphalt pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations reclaimed asphalt
pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) was used. Foundry waste was used as a replacement to
conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP.
These test results were compared with conventional mixes and asphalt concrete mixes with complete binder extracted RAP
aggregates. Mix design was carried out by Marshall Method. The Marshall Tests indicated highest stability values for asphalt
concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with increased in RAP in AC mixes. The Indirect
Tensile Strength (ITS) for AC mixes with RAP also was found to be higher when compared to conventional AC mixes at 300C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
An Approach to Detecting Writing Styles Based on Clustering Techniquesambekarshweta25
An Approach to Detecting Writing Styles Based on Clustering Techniques
Authors:
-Devkinandan Jagtap
-Shweta Ambekar
-Harshit Singh
-Nakul Sharma (Assistant Professor)
Institution:
VIIT Pune, India
Abstract:
This paper proposes a system to differentiate between human-generated and AI-generated texts using stylometric analysis. The system analyzes text files and classifies writing styles by employing various clustering algorithms, such as k-means, k-means++, hierarchical, and DBSCAN. The effectiveness of these algorithms is measured using silhouette scores. The system successfully identifies distinct writing styles within documents, demonstrating its potential for plagiarism detection.
Introduction:
Stylometry, the study of linguistic and structural features in texts, is used for tasks like plagiarism detection, genre separation, and author verification. This paper leverages stylometric analysis to identify different writing styles and improve plagiarism detection methods.
Methodology:
The system includes data collection, preprocessing, feature extraction, dimensional reduction, machine learning models for clustering, and performance comparison using silhouette scores. Feature extraction focuses on lexical features, vocabulary richness, and readability scores. The study uses a small dataset of texts from various authors and employs algorithms like k-means, k-means++, hierarchical clustering, and DBSCAN for clustering.
Results:
Experiments show that the system effectively identifies writing styles, with silhouette scores indicating reasonable to strong clustering when k=2. As the number of clusters increases, the silhouette scores decrease, indicating a drop in accuracy. K-means and k-means++ perform similarly, while hierarchical clustering is less optimized.
Conclusion and Future Work:
The system works well for distinguishing writing styles with two clusters but becomes less accurate as the number of clusters increases. Future research could focus on adding more parameters and optimizing the methodology to improve accuracy with higher cluster values. This system can enhance existing plagiarism detection tools, especially in academic settings.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
A Lossless Color Image Compression Using an Improved Reversible Color Transform with Hierarchical Prediction and Modified Huffman Coding

IJRET: International Journal of Research in Engineering and Technology, eISSN: 2319-1163 | pISSN: 2321-7308, Volume: 04, Issue: 05, May-2015, Available @ http://www.ijret.org
Priyatosh Halder¹, Suman²
¹Radar Section, Air Force Jodhpur, Rajasthan, India
²Asst. Professor, ECE Department, Sri Sukhmani Institute of Engineering & Technology, Dera Bassi, Punjab
Abstract
In conventional lossless color image compression methods, the pixels of each color component are interleaved, predicted, and finally encoded. In this paper, we propose a lossless color image compression method that uses hierarchical prediction of the chrominance channel pixels and encodes the residuals with modified Huffman coding. The R, G, and B channels of an input image are transformed into the YCuCv color space using an improved reversible color transform. A conventional lossless image coder such as CALIC is then used to compress the luminance channel Y. The chrominance channels Cu and Cv are encoded with hierarchical decomposition and directional prediction, and effective context modeling of the prediction residuals is adopted. The experimental results show that the proposed method improves the compression performance over the existing method.
Keywords: Lossless color image compression, hierarchical prediction, reversible color transform, modified Huffman coding.
1. INTRODUCTION
The recent increase in image-related applications has raised the problem of transmitting and storing images: a considerable amount of space and bandwidth is required. This problem can be addressed by image compression, which reduces the number of bits required to represent an image, so image compression remains an important research area. A number of image compression methods are available, and they can be classified into two types: lossless and lossy compression.

In lossless compression, the reconstructed image is exactly the same as the original image, so it is used for images such as medical, scientific, and artistic images. Many lossless image compression algorithms are available; the widely used ones are JPEG-LS [1], lossless JPEG [2], low-complexity lossless compression for images (LOCO-I) [3], the context-based adaptive lossless image codec (CALIC) [4], JPEG XR [5], and JPEG 2000 [6]. CALIC and LOCO-I were developed during JPEG standardization; the two are quite similar, but CALIC provides a better compression rate at the cost of more computational complexity. Most of the conventional algorithms were developed for grayscale images, and some were modified for the compression of hyperspectral or color images [7, 8]. In this paper, we propose a lossless color image compression algorithm that efficiently compresses the chrominance channels by using an improved RCT [9, 10] and hierarchical prediction [11, 12] of the chrominance channel pixels. After the RCT, the luminance channel Y is compressed by a conventional lossless image coder such as CALIC, while the chrominance channels Cu and Cv are compressed with the proposed methods. The prediction errors of the chrominance channels are small in smooth regions but relatively large near edge or texture regions. To obtain an accurate prediction, we decompose a chrominance image into two subimages, one consisting of the even-numbered rows of the input image and the other of the odd-numbered rows; the even subimage is then used to predict the odd subimage. This decomposition and prediction is carried out in two stages. A directional predictor is employed to reduce the large prediction errors near object edges. Lastly, effective context modeling, which gathers information from neighboring pixels, is adopted for the prediction errors.

The paper is organized as follows: Section II describes the improved reversible color transform (RCT); Section III presents the hierarchical decomposition and the detailed prediction process; Section IV describes the proposed method; Section V shows the experimental results; finally, Section VI concludes the work.
2. IMPROVED REVERSIBLE COLOR TRANSFORM
A natural image is usually represented by three color values per pixel: red, green, and blue. Because the color channels are significantly correlated with each other, a color transform is applied before compression. For example, the popular YCbCr transform used for lossy compression is defined as

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.16875 & -0.33126 & 0.5 \\ 0.5 & -0.41869 & -0.08131 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

However, since this transform is not exactly invertible with integer arithmetic, it cannot be used for lossless image compression.
In JPEG 2000, a reversible color transform (RCT) is introduced as follows:

$$\begin{aligned} Y &= \lfloor (R + 2G + B)/4 \rfloor &\;\leftrightarrow\;& G = Y - \lfloor (C_u + C_v)/4 \rfloor \\ C_u &= R - G &\;\leftrightarrow\;& R = C_u + G \\ C_v &= B - G &\;\leftrightarrow\;& B = C_v + G \end{aligned} \qquad (2)$$
This is a simple transform, known as a "reversible" transform because it is exactly invertible with integer arithmetic; however, it does not provide satisfactory decorrelation performance.
In the improved RCT, a lifting process is added after the RCT to increase the decorrelation performance, as shown in Fig. 1.
Fig – 1: Flow graph of the improved RCT
It gives the output as follows:

$$C_v' = C_v - \lfloor \gamma C_u \rfloor, \qquad C_u' = C_u - \lfloor \delta C_v' \rfloor \qquad (3)$$

where γ and δ are scalar gains for the prediction and update steps of the lifting butterfly. The floor operations on ⌊γCu⌋ and ⌊δCv'⌋ are required for the inverse transform to exist. Without the floor function, (3) can also be expressed as

$$\begin{bmatrix} C_v' \\ C_u' \end{bmatrix} \approx \begin{bmatrix} -\gamma & -1+\gamma & 1 \\ 1+\gamma\delta & -1-\delta+\gamma\delta & -\delta \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (4)$$
Since the transform of (1) decorrelates better than the simple RCT of (2), the two parameters γ and δ of the improved RCT are chosen so that it approximates (1) while keeping the reversibility. By comparing (4) with (1), we find that γ = 2 × 0.16875 = 0.3375 and δ = 2 × 0.08131 = 0.16262 are reasonable choices.
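The lifting step of (3) and its inverse can likewise be sketched in a few lines of Python. The constants follow the γ and δ derived above, while the function names and the round-trip check are our own; this is a sketch of the idea, not the authors' implementation:

```python
import math

GAMMA = 2 * 0.16875   # 0.3375, chosen so that (4) approximates the YCbCr matrix (1)
DELTA = 2 * 0.08131   # 0.16262

def lifting_forward(cu, cv):
    # Prediction and update steps of the lifting butterfly, equation (3)
    cv_p = cv - math.floor(GAMMA * cu)
    cu_p = cu - math.floor(DELTA * cv_p)
    return cu_p, cv_p

def lifting_inverse(cu_p, cv_p):
    # Undo the steps in reverse order, reusing the same floor operations
    cu = cu_p + math.floor(DELTA * cv_p)
    cv = cv_p + math.floor(GAMMA * cu)
    return cu, cv

# The floors make the step exactly invertible despite the fractional gains
assert lifting_inverse(*lifting_forward(-50, -100)) == (-50, -100)
```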
3. HIERARCHICAL DECOMPOSITION AND PIXEL PREDICTION
The chrominance channels Cu and Cv obtained after the reversible color transform have statistical characteristics different from those of the luminance Y or the original color planes R, G, and B. In the chrominance channels, the signal variation is suppressed by the color transform, but it still remains near object boundaries. As a result, the prediction errors of the chrominance channels Cu and Cv are much reduced in smooth regions but remain comparably large near edges and textures. Therefore, along with exact prediction, importance is given to estimating the magnitude of the prediction error accurately.

For this purpose, a hierarchical decomposition scheme is employed. As shown in Fig. 2, all the pixels of the input image A are separated into two subimages: an odd subimage Ao and an even subimage Ae. Ao is predicted from Ae. This decomposition and prediction is performed over two levels.
Fig – 2: Input image and its decomposed subimages
To avoid large prediction errors near object edges, directional prediction is also applied. Two directional predictors, âh(i, j) and âv(i, j), are computed for each pixel:

$$\hat{a}_h(i, j) = a_o(i, j - 1) \qquad (5)$$

$$\hat{a}_v(i, j) = \operatorname{round}\big( (a_e(i, j) + a_e(i + 1, j))/2 \big) \qquad (6)$$

One of these is selected to predict a(i, j). The vertical predictor is used for most pixels because it exploits both the past and the future of the image signal.
Algorithm 1: Calculation of âo(i, j)
    if dir(i − 1, j) = H or dir(i, j − 1) = H then
        Calculate dir(i, j)
        Encode dir(i, j)
        if dir(i, j) = H then
            âo(i, j) ← âh(i, j)
        else
            âo(i, j) ← âv(i, j)
        end if
    else
        âo(i, j) ← âv(i, j)
        Calculate dir(i, j)
    end if
The calculation of the predictors is described in Algorithm 1, where V and H denote the vertical and horizontal edge directions. dir(i, j) is H only when the current pixel ao(i, j) is closer to the horizontal estimate âh(i, j) than to the vertical estimate âv(i, j). The parameter T1 is introduced so that dir(i, j) is set to H only when |ao(i, j) − âh(i, j)| is much smaller than |ao(i, j) − âv(i, j)|. The calculation of dir(i, j) is described in Algorithm 2.
Algorithm 2: Calculation of dir(i, j)
    if |ao(i, j) − âh(i, j)| + T1 < |ao(i, j) − âv(i, j)| then
        dir(i, j) ← H
    else
        dir(i, j) ← V
    end if
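As an illustration, the following Python sketch combines the predictors (5) and (6) with Algorithms 1 and 2 for a single pixel. The function names, the dictionary used to cache dir values, and the omission of boundary handling are our own simplifications:

```python
T1 = 3  # threshold of Algorithm 2 (set to 3 in the experiments of Section 4)

def compute_dir(a_o, a_h, a_v):
    # Algorithm 2: choose H only when the horizontal estimate is clearly closer
    return 'H' if abs(a_o - a_h) + T1 < abs(a_o - a_v) else 'V'

def predict_pixel(ao, ae, dirs, i, j):
    """Algorithm 1 for one odd-subimage pixel. ao and ae are 2-D integer
    arrays; dirs caches the directions of already-processed pixels.
    Boundary handling (j = 0, last row) is omitted for brevity."""
    a_h = ao[i][j - 1]                              # horizontal predictor, (5)
    a_v = round((ae[i][j] + ae[i + 1][j]) / 2)      # vertical predictor, (6)
    if dirs.get((i - 1, j)) == 'H' or dirs.get((i, j - 1)) == 'H':
        d = compute_dir(ao[i][j], a_h, a_v)
        # here d would also be explicitly encoded into the bitstream
        pred = a_h if d == 'H' else a_v
    else:
        pred = a_v                                  # default: vertical predictor
        d = compute_dir(ao[i][j], a_h, a_v)         # still computed for later pixels
    dirs[(i, j)] = d
    return pred
```

Note that the direction is only encoded when a neighboring H direction makes a horizontal edge plausible; otherwise the decoder can infer the vertical default without side information.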
4. PROPOSED METHOD
First, an input RGB color image is transformed into the YCuCv color space by the improved reversible color transform described in Section II. A standard lossless image coder, such as CALIC, is used to encode the luminance image Y. The chrominance images Cu and Cv are encoded using the method described in Section III.
For efficient compression, the prediction error e(i, j) = ao(i, j) − âo(i, j) is modeled with the coding context Ci, which is defined as

$$C_i = \{ (y, x) : q_{i-1} \le \alpha(y, x) < q_i \}, \quad 1 \le i \le K \qquad (7)$$

where q0 = 0 and qK = ∞. The local activity α(y, x) at pixel ao(y, x) is defined as

$$\alpha(y, x) = | a_e(y, x) - a_e(y + 1, x) | \qquad (8)$$
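A small Python sketch of (7) and (8) follows. The paper fixes the number of contexts K but does not report the quantizer boundaries q_i, so the values in Q below are placeholders chosen only for illustration:

```python
import bisect

# Placeholder boundaries q_1 .. q_{K-1} for K = 6 contexts; illustrative only,
# since the paper does not list the boundary values.
Q = [2, 4, 8, 16, 32]

def local_activity(ae, y, x):
    # Equation (8): vertical gradient of the even subimage at (y, x)
    return abs(ae[y][x] - ae[y + 1][x])

def coding_context(ae, y, x):
    # Equation (7): the 1-based index i with q_{i-1} <= alpha < q_i,
    # where q_0 = 0 and q_K = infinity
    return bisect.bisect_right(Q, local_activity(ae, y, x)) + 1
```

For example, an activity of 0 falls in context 1 and any activity of 32 or more falls in context K = 6.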
In all the experiments, the number of contexts K and the parameter T1 of Algorithm 2 are set to 6 and 3, respectively. For each context, the prediction errors are encoded by the modified Huffman coder [13, 14].
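The modified Huffman coder of [13, 14] adapts its code tables in ways the paper does not reproduce, so the sketch below substitutes a plain static Huffman code built per context. It shows only the surrounding idea (one code table per coding context), not the exact coder used:

```python
import heapq
import itertools
from collections import Counter

def huffman_code(symbols):
    """Build a static Huffman code {symbol: bitstring} from a list of symbols.
    Stands in for the modified Huffman coder of [13, 14]."""
    counter = itertools.count()   # unique tiebreaker so dicts are never compared
    heap = [[freq, next(counter), {sym: ''}]
            for sym, freq in Counter(symbols).items()]
    heapq.heapify(heap)
    if len(heap) == 1:            # degenerate case: a single distinct symbol
        return {sym: '0' for sym in heap[0][2]}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in lo[2].items()}
        merged.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next(counter), merged])
    return heap[0][2]

def build_tables(errors_by_context):
    # One code table per coding context: context index -> {error: bitstring}
    return {ctx: huffman_code(errs) for ctx, errs in errors_by_context.items()}
```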
The major steps of our proposed image compression method are summarized as follows:
1) Select an input color image.
2) The RGB color image is transformed into the YCuCv color space by the improved reversible color transform (RCT).
3) CALIC is used to encode the luminance image Y.
4) The chrominance images Cu and Cv are encoded by the hierarchical decomposition and directional prediction method as follows (a sketch of this decomposition is given after the flow chart):
   i) The chrominance channel is separated row by row into two subimages: an even subimage Ae(1) and an odd subimage Ao(1).
   ii) The odd subimage Ao(1) is predicted and encoded using the even subimage Ae(1).
   iii) The even subimage Ae(1) is further decomposed column by column into an even subimage Ae(2) and an odd subimage Ao(2).
   iv) Ao(2) is compressed using Ae(2).
5) The prediction error is calculated.
6) The prediction error is modeled with the coding context.
7) For each context, a modified Huffman coder is used to encode the prediction errors.
The flow chart of the proposed method is given below:
Fig – 3: Flow chart of proposed method
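As a concrete illustration of steps 4(i)-(iv), the NumPy sketch below performs the two-level even/odd decomposition of a chrominance channel with array slicing (indexing is 0-based here, whereas the paper counts rows from 1; prediction and coding are the routines sketched earlier):

```python
import numpy as np

def decompose(chroma):
    """Two-level even/odd decomposition of a chrominance channel.
    Level 1 splits rows; level 2 splits the even-row subimage by columns."""
    ae1 = chroma[0::2, :]    # even rows        -> Ae(1)
    ao1 = chroma[1::2, :]    # odd rows         -> Ao(1), predicted from Ae(1)
    ae2 = ae1[:, 0::2]       # even cols of Ae(1) -> Ae(2)
    ao2 = ae1[:, 1::2]       # odd cols of Ae(1)  -> Ao(2), predicted from Ae(2)
    return ae2, ao2, ao1

# Example: an 8x8 chrominance plane decomposes into 4x4, 4x4 and 4x8 subimages
cu = np.arange(64, dtype=np.int16).reshape(8, 8)
ae2, ao2, ao1 = decompose(cu)
print(ae2.shape, ao2.shape, ao1.shape)   # (4, 4) (4, 4) (4, 8)
```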
5. EXPERIMENTAL RESULTS
As stated in the introduction, a representative lossless image compression method is CALIC [4], which shows a higher coding gain than JPEG-LS (or LOCO-I) [2], [3] at the cost of higher computational complexity. Recently, the hierarchical prediction and context adaptive coding (HPCAC) method [12] has shown better results than the previous lossless image compression methods. Here, the existing HPCAC method is compared with our proposed scheme. As test images [15], we have taken 08 medical images, 08 commercial digital camera images, and 04 classic images. For comparison, bits per pixel (bpp) are measured for the compressed streams.

The medical images we have taken are shown in Fig. 4.
Fig – 4: The medical images
Table 1 shows the compressed bit rates (bpp) for the medical images.
Table 1: Compressed Bit Rates (bpp) for Medical Images

Image        Size         Previous HPCAC method   Proposed method
PET1         256x256      5.6453                  4.3994
PET2         256x256      6.1598                  4.7451
PET3         256x256      5.8768                  4.5889
Eye1         3216x2136    4.6208                  2.0670
Eye2         3216x2136    4.3350                  1.8994
Eyeground    1600x1216    2.9656                  2.8888
Endoscope1   608x552      7.0451                  5.7238
Endoscope2   568x506      4.8968                  3.9060
Average                   5.1932                  3.7773
Fig. 5 compares the bits per pixel (bpp) of the medical images for the previous hierarchical prediction and context adaptive coding (HPCAC) method and our proposed method.

Fig – 5: Compressed bit rates for Medical Images

On average, the proposed algorithm produces 27.26% fewer bits than the existing method ((5.1932 − 3.7773)/5.1932 ≈ 0.2726).
The commercial digital camera images we have taken are shown in Fig. 6.

Fig – 6: The digital camera images

Table 2 shows the bits per pixel for the commercial digital camera images:
Table 2: Compressed Bit Rates (bpp) for Commercial Digital Camera Images

Image        Size         Previous HPCAC method   Proposed method
Ceiling      4288x2848    7.2080                  5.4668
Locks        4288x2848    7.1623                  6.0402
Flamingo     4288x2848    6.6371                  2.7518
Berry        4288x2848    6.8917                  5.6239
Sunset       4288x2848    5.9700                  3.4576
Flower       4032x3024    6.0655                  4.5139
Park         4032x3024    5.5622                  3.4905
Fireworks    4032x3024    5.2855                  2.2319
Average                   6.3478                  4.1971
Fig. 7 compares the bits per pixel (bpp) of the digital camera images for the previous hierarchical prediction and context adaptive coding (HPCAC) method and our proposed method.

Fig – 7: Compressed bit rates for Digital Camera Images
On average, the proposed algorithm produces 33.88% fewer bits than the existing method ((6.3478 − 4.1971)/6.3478 ≈ 0.3388).

The classic images we have taken are shown in Fig. 8.
Fig- 8: The classic images
Table 3 shows the compressed bit rates (bpp) for the classic images:

Table 3: Compressed Bit Rates (bpp) for the Classic Images

Image      Size       Previous HPCAC method   Proposed method
Lena       512x512    13.6461                 8.9790
Peppers    512x512    15.2102                 9.4900
Mandrill   512x512    18.5305                 19.6958
Barbara    512x512    11.4575                 10.9527
Average               14.7111                 12.2794
Fig. 9 compares the bits per pixel (bpp) of the classic images for the previous hierarchical prediction and context adaptive coding (HPCAC) method and our proposed method.

Fig- 9: Compressed bit rates for Classic Images

On average, the proposed algorithm produces 16.53% fewer bits than the existing method ((14.7111 − 12.2794)/14.7111 ≈ 0.1653).
Fig. 10 compares the average bits per pixel (bpp) of the medical images, digital camera images, and classic images for the previous hierarchical prediction and context adaptive coding (HPCAC) method and our proposed method.

Fig – 10: Average compressed bit rates for various images

Finally, it is seen that our proposed method performs better than the previous hierarchical prediction and context adaptive coding (HPCAC) method.
6. CONCLUSION
We have proposed a lossless color image compression method based on hierarchical prediction and modified Huffman coding. First, we transform the R, G, and B color planes of an input image into the YCuCv color space using an improved reversible color transform, which increases the decorrelation performance. After the color transform, the luminance channel Y is compressed by a conventional lossless image coder. The chrominance channels are encoded with the proposed hierarchical decomposition and directional prediction method, and finally context modeling with modified Huffman coding is applied. With the proposed method, the compressed bit rates (bpp) for medical images, commercial digital camera images, and classic images are improved by 27.26%, 33.88%, and 16.53%, respectively.
REFERENCES
[1]. Information Technology - Lossless and Near-Lossless Compression of Continuous-Tone Still Images (JPEG-LS), ISO/IEC Standard 14495-1, 1999.
[2]. W. B. Pennebaker and J. L. Mitchell, "JPEG Still Image Data Compression Standard", New York: Van Nostrand Reinhold, 1993.
[3]. M. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS," IEEE Trans. on Image Processing, vol. 9, no. 8, 2000, pp. 1309-1324.
[4]. X. Wu and N. Memon, "Context-based, adaptive, lossless image coding," IEEE Trans. Commun., vol. 45, 1997, pp. 437-444.
[5]. ITU-T and ISO/IEC, JPEG XR Image Coding System - Part 2: Image Coding Specification, ISO/IEC Standard 29199-2, 2011.
[6]. Information Technology - JPEG 2000 Image Coding System - Part 1: Core Coding System, INCITS/ISO/IEC Standard 15444-1, 2000.
[7]. X. Wu and N. Memon, "Context-based lossless interband compression - Extending CALIC," IEEE Trans. Image Processing, vol. 9, 2000, pp. 994-1001.
[8]. E. Magli, G. Olmo, and E. Quacchio, "Optimized on-board lossless and near-lossless compression of hyperspectral data using CALIC," IEEE Geoscience and Remote Sensing Letters, vol. 1, no. 1, 2004, pp. 21-25.
[9]. H. S. Malvar, G. J. Sullivan, and S. Srinivasan, "Lifting-based reversible color transformations for image compression," Proc. SPIE Applications of Digital Image Processing XXXI, San Diego, CA, USA, vol. 7073, 2008, pp. 707307.1-707307.10.
[10]. S. Kim and N. I. Cho, "A lossless color image compression method based on a new reversible color transform," Visual Communications and Image Processing (VCIP), IEEE, 2012, pp. 27-30.
[11]. P. Roos, M. A. Viergever, M. C. A. van Dijke, and J. H. Peters, "Reversible intraframe compression of medical images," IEEE Trans. Med. Imag., vol. 7, no. 4, 1988, pp. 328-336.
[12]. S. Kim and N. I. Cho, "Hierarchical prediction and context adaptive coding for lossless color image compression," IEEE Transactions on Image Processing, vol. 23, no. 1, 2014.
[13]. D. L. Duttweiler and C. Chamzas, "Probability estimation in arithmetic and adaptive-Huffman entropy coders," IEEE Transactions on Image Processing, vol. 4, no. 3, 1995.
[14]. M.-I. Lu and C.-F. Chen, "An encoding procedure and a decoding procedure for a new modified Huffman code," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 1, 1990.
[15]. (Dec. 3, 2013) [Online]. Available: http://ispl.snu.ac.kr/light4u/project/LCIC