Multimedia data compression is challenging because the data are susceptible to loss and require a large amount of storage. Reducing storage requirements and transmitting such data reliably both call for compression. In this dissertation we propose a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix is partitioned into two sets, redundant and non-redundant, which generate similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, which is used to select the optimal blocks of the DWT transform. For the experiments we used standard test images such as Lena, Barbara, and Cameraman, each at a resolution of 256x256. The images were obtained from Google.
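The abstract gives no implementation details; as a rough illustration of the block-based DWT stage it builds on, here is a minimal Python sketch using PyWavelets, where the block size, wavelet, and kept-coefficient fraction are assumptions (the GA/PSO block-selection step is not reproduced):

```python
# Minimal sketch of block-based DWT coefficient thresholding (illustrative
# parameters; the GA/PSO block-selection step from the abstract is not shown).
import numpy as np
import pywt

def compress_block(block, wavelet="haar", keep=0.1):
    """Transform one image block and zero all but the largest coefficients."""
    coeffs = pywt.wavedec2(block, wavelet, level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)   # keep top `keep` fraction
    arr[np.abs(arr) < thresh] = 0.0
    return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)

img = np.random.rand(256, 256)                      # stand-in for a 256x256 test image
B = 32                                              # assumed block size
out = np.zeros_like(img)
for i in range(0, 256, B):
    for j in range(0, 256, B):
        out[i:i+B, j:j+B] = compress_block(img[i:i+B, j:j+B])
```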
A spatial image compression algorithm based on run length encoding (journalBEEI)
Image compression is vital for many areas, such as communication and the storage of data, which is growing rapidly nowadays. In this paper, a spatial lossy compression algorithm for grayscale images is presented. It exploits the inter-pixel and psycho-visual data redundancies in images. The proposed technique finds paths of connected pixels whose values fluctuate within some small threshold. The path is calculated by looking at the 4-neighbors of a pixel and then choosing the best one based on two conditions: first, the selected pixel must not be included in another path; second, the difference between the first pixel in the path and the selected pixel must be within the specified threshold value. A path starts with a given pixel and consists of the locations of the subsequently selected pixels. A run-length encoding scheme is applied on the paths to harvest the inter-pixel redundancy. After applying the proposed algorithm to several test images, promising quality vs. compression ratio results have been achieved.
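As an illustration of the path construction described above, the following sketch grows a path over 4-neighbors under a bound on the difference from the path's first pixel; the threshold value and the first-admissible-neighbor rule are assumptions standing in for the paper's "best neighbor" criterion:

```python
# Sketch of threshold-bounded path growth over 4-neighbors (assumed threshold;
# the paper's "best of the admissible neighbors" rule is simplified here).
import numpy as np

def build_path(img, visited, start, thresh=8):
    """Grow a path of unvisited 4-neighbors whose values stay within
    `thresh` of the value at the path's first pixel."""
    h, w = img.shape
    base = int(img[start])
    path, cur = [start], start
    visited[start] = True
    while True:
        y, x = cur
        nxt = None
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx] \
               and abs(int(img[ny, nx]) - base) <= thresh:
                nxt = (ny, nx)
                break                      # take the first admissible neighbor
        if nxt is None:
            return path
        visited[nxt] = True
        path.append(nxt)
        cur = nxt

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
visited = np.zeros(img.shape, dtype=bool)
paths = [build_path(img, visited, (y, x))
         for y in range(64) for x in range(64) if not visited[y, x]]
```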
The main aim of image compression is to represent the image with a minimum number of bits and thus reduce its size. This paper presents a Symbols Frequency based Image Coding (SFIC) technique for image compression. The method exploits the frequency of occurrence of pixel values in an image. A frequency factor y is used to merge pixel values that lie in the same y-wide range. In this approach, the pixel values of the image that fall within the range defined by the frequency factor y are clubbed to the least pixel value in the set. As a result, larger pixel values are omitted, the total size of the image is reduced, and a higher compression ratio is obtained. It is noticed that the selection of the frequency factor y has a great influence on the performance of the proposed scheme. However, high PSNR values are still obtained, since the omitted pixels are mapped to pixels in a similar range. The proposed approach is analyzed both with and without quantization. The proposed compression model is compared with Quadtree-segmented AMBTC with Bit Map Omission (AMBTC-QTBO). From the experimental analysis, it is observed that the proposed SFIC scheme, in both its lossless and lossy variants, outperforms AMBTC-QTBO. Hence, the proposed compression model is a good choice for both lossless and lossy compression applications.
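One way to picture the merging step is as flooring every pixel to the least value of its y-wide bin, which shrinks the symbol alphabet before coding; a minimal sketch under that reading, with y and the test image as placeholders:

```python
# Sketch: club pixel values within a frequency factor y to the bin's least value.
import numpy as np

def sfic_merge(img, y=8):
    """Map each pixel to the lowest value of its y-wide range, shrinking
    the set of distinct symbols (lossy unless y == 1)."""
    return (img // y) * y                  # floor each value to its bin base

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
merged = sfic_merge(img, y=8)
print("distinct symbols:", len(np.unique(img)), "->", len(np.unique(merged)))
```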
This survey proposes a novel Joint Data-Hiding and Compression scheme (JDHC) for digital images using side-match vector quantization (SMVQ) and image inpainting. In the JDHC scheme, image compression and data hiding are combined into a single module. On the sender (client) side, the data are hidden and compressed using sub-codebooks for all blocks except those in the leftmost column and topmost row of the image. The data hiding and compression proceed in raster scanning order, block by block along the rows. Vector quantization with SMVQ and image inpainting is used for complex blocks to control distortion and error. The receiver side operates in two stages. First, the received image is divided into a series of blocks, and the receiver recovers the hidden data and the original image according to the index values in the segmented blocks. Second, edge-based harmonic inpainting is used to restore the original image if any loss occurs.
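A minimal sketch of the side-match idea underlying SMVQ: codewords are ranked by how well their borders match the already-coded upper and left neighbors, and the index is chosen from the resulting sub-codebook (the codebook here is random, purely for illustration):

```python
# Sketch of the side-match step in SMVQ (random codebook for illustration only).
import numpy as np

rng = np.random.default_rng(0)
B, N = 4, 64                               # block size and codebook size (assumed)
codebook = rng.integers(0, 256, (N, B, B)).astype(float)

def side_match_index(upper_block, left_block, sub_size=8):
    """Return the index of the best codeword in the sub-codebook formed by
    the `sub_size` codewords with the lowest side-match distortion."""
    # Distortion between each codeword's top row / left column and the
    # neighboring blocks' adjacent border pixels.
    d = (np.abs(codebook[:, 0, :] - upper_block[-1, :]).sum(axis=1)
         + np.abs(codebook[:, :, 0] - left_block[:, -1]).sum(axis=1))
    sub = np.argsort(d)[:sub_size]         # sub-codebook: best side matches
    return int(sub[0])                     # index actually transmitted

up = rng.integers(0, 256, (B, B)).astype(float)
left = rng.integers(0, 256, (B, B)).astype(float)
print("selected codeword index:", side_match_index(up, left))
```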
AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERR... (IJNSA Journal)
Methods such as edge-directed interpolation and projection onto convex sets (POCS), which are widely used for image error concealment because they produce better image quality, are complex and time consuming. Moreover, these methods are not suitable for real-time error concealment, where the decoder may not have sufficient computation power or must operate online. In this paper, we propose a data-hiding scheme for error concealment of digital images. Edge direction information of a block is extracted in the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), which reduces the workload of the decoder. The system performance, in terms of fidelity and computational load, is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information), and those features are then used for recovery of the lost data. Experimental results duly support the effectiveness of the proposed scheme.
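A minimal sketch of binary scalar QIM embedding and extraction; the step size is an assumed parameter, and the paper's M-ary near-orthogonal variant generalizes this to M shifted quantizers:

```python
# Sketch of binary scalar QIM: embed a bit by snapping a coefficient to one
# of two interleaved quantizer lattices (step size is an assumed parameter).
import numpy as np

DELTA = 16.0                               # quantization step (assumption)

def qim_embed(x, bit):
    """Quantize x onto the lattice offset by bit * DELTA/2."""
    offset = bit * DELTA / 2.0
    return np.round((x - offset) / DELTA) * DELTA + offset

def qim_extract(y):
    """Recover the bit as whichever lattice lies closer to y."""
    d0 = np.abs(y - qim_embed(y, 0))
    d1 = np.abs(y - qim_embed(y, 1))
    return int(d1 < d0)

x = 123.4                                  # host coefficient (e.g. a DCT term)
y = qim_embed(x, 1)
assert qim_extract(y) == 1
print(x, "->", y)
```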
DIP Using Image Encryption and XOR Operation Affine Transform (iosrjce)
An Algorithm for Improving the Quality of Compacted JPEG Image by Minimizes t... (ijcga)
JPEG, a block-transform-coded lossy image compression format, has been used to keep the storage and bandwidth requirements of digital images at practical levels. However, JPEG compression schemes may introduce unwanted image artifacts, such as the 'blocky' artifact found in smooth/monotone areas of an image, caused by the coarse quantization of DCT coefficients. A number of image filtering approaches incorporating value-averaging filters have been analyzed in the literature in order to smooth out the discontinuities that appear across DCT block boundaries. Although some of these approaches are able to decrease the severity of these unwanted artifacts to some extent, others have limitations that cause excessive blurring of high-contrast edges in the image. The image deblocking algorithm presented in this paper aims to filter the blocked boundaries. This is accomplished by smoothing, detecting blocked edges, and then filtering the difference between the pixels containing the blocked edge. The deblocking algorithm presented has been successful in reducing blocky artifacts in an image and therefore increases both the subjective and objective quality of the reconstructed image.
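A minimal sketch of the boundary-difference idea: average across 8-pixel block edges only where the step is small, so that high-contrast (real) edges are left untouched; the block size and threshold are assumptions:

```python
# Sketch of boundary-difference deblocking: soften 8-pixel block edges
# only where the step is small (a large step is treated as a true edge).
import numpy as np

def deblock_vertical(img, block=8, edge_thresh=20):
    """Soften vertical block boundaries whose pixel step is below edge_thresh."""
    out = img.astype(float).copy()
    for x in range(block, img.shape[1], block):
        a, b = out[:, x-1], out[:, x]
        diff = b - a
        mask = np.abs(diff) < edge_thresh  # skip high-contrast (real) edges
        out[:, x-1][mask] += diff[mask] / 4.0
        out[:, x][mask] -= diff[mask] / 4.0
    return out

img = np.random.randint(0, 256, (64, 64))
smoothed = deblock_vertical(img)
```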
Enhanced Image Compression Using Wavelets (IJRES Journal)
Data compression, which can be lossy or lossless, is required to decrease storage requirements and achieve better data transfer rates. One of the best image compression techniques is the wavelet transform. It is comparatively new and has many advantages over others. The wavelet transform can use a large variety of wavelets for the decomposition of images. State-of-the-art coding techniques such as Haar-based coding and SPIHT (set partitioning in hierarchical trees) use the wavelet transform as a basic, common step and add their own technical refinements on top of it. The results of the wavelet transform therefore depend on the type of wavelet used. In our thesis we have used different wavelets to transform a test image, and the results have been discussed and analyzed. Haar and SPIHT coding have been applied to an image, and the results have been compared qualitatively and quantitatively in terms of PSNR values and compression ratios. Elapsed times for compressing the image with different wavelets have also been computed to identify the fastest image compression method. The analysis has been carried out in terms of the PSNR (peak signal-to-noise ratio) obtained and the time taken for decomposition and reconstruction.
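A rough sketch of the kind of comparison the abstract describes, timing a wavelet decomposition, thresholding, and reconstruction for several wavelets and reporting PSNR; the wavelets, level, and thresholding rule are illustrative assumptions rather than the thesis's actual coder:

```python
# Sketch comparing wavelets by PSNR and elapsed time (random test image;
# the thresholding rule is an illustrative stand-in for a real coder).
import time
import numpy as np
import pywt

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

img = np.random.randint(0, 256, (256, 256)).astype(float)
for wavelet in ("haar", "db4", "bior4.4"):
    t0 = time.perf_counter()
    coeffs = pywt.wavedec2(img, wavelet, level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < np.quantile(np.abs(arr), 0.9)] = 0   # keep top 10%
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, "wavedec2"), wavelet)
    dt = time.perf_counter() - t0
    print(f"{wavelet}: PSNR={psnr(img, rec[:256, :256]):.1f} dB, {dt*1e3:.1f} ms")
```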
Conference Proceedings of the National Level Technical Symposium on Emerging Trends in Technology, TECHNOVISION ’10, G.N.D.E.C. Ludhiana, Punjab, India- 9th-10th April, 2010
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC (IJECEIAES)
Image enhancement plays a fundamental role in vision-based applications. It involves processing the input image to boost its visualization for various applications. The primary objective is to filter out unwanted noise, clutter, and blur, or to sharpen the image. Characteristics such as resolution and contrast are constructively altered to obtain an enhanced image in the biomedical field. The paper highlights the different techniques proposed for the digital enhancement of images. After surveying these methods that utilize a Multiprocessor System-on-Chip (MPSoC), it is concluded that these methodologies have little accuracy, and hence none of them is efficiently capable of enhancing digital biomedical images.
On Text Realization Image Steganography (CSCJournals)
In this paper the steganography strategy is implemented in a different way and from a different scope: the important data will neither be hidden in an image nor transferred through the communication channel inside an image. Instead, a well-known image that exists on both sides of the channel is used, and a text message containing the important data is transmitted. With suitable operations, we can re-mix and re-make the source image. The algorithm is implemented in MATLAB 7, where it shows a high ability to achieve the task for different types and sizes of images. Perfect reconstruction was achieved on the receiving side. Most interestingly, this algorithm for secured image transmission transmits no images at all.
A Comprehensive lossless modified compression in medical application on DICOM... (IOSR Journals)
ABSTRACT: Currently, Digital Imaging and Communication in Medicine (DICOM) is widely used for viewing, distributing, and storing medical images from different modalities. Images can be processed by photographic, optical, and electronic means, but because digital methods are precise, fast, and flexible, image processing on digital computers is the most common approach. Image processing can extract information and modify pictures to improve or change their structure (image editing, composition, image compression, etc.). Image compression is a major component of storage and communication systems; it can mitigate the drawbacks of data transmission and image storage and reduce data redundancy. Medical images must be stored for future reference to the patients and their hospital findings; hence, a medical image needs to undergo compression before being stored. Medical images are very important in the field of medicine, and medical image compression is necessary both for large database storage in medical centres and for medical data transfer for diagnostic purposes. At present, the discrete cosine transform (DCT), run-length encoding (a lossless compression technique), and the discrete wavelet transform (DWT) are the most useful and widely accepted approaches to compression. Based on the discrete wavelet transform, we present a new DICOM-oriented lossless image compression method. In the proposed method, each DICOM image stored in the data set is compressed on the basis of vertical, horizontal, and diagonal decomposition. We analyze the results of our study on all the DICOM images in the data set using two quality measures, PSNR and RMSE. The performance comparison was made over each image stored in the DICOM data set. This work presents the performance comparison between the input images (without compression) and the compressed results for each image in the data set using the DWT method. Further, the performance of the DWT method with the Haar process is compared with a 2D-DWT method using the quality metrics PSNR and RMSE. The performance of these methods for image compression has been simulated using MATLAB.
Keywords: JPEG, DCT, DWT, SPIHT, DICOM, VQ, Lossless Compression, Wavelet Transform, Image Compression, PSNR, RMSE
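For reference, the two quality measures used throughout the study, RMSE and PSNR, can be computed as follows (an 8-bit peak value is assumed):

```python
# Sketch of the two quality measures used in the study, RMSE and PSNR
# (8-bit peak value assumed).
import numpy as np

def rmse(orig, recon):
    return float(np.sqrt(np.mean((orig.astype(float) - recon.astype(float)) ** 2)))

def psnr(orig, recon, peak=255.0):
    e = rmse(orig, recon)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

a = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
b = np.clip(a + np.random.normal(0, 2, a.shape), 0, 255).astype(np.uint8)
print(f"RMSE={rmse(a, b):.2f}, PSNR={psnr(a, b):.1f} dB")
```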
Cuda Based Performance Evaluation Of The Computational Efficiency Of The Dct ... (acijjournal)
Recent advances in computing, such as massively parallel GPUs (Graphics Processing Units), coupled with the need to store and deliver large quantities of digital data, especially images, have brought a number of challenges for computer scientists, the research community, and other stakeholders. These challenges, such as the prohibitively large cost of manipulating the digital data, have been the focus of the research community in recent years and have led to the investigation of image compression techniques that can achieve excellent results. One such technique is the discrete cosine transform (DCT), which separates an image into parts of differing frequencies and has the advantage of excellent energy compaction. This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model to implement the CORDIC-based Loeffler DCT algorithm for efficient image compression. The computational efficiency is analyzed and evaluated on both the CPU and the GPU. The PSNR (peak signal-to-noise ratio) is used to evaluate image reconstruction quality. The results are presented and discussed.
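A CPU reference sketch of the 8x8 block DCT kernel that such work offloads to CUDA, using SciPy; the image and block size are illustrative, and the CORDIC-based Loeffler factorization itself is not reproduced:

```python
# Sketch of 8x8 block DCT energy compaction with SciPy (CPU reference for the
# kind of kernel the paper offloads to CUDA; image and block size illustrative).
import numpy as np
from scipy.fft import dctn

def block_dct(img, B=8):
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], B):
        for j in range(0, img.shape[1], B):
            out[i:i+B, j:j+B] = dctn(img[i:i+B, j:j+B], norm="ortho")
    return out

img = np.random.randint(0, 256, (64, 64)).astype(float)
coeffs = block_dct(img)
# Energy compaction: most energy collects in each block's DC (top-left) term.
print("DC share of energy:", (coeffs[::8, ::8] ** 2).sum() / (coeffs ** 2).sum())
```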
Compressed Medical Image Transfer in Frequency Domain (CSCJournals)
A common approach to medical image compression begins by separating the region of interest (ROI) from the background of the medical image; lossless and lossy compression schemes are then applied to the ROI part and the background, respectively. The compressed files (ROI and background) are transmitted through different communication media (local host, Intranet, and Internet) between the server and clients. In this work, a medical image transfer coding (MITC) scheme based on the lossless Haar wavelet transform is proposed. At first, the proposed scheme is tested on an Intranet (for both ROI and background) in order to compare its results with Internet tests. An adaptive quantization algorithm is applied to the quasi-lossless ROI wavelet coefficients, while uniform quantization is applied to the lossy background wavelet coefficients. Finally, the retained quantization indices are entropy encoded with an optimal variable-length coding algorithm. The test results indicate that the performance of the proposed MITC via Intranet is much better than via Internet in terms of transfer time, while the quality of the reconstructed medical image remains constant regardless of the communication medium. For the best adopted parameters, a compressed medical image file (760 KB → 19.38 KB) is transmitted through the Internet (bandwidth = 1024 kbps) with a transfer time of 0.156 s, while the uncompressed file is sent with a transfer time of 6.192 s.
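The quoted transfer times follow from simple size-over-bandwidth arithmetic; a sketch reproducing them approximately, assuming 1 KB = 1024 bytes and 1 kbps = 1000 bit/s:

```python
# Back-of-the-envelope check of the quoted transfer times: size / bandwidth
# (1 KB = 1024 bytes and 1 kbps = 1000 bit/s assumed; results are approximate).
def transfer_time_s(size_kb, bandwidth_kbps=1024):
    bits = size_kb * 1024 * 8
    return bits / (bandwidth_kbps * 1000)

print(f"uncompressed 760 KB:  {transfer_time_s(760):.3f} s")   # ~6.1 s (paper: 6.192 s)
print(f"compressed 19.38 KB: {transfer_time_s(19.38):.3f} s")  # ~0.155 s (paper: 0.156 s)
```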
A hierarchical RCNN for vehicle and vehicle license plate detection and recog... (IJECEIAES)
Vehicle and vehicle license plate detection has achieved impressive results in recent years and is widely used in real traffic scenarios, such as intelligent traffic monitoring systems, auto parking systems, and vehicle services. Computer vision has attracted much attention in vehicle and vehicle license detection, benefiting from image processing and machine learning technologies. However, existing methods still have issues with vehicle and vehicle license plate recognition, especially in complex environments. In this paper, we propose a multi-vehicle detection and license plate recognition system based on a hierarchical region convolutional neural network (RCNN). Firstly, a higher-level RCNN is employed to extract vehicles from the original images or video frames. Secondly, the regions of the detected vehicles are input to a lower-level (smaller) RCNN to detect the license plate. Thirdly, the detected license plate is split into single numbers. Finally, the individual numbers are recognized by an even smaller RCNN. Experiments on a real traffic database validate the proposed method. Compared with the commonly used all-in-one deep learning structure, the proposed hierarchical method handles the license plate recognition task in multiple levels of sub-tasks, which enables the network size and structure to be tailored to the complexity of each sub-task. Therefore, the computational load is reduced.
Novel hybrid framework for image compression for supportive hardware design o... (IJECEIAES)
Performing image compression on resource-constrained hardware is quite a challenging task. Although various approaches to image compression have been developed with the hardware aspect in mind, there are still problems with the memory overhead of the entire operation that degrade the performance of the hardware device. Therefore, the proposed approach presents a cost-effective image compression mechanism that offers lossless compression using a unique combination of non-linear filtering, segmentation, and contour detection, followed by optimization. The compression mechanism adopts an analytical approach to achieve significant image compression. Executing the compression mechanism yields faster response times, reduced mean square error, improved signal quality, and significant compression ratio performance.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR... (IAEME Publication)
In this paper, we implemented a new image compression and decompression method that searches for the target image based on robust image block variance estimation. Many image compression methods have been proposed in the literature to minimize the error rate and improve the compression ratio. For encoding medium-type images, traditional models use a hierarchical scheme that enables the use of the upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only the upper and left pixels. In this work, we implemented block estimation and image distortion rate measures to optimize the compression ratio and minimize the error rate. Experimental results show that the proposed model gives a higher compression rate and a lower error rate compared to traditional models.
A CASE STUDY OF INNOVATION OF AN INFORMATION COMMUNICATION SYSTEM AND UPGRADE... (ijaia)
In this paper, a case study is analyzed. The case study concerns an upgrade of an industrial communication system developed by following the Frascati research guidelines. The knowledge base (KB) of the industry is gained by means of different tools that are able to feed data and information of different formats and structures into a unique bus system connected to a big data store. The initial part of the research is focused on the implementation of strategic tools that are able to upgrade the KB. The second part of the proposed study concerns the implementation of innovative algorithms based on a KNIME (Konstanz Information Miner) Gradient Boosted Trees workflow, processing data of the communication system that travel over an Enterprise Service Bus (ESB) infrastructure. The goal of the paper is to prove that all the new KB collected in a Cassandra big data system can be processed through the ESB by predictive algorithms, resolving possible conflicts between hardware and software. The conflicts are due to the integration of different database technologies and data structures. In order to check the outputs of the Gradient Boosted Trees algorithm, an experimental dataset suitable for machine learning testing has been used. The test has been performed on a prototype network system modeling part of the whole communication system. The paper shows how to validate industrial research by following the complete design and development of a whole communication system network, improving business intelligence (BI).
Dataset Pre-Processing and Artificial Augmentation, Network Architecture and ... (INFOGAIN PUBLICATION)
Training a Convolutional Neural Network (CNN) based classifier depends on a large number of factors. These factors involve tasks such as aggregating an apt dataset, arriving at a suitable CNN architecture, processing the dataset, and selecting the training parameters to arrive at the desired classification results. This review covers the pre-processing techniques and dataset augmentation techniques used in various CNN-based classification studies. In many classification problems, it is observed that the quality of the dataset determines whether the CNN trains properly, and this quality is judged on the basis of the variation in the data for every class. It is unusual to find such a pre-made dataset, due to many natural concerns. It is also recommended to have a large dataset, which again is not usually available directly. In some cases, the noise present in the dataset may not be useful for training, while in others, researchers prefer to add noise to certain images to make the network less vulnerable to unwanted variations. Hence, researchers use artificial digital imaging techniques to derive variations in the dataset and to remove or add noise. This paper therefore gathers state-of-the-art works that used pre-processing and artificial augmentation of datasets before training. The step after data augmentation is training, which includes the proper selection of several parameters and a suitable CNN architecture. The paper also covers the network characteristics, dataset characteristics, and training methodologies used in biomedical imaging, vision modules of autonomous driverless cars, and a few general vision-based applications.
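As a small illustration of the artificial augmentation operations such reviews cover, the following sketch produces a horizontal flip and an additive-Gaussian-noise variant of one training image; the noise level is an assumption:

```python
# Sketch of two common artificial-augmentation operations mentioned in such
# reviews: horizontal flip and additive Gaussian noise (parameters assumed).
import numpy as np

def augment(img, noise_sigma=5.0, rng=None):
    """Return a list of simple variants of one training image."""
    rng = rng or np.random.default_rng()
    flipped = img[:, ::-1]                              # horizontal flip
    noisy = np.clip(img + rng.normal(0, noise_sigma, img.shape), 0, 255)
    return [flipped, noisy.astype(img.dtype)]

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
variants = augment(img)
print(len(variants), "augmented variants per image")
```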
Color image compression based on spatial and magnitude signal decomposition (IJECEIAES)
In this paper, a simple color image compression system based on image signal decomposition is proposed. The RGB color bands are converted to the less-correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most and least significant. Because the most significant value (MSV) is strongly affected by even simple modifications, an adaptive lossless image compression system is proposed for it, using bit plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV); it is based on an adaptive, error-bounded coding system and uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with that of the universal JPEG standard, and the results of applying the proposed system indicate that its performance is comparable to or better than that of the JPEG standard.
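The magnitude decomposition can be illustrated by splitting each 8-bit sample into a most significant and a least significant part; the 4/4 bit split below is an assumption for illustration, not the paper's stated split point:

```python
# Sketch of the magnitude decomposition: split each 8-bit sample into a most
# significant and a least significant part (4/4 bit split assumed here).
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
msv = img >> 4          # most significant value: coded losslessly in the paper
lsv = img & 0x0F        # least significant value: coded lossily (DCT-based)
assert np.array_equal((msv << 4) | lsv, img)   # decomposition is reversible
```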
Security and imperceptibility improving of image steganography using pixel al... (IJECEIAES)
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication. The security of information should be a key priority in the secret exchange of information between two parties. To ensure the security of information, strategies such as steganography and cryptography are used. An effective digital image steganographic method based on odd/even pixel allocation and a random function has been developed to increase security and imperceptibility. This newly developed scheme has been verified to increase security and imperceptibility and to address the existing problems. Huffman coding is used to modify the secret data prior to the embedding stage; this modified equivalent of the secret data hides it from attackers and increases the secret data capacity. The main objective of our scheme is to boost the peak signal-to-noise ratio (PSNR) of the stego cover and to resist attacks, while the size of the embeddable secret data also increases. The results confirm good PSNR values, and these findings confirm the eligibility of the proposed method.
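A minimal sketch of odd/even-indexed LSB embedding in the spirit of the scheme; the even-slot allocation and raw-bit payload are illustrative stand-ins, since the paper additionally Huffman-codes the secret data and allocates pixels with a random function:

```python
# Sketch: embed bits in the LSBs of even-indexed pixels only (the odd/even
# allocation and payload are illustrative stand-ins for the paper's scheme).
import numpy as np

def embed(cover, bits):
    stego = cover.copy().ravel()
    slots = np.arange(0, stego.size, 2)[:len(bits)]   # even-indexed pixels
    stego[slots] = (stego[slots] & 0xFE) | np.array(bits, dtype=np.uint8)
    return stego.reshape(cover.shape)

def extract(stego, n_bits):
    slots = np.arange(0, stego.size, 2)[:n_bits]
    return [int(b) for b in (stego.ravel()[slots] & 1)]

cover = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(cover, bits), len(bits)) == bits
```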
K-SVD: ALGORITHM FOR FINGERPRINT COMPRESSION BASED ON SPARSE REPRESENTATION (ijiert bestjournal)
In recent years there has been increasing interest in the study of sparse representations of signals. Using an overcomplete dictionary that contains prototype signal atoms, signals are described by sparse linear combinations of these atoms. Recognition of persons by means of biometric description is an important technology in society, because biometric identifiers cannot be shared and intrinsically characterize an individual's bodily distinctiveness. Among the several biometric recognition technologies, fingerprint recognition is very popular for personal identification. A fingerprint compression algorithm based on sparse representation using the K-SVD algorithm is introduced. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new given fingerprint image, we represent its patches over the dictionary by computing a sparse (ℓ0) minimization with the matching pursuit (MP) method, and then quantize and encode the representation. This paper compares different compression standards such as JPEG, JPEG 2000, WSQ, and K-SVD. The paper shows that this approach is effective compared with several competing compression techniques, particularly at high compression ratios.
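A minimal sketch of matching pursuit, the greedy solver named above: it repeatedly selects the dictionary atom most correlated with the residual; the dictionary here is random, purely for illustration:

```python
# Sketch of matching pursuit: greedily pick the dictionary atom most
# correlated with the residual (random dictionary, purely illustrative).
import numpy as np

def matching_pursuit(D, x, n_atoms=5):
    """Greedy sparse coding of x over dictionary D (columns = unit atoms)."""
    residual, coeffs = x.astype(float).copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)             # normalize atoms to unit length
x = rng.normal(size=64)
coeffs, r = matching_pursuit(D, x)
print("nonzeros:", np.count_nonzero(coeffs), "residual norm:", np.linalg.norm(r))
```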
A Study on Relationship between Employee Satisfaction and Its Determinants (IJERA Editor)
Satisfaction refers to the level of fulfillment of one's needs, wants, and desires. Satisfaction depends basically upon what an individual wants from the world and what he gets. Employee satisfaction is a measure of how happy workers are with their job and working environment. Many factors affect organizational effectiveness, and one of them is employee satisfaction. Effective organizations should have a culture that encourages employee satisfaction. Many measures support the view that employee satisfaction is a factor in employee motivation, employee goal achievement, and positive employee morale in the workplace. The present study therefore explains the relationship between employee satisfaction and its determinants using statistical tools.
Evaluation of Energy Contribution for Additional Installed Turbine Flow in Hy... (IJERA Editor)
The paper presents a methodology for analyzing the energy value of an existing hydro power plant (HPP) compared with the same HPP upgraded with additional power. The additional power can be added by increasing the installed turbine flows of the existing unit(s) or by installing new additional unit(s). The paper treats the energy evaluation of the storage HPP Spilje, which is under-installed, comparing the technical parameters, water reservoir, turbine discharge, and water inflow. The analyses are based on statistical analysis of the hydrological input parameters, using a series of hydrological inflow data over a long period of years with monthly distribution. The hydrological data (from 1970 until 2014) cover extremely wet and extremely dry hydrological periods, which is important in order to obtain statistics of expected values with appropriate probabilities of occurrence from the monthly inflow values [1]. The results of the analysis can be applied in determining the expected hydro production, or the corresponding expected revenue from the electricity generated by the HPP. Additional turbine flow in existing units, or installing new unit(s), can yield increased income as a result of avoiding spilled water and generating additional peak energy instead of base energy.
Introduction to Mail Management System (IJERA Editor)
Different techniques are being developed to overcome this problem and protect the consumer, building trust among people and encouraging them to purchase via the internet. To understand the motivation for email filtering, note first that email is a fast and easy way to exchange messages, distinct from traditional mail, and that a large number of internet services require the user to have an email address. Sometimes a user wants to cancel a subscription to a site because of the large number of incoming messages arriving from senders the user does not know, with no way to unsubscribe from these messages; or a user wants messages from specific people or sites to be blocked. A solution to such problems is to attach a filter to the email account that bans unwanted messages. With spam filtering as a last resort, users can set up their own spam blocker using their email application's built-in filter capability. Email filtering is the processing of email to organize it according to specified criteria.
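A toy sketch of the rule-based filtering described above, banning specific senders and flagging subjects that match simple spam criteria; the blocklist and keywords are illustrative:

```python
# Sketch of rule-based email filtering: ban specific senders and flag messages
# whose subject matches simple spam criteria (rules are illustrative).
BLOCKED_SENDERS = {"ads@example.com"}                 # assumed blocklist
SPAM_KEYWORDS = ("free offer", "winner", "unsubscribe")

def classify(sender, subject):
    """Return 'banned', 'spam', or 'inbox' for one incoming message."""
    if sender.lower() in BLOCKED_SENDERS:
        return "banned"
    if any(k in subject.lower() for k in SPAM_KEYWORDS):
        return "spam"
    return "inbox"

print(classify("ads@example.com", "Hello"))                 # banned
print(classify("friend@example.com", "You are a WINNER"))   # spam
print(classify("friend@example.com", "Lunch?"))             # inbox
```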
Conference Proceedings of the National Level Technical Symposium on Emerging Trends in Technology, TECHNOVISION ’10, G.N.D.E.C. Ludhiana, Punjab, India- 9th-10th April, 2010
An Investigation towards Effectiveness in Image Enhancement Process in MPSoC IJECEIAES
Image enhancement has a primitive role in the vision-based applications. It involves the processing of the input image by boosting its visualization for various applications. The primary objective is to filter the unwanted noises, clutters, sharpening or blur. The characteristics such as resolution and contrast are constructively altered to obtain an outcome of an enhanced image in the bio-medical field. The paper highlights the different techniques proposed for the digital enhancement of images. After surveying these methods that utilize Multiprocessor System-on-Chip (MPSoC), it is concluded that these methodologies have little accuracy and hence none of them are efficiently capable of enhancing the digital biomedical images.
On Text Realization Image SteganographyCSCJournals
In this paper the steganography strategy is going to be implemented but in a different way from a different scope since the important data will neither be hidden in an image nor transferred through the communication channel inside an image, but on the contrary, a well known image will be used that exists on both sides of the channel and a text message contains important data will be transmitted. With the suitable operations, we can re-mix and re-make the source image. MATLAB7 is the program where the algorithm implemented on it, where the algorithm shows high ability for achieving the task to different type and size of images. Perfect reconstruction was achieved on the receiving side. But the most interesting is that the algorithm that deals with secured image transmission transmits no images at all
A Comprehensive lossless modified compression in medical application on DICOM...IOSR Journals
ABSTRACT : In current days, Digital Imaging and Communication in Medicine (DICOM) is widely used for
viewing medical images from different modalities, distribution and storage. Image processing can be processed
by photographic, optical and electronic means, because digital methods are precise, fast and flexible, image
processing using digital computers are the most common method. Image Processing can extract information,
modify pictures to improves and change their structure (image editing, composition and image compression
etc.). Image compression is the major entities of storage system and communication which is capable of
crippling disadvantages of data transmission and image storage and also capable of reducing the data
redundancy. Medical images are require to stored for future reference of the patients and their hospital findings
hence, the medical image need to undergo the process of compression before storing it. Medical images are
much important in the field of medicine, all these Medical image compression is necessary for huge database
storage in Medical Centre and medical data transfer for the purpose of diagnosis. Presently Discrete cosine
transforms (DCT), Run Length Encoding Lossless compression technique, Wavelet transforms (DWT), are the
most usefully and wider accepted approach for the purpose of compression. On basis of based on discrete
wavelet transform we present a new DICOM based lossless image compression method. In the proposed
method, each DICOM image stored in the data set is compressed on the basis of vertically, horizontally and
diagonally compression. We analyze the results from our study of all the DICOM images in the data set using
two quality measures namely PSNR and RMSE. The performance and comparison was made over each images
stored in the set of data set of DICOM images. This work is presenting the performance comparison between
input images (without compression) and after compression results for each images in the data set using DWT
method. Further the performance of DWT method with HAAR process is compared with 2D-DWT method using
the quality metrics of PSNR & RMSE. The performance of these methods for image compression has been
simulated using MATLAB.
Keywords: JPEG, DCT, DWT, SPIHT, DICOM, VQ, Lossless Compression, Wavelet Transform, image
Compression, PSNR, RMSE
Cuda Based Performance Evaluation Of The Computational Efficiency Of The Dct ...acijjournal
Recent advances in computing such as the massively parallel GPUs (Graphical Processing Units),coupled
with the need to store and deliver large quantities of digital data especially images, has brought a number
of challenges for Computer Scientists, the research community and other stakeholders. These challenges,
such as prohibitively large costs to manipulate the digital data amongst others, have been the focus of the
research community in recent years and has led to the investigation of image compression techniques that
can achieve excellent results. One such technique is the Discrete Cosine Transform, which helps separate
an image into parts of differing frequencies and has the advantage of excellent energy-compaction.
This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model
to implement the DCT based Cordic based Loeffler algorithm for efficient image compression. The
computational efficiency is analyzed and evaluated under both the CPU and GPU. The PSNR (Peak Signal
to Noise Ratio) is used to evaluate image reconstruction quality in this paper. The results are presented
and discussed
International Journal of Engineering Research and Development (IJERD)IJERD Editor
call for paper 2012, hard copy of journal, research paper publishing, where to publish research paper,
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
Compressed Medical Image Transfer in Frequency DomainCSCJournals
A common approach to the medical image compression algorithm begins by separating the region of interests from the background of the medical images and then lossless and lossy compression schemes are applied on the ROI part and background respectively. The compressed files (ROI and background) are now transmitted through different media of communications (local host, Intranet and Internet) between the server and clients. In this work, a medical image transfer coding scheme based on lossless Haar wavelet transforms method is proposed. At first, the proposed scheme is tested on Intranet (for both RoI and background) in order to compare its results with Internet tests. An adaptive quantization algorithm is used to apply on quasi lossless ROI wavelet coefficients while a uniform quantization is used to apply on lossy background wavelet coefficients. Finally, the retained quantization indices are entropy encoded with an optimal variable coding algorithm. The test results have indicated that the performance of the proposed MITC via Intranet is much better than via Internet in terms of transferring time, while the quality of the reconstructed medical image remains constant despite the medium of communication. For best adopted parameters, a compressed medical image file (760 KB „³ 19.38 KB) is transmitted through Internet (bandwidth= 1024 kbps) with transfer time = 0.156 s while the uncompressed file is sent with transfer time = 6.192 s.
A hierarchical RCNN for vehicle and vehicle license plate detection and recog...IJECEIAES
Vehicle and vehicle license detection obtained incredible achievements during recent years that are also popularly used in real traffic scenarios, such as intelligent traffic monitoring systems, auto parking systems, and vehicle services. Computer vision attracted much attention in vehicle and vehicle license detection, benefit from image processing and machine learning technologies. However, the existing methods still have some issues with vehicle and vehicle license plate recognition, especially in a complex environment. In this paper, we propose a multivehicle detection and license plate recognition system based on a hierarchical region convolutional neural network (RCNN). Firstly, a higher level of RCNN is employed to extract vehicles from the original images or video frames. Secondly, the regions of the detected vehicles are input to a lower level (smaller) RCNN to detect the license plate. Thirdly, the detected license plate is split into single numbers. Finally, the individual numbers are recognized by an even smaller RCNN. The experiments on the real traffic database validated the proposed method. Compared with the commonly used all-in-one deep learning structure, the proposed hierarchical method deals with the license plate recognition task in multiple levels for sub-tasks, which enables the modification of network size and structure according to the complexity of sub-tasks. Therefore, the computation load is reduced.
Novel hybrid framework for image compression for supportive hardware design o...IJECEIAES
Performing the image compression over the resource constrained hardware is quite a challenging task. Although, there has been various approaches being carried out towards image compression considering the hardware aspect of it, but still there are problems associated with the memory acceleration associated with the entire operation that downgrade the performance of the hardware device. Therefore, the proposed approach presents a cost effective image compression mechanism which offers lossless compression using a unique combination of the non-linear filtering, segmentation, contour detection, followed by the optimization. The compression mechanism adapts analytical approach for significant image compression. The execution of the compression mechanism yields faster response time, reduced mean square error, improved signal quality and significant compression ratio performance.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR...IAEME Publication
In this paper, we implemented a new model of image compression and decompression method to search the aimed image based on the robust image block variance estimation. Many methods of image compression have been proposed in the literature to minimize the error rate and compression ratio. For encoding the medium type of images, traditional models use hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels. In this proposed work, we have implemented block estimation and image distortion rate to optimize the compression ration and to minimize the error rate. Experimental results show that proposed model gives a high compression rate and less rate compared to traditional models.
A CASE STUDY OF INNOVATION OF AN INFORMATION COMMUNICATION SYSTEM AND UPGRADE...ijaia
In this paper, a case study is analyzed. This case study is about an upgrade of an industry communication system developed by following Frascati research guidelines. The knowledge Base (KB) of the industry is gained by means of different tools that are able to provide data and information having different formats and structures into an unique bus system connected to a Big Data. The initial part of the research is focused on the implementation of strategic tools, which can able to upgrade the KB. The second part of the proposed study is related to the implementation of innovative algorithms based on a KNIME (Konstanz Information Miner) Gradient Boosted Trees workflow processing data of the communication system which travel into an Enterprise Service Bus (ESB) infrastructure. The goal of the paper is to prove that all the new KB collected into a Cassandra big data system could be processed through the ESB by predictive algorithms solving possible conflicts between hardware and software. The conflicts are due to the integration of different database technologies and data structures. In order to check the outputs of the Gradient Boosted Trees algorithm an experimental dataset suitable for machine learning testing has been tested. The test has been performed on a prototype network system modeling a part of the whole communication system. The paper shows how to validate industrial research by following a complete design and development of a whole communication system network improving business intelligence (BI).
Dataset Pre-Processing and Artificial Augmentation, Network Architecture and ...INFOGAIN PUBLICATION
Training a Convolutional Neural Network (CNN) based classifier is dependent on a large number of factors. These factors involve tasks such as aggregation of apt dataset, arriving at a suitable CNN network, processing of the dataset, and selecting the training parameters to arrive at the desired classification results. This review includes pre-processing techniques and dataset augmentation techniques used in various CNN based classification researches. In many classification problems, it is usually observed that the quality of dataset is responsible for proper training of CNN network, and this quality is judged on the basis of variations in data for every class. It is not usual to find such a pre-made dataset due to many natural concerns. Also it is recommended to have a large dataset, which is again not usually made available directly as a dataset. In some cases, the noise present in the dataset may not prove useful for training, while in others, researchers prefer to add noise to certain images to make the network less vulnerable to unwanted variations. Hence, researchers use artificial digital imaging techniques to derive variations in the dataset and clear or add noise. Thus, the presented paper accumulates state-of-the-art works that used the pre-processing and artificial augmentation of dataset before training. The next part to data augmentation is training, which includes proper selection of several parameters and a suitable CNN architecture. This paper also includes such network characteristics, dataset characteristics and training methodologies used in biomedical imaging, vision modules of autonomous driverless cars, and a few general vision based applications.
Color image compression based on spatial and magnitude signal decomposition IJECEIAES
In this paper, a simple color image compression system has been proposed using image signal decomposition. Where, the RGB image color band is converted to the less correlated YUV color model and the pixel value (magnitude) in each band is decomposed into 2-values; most and least significant. According to the importance of the most significant value (MSV) that influenced by any simply modification happened, an adaptive lossless image compression system is proposed using bit plane (BP) slicing, delta pulse code modulation (Delta PCM), adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV), it is based on an adaptive, error bounded coding system, and it uses the DCT compression scheme. The performance of the developed compression system was analyzed and compared with those attained from the universal standard JPEG, and the results of applying the proposed system indicated its performance is comparable or better than that of the JPEG standards.
Security and imperceptibility improving of image steganography using pixel al...IJECEIAES
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication. The security of information should be a key priority in the secret exchange of information between two parties. In order to ensure the security of information, there are some strategies that are used, and they include steganography and cryptography. An effective digital image-steganographic method based on odd/even pixel allocation and random function to increase the security and imperceptibility has been improved. This lately developed outline has been verified for increasing the security and imperceptibility to determine the existent problems. Huffman coding has been used to modify secret data prior embedding stage; this modified equivalent secret data that prevent the secret data from attackers to increase the secret data capacities. The main objective of our scheme is to boost the peak-signal-to-noise-ratio (PSNR) of the stego cover and stop against any attack. The size of the secret data also increases. The results confirm good PSNR values in addition of these findings confirmed the proposed method eligibility.
K-SVD: ALGORITHM FOR FINGERPRINT COMPRESSION BASED ON SPARSE REPRESENTATION ijiert bestjournal
In current years there has been an increasing interest in the study of sparse representation of
signals. Using an overcomplete glossary that contains prototype signal-atoms, signals are
described by sparse linear combinations of these atoms. Recognition of persons by means of
biometric description is an important technology in the society, because biometric identifiers
cannot be shared and they intrinsically characterize the individual’s bodily distinctiveness.
Among several biometric recognition technologies, fingerprint compression is very popular
for personal identification. One more fingerprint compression algorithm based on sparse
representation using K-SVD algorithm is introduced. In the algorithm, First we construct a
dictionary for predefined fingerprint photocopy patches. For a new given fingerprint images,
suggest its patches according to the dictionary by computing
-minimization by MP method
and then quantize and encode the representation.This paper comparesdissimilarcompression
standards like JPEG,JPEG-2000,WSQ,K-SVDetc. The paper show that this is effective
compared with several competing compression techniques particularly at high compression
ratios.
A Study on Relationship between Employee Satisfaction and Its DeterminantsIJERA Editor
Satisfaction refers to the level of fulfillment of one's needs, wants, and desires. Satisfaction depends basically upon what an individual wants from the world and what he gets. Employee satisfaction is a measure of how happy workers are with their job and working environment. Many factors affect organizational effectiveness, and one of them is employee satisfaction; effective organizations should have a culture that encourages it. Many measures support that employee satisfaction is a factor in employee motivation, employee goal achievement, and positive employee morale in the workplace. The present study therefore explains the relationship between employee satisfaction and its determinants using statistical tools.
Evaluation of Energy Contribution for Additional Installed Turbine Flow in Hy...IJERA Editor
The paper presents a methodology for analyzing the energy value of an existing hydro power plant (HPP) compared with the same HPP upgraded with additional power. The additional power can be added by increasing the installed turbine flow of the existing unit(s) or by installing new additional unit(s). The paper treats the energy evaluation of the storage HPP Spilje, which is under-installed, comparing the technical parameters, water reservoir, turbine discharge, and water inflow. The analyses are based on statistical analysis of the hydrological input parameters through a long series of inflow data with monthly distribution. The hydrological data (from 1970 until 2014) cover extremely wet and extremely dry hydrological periods, which is important for obtaining statistics of expected values with appropriate probabilities of occurrence from the monthly inflow values [1]. The results of the analysis can be applied in determining the expected hydro production, or the corresponding expected revenue from the electricity generated by the HPP. Additional turbine flow in existing units, or new unit(s), can yield increased income by avoiding spilled water and by generating additional peak energy instead of base energy.
Introduction to Mail Management System IJERA Editor
Different techniques are being developed to overcome this problem and protect the consumer, which builds trust among people and encourages them to purchase via the internet. First, one must understand the main reason for filtering email. Email is a fast and easy way to exchange messages and differs from traditional mail, and a large number of internet services require an email address. Sometimes a user would like to cancel a subscription to a site because of the large number of incoming messages from senders the user does not know, and needs a way to unsubscribe to get rid of these messages; or a user may want messages from specific people or sites to be blocked. For such problems the solution is to apply a filter to the email account and ban unwanted messages. As a last resort, you can set up your own spam blocker with your email application's built-in filter capability. Email filtering is the processing of email to organize it according to specified criteria.
A Method for Probabilistic Stability Analysis of Earth DamsIJERA Editor
This paper proposes a new probabilistic methodology for analyzing the stability of earth dams, based on the First Order Reliability Method from structural reliability theory. Unlike other methodologies in the literature, the proposed method interprets the variables involved as random ones. Three results are thus provided: the structural reliability index, the probability of rupture, and the most probable values of the random variables for the occurrence of a dam break. To illustrate the method, real data from a cross section of the Left Bank Earthfill Dam of the Itaipu Hydroelectric Power Plant (IHPP), located in the city of Foz do Iguaçu, Paraná, Brazil, were used. The numerical results achieved by the proposed methodology show that the IHPP dam is currently in good structural condition, confirming that the safety procedures adopted at the Itaipu Dam may be considered appropriate. The proposed method complements the previously existing knowledge about the structural conditions, improving the process of risk management.
Seismic Vulnerability and Mitigation on Non-Engineered Traditional Buildings ...IJERA Editor
An earthquake is one of the deadliest phenomena disturbing harmonious human life; it claims large numbers of lives without notice or warning. Since its occurrence cannot be prevented, one should always be ready to learn how to live with seismic hazard and to minimize its adverse effects on the built environment. The magnitude-6.1 earthquake of 21st September 2009 caused huge damage in the eastern part of Bhutan and adjoining areas, including Indian states and Bangladesh. This incident exposed the seismic vulnerability of, and raised concerns over the safety of, the built environment in Bhutan, particularly the traditional stone and timber houses, some of which were badly affected during the earthquake. This paper presents an overview of the damage observed in the region, its causes, and the performance of buildings during the earthquake. It also describes how the seismic performance of those structures can be improved, from a life-safety point of view, by adopting simple low-cost modifications to existing construction practices and by material selection with alternative solutions to make buildings earthquake resistant.
Application of Rapid Bioassay Method for Assessing Its Water Purification by ...IJERA Editor
The integral toxicity of four water samples taken from various sources in urban and rural environments was evaluated, and some properties of a chemical water-purification reagent, potassium ferrate (K2FeO4), were explored. These data suggest that a test system based on bacterial luminescence can be used in practice for express evaluation of the toxicity of chemical reagents used for water purification, for selection of their effective concentrations, and for determining the optimal processing time of water samples.
Investigations in Effective Viscosity of Fluid in a Porous MediumIJERA Editor
In this work we address the quantification of Brinkman's effective viscosity, which arises due to the presence of the porous medium, and its effect of increasing or decreasing the fluid viscosity relative to the base fluid viscosity. This work is motivated mainly by the lack of consensus in the available literature on the range of the effective viscosity. To this end, this work provides a model for quantifying the effective viscosity by incorporating the porous microstructure into the volume-averaged Navier-Stokes equations. Extensive analysis and testing are provided, considering five different porous microstructures, and the effective viscosity is quantified in each case under Poiseuille flow.
Seasonal Variation of Groundwater Quality in Parts of Y.S.R and Anantapur Dis...IJERA Editor
Groundwater is used for domestic and industrial water supply and for irrigation all over the world. Groundwater quality is a function of natural processes as well as anthropogenic activities. Safe potable water is enormously essential for living, and groundwater is one of the sources for human consumption in both urban and rural areas. The study area is located in Survey of India toposheet number 57 J/3, lying between longitudes 78°00′00″ and 78°15′00″ E and latitudes 14°15′00″ and 14°30′00″ N, covering an area of 720 sq. km. Geologically, it is underlain mainly by Peninsular gneisses of Archean age followed by the Gulcheru and Vemapalli formations comprising quartzites, conglomerates, dolomites, and shales. The major geomorphic units are denudational hills, residual hills, pediments, pediplains, structural hills, and valleys. The study area experiences a semiarid climate. Physicochemical parameters, viz. pH, total hardness, calcium, chloride, total dissolved solids, and fluoride, were analyzed. Most of the parameters show values above the permissible limits in both pre- and post-monsoon samples. A moderation in water quality was observed after the monsoon season, which can be attributed to a possible dilution due to groundwater recharge. People dependent on this water may be prone to health hazards; therefore, effective measures are urgently required to enhance the quality of water in these areas.
Optimization of Test Pattern Using Genetic Algorithm for Testing SRAMIJERA Editor
An optimization of test patterns for testing a Static Random Access Memory (SRAM) using a genetic algorithm is presented. The method associates turned-on inputs with numerous nets, giving rise to test vectors that can detect stuck-at, open, and bridging faults. This setup allows unnecessary patterns to be removed, which reduces the testing time of application-dependent testing for a given fault coverage. The optimized test pattern is used as a test source for testing a circuit and identifying the faults in it; the faults covered are stuck-at, open, and bridging faults. The genetic algorithm reduces redundancy and optimizes the test pattern, resulting in reduced testing time and power consumption.
Investigation on Bridges Connection to Network Carrefour in Existing Roads in...IJERA Editor
The rapid and continuous growth of populations and vehicle numbers has produced very large traffic volumes in most big African cities. To ensure a better distribution of traffic and quick access for vehicles, a reasonable interchange can be designed that allows greater connectivity and rapid separation of vehicles at existing road-network carrefours. This paper uses Cotonou, the largest urban and economic city of the Benin Republic, as a case study. First, the paper reviews the literature on the history and development of interchange bridge systems. Based on the case study, it then introduces a typical interchange at the Godomey carrefour, explains the difficulties encountered in the engineering design, and achieves rapid separation of vehicles by designing a reasonable interchange there, so as to accumulate experience in the construction of interchanges at existing road-network carrefours. By studying this case, the research seeks to identify and expand on lessons learned from the first interchange bridge design in Benin. Using these lessons, government agencies and the engineering and construction communities could adopt reasonable structures and construction methods according to local conditions and current situations.
Renewable Energy Based Floating Power Generator (Rivers and Canals)IJERA Editor
We have developed a stand-alone floating power generator system for river and canal water streams, intended for village electrification, agricultural water pumping, bridge and street lights, and other such utilities. To our knowledge, and according to various surveys, the system is unique of its kind. Its physical structure is made of non-corrosive and unbreakable materials such as mild steel and fiberglass. It works as it rotates in the water flow and does not require any external electric grid power for its operation. As the water flows, the specially designed blades rotate in the direction of the flow and consistent power is generated; this power can be used directly or stored in a battery and utilized as and when required. It is a floating pico turbine requiring no permanent installation, producing no pollution, and remaining environmentally friendly. The observations taken from the site are tabulated and the results discussed accordingly.
Development of 30 Watt Solar Bag with Wireless Power Transmission UnitIJERA Editor
This work describes the development of a 30-watt solar bag with a wireless power transmission unit, using photovoltaic cells to charge tablets, mobile phones, smartphones, digital cameras, and all types of batteries (including 12-volt acid and dry cells for emergency situations) used in such appliances. The phenomenon we have implemented to transmit wireless power is resonant coupling. The battery backup of the solar bag is around 2 hours with a load estimation of around 30 watts. The product is very user friendly, as it allows power transmission within a range of 12 inches; the overall logic is implemented using a PIC microcontroller (model 16F877A) to enhance its versatility.
E-Commerce: Study, Development and PrototypingIJERA Editor
This project studied the development of electronic commerce in Paraguay, tracking the laws and regulatory decrees formulated by the technical agencies. As a result, a prototype was developed that reproduces the steps of a web-based e-commerce transaction using fictitious credit cards as the payment method. A digital certificate created specifically for testing purposes was used, establishing secure connections via the HTTPS protocol with the digital certificate issued by the Enforcement Authority and the Ministry of Industry and Trade. The results were obtained by measuring the database log and matching the planned transaction rules with the results obtained.
A Computational Investigation of Flow Structure Within a Sinuous DuctIJERA Editor
In the present investigation the distribution of mean velocity is studied experimentally in three constant-area rectangular curved ducts with an aspect ratio of 2.4: a C-shaped, an S-shaped, and a DS-shaped duct. The experiment is carried out at a mass-averaged mean velocity of 40 m/s for all the ducts. The velocity distribution for the C-duct shows the bulk flow shifting from the outer wall to the inner wall along the flow passage; for the S-duct, the bulk flow shifts from the outer wall to the inner wall in the first half and from the inner wall to the outer wall in the second half, and both patterns are very distinct. Due to the imbalance between centrifugal force and the radial pressure gradient, secondary motions in the form of counter-rotating vortices are generated within both curved ducts. For the DS-duct, the velocity distribution shows the bulk flow shifting from the inner wall to the outer wall in the first and third bends, and from the outer wall to the inner wall in the second and fourth bends, again very distinctly. The flow at the end of the DS-duct is almost uniform owing to the absence of secondary motion. The experimental results were then validated numerically with the help of Fluent, which shows good agreement between the experimental and predicted results for all the ducts.
Seismic Analysis and Optimization of RC Elevated Water Tank Using Various Sta...IJERA Editor
As known from very upsetting experiences, poorly designed elevated water tanks have been heavily damaged or have collapsed during earthquakes. This may be due to a lack of knowledge regarding the behaviour of the tank's supporting system, and also to improper selection of the staging geometry. For certain proportions of the tank and the structure, sloshing of the water during an earthquake may be one of the dominant factors. Dynamic analysis of a liquid-containing tank is complex, involving fluid-structure interaction. In this paper, the seismic behaviour of a circular elevated water tank of a specific capacity is studied for various staging arrangements in plan, variations in the number of periphery columns, and variations in the number of staging levels in elevation. The two-mass idealization suggested by the Gujarat State Disaster Management Authority is considered here. Under earthquake loads, a complicated pattern of stresses is generated in the tanks. In total, 36 combinations were analysed with SAP2000 using the Response Spectrum Method (RSM), and the results are presented. It is observed that increasing the number of columns does not assure improvement of the structural response. A radial arrangement with six staging levels is found to be the best for the number of columns used. To suggest a number of columns with a suitable diameter, cost optimization is carried out for the radial staging arrangement with six staging levels, considering the critical direction of seismic force and the quantities of concrete and steel required. It is found that eight columns with an optimized diameter of 300 mm give a lower cost than six, ten, or twelve.
Analytical and Experimental Studieson Fracture Behaviour of Foundry Sand Conc...IJERA Editor
Concrete is widely used as a vital material in the construction world. Industrial waste such as foundry sand can partially substitute for natural sand. Foundry sand is not only an economical material but also improves the properties of concrete, and it has emerged as a construction material in its own right. The concrete studied here contains around 30% or 40% foundry sand by mass of the total sand content. It improves workability, minimizes cracking due to thermal and drying shrinkage, and enhances durability against reinforcement corrosion, sulphate attack, and alkali-silica expansion. Fracture mechanics is the field of mechanics concerned with the formation of cracks in materials; it uses methods of analytical solid mechanics to calculate the driving force on a crack and methods of experimental solid mechanics to characterize a material's resistance to fracture. The J-integral and the critical stress intensity factor are the fracture parameters, and the parameters calculated in this study are the stress intensity factor and the critical J-integral, obtained using the three-point bending test. The study is carried out on beams of grade M20 with 0%, 30%, and 40% foundry sand. Tests are conducted on normal beams and on pre-cracked beams having a notch-to-depth ratio (a/W) of 0.2. The study concludes that the critical J-integral and the stress intensity factor increase when comparing the 0%, 30%, and 40% foundry sand concretes.
Modelling Framework of a Neural Object RecognitionIJERA Editor
In many industrial, medical, and scientific image processing applications, various feature and pattern recognition techniques are used to match specific features in an image with a known template. Despite the capabilities of these techniques, some applications require simultaneous analysis of multiple, complex, and irregular features within an image, as in semiconductor wafer inspection. In wafer inspection, discovered defects are often complex and irregular and demand more human-like inspection techniques to recognize the irregularities. By incorporating neural network techniques, such image processing systems can be trained on large numbers of images until the system eventually learns to recognize irregularities. The aim of this project is to develop a framework for a machine-learning system that can classify objects of different categories. The framework utilizes MATLAB toolboxes such as the Computer Vision Toolbox and the Neural Network Toolbox.
Role of the Biochemistry Labs in Promoting the Health Care Services for the I...IJERA Editor
Health care in the State of Kuwait depends to a great extent on the biochemical and clinical labs attached to each hospital. The data obtained from these laboratories facilitate accurate diagnosis of disease, which has a positive impact on the selection of appropriate treatment for patients in general and for diabetics specifically.
The main objective of this research was to build a profile of lab analyses and a database for a comprehensive system of integrated activities to raise the standard of health care for diabetic patients in Kuwait. The study revealed the burden of admitted diabetic cases on the blood chemistry laboratory at Sabah Hospital (in relation to length of stay and total number of lab requests). The aim was fulfilled by designing a model of the biochemical tests for diabetics, filling in forms from real patient data, and completing and analyzing the results electronically.
The study showed the importance of biochemical and clinical labs, since they act as the link for patient information at the secondary health care level.
Analysis of Seed Proteins in Groundnut Cultivars (Arachis hypogaea L.)IJERA Editor
The seed protein contents and protein banding patterns were studied in commonly cultivated groundnut cultivars. The cultivars ICGV00351, TMV-7, CO-4, CO-6, and TG-374 were used for quantitative and qualitative analysis of seed proteins. The protein content varied among the different varieties of groundnut: the maximum was observed in CO-6, followed by CO-4, TMV-7, ICGV00351, and TG-374, with only slight differences among the cultivars. All five cultivars were subjected to SDS-PAGE analysis. The results revealed variation in the total number of bands and in MW-Rf values. The maximum number of MW-Rf values was noticed in TG-374 and ICGV00351, and the minimum, 11, was recorded in CO-6 and TMV-7.
Reversible Data Hiding Using Contrast Enhancement ApproachCSCJournals
Reversible data hiding is a technique used to hide data inside an object. It ensures security and protects the integrity of the object from intended and unintended modification. Digital watermarking is a key ingredient of multimedia protection; however, most existing techniques distort the original content as a side effect of image protection. To overcome such distortion, reversible data embedding has recently been introduced and is growing rapidly: the original content can be completely restored after removal of the watermark, which makes it very practical for protecting legal, medical, or other important imagery. In this paper a novel reversible (lossless) data hiding technique is proposed. It is based on histogram modification to produce extra space for embedding, and the redundancy in digital images is exploited to achieve a very high embedding capacity. The method has been applied to various standard images, and the experimental results demonstrate a promising outcome: the proposed technique achieves satisfactory and stable performance in both embedding capacity and visual quality. Its capacity is up to 129K bits with PSNR between 42 and 45 dB, which is better than most existing reversible data hiding algorithms.
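One common histogram-modification scheme of the kind this abstract describes is peak/zero-bin shifting; the sketch below is that generic scheme, not necessarily the paper's exact variant.

```python
import numpy as np

def embed_histogram_shift(img, bits):
    """Reversible embedding by peak/zero-bin histogram shifting (8-bit grayscale).

    Assumes an empty (or nearly empty) bin exists above the peak bin; the
    (peak, zero) pair must be sent to the decoder for exact recovery.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))                        # capacity = hist[peak]
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))  # rarest bin above peak
    out = img.astype(np.int16)
    out[(out > peak) & (out < zero)] += 1              # free the slot at peak+1
    flat = out.ravel()
    positions = np.flatnonzero(flat == peak)           # embeddable pixels
    for pos, bit in zip(positions, bits):              # '1' -> peak+1, '0' -> stay
        flat[pos] += bit
    return flat.reshape(img.shape).astype(np.uint8), peak, zero
```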
Ijri ece-01-01 joint data hiding and compression based on saliency and smvqIjripublishers Ijri
Global interconnect planning becomes a challenge as semiconductor technology continues to scale. Because of increasing wire resistance and higher capacitive coupling at smaller feature sizes, the delay of global interconnects becomes large compared with the delay of a logic gate, introducing a huge performance gap that needs to be resolved. A novel equalized global link architecture and driver-receiver co-design flow are proposed for high-speed and low-energy on-chip communication utilizing a continuous-time linear equalizer (CTLE). The proposed global link is analyzed using a linear system method, and a formula for the CTLE eye opening is derived to provide high-level design guidelines and insights. Compared with a separate driver-receiver design flow, over 50% energy reduction is observed.
REGION OF INTEREST BASED COMPRESSION OF MEDICAL IMAGE USING DISCRETE WAVELET ...ijcsa
Image compression is utilized to reduce the size of a file without degrading the quality of the image to an objectionable level. The reduction in file size permits more images to be stored in a given amount of space and also minimizes the time necessary for images to be transferred. There are different ways of compressing image files. On the Internet, the two most common compressed graphic image formats are JPEG and GIF. JPEG is more often used for photographs, while GIF is commonly used for logos, symbols, and icons, though it is not preferred for photographs because it uses only 256 colors. Other methods for image compression include the use of fractals and wavelets; these methods have not gained widespread acceptance for use on the Internet. Compressing an image is markedly different from compressing raw binary data. General-purpose compression techniques can be used on images, but the result is less than optimal, because images have statistical properties that can be exploited by encoders specifically designed for them, and because some of the finer details of an image can be sacrificed for the sake of saving a little more bandwidth or storage space. In this paper, compression is performed on medical images, and the techniques used are the discrete wavelet transform and the discrete cosine transform, which compress the data efficiently without reducing the quality of the image.
Digital watermarking by utilizing the properties of selforganization map bas...IJECEIAES
Information security is one of the most important branches concerned with maintaining the confidentiality and reliability of data and of the medium over which it is transmitted. Digital watermarking is one of the common techniques in this field, and it is developing greatly and rapidly due to its importance for reliability and security. Many modern watermarking systems use the self-organizing map (SOM), which is safer than other algorithms because an unauthorized user cannot see the result of the SOM's training. Our method presents a semi-fragile watermark in the spatial domain using the least significant bit (LSB), and relies on the most significant bits (MSB) when the taken values equal 2 or 4 bits, depending on the characteristics of the SOM, through the so-called best matching unit (BMU), which determines the best location for concealment. The results show the ability of the proposed method to maintain the quality of the host while retrieving the hidden data, whether it is a binary image or a secret message.
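The spatial LSB step can be sketched as below; the SOM/BMU-driven choice of embedding locations described in the abstract is replaced by a plain raster scan purely for illustration.

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """Embed a bit string into the least significant bits of a grayscale cover.

    A stand-in for the SOM-driven scheme above: locations are a simple raster
    scan here instead of positions chosen by the best matching unit (BMU).
    """
    flat = cover.ravel().copy()
    if len(message_bits) > flat.size:
        raise ValueError("message longer than cover capacity")
    for i, bit in enumerate(message_bits):
        flat[i] = (flat[i] & 0xFE) | bit   # overwrite the LSB
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return [int(p & 1) for p in stego.ravel()[:n_bits]]
```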
An Enhanced trusted Image Storing and Retrieval Framework in Cloud Data Stora...IJERA Editor
Today's image capturing technologies produce high-definition images that are heavy on memory, which has prompted many users to move to cloud storage. Cloud computing is a service-based technology, and one of its services is Data Storage as a Service (DSaaS). Two parties are involved in this service, the cloud service provider (CSP) and the user: the user stores vital data on the cloud via the internet (for example, Dropbox). A bigger question, however, concerns the user's trust in the CSP, since the data are stored on remote devices about which the user has no clue; in such a situation the CSP has to establish trustworthiness with the customer. In this paper we address the mentioned insecurity issue with a well-defined Trusted Image Storing and Retrieval (TISR) framework using compressed sensing methodology.
Design of Image Compression Algorithm using MATLABIJEEE
This paper gives an idea of recent developments and improvements in the field of image security. Images are used in many applications, and image security is provided using image encryption and authentication. Image encryption techniques scramble the pixels of the image and decrease the correlation among them, so that the encrypted image cannot be accessed by an unauthorized user.
Unified Approach With Neural Network for Authentication, Security and Compres...CSCJournals
The present demands of scientific and social life have forced image-processing-based applications into tremendous growth. This growth has at the same time given researchers a number of challenges in meeting the desired objectives of users and of solution providers. Among the various challenges, the most dominant areas are: reduction of the memory space required for storage, or of the transmission time from one location to another; protection of image contents to maintain privacy; and mechanisms to identify malicious modification, if any, either in storage or in the transmission channel. Even though a number of methods have been proposed by various researchers and exist as solutions, questions remain open in terms of quality, cost, and complexity. In this paper we propose a neural network based concept that achieves compression, protection, and authentication all together, using the network's universal approximation ability through learning, its one-way property, and its one-to-one mapping characteristics, respectively. With the proposed methods, not only can we authenticate the image, but the positions of any malicious activity in the image can also be located with high precision. The proposed methods are very efficient in performance and carry the features of simplicity and cost effectiveness.
Secure IoT Systems Monitor Framework using Probabilistic Image EncryptionIJAEMSJORNAL
In recent years, the modeling of human behaviors and activity patterns for recognition or detection of special events has attracted considerable research interest. Various methods abound for building intelligent vision systems aimed at understanding a scene and making correct semantic inferences from the observed dynamics of moving targets. Many systems include detection, storage of video information, and human-computer interfaces. Here we present not only an update that expands previous similar surveys but also an emphasis on contextual abnormal human activity detection, especially in video surveillance applications. The main purpose of this survey is to identify existing methods extensively and to characterize the literature in a manner that brings key challenges to attention.
Wavelet based Image Coding Schemes: A Recent Survey ijsc
A variety of new and powerful algorithms have been developed for image compression over the years. Among them the wavelet-based image compression schemes have gained much popularity due to their overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG compression and multiresolution character which leads to superior energy compaction with high quality reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
Efficient Image Compression Technique using Clustering and Random Permutation
Jitendra Meghwal*, Akash Mittal**, Yogendra Kumar Jain***
*, **, *** Department of Computer Science, Samrat Ashok Technological Institute, Vidisha, M.P., India
Int. Journal of Engineering Research and Applications, ISSN: 2248-9622, Vol. 6, Issue 1 (Part 1), January 2016, pp. 18-23, www.ijera.com
ABSTRACT
Multimedia data compression is challenging for any compression technique, due to the possibility of data loss and the large amount of storage the data require. Minimizing storage space and transmitting these data properly both require compression. In this paper we propose a block-based DWT image compression technique using a genetic algorithm and an HCC code matrix. The HCC code matrix separates the coefficients into two sets, redundant and non-redundant, generated from similar patterns of block coefficients. The similar block coefficients are generated by particle swarm optimization, which selects the optimal blocks of the DWT transform function. For experimental purposes we used standard images such as Lena, Barbara, and the cameraman image, each with a resolution of 256×256. The source of the images is Google.
Keywords - DWT, HCC, Transform function, HVS.
I. INTRODUCTION
The computer is becoming more and more powerful day by day, and as a result the use of digital images is increasing rapidly. Along with this increasing use comes the serious issue of storing and transferring the huge volume of data representing the images, because uncompressed multimedia (graphics, audio, and video) data require considerable storage capacity and transmission bandwidth. Despite rapid progress in mass storage density, processor speed, and the performance of digital communication systems, the demand for data storage capacity and data transmission bandwidth continues to exceed the capabilities of available technologies [13]. Besides, the recent growth of data-intensive multimedia-based web applications has put much pressure on researchers to find ways of using images in web applications more effectively. Internet teleconferencing, High Definition Television (HDTV), satellite communications, and digital storage of movies are not feasible without a high degree of compression.
While the Compression-then-Encryption (CTE) paradigm meets the requirements of many secure transmission scenarios, the order of applying compression and encryption needs to be reversed in some other situations. As the content owner, Alice is always interested in protecting the privacy of the image data through encryption. Nevertheless, Alice has no incentive to compress her data, and hence will not use her limited computational resources to run a compression algorithm before encrypting the data. This is especially true when Alice uses a resource-deprived mobile device. To limit the effects of data loss that may occur on the communications channel, the wavelet-transformed image data are partitioned into segments, each loosely corresponding to a different region of the image. Each segment is compressed independently, so that the effects of data loss or corruption are limited to the affected segment.
An image is in fact a kind of redundant data, i.e., it contains the same information several times over from a certain point of view [6]. By using data compression techniques, it is possible to remove some of the redundant information contained in images. Image compression minimizes the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space, and it also reduces the time necessary for images to be sent over the Internet or downloaded from web pages. The two elementary components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source image; irrelevancy reduction omits the parts of the signal that are not noticed by the signal receiver, namely the Human Visual System (HVS). Image compression is an application of data compression that encodes the original image with few bits; its objective is to reduce the redundancy of the image and to store or transmit the data in an efficient form. Section II introduces the principles of image compression, Section III discusses the wavelet transform method, Section IV presents the proposed method, Section V reports the comparative results, and Section VI gives the conclusion and future scope.
II. PRINCIPLES OF IMAGE COMPRESSION
Communication systems nowadays rely greatly on image compression algorithms to send images and video from one place to another. Every day, a massive amount of information is stored, processed, and transmitted digitally. The arrival of the biometric identity concept demands that governments around the world keep profiles of their citizens, that businesses keep profiles of their customers, and that this information be produced over the internet. Image compression addresses the problem of pressing large amounts of digital information into smaller packets (by reducing the size of image and data files) that can be moved quickly along an electronic medium before communication can take place effectively. Compressing images results in reduced transmission time and decreased storage requirements [4], whereas uncompressed multimedia files require considerable storage capacity and transmission bandwidth. A good compression system should be able to reconstruct the compressed image source, or an approximation of it, with good quality. Image compression is an important branch of image processing that is still a very active research field and attractive to industry.
The basic components of a data compression system are illustrated in Figure 1. Two different encoding approaches operate within a compression system: predictive coding and transform coding. Predictive coding works directly on the input image pixel values as they are read into the system; the spatial (space-domain) encoder predicts the value of the present sample from the values of those processed previously, so this type of coding has to find the relationship that governs neighboring pixels and decide on the best way of exploiting it before the coding process starts. Transform coding, on the other hand, works in the frequency domain: the encoder first converts the pixels from the space domain into the frequency domain via a transformation function, producing a set of spectral coefficients that are then suitably coded and transmitted. The background research in this paper focuses on transform coding, since the BinDCT algorithm itself also transforms the raw image from the space domain into the spatial-frequency domain. The decoder on the other side must perform an inverse transform before the reconstructed image can be displayed. The coding technique implemented in this project is transform coding using a multiplier-less approximation of the DCT algorithms.
Figure 1: Basic Data Compression System.
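To make the distinction between the two encoder families concrete, here is a small illustrative sketch (not from the paper) contrasting them on a 1-D signal; SciPy's standard DCT stands in for the BinDCT approximation mentioned above.

```python
import numpy as np
from scipy.fft import dct, idct

x = np.array([100, 102, 101, 104, 108, 110, 111, 109], dtype=np.float64)

# Predictive (spatial-domain) coding: transmit the first sample plus deltas.
deltas = np.diff(x)                    # small residuals when neighbors correlate
recon_pred = np.concatenate(([x[0]], x[0] + np.cumsum(deltas)))  # exact recovery

# Transform coding: concentrate energy into a few DCT coefficients,
# drop the small ones (crude irrelevancy reduction), and invert.
c = dct(x, norm='ortho')
c[np.abs(c) < 2.0] = 0
recon_tc = idct(c, norm='ortho')

print(deltas)                  # [ 2. -1.  3.  4.  2.  1. -2.]
print(np.round(recon_tc, 1))   # close to x, with small quantization error
```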
Feature extraction plays an important role in authentication mechanisms based on face detection and pattern detection. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information [7]. This representation can be found by maximizing a criterion, or it can be a pre-defined representation. Usually, a face image is represented by a high-dimensional vector containing pixel values (a holistic representation) or by a set of vectors where each vector summarizes the underlying content of a local region via a high-level transformation (a local representation). In this section we distinguish between holistic and local feature extraction and differentiate them qualitatively rather than quantitatively. It is argued that a global feature representation based on local feature analysis should be preferred over a bag-of-features approach. The problems in current feature extraction techniques and their reliance on strict alignment are discussed. Finally, we introduce face-GLOH signatures, which are invariant with respect to scale, translation, and rotation and therefore do not require properly aligned images. The resulting dimensionality of the vector is also low compared to other commonly used local features such as Gabor features or Local Binary Pattern Histograms (LBP), so learning-based methods can also benefit from it [9].
III. WAVELET TRANSFORM METHOD
Wavelet transforms have been one of the important signal processing developments of the last decade, especially for applications such as time-frequency analysis, data compression, segmentation, and vision [12]. During the past decade, several efficient implementations of wavelet transforms have been derived. The theory of wavelets has roots in quantum mechanics and in the theory of functions, though a unifying framework is a recent occurrence. Wavelet analysis is performed using a prototype function called a wavelet. Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent an arbitrary function f(t) as a superposition of a set of such wavelets or basis functions [11]. These basis functions, or baby wavelets, are obtained from a single prototype wavelet called the mother wavelet by dilations or contractions (scaling) and translations (shifts). Efficient implementations of the wavelet transform have been derived based on the Fast Fourier Transform and on short-length fast-running FIR algorithms, in order to reduce the computational complexity per computed coefficient. All wavelet packet transforms are calculated in a similar way; therefore we concentrate initially on the Haar wavelet packet transform, which is the easiest to describe. The Haar wavelet packet transform is usually referred to as the Walsh transform [9].
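As an illustration of the Haar building block, a minimal sketch follows: one level produces pairwise scaled sums (approximation) and differences (detail), and the full packet tree recurses on both branches. This is a generic textbook formulation, not the authors' code.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform on an even-length signal."""
    x = np.asarray(x, dtype=np.float64)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return s, d

def haar_packet(x, levels):
    """Full wavelet-packet tree: recurse on BOTH branches at every level."""
    nodes = [np.asarray(x, dtype=np.float64)]
    for _ in range(levels):
        nodes = [half for n in nodes for half in haar_step(n)]
    return nodes   # 2**levels leaf subbands
```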
DISCRETE WAVELET TRANSFORM
Image data enter at the left of the processing chain, and a DWT coefficient buffer at the right stores the wavelet coefficients computed by the DWT stage. The flow produces DWT coefficients for a single segment at a time. Once the coefficients corresponding to a segment have been computed and placed in the buffer, the bit-plane encoder (BPE) stage can begin encoding that segment; the BPE stage relies on all of the coefficients in a segment being available simultaneously [6]. The calculation of a DWT coefficient depends on image pixels in a limited neighborhood. Consequently, when images are encoded using more than one segment, it is often possible to implement the DWT stage so that DWT coefficients for a segment are produced without requiring input of a complete image frame. For example, in a push-broom imaging application, DWT coefficients may be produced in a pipelined fashion as new image scan lines are produced by the imager.
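A minimal PyWavelets-based sketch of this segment-wise operation is given below; the segment height of 64 rows, the Haar wavelet, and the two decomposition levels are illustrative assumptions, and boundary effects between segments are ignored.

```python
import numpy as np
import pywt

def dwt_segments(image, seg_rows=64, wavelet="haar", level=2):
    """Yield 2-D DWT coefficients one image segment (strip of rows) at a time,
    so an encoder can start before the full frame has arrived."""
    for top in range(0, image.shape[0], seg_rows):
        segment = image[top:top + seg_rows, :].astype(np.float64)
        # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
        yield pywt.wavedec2(segment, wavelet=wavelet, level=level)
```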
WAVELET PACKET IMAGE CODING
One practical way of constructing wavelet bases is to iterate a two-channel perfect-reconstruction filter bank over the low-pass branch. This results in a dyadic division of the frequency axis that works well for most signals we encounter in practice. However, for signals with strong stationary high-pass components, it is often argued that a wavelet basis will not perform well, and improvements can generally be found by searching over all possible binary trees for a particular filter set, instead of using the fixed wavelet tree [19]. Wavelet packet transforms are thus more general than the wavelet transform, and they offer a rich library of bases from which the best one can be chosen for a given signal with a given cost function. The advantage of the wavelet packet framework is its universality in adapting the transform to a signal without training and without assuming any statistical property of the signal.
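The sketch below illustrates the idea with PyWavelets: a full 2-D packet tree is built and each leaf is scored with a simple sparsity cost, the kind of cost function a best-basis search would minimize. The threshold of 1.0 is an arbitrary illustrative choice.

```python
import numpy as np
import pywt

def packet_costs(image, wavelet="haar", level=2):
    """Decompose with a full 2-D wavelet-packet tree and score every leaf.

    A best-basis search would keep a subband split only when it lowers the
    total cost; here the cost is simply the number of large coefficients.
    """
    wp = pywt.WaveletPacket2D(data=image.astype(np.float64),
                              wavelet=wavelet, maxlevel=level)
    costs = {}
    for node in wp.get_level(level, order="natural"):
        costs[node.path] = int(np.count_nonzero(np.abs(node.data) > 1.0))
    return costs
```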
IV. PROPOSED METHOD
This section describes the proposed algorithm for image compression using the wavelet transform and particle swarm optimization (PSO). In this algorithm, PSO is used to separate packets with a redundant structure from the rest: similar blocks are grouped together, both the redundant and the non-redundant sets are supplied to the HCC matrix, and finally the image is compressed. The search for redundant-structure and non-redundant-structure blocks uses a fitness constraint function; structures that satisfy the given constraints are called non-redundant, and the others redundant. The proposed algorithm is thus a combination of an integer wavelet packet transform function, particle swarm optimization, and the HCC matrix. The process divides into three stages: the first generates the packets, the second optimizes the packets, and the third applies the HCC code matrix for compression. A clustering algorithm is used for the grouping of packets; it groups similar packets together and separates dissimilar ones.
Figure 2: Proposed model of image compression.
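The paper gives no pseudocode for the PSO search or the HCC matrix, so the following is only a rough sketch, under the assumption that "redundant" blocks are those whose DWT coefficients match a previously seen reference block within a tolerance (fitness 1 in the paper's terms, fitness 0 otherwise); the PSO-driven selection and the final HCC coding are omitted.

```python
import numpy as np
import pywt

def group_blocks(image, bs=8, tol=1e-3):
    """Split an image into redundant and non-redundant DWT blocks.

    Redundant blocks are stored as an index into the reference list only;
    non-redundant blocks keep their full coefficient vector.
    """
    refs, redundant, nonredundant = [], [], []
    h, w = image.shape
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            block = image[r:r + bs, c:c + bs].astype(np.float64)
            cA, (cH, cV, cD) = pywt.dwt2(block, "haar")
            coeffs = np.hstack([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()])
            match = next((i for i, ref in enumerate(refs)
                          if np.max(np.abs(ref - coeffs)) < tol), None)
            if match is None:
                refs.append(coeffs)
                nonredundant.append(((r, c), coeffs))
            else:
                redundant.append(((r, c), match))   # store index only
    return nonredundant, redundant
```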
V. EXPERIMENTAL RESULT ANALYSIS
The experimental results are partitioned among three methods: JPEG, PCRP, and our proposed technique. There are several lossless image compression algorithms, such as Lossless JPEG, JPEG 2000, PNG, CALIC, and JPEG-LS; JPEG-LS has excellent coding and the best possible compression efficiency.
Figure 3: Compression of the data set, showing the cameraman compressed image as output.
Figure 4: Compression of the data set, showing the child compressed image as output.
Data Set          Technique   PSNR (dB)   Compression Ratio   Elapsed Time (sec)
Cameraman Image   JPEG        13.56047    0.18335             10.871784
                  PCRP        21.56047    1.4834              11.151017
                  PROPOSED    23.56047    1.8834              6.728327
Image 11          JPEG        8.768957    0.42525             11.821748
                  PCRP        16.768957   1.7253              12.584048
                  PROPOSED    18.768951   2.1253              14.682364
Image 22          JPEG        6.029492    0.70117             13.358795
                  PCRP        14.029492   2.0012              12.565998
                  PROPOSED    16.029492   2.4012              12.539689
Table 1: Comparative PSNR, compression ratio, and elapsed time of image compression using various techniques.
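For reference, the two evaluation parameters used above can be computed as follows. This is the standard formulation, not code from the paper; the paper also does not state its ratio convention, so the usual original-to-compressed definition is assumed here.

```python
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def compression_ratio(original_bits, compressed_bits):
    """CR = uncompressed size / compressed size (values > 1 mean savings)."""
    return original_bits / compressed_bits
```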
Figure 5: Performance on the cameraman data set: counts of data and rates of PSNR, compression ratio, and elapsed time.
Figure 6: Performance on the image data set: counts of data and rates of PSNR, compression ratio, and elapsed time.
VI. CONCLUSION AND FUTURE WORK
In this paper we proposed a cluster-based image compression technique. The cluster-based technique uses the k-means clustering algorithm for the grouping of packets. In the particle swarm optimization process, the fitness constraint is defined according to the difference between a packet and the structure reference packet: if the difference is zero the packet is assigned the value 1, otherwise it is assigned the value 0. Similar and dissimilar packets are collected in two different units and passed through the HCC matrix, after which the image is compressed. The proposed algorithm was implemented in MATLAB version 7.14.0. The algorithm uses the Haar wavelet family for the decomposition of the image; the decomposed transform function passes through coding and generates the packet transform. For the validation of the proposed algorithm we used other image compression techniques, namely JPEG, PCRP, and a composite algorithm. For the evaluation we used two standard parameters, PSNR and compression ratio. Our empirical evaluation shows that the proposed algorithm achieves better PSNR and compression-ratio values than the other compression algorithms, in particular the PCRP algorithm and JPEG-based compression. Its one weak point is computational time, because the search and optimization of the structure reference packet take more time.
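The binary fitness rule described above can be stated in a few lines. The sketch below is illustrative Python rather than the paper's MATLAB code; the tol parameter is an assumption that generalizes the exact zero-difference test.

import numpy as np

def fitness_label(packet, reference, tol=0.0):
    # Label 1 when the packet matches the structure reference packet
    # within tol (the paper's rule: zero difference -> 1), else 0.
    diff = np.abs(np.asarray(packet, dtype=float)
                  - np.asarray(reference, dtype=float))
    return 1 if np.all(diff <= tol) else 0

reference = np.array([1.0, 2.0, 3.0])
print(fitness_label([1.0, 2.0, 3.0], reference))  # 1: similar packet
print(fitness_label([1.0, 2.5, 3.0], reference))  # 0: dissimilar packet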