The document proposes two new algorithms - the New Backlight Dimming Algorithm (NBDA) and the New Image Enhancement Algorithm (NIEA) - to simultaneously reduce LCD backlight power consumption and enhance image contrast using content-based histogram analysis.
The NBDA analyzes image histograms to select the appropriate LCD backlight current level to reduce power. The NIEA then enhances image contrast to compensate for brightness changes from dimming the backlight, maintaining image quality.
Experimental results on an FPGA platform show that the algorithms reduce power consumption by 47% on average while improving the image enhancement ratio by 6.8%, assessed using PSNR and SSIM metrics, allowing viewers to perceive little change in image quality despite the dimmed backlight.
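As a rough illustration of the histogram-driven idea (not the paper's NBDA itself), a backlight level can be chosen from the bright tail of the image histogram and the pixels rescaled to compensate; the percentile, the number of backlight levels, and the function names below are assumptions for the sketch:

```python
import numpy as np

def dim_and_compensate(gray, percentile=99, levels=8):
    """Simplified sketch of histogram-based backlight dimming: pick the
    lowest backlight level that still covers most pixel intensities,
    then rescale the image to compensate for the lost brightness."""
    # Backlight scale chosen so the bright tail (e.g. the 99th percentile)
    # of the histogram is still representable after dimming.
    peak = np.percentile(gray, percentile)
    scale = max(peak / 255.0, 1.0 / levels)       # dimmed backlight ratio
    backlight = np.ceil(scale * levels) / levels  # quantize to discrete levels
    # Compensate pixel values so perceived brightness stays roughly constant.
    compensated = np.clip(gray / backlight, 0, 255).astype(np.uint8)
    return backlight, compensated

img = np.linspace(0, 200, 64, dtype=np.uint8).reshape(8, 8)
bl, out = dim_and_compensate(img)
```

A real controller would also smooth the level across frames to avoid visible flicker.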
Land Cover Feature Extraction using Hybrid Swarm Intelligence Techniques - A ... IDES Editor
This document presents a hybrid algorithm using biogeography-based optimization (BBO) and ant colony optimization (ACO) for land cover feature extraction from remote sensing images. The algorithm first analyzes a training image to identify features that BBO and ACO classify efficiently. It then applies BBO to clusters containing these features and ACO to remaining clusters. An evaluation shows the hybrid algorithm achieves a higher kappa coefficient of 0.97 compared to 0.67 for BBO alone, indicating better classification accuracy. The authors conclude the algorithm effectively handles uncertainties in remote sensing images and future work could improve efficiency further.
A novel RRW framework to resist accidental attacks eSAT Journals
Abstract: Robust reversible watermarking (RRW) methods are popular in multimedia copyright protection because they preserve the host image exactly while providing robustness against unintentional attacks. Earlier histogram-rotation-based methods suffer from poor invisibility of the watermarked image and limited robustness when extracting watermarks from images degraded by unintentional attacks. This paper proposes a wavelet-domain statistical quantity histogram shifting and clustering (WSQH-SC) method together with enhanced pixel-wise masking (EPWM). WSQH-SC builds the watermark embedding and extraction procedures on histogram shifting and clustering, which improves robustness and reduces run-time complexity while achieving both reversibility and invisibility. Experimental results show good overall performance in terms of reversibility, robustness, invisibility, capacity, and run-time complexity, making the method widely applicable to different kinds of images. Keywords: integer wavelet transform, k-means clustering, masking, robust reversible watermarking (RRW)
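Histogram shifting, the embedding primitive named here, can be sketched in its generic spatial-domain form (the paper instead shifts histograms of wavelet-domain statistical quantities, which is not reproduced here):

```python
import numpy as np

def hs_embed(img, bits):
    """Toy histogram-shifting embedder (generic technique, not the
    paper's WSQH-SC): shift bins right of the peak to free a slot,
    then encode one bit per peak-valued pixel."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist[:255].argmax())        # most frequent value below 255
    out = img.copy()
    out[img > peak] += 1                   # open an empty bin at peak + 1
    idx = np.flatnonzero(img == peak)      # embeddable positions, scan order
    for i, b in zip(idx, bits):
        out.flat[i] += b                   # bit 1 -> peak + 1, bit 0 -> peak
    return out, peak

def hs_extract(stego, peak, n):
    """Recover the first n embedded bits; the shift is invertible, which
    is what makes the scheme reversible."""
    idx = np.flatnonzero((stego == peak) | (stego == peak + 1))[:n]
    return [int(stego.flat[i] == peak + 1) for i in idx]

img = np.full((8, 8), 100, dtype=np.uint8)
stego, peak = hs_embed(img, [1, 0, 1, 1])
recovered = hs_extract(stego, peak, 4)
```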
Discrete cosine transform (DCT) is a widely used tool in image and video compression applications. Recently, the high-throughput DCT designs have been adopted to fit the requirements of real-time application.
An error-compensated adder-tree (ECAT), which performs shifting and addition in parallel, is proposed to deal with truncation errors and to achieve a low-error, high-throughput distributed-arithmetic (DA) based DCT design. The proposed ECAT unrolls all the words to be computed so that shifting and addition proceed in parallel, and an error-compensation circuit alleviates the truncation error for a high-accuracy design. Based on the low-error ECAT, the DA precision in this work is chosen to be 9 bits instead of the traditional 12 bits; therefore the hardware cost is reduced and the speed is improved.
This document summarizes a technique called CADU (collaborative adaptive down-sampling and upconversion) to improve image compression at low bit rates. The technique adaptively decreases high frequency information by directionally prefiltering an image before uniform downsampling. This allows the downsampled image to be conventionally compressed while avoiding aliasing artifacts. At the decoder, the low-resolution image is decompressed and then upconverted to the original resolution using constrained least squares restoration with an autoregressive model. Experimental results show CADU outperforms JPEG2000 in PSNR and visual quality at low to medium bit rates. The technique suggests oversampling wastes resources and could hurt quality given tight bit budgets.
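A minimal sketch of the down-sample/up-convert pipeline, with a plain 2x2 box prefilter and nearest-neighbour up-conversion standing in for CADU's directional prefiltering and constrained least-squares restoration (both function names are assumptions):

```python
import numpy as np

def prefilter_downsample(img):
    """Low-pass prefilter then uniform 2x decimation, so the decimation
    does not alias. (The real CADU method filters directionally; a box
    average is used here only for illustration.)"""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upconvert(lowres):
    """Nearest-neighbour up-conversion: a crude placeholder for the
    decoder's autoregressive constrained least-squares restoration."""
    return np.repeat(np.repeat(lowres, 2, axis=0), 2, axis=1)

img = np.arange(16, dtype=float).reshape(4, 4)
lr = prefilter_downsample(img)   # this is what gets compressed
hr = upconvert(lr)               # decoder restores full resolution
```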
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
This document discusses a modified pointwise shape-adaptive discrete cosine transform (SA-DCT) algorithm for deblocking block-DCT compressed images. The key points are:
1) The original pointwise SA-DCT method uses a constant DCT threshold coefficient. The proposed modified method uses an adaptive DCT threshold coefficient instead.
2) The adaptive DCT threshold coefficient is determined based on the mean squared error and maximum absolute difference of the image, related to the quantization table values.
3) Experiments show the proposed modified pointwise SA-DCT method achieves improved deblocking performance over the original method.
SECURED COLOR IMAGE WATERMARKING TECHNIQUE IN DWT-DCT DOMAIN ijcseit
A multilayer secured DWT-DCT and YIQ color space based image watermarking technique with improved robustness and correlation is presented. Security is increased by using multiple PN sequences, Arnold scrambling, the DWT and DCT domains, and color space conversions. Peak signal-to-noise ratio (PSNR) and normalized correlation (NC) are used as measurement metrics. Color images of size 512x512 with different histograms are used for testing; a 64x64 watermark is embedded in the HL sub-band of the DWT using 4x4 DCT blocks. The 'Haar' wavelet is used for decomposition with a direct flexing factor. A PSNR of 63.9988 is obtained for flexing factor k=1 on the Lena image, and the maximum NC of 0.9781 for k=4 in the Q color space. Comparative performance in the Y, I, and Q color spaces is presented. The technique is robust to attacks such as scaling, compression, and rotation.
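The two metrics quoted above, PSNR and normalized correlation, are standard and easy to state precisely; a minimal sketch of both:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def normalized_correlation(w, w_ext):
    """Normalized correlation between original and extracted watermark."""
    w, w_ext = w.astype(float).ravel(), w_ext.astype(float).ravel()
    return float(np.dot(w, w_ext) / (np.linalg.norm(w) * np.linalg.norm(w_ext)))

a = np.zeros((8, 8), dtype=np.uint8)
b = a + 1                                   # every pixel off by one: MSE = 1
val = psnr(a, b)                            # about 48.13 dB
nc = normalized_correlation(np.array([1, 0, 1, 1]), np.array([1, 0, 1, 1]))
```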
Satellite Image Resolution Enhancement Technique Using DWT and IWT Editor IJCATR
Nowadays satellite images are widely used in many applications such as astronomy, geographical information systems, and geoscience studies. In this paper, we propose a new satellite image resolution enhancement technique that generates a sharper high-resolution image based on the high-frequency sub-bands obtained from the DWT and IWT; the LL sub-band is not considered. The technique combines the interpolated DWT and IWT high-frequency sub-band images with the input low-resolution image, and the inverse DWT (IDWT) is applied to combine all these images into the final resolution-enhanced image. The proposed technique has been tested on satellite benchmark images. The quantitative results (peak signal-to-noise ratio and mean square error) and visual results show the superiority of the proposed technique over the conventional method and the standard WZP image enhancement technique.
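A one-level Haar DWT/IDWT pair (a simplified stand-in for the DWT/IWT machinery the abstract describes) shows how the sub-bands are formed and recombined; in the paper's scheme the interpolated high-frequency sub-bands and the input image itself would be fed to the synthesis step:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT returning (LL, LH, HL, HH) sub-bands."""
    a = (x[0::2] + x[1::2]) / 2.0          # rows: average
    d = (x[0::2] - x[1::2]) / 2.0          # rows: detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2: the output has twice the sub-band
    resolution in each direction."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

x = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(x)
rec = haar_idwt2(LL, LH, HL, HH)           # perfect reconstruction
```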
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IRJET - Underwater Image Enhancement using PCNN and NSCT Fusion IRJET Journal
This document discusses techniques for enhancing underwater images that have been degraded due to scattering and absorption in the water medium. It proposes a new method for color image fusion using Non-Subsampled Contourlet Transform (NSCT) and Pulse Coupled Neural Network (PCNN). NSCT is used to decompose the image into sub-bands, while PCNN is used to fuse the high frequency sub-band coefficients. The proposed method is shown to outperform other fusion methods in objective quality assessment metrics. Various other underwater image enhancement techniques are also discussed, including wavelength compensation, multi-band fusion, image mode filtering, and approaches using neural networks like convolutional neural networks.
The aim of image compression is to discard redundant data so as to reduce the number of bits needed to represent an image, lowering storage space, transmission bandwidth, and time. Data hiding complements this by invisibly embedding secret data into an image. This review presents an image compression approach that combines the DWT with a steganography scheme and SPIHT coding to compress an image.
This document presents a new algorithm for progressive medical image coding using binary wavelet transforms (BWT). It divides grayscale medical images into binary bit-planes and applies a three-level BWT to each bit-plane. It then encodes each BWT bit-plane using quadtree-based partitioning to exploit the energy concentration in high-frequency subbands. Experiments on ultrasound, MRI and CT images show it provides significant improvements in bitrate for required quality compared to existing progressive image coding methods.
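The bit-plane split in the coder's first step is straightforward; a sketch of the decomposition and its exact inverse (function names are assumptions):

```python
import numpy as np

def bit_planes(gray):
    """Split an 8-bit grayscale image into 8 binary bit-planes, the
    first step of the BWT-based coder described above (plane 0 = LSB)."""
    return [((gray >> b) & 1).astype(np.uint8) for b in range(8)]

def from_bit_planes(planes):
    """Reassemble the image; exact inverse of bit_planes."""
    return sum((p.astype(np.uint16) << b)
               for b, p in enumerate(planes)).astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
planes = bit_planes(img)
rec = from_bit_planes(planes)
```

Each binary plane can then be wavelet-transformed and quadtree-partitioned independently, which is what enables the progressive bitstream.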
A Novel and Robust Wavelet based Super Resolution Reconstruction of Low Resol... CSCJournals
High-resolution images can be reconstructed from several blurred, noisy, and aliased low-resolution images using a computational process known as super resolution reconstruction, which combines several low-resolution images into a single higher-resolution image. This paper concentrates on a special case of the super resolution problem where the warp consists of pure translation and rotation, the blur is space-invariant, and the noise is additive white Gaussian. Super resolution reconstruction consists of registration, restoration, and interpolation phases. Once the low-resolution images are registered with respect to a reference frame, wavelet-based restoration removes the blur and noise, and finally the images are interpolated using adaptive interpolation. We propose an efficient wavelet-based denoising with adaptive interpolation for super resolution reconstruction. Under this framework, the low-resolution images are decomposed into several levels to obtain different frequency bands, and our novel soft-thresholding technique removes the noisy coefficients by fixing an optimum threshold value. To obtain a higher-resolution image, an adaptive interpolation technique is proposed. The proposed approach preserves edges and smooths the image without introducing artifacts. Experimental results show that it obtains a high-resolution image with high PSNR and ISNR and good visual quality.
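The soft-thresholding step mentioned above has a standard closed form; the paper's contribution lies in how the optimum threshold is fixed, which is not reproduced here:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding of wavelet coefficients: shrink every
    coefficient toward zero by t, zeroing those whose magnitude is
    below the threshold."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

shrunk = soft_threshold(np.array([-3.0, 0.5, 2.0]), 1.0)
```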
Image watermarking based on integer wavelet transform-singular value decompos... IJECEIAES
In the era of rapidly advancing multimedia technology, copyright protection is essential to preserve ownership of multimedia data. This paper proposes an image watermarking scheme based on the Integer Wavelet Transform (IWT) and Singular Value Decomposition (SVD). The binary watermark is scrambled by an Arnold transform before embedding. Embedding locations are determined using pixel variance: the blocks with the lowest variance are transformed by IWT, and the LL sub-band of each 8×8 IWT block is decomposed by SVD. The orthogonal U-matrix components U3,1 and U4,1 are modified by rules that consider the watermark bits and an optimal threshold. This research identifies an optimal threshold value based on the trade-off between robustness and imperceptibility of the watermarked image. To measure watermarking performance, the proposed scheme is tested under various attacks. The experimental results indicate that the scheme achieves higher robustness than other schemes under different types of attack.
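A hedged sketch of the U-matrix embedding idea: the paper's exact modification rules and threshold are not reproduced, so the relation enforced below between the first-column entries U[2,0] and U[3,1]'s zero-indexed counterpart U[3,0] is only illustrative:

```python
import numpy as np

def embed_bit_svd(block, bit, t=0.04):
    """Illustrative sketch (not the paper's exact rules): after SVD of
    a low-variance block, enforce a magnitude relation between U[2,0]
    and U[3,0] to encode one watermark bit; the threshold t trades
    robustness against imperceptibility."""
    U, s, Vt = np.linalg.svd(block.astype(float))
    avg = (abs(U[2, 0]) + abs(U[3, 0])) / 2.0
    hi, lo = avg + t / 2.0, max(avg - t / 2.0, 0.0)
    sgn3 = 1.0 if U[2, 0] >= 0 else -1.0   # keep original signs
    sgn4 = 1.0 if U[3, 0] >= 0 else -1.0
    if bit == 1:                           # make |U[2,0]| the larger entry
        U[2, 0], U[3, 0] = sgn3 * hi, sgn4 * lo
    else:                                  # make |U[3,0]| the larger entry
        U[2, 0], U[3, 0] = sgn3 * lo, sgn4 * hi
    watermarked = U @ np.diag(s) @ Vt      # reconstruct the marked block
    return watermarked, U

rng = np.random.default_rng(0)
block = rng.integers(100, 110, size=(4, 4))   # a low-variance block
marked, U_mod = embed_bit_svd(block, 1)
```

Extraction would re-run the SVD on the marked block and compare the two magnitudes; with a sufficiently large t the relation survives mild attacks.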
Performance Analysis of Compression Techniques Using SVD, BTC, DCT and GP IOSR Journals
This document summarizes and compares four different image compression techniques: Singular Value Decomposition (SVD), Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), and Gaussian Pyramid (GP). It analyzes the performance of each technique based on metrics like Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), and compression ratio. The techniques are tested on biometric images from iris, fingerprint, and palm print databases to evaluate image quality after compression.
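Of the four techniques compared, BTC has the shortest self-contained description; a sketch of the classic per-block quantizer, which stores only a mean, a standard deviation, and a bitmap per block:

```python
import numpy as np

def btc_block(block):
    """Classic Block Truncation Coding for one block (sketch): keep the
    mean, standard deviation, and a binary bitmap, then reconstruct
    with two levels chosen to preserve the block's mean and variance."""
    block = block.astype(float)
    m, sd = block.mean(), block.std()
    bitmap = block >= m
    q, n = int(bitmap.sum()), block.size      # q pixels at/above the mean
    if q in (0, n):                           # flat block: one level suffices
        return np.full(block.shape, m)
    lo = m - sd * np.sqrt(q / (n - q))        # level for pixels below the mean
    hi = m + sd * np.sqrt((n - q) / q)        # level for pixels at/above it
    return np.where(bitmap, hi, lo)

two_level = np.array([[0, 0], [255, 255]], dtype=float)
rec = btc_block(two_level)                    # a two-level block is exact
```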
International Journal of Computational Engineering Research (IJCER) ijceronline
The document discusses image compression using artificial neural networks. It begins with an introduction to image compression and the need for it. Then it reviews various existing neural network approaches for image compression, including backpropagation networks, hierarchical networks, multilayer feedforward networks, and radial basis function networks. It proposes a new approach using a multilayer perceptron with a modified Levenberg-Marquardt training algorithm to improve compression performance. Authentication and protection would be incorporated by exploiting the one-to-one mapping and one-way properties of neural networks. The proposed system is described as compressing images using neural networks trained with a modified LM algorithm to achieve high compression ratios while maintaining image quality.
Project Report on Medical Image Compression submitted for the award of B.Tech degree in Electrical and Electronics Engineering by Paras Prateek Bhatnagar, Paramjeet Singh Jamwal, Preeti Kumari and Nisha Rajani during session 2010-11.
Image Resolution Enhancement Using Undecimated Double Density Wavelet Transform CSCJournals
This document presents a new image resolution enhancement technique using Undecimated Double Density Wavelet Transform (UDDWT). It begins with background on existing resolution enhancement methods and issues with discrete wavelet transform. It then describes the development of UDDWT and the proposed method which uses forward and inverse UDDWT to construct a high resolution image from a low resolution input image. Results show the technique improves measures like PSNR, VIF and BIQI compared to other methods, enhancing image quality. The technique offers exact shift invariance and preserves high frequency content better than interpolation methods.
Satellite image contrast enhancement using discrete wavelet transform Harishwar Reddy
This document discusses contrast enhancement of satellite images using discrete wavelet transform and singular value decomposition. It provides background on contrast and techniques like histogram equalization. It then describes discrete wavelet transform and singular value decomposition, their applications, advantages, and uses. The document concludes that a new technique was proposed combining DWT and SVD for image equalization, which showed better results than conventional techniques in experiments.
The document proposes a new video watermarking algorithm using the dual-tree complex wavelet transform (DTCWT). The DTCWT offers advantages like shift invariance and directional selectivity. The algorithm embeds a watermark by adding its coefficients to high frequency DTCWT coefficients of video frames. Masks are used to hide the watermark perceptually. Experimental results show the proposed method is robust to geometric distortions, lossy compression, and a joint attack, outperforming comparable DWT-based methods. It is suitable for playback control due to its robustness and simple implementation.
This document discusses wavelet transforms and fast wavelet transforms for image compression. It provides background on discrete wavelet transforms (DWT) and fast wavelet transforms. DWT is useful for image compression because it concentrates image energy into low-frequency coefficients. Compression is achieved by quantizing coefficients, prioritizing low-frequency ones. Popular image compression techniques like JPEG2000 use DWT. Fast wavelet transforms like Mallat's algorithm allow faster image analysis than DWT. The document reviews various image compression techniques and their performance in terms of compression ratio and image quality.
A Comparative Study of Image Compression Algorithms IJORCS
The document compares three image compression algorithms: Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and a hybrid DCT-DWT algorithm. DCT is used in JPEG and provides simple hardware implementation but can cause blocking artifacts at high compression. DWT provides multi-resolution decomposition and achieves higher compression ratios but requires more computation. The hybrid algorithm aims to combine the advantages of DCT and DWT by applying DWT followed by DCT, allowing for better performance than either individual method. Experimental results showed the hybrid approach generally had better performance in terms of PSNR, MSE, and compression ratio.
Image compression using Hybrid wavelet Transform and their Performance Compa... IJMER
Images may be worth a thousand words, but they generally occupy much more space on a hard disk, or bandwidth in a transmission system, than their proverbial counterpart. Compressing an image is significantly different from compressing raw binary data. General-purpose compression programs can be used on images, but the result is less than optimal, because images have statistical properties that can be exploited by encoders designed specifically for them; moreover, some of the finer details in an image can be sacrificed to save a little more bandwidth or storage space. Compression is the process of representing information in a compact form and is essential for creating image files of manageable and transmittable size. Compression schemes divide into lossless and lossy. In lossless compression, the reconstructed image is exactly the same as the original image; in lossy compression, a high compression ratio is achieved at the cost of some error in the reconstructed image. Lossy compression generally provides much higher compression than lossless.
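The lossless case is easy to demonstrate end-to-end with a general-purpose codec (zlib here, purely for illustration of the exact-reconstruction property):

```python
import zlib
import numpy as np

# Lossless round-trip: the reconstructed data is bit-for-bit identical to
# the original; the price is a modest, content-dependent compression ratio.
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))   # synthetic image
raw = img.tobytes()
packed = zlib.compress(raw, level=9)
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint8).reshape(64, 64)
ratio = len(raw) / len(packed)
```

A highly repetitive image like this compresses far better than a natural photograph would, which is exactly the statistical-property point made above.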
Neural network based image compression with lifting scheme and rlc eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document discusses Hydra and Blacklight, open source repository frameworks developed by a community of institutions. Hydra provides a shared infrastructure for repositories with modular "heads" or applications. Blacklight is a repository front-end that can aggregate content from multiple sources. The document outlines the growth of Hydra partners and applications, and priorities to further develop solution bundles, turnkey applications, and strengthen the community through training and documentation.
Chris Awre discusses collaboration as key to addressing challenges with digital archives. Born-digital archives require working beyond any single institution. Collaboration allows archives to deliver services by sharing skills and resources. Successful collaborations include specific subject or use case groups. Barriers include inertia, limited capacity, and lack of follow through. Case studies on the AIMS project and Hydra initiative show benefits of practical collaboration and developing common infrastructure. Network-level activities can better support local services through resource sharing. The presentation calls for discussion on enabling collaboration while addressing concerns.
The document discusses Blacklight, an open source discovery interface built on Apache Solr. Blacklight was originally developed at the University of Virginia to create a better interface for their library catalog. It allows faceted browsing, relevance-based searching, and exposing metadata from repositories. The document provides details on Blacklight's functionality, use of Solr, implementation with Hydra repositories, and adoption by other universities as their library catalog interface. Community support has been key to Blacklight's ongoing development.
Satellite Image Resolution Enhancement Technique Using DWT and IWTEditor IJCATR
Now a days satellite images are widely used In many applications such as astronomy and
geographical information systems and geosciences studies .In this paper, We propose a new satellite image
resolution enhancement technique which generates sharper high resolution image .Based on the high
frequency sub-bands obtained from the dwt and iwt. We are not considering the LL sub-band here. In this
resolution-enhancement technique using interpolated DWT and IWT high-frequency sub band images and the
input low-resolution image. Inverse DWT (IDWT) has been applied to combine all these images to generate
the final resolution-enhanced image. The proposed technique has been tested on satellite bench mark images.
The quantitative (peak signal to noise ratio and mean square error) and visual results show the superiority of
the proposed technique over the conventional method and standard image enhancement technique WZP.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IRJET - Underwater Image Enhancement using PCNN and NSCT FusionIRJET Journal
This document discusses techniques for enhancing underwater images that have been degraded due to scattering and absorption in the water medium. It proposes a new method for color image fusion using Non-Subsampled Contourlet Transform (NSCT) and Pulse Coupled Neural Network (PCNN). NSCT is used to decompose the image into sub-bands, while PCNN is used to fuse the high frequency sub-band coefficients. The proposed method is shown to outperform other fusion methods in objective quality assessment metrics. Various other underwater image enhancement techniques are also discussed, including wavelength compensation, multi-band fusion, image mode filtering, and approaches using neural networks like convolutional neural networks.
The intention of image compression is to discard worthless data from image so as to shrink the quantity of data bits favored for image depiction, to lessen the storage space, broadcast bandwidth and time. Likewise, data hiding convenes scenarios by implanting the unfamiliar data into a picture in invisibility manner. The review offers, a method of image compression approaches by using DWT transform employing steganography scheme together in combination of SPIHT to compress an image.
This document presents a new algorithm for progressive medical image coding using binary wavelet transforms (BWT). It divides grayscale medical images into binary bit-planes and applies a three-level BWT to each bit-plane. It then encodes each BWT bit-plane using quadtree-based partitioning to exploit the energy concentration in high-frequency subbands. Experiments on ultrasound, MRI and CT images show it provides significant improvements in bitrate for required quality compared to existing progressive image coding methods.
A Novel and Robust Wavelet based Super Resolution Reconstruction of Low Resol...CSCJournals
High Resolution images can be reconstructed from several blurred, noisy and aliased low resolution images using a computational process know as super resolution reconstruction. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. In this paper we concentrate on a special case of super resolution problem where the wrap is composed of pure translation and rotation, the blur is space invariant and the noise is additive white Gaussian noise. Super resolution reconstruction consists of registration, restoration and interpolation phases. Once the Low resolution image are registered with respect to a reference frame then wavelet based restoration is performed to remove the blur and noise from the images, finally the images are interpolated using adaptive interpolation. We are proposing an efficient wavelet based denoising with adaptive interpolation for super resolution reconstruction. Under this frame work, the low resolution images are decomposed into many levels to obtain different frequency bands. Then our proposed novel soft thresholding technique is used to remove the noisy coefficients, by fixing optimum threshold value. In order to obtain an image of higher resolution we have proposed an adaptive interpolation technique. Our proposed wavelet based denoising with adaptive interpolation for super resolution reconstruction preserves the edges as well as smoothens the image without introducing artifacts. Experimental results show that the proposed approach has succeeded in obtaining a high-resolution image with a high PSNR, ISNR ratio and a good visual quality.
Image watermarking based on integer wavelet transform-singular value decompos...IJECEIAES
With the era of rapid technology in multimedia, copyright protection is very important to preserve the ownership of multimedia data. This paper proposes an image watermarking scheme based on the Integer Wavelet Transform (IWT) and Singular Value Decomposition (SVD). The binary watermark is scrambled by the Arnold transform before embedding. Embedding locations are determined using pixel variance: the selected blocks with the lowest variance are transformed by IWT, and SVD is then applied to the LL sub-band of the 8×8 IWT. The orthogonal U matrix components U3,1 and U4,1 are modified using certain rules that consider the watermark bits and an optimal threshold. This research reveals an optimal threshold value based on the trade-off between robustness and imperceptibility of the watermarked image. To measure the watermarking performance, the proposed scheme is tested under various attacks. The experimental results indicate that our scheme achieves higher robustness than other schemes under different types of attack.
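The Arnold-transform scrambling step can be sketched directly from its definition on a square N x N image; the inverse map undoes it exactly, which is what makes the watermark recoverable at extraction time:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat-map scrambling: (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    img = np.asarray(img)
    N = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling: (x, y) -> ((2x - y) mod N, (y - x) mod N)."""
    img = np.asarray(img)
    N = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(2 * x - y) % N, (y - x) % N] = out[x, y]
        out = nxt
    return out
```

Because the map is a bijection on the pixel grid, repeated application eventually returns the original image; the iteration count acts as a simple scrambling key.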
Performance Analysis of Compression Techniques Using SVD, BTC, DCT and GPIOSR Journals
This document summarizes and compares four different image compression techniques: Singular Value Decomposition (SVD), Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), and Gaussian Pyramid (GP). It analyzes the performance of each technique based on metrics like Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), and compression ratio. The techniques are tested on biometric images from iris, fingerprint, and palm print databases to evaluate image quality after compression.
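The two fidelity metrics used in that comparison are short NumPy functions; this is a minimal sketch with the conventional 8-bit peak value:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Higher PSNR (lower MSE) at the same compression ratio indicates better reconstruction quality.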
International Journal of Computational Engineering Research(IJCER)ijceronline
The document discusses image compression using artificial neural networks. It begins with an introduction to image compression and the need for it. Then it reviews various existing neural network approaches for image compression, including backpropagation networks, hierarchical networks, multilayer feedforward networks, and radial basis function networks. It proposes a new approach using a multilayer perceptron with a modified Levenberg-Marquardt training algorithm to improve compression performance. Authentication and protection would be incorporated by exploiting the one-to-one mapping and one-way properties of neural networks. The proposed system is described as compressing images using neural networks trained with a modified LM algorithm to achieve high compression ratios while maintaining image quality.
Project Report on Medical Image Compression submitted for the award of B.Tech degree in Electrical and Electronics Engineering by Paras Prateek Bhatnagar, Paramjeet Singh Jamwal, Preeti Kumari and Nisha Rajani during session 2010-11.
Image Resolution Enhancement Using Undecimated Double Density Wavelet TransformCSCJournals
This document presents a new image resolution enhancement technique using Undecimated Double Density Wavelet Transform (UDDWT). It begins with background on existing resolution enhancement methods and issues with discrete wavelet transform. It then describes the development of UDDWT and the proposed method which uses forward and inverse UDDWT to construct a high resolution image from a low resolution input image. Results show the technique improves measures like PSNR, VIF and BIQI compared to other methods, enhancing image quality. The technique offers exact shift invariance and preserves high frequency content better than interpolation methods.
Satellite image contrast enhancement using discrete wavelet transformHarishwar Reddy
This document discusses contrast enhancement of satellite images using discrete wavelet transform and singular value decomposition. It provides background on contrast and techniques like histogram equalization. It then describes discrete wavelet transform and singular value decomposition, their applications, advantages, and uses. The document concludes that a new technique was proposed combining DWT and SVD for image equalization, which showed better results than conventional techniques in experiments.
The document proposes a new video watermarking algorithm using the dual-tree complex wavelet transform (DTCWT). The DTCWT offers advantages like shift invariance and directional selectivity. The algorithm embeds a watermark by adding its coefficients to high frequency DTCWT coefficients of video frames. Masks are used to hide the watermark perceptually. Experimental results show the proposed method is robust to geometric distortions, lossy compression, and a joint attack, outperforming comparable DWT-based methods. It is suitable for playback control due to its robustness and simple implementation.
This document discusses wavelet transforms and fast wavelet transforms for image compression. It provides background on the discrete wavelet transform (DWT) and fast wavelet transforms. The DWT is useful for image compression because it concentrates image energy into low-frequency coefficients; compression is achieved by quantizing coefficients, prioritizing the low-frequency ones. Popular image compression standards like JPEG2000 use the DWT. Fast wavelet transforms such as Mallat's algorithm compute the DWT more efficiently than the direct approach. The document reviews various image compression techniques and their performance in terms of compression ratio and image quality.
A Comparative Study of Image Compression AlgorithmsIJORCS
The document compares three image compression algorithms: Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and a hybrid DCT-DWT algorithm. DCT is used in JPEG and provides simple hardware implementation but can cause blocking artifacts at high compression. DWT provides multi-resolution decomposition and achieves higher compression ratios but requires more computation. The hybrid algorithm aims to combine the advantages of DCT and DWT by applying DWT followed by DCT, allowing for better performance than either individual method. Experimental results showed the hybrid approach generally had better performance in terms of PSNR, MSE, and compression ratio.
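The wavelet half of the hybrid scheme can be sketched with a one-level Haar DWT in NumPy (the DCT stage would then operate on the LL subband; the subband naming convention below is one common choice). Because the transform is orthonormal, it preserves total energy while concentrating it in LL for smooth content, which is what the subsequent DCT and quantizer exploit:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar DWT: returns LL, LH, HL, HH quarter-size subbands."""
    x = np.asarray(x, dtype=np.float64)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # low-pass along rows
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # high-pass along rows
    LL = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    HL = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    LH = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    HH = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH
```

On a smooth test ramp, well over 90% of the energy lands in LL, illustrating the multi-resolution energy compaction the summary mentions.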
Image compression using Hybrid wavelet Transform and their Performance Compa...IJMER
Images may be worth a thousand words, but they generally occupy much more space on a hard disk, or bandwidth in a transmission system, than their proverbial counterpart. Compressing an image is significantly different from compressing raw binary data. General purpose compression programs can of course be used to compress images, but the result is less than optimal, because images have certain statistical properties that can be exploited by encoders specifically designed for them; moreover, some of the finer details in an image can be sacrificed to save a little more bandwidth or storage space. Compression is the process of representing information in a compact form, and it is a necessary and essential method for creating image files with manageable and transmittable sizes. Data compression schemes can be divided into lossless and lossy compression. In lossless compression, the reconstructed image is exactly the same as the original image; in lossy compression, a high compression ratio is achieved at the cost of some error in the reconstructed image. Lossy compression generally provides much higher compression than lossless compression.
Neural network based image compression with lifting scheme and rlceSAT Publishing House
This document discusses Hydra and Blacklight, open source repository frameworks developed by a community of institutions. Hydra provides a shared infrastructure for repositories with modular "heads" or applications. Blacklight is a repository front-end that can aggregate content from multiple sources. The document outlines the growth of Hydra partners and applications, and priorities to further develop solution bundles, turnkey applications, and strengthen the community through training and documentation.
Chris Awre discusses collaboration as key to addressing challenges with digital archives. Born-digital archives require working beyond any single institution. Collaboration allows archives to deliver services by sharing skills and resources. Successful collaborations include specific subject or use case groups. Barriers include inertia, limited capacity, and lack of follow through. Case studies on the AIMS project and Hydra initiative show benefits of practical collaboration and developing common infrastructure. Network-level activities can better support local services through resource sharing. The presentation calls for discussion on enabling collaboration while addressing concerns.
The document discusses Blacklight, an open source discovery interface built on Apache Solr. Blacklight was originally developed at the University of Virginia to create a better interface for their library catalog. It allows faceted browsing, relevance-based searching, and exposing metadata from repositories. The document provides details on Blacklight's functionality, use of Solr, implementation with Hydra repositories, and adoption by other universities as their library catalog interface. Community support has been key to Blacklight's ongoing development.
The document discusses osmotic power, a sustainable energy source that uses differences in salt concentrations between freshwater and saltwater to generate electricity. It notes that Norway built the first osmotic power plant, which is 13 times more efficient than current sustainable options and produces energy through an osmotic process that involves freshwater and saltwater at different salt concentrations and pressures, generating 1.62 megawatts of power emission-free. However, the technology also has high costs and requires frequent filter changes and emptying/refilling of containers.
Osmosis is the spontaneous movement of water molecules through a semi-permeable membrane from an area of higher water concentration to an area of lower water concentration, equalizing the concentration of water molecules on both sides of the membrane. The movement occurs due to differences in solute concentration between the solutions separated by the semi-permeable membrane. Osmosis plays a key role in transporting water into and out of cells, and the response of a cell to its external environment depends on whether the outside solution is hypotonic, isotonic, or hypertonic compared to the cell cytoplasm.
This presentation introduces the process of osmosis. It defines osmosis as the spontaneous movement of water across a semi-permeable membrane from a less concentrated solution to a more concentrated one. It distinguishes osmosis from diffusion, which does not require a membrane. The presentation outlines key terms like hypertonic, hypotonic and isotonic solutions. It provides examples of osmosis in applications like plant water uptake, food preservation, and kidney dialysis. In conclusion, osmosis is the diffusion of water molecules through a selectively permeable membrane to equalize concentrations.
Osmotic power is generated by exploiting the pressure difference created across a semi-permeable membrane that separates fresh water and salt water reservoirs. Fresh water flows through the membrane into the higher salinity salt water reservoir, creating pressure that can be used to drive turbines and generate electricity. Osmotic power plants have the advantages of being renewable, producing electricity reliably without carbon emissions. However, they also have high upfront costs and require access to a steady source of fresh and salt water with a sufficient salinity gradient.
Osmotic power presentation ids xi december 2009 tcm9-7043jinxxyd
Statkraft is developing osmotic power as a new renewable energy technology. Osmotic power uses osmosis, the natural process by which water moves from a low salt concentration to a high one, to generate electricity. Statkraft has built a prototype osmotic power plant in Norway to test membrane and system components at a small scale. The technology has potential for cost reductions through larger membrane elements, higher system efficiencies, and economies of scale in larger plants. Statkraft is working with partners on membrane and system development to advance osmotic power toward commercialization.
This document outlines osmotic power, which generates energy from the difference in salt concentration between seawater and freshwater. It works via pressure retarded osmosis (PRO) where freshwater naturally moves through a semi-permeable membrane into higher salinity seawater, increasing pressure. This pressure powers a turbine to generate electricity. Key components include membrane modules to separate the waters, filters to optimize membrane performance, and a turbine/generator. Experimental results showed a prototype achieving over 90% efficiency and the potential to scale installations by adding more membrane modules.
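The physics in the summaries above can be sanity-checked with the van 't Hoff relation pi = iMRT and a simple pressure-retarded osmosis power-density model W = A(pi - dP)dP, maximised at dP = pi/2. All numeric values below (salt concentration, temperature, membrane permeability) are assumed illustrative figures, not measurements from the Norwegian prototype:

```python
R = 8.314        # gas constant, J/(mol*K)
T = 293.0        # temperature, K (20 C)
i = 2            # van 't Hoff factor for fully dissociated NaCl
M_sea = 600.0    # mol/m^3 (~0.6 M NaCl), an assumed seawater figure

# van 't Hoff osmotic pressure: pi = i * M * R * T
pi_pa = i * M_sea * R * T          # ~2.9e6 Pa, i.e. roughly 29 bar

# Simple PRO model: power density W = A * (pi - dP) * dP, maximised at dP = pi/2
A = 1e-12                          # membrane water permeability, m/(s*Pa), assumed
dP = pi_pa / 2.0
W = A * (pi_pa - dP) * dP          # W/m^2, on the order of a few watts
print(round(pi_pa / 1e5, 1), "bar;", round(W, 2), "W/m^2")
```

A power density of a few watts per square metre of membrane is consistent with why the summaries stress larger membrane elements and cost reduction as the route to commercial viability.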
Power consumption is increasing globally, requiring more power generation which sometimes causes environmental pollution. Non-conventional renewable power plants like hydro, solar, geothermal, and wind are encouraged but affected by climate and cannot operate continuously. Osmotic power plants, a promising new technology, use semipermeable membranes and osmosis to generate pollution-free power from the difference in salt concentration between fresh and salt water, and can operate 24/7. The first prototype was built in Norway in 2009. While expensive now, osmotic power has potential to provide up to 50% of the EU's current power from its global annual potential of 1600-1700 terawatt hours.
X-Ray Image Enhancement using CLAHE MethodIRJET Journal
This document presents a method for enhancing X-ray images using Contrast Limited Adaptive Histogram Equalization (CLAHE). CLAHE improves local contrast and edge definition by applying histogram equalization separately to small regions of the image rather than the entire image, and it prevents the overamplification of noise that can occur with plain adaptive histogram equalization. The proposed method uses an image processing filter chain including noise reduction, high pass filtering, and CLAHE to enhance 2D X-ray images. Key parameters of the filter chain are optimized using an interior point algorithm. The goal is to provide customized tissue contrast for each treatment location to allow for accurate patient setup and analysis in radiation therapy. The CLAHE method is shown to effectively enhance contrast in X-ray images.
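The clip-and-redistribute step that distinguishes CLAHE from plain adaptive histogram equalization can be sketched for a single tile. A full CLAHE implementation additionally blends neighbouring tile mappings bilinearly, and `clip_limit` here is an assumed raw count rather than the normalized fraction some libraries use:

```python
import numpy as np

def clipped_equalize(tile, clip_limit=40, n_bins=256):
    """Histogram-equalize one tile with a clip limit, as in CLAHE:
    counts above the limit are clipped and the excess is redistributed
    uniformly, capping the slope of the mapping and limiting noise gain."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    excess = np.sum(np.maximum(hist - clip_limit, 0))
    hist = np.minimum(hist, clip_limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * (n_bins - 1)
    return cdf[np.asarray(tile, dtype=np.int64)].astype(np.uint8)
```

Clipping the histogram bounds the derivative of the CDF mapping, which is exactly what limits the amplification of near-uniform (noisy) regions.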
Image Resolution Enhancement by using Wavelet TransformIRJET Journal
This document presents a technique for enhancing the resolution of low resolution images using wavelet transforms. It decomposes low resolution images into sub-bands using the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT). The high frequency sub-bands produced by DWT are interpolated and corrected using the high frequency sub-bands from SWT. An inverse DWT is then applied to combine the interpolated sub-bands and produce a higher resolution output image. The technique is compared to conventional methods like bilinear and bicubic interpolation as well as state-of-the-art resolution enhancement techniques, and is shown to produce higher quality results measured using metrics like peak signal-to-noise ratio.
IRJET-Underwater Image Enhancement by Wavelet Decomposition using FPGAIRJET Journal
This document describes a method for enhancing underwater images using wavelet decomposition and fusion on an FPGA (field programmable gate array). Underwater images often have low contrast and visibility due to light scattering in water. The proposed method performs color correction and contrast enhancement on an input underwater image. It then decomposes the color-corrected and contrast-enhanced images into low and high frequency components using wavelet transforms. Image fusion is performed on the wavelet coefficients to combine the detailed information from both images. The fused image is reconstructed via inverse wavelet transform. Experimental results show the proposed fusion-based approach improves underwater image visibility. Implementing the algorithm on an FPGA provides benefits over general processors for computationally intensive image processing.
Abstract: Primarily due to progress in super resolution imagery, methods of segment-based image analysis for generating and updating geographical information are becoming more and more important. This work presents an image segmentation based on colour features with K-means clustering. The work is divided into two stages: first, the colour separation of the satellite image is enhanced using decorrelation stretching, and then the regions are grouped into a set of five classes using the K-means clustering algorithm. Spatial information is first gathered around every pixel, and two filtering procedures are added to suppress the effect of pseudo-edges. A spatial information weight is then constructed and clustered with K-means, and the regularization strength in each region is controlled by the cluster centre value. Experimental results, on both simulated and real datasets, demonstrate that the proposed approach can effectively reduce the pseudo-edges of total variation regularization.
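The clustering stage can be sketched with a plain k-means over pixel colour vectors; the deterministic initialisation from the first k points is chosen here for clarity, whereas real implementations usually use random or k-means++ seeding:

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Plain k-means on pixel feature vectors (e.g. RGB triples)."""
    pixels = np.asarray(pixels, dtype=np.float64)
    centers = pixels[:k].copy()          # deterministic init for clarity
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign every pixel to its nearest cluster centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

For image segmentation, `pixels` would be the flattened colour values (optionally augmented with the spatial-information weights the abstract describes), and the resulting labels reshaped back to the image grid.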
An improved image compression algorithm based on daubechies wavelets with ar...Alexander Decker
This document summarizes an academic article that proposes a new image compression algorithm using Daubechies wavelets and arithmetic coding. It first discusses existing image compression techniques and their limitations. It then describes the proposed algorithm, which applies Daubechies wavelet transform followed by 2D Walsh wavelet transform on image blocks and arithmetic coding. Results show the proposed method achieves higher compression ratios and PSNR values than existing algorithms like EZW and SPIHT. Future work aims to improve results by exploring different wavelets and compression techniques.
1) The document proposes analog signal processing as a solution to reduce computation time for image alignment algorithms that have high computational loads.
2) It modifies the Normalized Cross Correlation (NCC) algorithm for image alignment by only using the diagonal elements of the template and reference image blocks to calculate correlation. This reduces computations compared to using all pixels.
3) A new imaging architecture is proposed that uses an analog processor to implement the modified NCC algorithm in parallel with digital image acquisition, providing faster computation.
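The diagonal-only NCC modification can be sketched as follows; mean-centering and normalization follow the standard NCC definition, restricted to the main diagonal, which drops the per-block cost from O(n^2) to O(n):

```python
import numpy as np

def ncc_diagonal(template, block):
    """Normalized cross-correlation computed on the main diagonals only,
    as in the modified algorithm summarized above."""
    t = np.diagonal(np.asarray(template, dtype=np.float64))
    b = np.diagonal(np.asarray(block, dtype=np.float64))
    t = t - t.mean()
    b = b - b.mean()
    denom = np.sqrt(np.sum(t ** 2) * np.sum(b ** 2))
    return float(np.sum(t * b) / denom) if denom else 0.0
```

The score still ranges over [-1, 1], so the matching logic around it is unchanged; only the amount of data fed into the correlation shrinks.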
Comparison of different Fingerprint Compression Techniquessipij
The important features of the wavelet transform and different methods for compression of fingerprint images have been implemented. Image quality is measured objectively using peak signal to noise ratio (PSNR) and mean square error (MSE). A comparative study using the discrete cosine transform based Joint Photographic Experts Group (JPEG) standard, wavelet based basic Set Partitioning in Hierarchical Trees (SPIHT) and Modified SPIHT is done. The comparison shows that Modified SPIHT offers better compression than basic SPIHT and JPEG. The results will help application developers choose a good wavelet compression system for their applications.
IRJET- Handwritten Decimal Image Compression using Deep Stacked AutoencoderIRJET Journal
This document proposes using a deep stacked autoencoder neural network for compressing handwritten decimal image data. It involves training multiple autoencoders in sequence to form a deep network that can compress the high-dimensional input images into lower-dimensional encoded representations while minimizing information loss. The autoencoders are trained one layer at a time using scaled conjugate gradient descent. Testing on the MNIST handwritten digits dataset showed the deep stacked autoencoder achieved compression by encoding the 400-dimensional input images down to a 25-dimensional representation while maintaining good reconstruction accuracy, as measured by minimizing the mean squared error at each layer.
A Comparative Case Study on Compression Algorithm for Remote Sensing ImagesDR.P.S.JAGADEESH KUMAR
This document summarizes research on compression algorithms for remote sensing images. It begins with an abstract describing the challenges of transmitting large remote sensing images from sensors to networks. The document then reviews 18 different research papers on various compression algorithms for remote sensing images, including wavelet-based algorithms, fractal coding methods, and region-based approaches. It evaluates each algorithm's performance in compressing remote sensing images while maintaining quality. The document aims to perform a comparative case study of these different compression algorithms.
IRJET- Contrast Enhancement of Grey Level and Color Image using DWT and SVDIRJET Journal
The document presents a method for contrast enhancement of gray level and color images using the discrete wavelet transform (DWT) and singular value decomposition (SVD). It begins with an introduction to common contrast enhancement techniques like general histogram equalization (GHE) and their limitations. The proposed method first applies GHE, then uses DWT to decompose the input image into subbands. It calculates a correction coefficient using the LL subbands and SVD, and multiplies this with the input image LL subband to generate a new LL subband. After recombining the subbands using inverse DWT, it produces an output image with enhanced contrast and brightness, without affecting color. Experimental results on sample images show improved mean, standard deviation and PSNR.
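The SVD correction coefficient can be sketched in NumPy, assuming (as is common in DWT-SVD enhancement schemes of this kind) that it is the ratio of the largest singular values of the equalized and input LL subbands:

```python
import numpy as np

def correction_coefficient(ll_input, ll_equalized):
    """xi = sigma_max(LL of GHE image) / sigma_max(LL of input image);
    scaling the input LL by xi lifts its brightness toward the equalized
    image while the detail subbands are left untouched."""
    s_in = np.linalg.svd(ll_input, compute_uv=False)
    s_eq = np.linalg.svd(ll_equalized, compute_uv=False)
    return s_eq[0] / s_in[0]
```

Because the largest singular value of the LL subband tracks overall luminance, adjusting it changes brightness and contrast without disturbing the high-frequency (edge) subbands.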
Analog signal processing approach for coarse and fine depth estimationsipij
Imaging and image sensors is a field that is continuously evolving, with new products coming to market every day. Some of these have very severe size, weight and power constraints, while other devices have to handle very high computational loads; some must meet both conditions simultaneously. Current imaging architectures and digital image processing solutions will not be able to meet these ever increasing demands, so there is a need to develop novel imaging architectures and image processing solutions to address these requirements. In this work we propose analog signal processing as a solution to this problem. The analog processor is not suggested as a replacement for the digital processor; rather, it is used as an augmentation device that works in parallel with the digital processor, making the system faster and more efficient. To show the merits of analog processing, two stereo correspondence algorithms are implemented. We propose novel modifications to the algorithms and new imaging architectures which significantly reduce the computation time.
Analog signal processing approach for coarse and fine depth estimationsipij
This document discusses an analog signal processing approach for coarse and fine depth estimation using stereo image pairs. It proposes modifications to existing normalized cross correlation (NCC) and sum absolute differences (SAD) stereo correspondence algorithms to reduce computation time. For the NCC algorithm, it suggests using only the diagonal elements of image blocks to compute correlation, reducing computations from 2D to 1D. For hardware implementation, it presents a new imaging architecture with parallel analog and digital systems, where the analog system performs the computationally intensive NCC algorithm on sensor data in real-time to reduce overall processing time compared to digital-only systems. Experimental results show the modified algorithms can achieve faster computation speeds without compromising performance.
Post-Segmentation Approach for Lossless Region of Interest Codingsipij
This paper presents a lossless region of interest coding technique suitable for interactive telemedicine over networks. The new encoding scheme allows a server to transmit only part of the compressed image data progressively as a client requests it. The technique differs from region scalable coding in JPEG2000 in that it does not define the region of interest (ROI) at encoding time. In the proposed method, the image is fully encoded and stored on the server, and a user may select an ROI after compression is done; this feature is the main contribution of the research. The proposed coding method achieves region scalable coding by using the integer wavelet lifting, successive quantization, and a partitioning that rearranges the wavelet coefficients into subsets. Each subset, representing a local area in the image, is then separately coded using run-length and entropy coding. In this paper, we show the benefits of the proposed technique with examples and simulation results.
The document presents a study on detecting glaucoma using a convolutional neural network (CNN). It discusses how existing glaucoma detection methods require manual feature extraction from fundus images, which CNNs can avoid by automatically learning image features. The proposed method uses a CNN architecture with convolutional and fully connected layers to classify fundus images as glaucoma or non-glaucoma. The CNN is trained on preprocessed images and evaluated on test images, achieving accurate classification results. The study demonstrates that a CNN can effectively detect glaucoma from fundus images without manual feature engineering.
1) The document proposes a method for color image enhancement using Laplacian pyramid decomposition and histogram equalization. It separates an input image into red, green, and blue color channels.
2) Each color channel is decomposed into a Laplacian pyramid, and histogram equalization is applied to enhance the contrast in each band-pass image.
3) The enhanced band-pass images are then recombined using the Laplacian pyramid reconstruction equation to produce enhanced color channels, which are combined to generate the output enhanced color image. The method aims to improve both local and global contrast while maintaining natural image quality.
The method also applies a smoothing step before contrast adjustment in each level of the pyramid, which helps avoid the over-enhancement seen with traditional histogram equalization.
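The decomposition and its exact inverse can be sketched as follows; a 2x2 box filter stands in for the usual Gaussian kernel, and histogram equalization would be applied to each band-pass level before reconstruction:

```python
import numpy as np

def downsample(x):
    """2x2 box-filter downsample (a simple stand-in for Gaussian blur + decimate)."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

def upsample(x):
    """Nearest-neighbour upsample back to double size."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Each level stores the band-pass residual img - upsample(downsample(img));
    the final entry is the low-pass residue."""
    img = np.asarray(img, dtype=np.float64)
    pyr = []
    for _ in range(levels):
        small = downsample(img)
        pyr.append(img - upsample(small))
        img = small
    pyr.append(img)
    return pyr

def reconstruct(pyr):
    """Invert the pyramid exactly: upsample the residue and add bands back."""
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = upsample(img) + band
    return img
```

Because each band stores exactly what the downsample/upsample pair discards, reconstruction is lossless; enhancing the bands before reconstruction is what boosts local contrast at each scale.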
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im...VLSICS Design
This paper presents the architecture and VHDL design of a Two Dimensional Discrete Cosine Transform (2D-DCT) with quantization and zigzag arrangement. This architecture is used as the core and path in JPEG image compression hardware. The 2D-DCT calculation is made using the 2D-DCT separability property, such that the whole architecture is divided into two 1D-DCT calculations by using a transpose buffer. The architecture for the quantization and zigzag process is also described in this paper. The quantization process is done using a division operation. The design is aimed at implementation in a Spartan-3E XC3S500E FPGA. The 2D-DCT architecture uses 1891 slices, 51 I/O pins, and 8 multipliers of one Xilinx Spartan-3E XC3S500E FPGA and reaches an operating frequency of 101.35 MHz. One input block with 8 x 8 elements of 8 bits each is processed in 6604 ns and the pipeline latency is 140 clock cycles.
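The separability property the hardware exploits (two 1D-DCT passes with a transpose buffer in between) can be checked in a few lines of NumPy; `dct_matrix` builds the orthonormal 1D DCT-II:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal 1D DCT-II matrix C, so that the 2D DCT is C @ X @ C.T."""
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2_separable(block):
    """2D DCT via two 1D passes with a transpose in between, mirroring
    the transpose-buffer hardware structure: rows first, then columns."""
    C = dct_matrix(block.shape[0])
    rows = block @ C.T        # 1D DCT along each row
    return (rows.T @ C.T).T   # transpose, 1D DCT again, transpose back
```

This is why the hardware only needs one 1D-DCT datapath reused twice rather than a full 2D unit.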
This document is a seminar report on digital image processing submitted by a student, N.Ch. Karthik, in partial fulfillment of a Bachelor of Technology degree. It discusses correcting raw images by subtracting dark current and bias, flat fielding for pixel sensitivity variations, and displaying images by limiting histograms, using transfer functions, and histogram equalization. The report also covers mathematical image manipulations and references other works.
The candidate is applying for the English Teacher position and holds a Master's in English with B.Ed., bringing expertise in curriculum teaching, classroom management, and student engagement. Their approach prioritizes diverse learning styles and nurturing critical thinking, creativity, and communication skills in an inclusive learning environment aligned with the school's values. They are excited to contribute by supporting students academically and personally.
This document provides tips for increasing self-confidence in 3 steps: 1) understand how the mind works to believe in yourself, 2) know your strengths and weaknesses to accept compliments, 3) categorize situations to confront your fears or avoid them.
This document summarizes a research article that analyzes the concept of heroism. The researchers aimed to develop a taxonomy of different types of heroism and differentiate heroic action from altruism. They explored paradoxes surrounding heroism, such as how heroic actors can be both elevated and negated. The researchers assert that insufficient justification, rather than risk alone, better explains how heroic status is ascribed. They briefly present results from a study supporting their arguments. The researchers identify areas for future research, such as how extension neglect may influence views of non-prototypical heroes and how injury/death of heroes resolves dissonance in their favor.
This document discusses barriers to communication and how to overcome them. It identifies 7 main barriers: language, culture, status consciousness, emotions, hearing what we expect, jealousy, and communication overload. It then provides 6 ways to overcome these barriers, including improving listening and reading skills, using empathy, feedback, constraining emotions, and observation. The key barriers to effective communication are differences in language, culture, emotions, and how people process information.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 𝟏)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
𝐃𝐢𝐬𝐜𝐮𝐬𝐬 𝐭𝐡𝐞 𝐄𝐏𝐏 𝐂𝐮𝐫𝐫𝐢𝐜𝐮𝐥𝐮𝐦 𝐢𝐧 𝐭𝐡𝐞 𝐏𝐡𝐢𝐥𝐢𝐩𝐩𝐢𝐧𝐞𝐬:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
𝐄𝐱𝐩𝐥𝐚𝐢𝐧 𝐭𝐡𝐞 𝐍𝐚𝐭𝐮𝐫𝐞 𝐚𝐧𝐝 𝐒𝐜𝐨𝐩𝐞 𝐨𝐟 𝐚𝐧 𝐄𝐧𝐭𝐫𝐞𝐩𝐫𝐞𝐧𝐞𝐮𝐫:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
The chapter Lifelines of National Economy in Class 10 Geography focuses on the various modes of transportation and communication that play a vital role in the economic development of a country. These lifelines are crucial for the movement of goods, services, and people, thereby connecting different regions and promoting economic activities.
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
LAI et al.: CONTENT-BASED LCD BACKLIGHT POWER REDUCTION USING HISTOGRAM ANALYSIS
Fig. 2. Block diagram for NBDA.

Fig. 3. New backlight dimming algorithm.

to NBDA to analyze the image histogram. First, RGB is transformed to a luminance–chrominance color space, and Y (luminance) is regarded as the gray level. By using a statistical analysis of the image histogram, NBDA calculates the mean value and the median value of the displayed image. A high mean value indicates that the backlight will be controlled to select a low current to save system power, based on the different backlight current levels. Fig. 2 shows the NBDA block diagram for controlling the backlight by histogram analysis. The image histogram represents the distribution of the gray levels. The five steps of the NBDA algorithm are detailed in Fig. 3. The definitions of the mean and the median of the image histogram are as follows:

  Mean = Σ_{i=0}^{255} i · p(i)   (1)

  Median = m,  where Σ_{i=0}^{m} p(i) = 1/2   (2)

where Y is the luminance value obtained from the RGB color-space transformation, and p(i) is the probability density function of gray level i. According to (1) and (2), the statistical values of the histogram can be estimated. Otherwise, a different backlight current level according to (3) (Step 4) can be selected. In Step 5, if the absolute difference between the mean value and the median value is greater than 60 (decimal), it implies that there is a large variation in the image. Therefore, the NBDA will not change the LCD backlight current because of the image-fidelity issue, and the original settings are kept. For this study, the backlight current is divided into eight different levels, and the NBDA selects the proper backlight current level in terms of the mean value of the image histogram.

B. New Image Enhancement Algorithm (NIEA)

From the viewpoint of the color space, when the gray-level data of the image are input to the NIEA, the proposed image enhancement approach does not cause distortion in hue (H) or saturation (S); only the image luminance (V) is enhanced. The analysis is derived as follows:

  (R′, G′, B′) = k · (R, G, B)   (4)

  H′ = H   (5)

  S′ = S   (6)

  V′ = k · V   (7)

Equation (4) shows that the generated output data, (R′, G′, B′), become k times the original input data, (R, G, B), where k is the contrast factor gain. Because of the elimination of k, (5) and (6) show that the proposed NIEA has no distortion in the hue (H) or saturation (S) components of the color space, so the values of the new H′ and S′ are the same as the original H and S. However, (7) shows that the enhanced luminance V′ becomes k times the original luminance V.

After the LCD backlight current level is selected based on NBDA, NIEA compensates for the image contrast so that viewers notice no conspicuous changes in the image quality. NIEA defines a luminance enhancement curve, as shown in Fig. 4, which splits the image pixels into 16 equal intervals. An input, x, can be mapped to an output, y, by the luminance enhancement curve. Since the luminance enhancement curve is nonlinear, the piecewise-linear method is used to approximate it with 16 line segments.

The NIEA algorithm is shown in Fig. 5. In Step 1, the gray-level data are input to NIEA pixel by pixel; then the gain k can be calculated for the corresponding image pixel. In Step 2, the new (R′, G′, B′) are calculated and output to the LCD panel.
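Taken together, the two subsections describe a simple pipeline: compute the histogram mean and median per (1) and (2), apply the 60-gray-level fidelity check, pick one of eight backlight levels from the mean, and scale (R, G, B) by a gain k per (4). The sketch below illustrates that flow in Python; it is not the authors' implementation — the current-level table, the mean-to-level mapping, and the constant gain model are placeholder assumptions (the paper derives k per pixel from the 16-segment curve of Fig. 4).

```python
import numpy as np

# Hypothetical eight-level backlight current table (mA): the paper divides the
# backlight current into eight levels but does not list the values here.
CURRENT_LEVELS_MA = [22.0, 20.0, 18.0, 16.0, 14.0, 12.0, 10.0, 9.2]

def histogram_stats(gray):
    """Mean and median of the gray-level histogram, per (1) and (2)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # probability density function p(i)
    mean = float(np.sum(np.arange(256) * p))  # (1): sum of i * p(i)
    median = int(np.searchsorted(np.cumsum(p), 0.5))  # (2): CDF crosses 1/2
    return mean, median

def nbda_select_level(gray, default_level=0):
    """NBDA sketch: select a backlight level from the histogram mean, unless
    |mean - median| > 60, in which case the original setting is kept (Step 5)."""
    mean, median = histogram_stats(gray)
    if abs(mean - median) > 60:               # large variation: fidelity issue
        return default_level
    # Assumed mapping: a higher mean selects a dimmer backlight (higher index).
    return min(int(mean) * 8 // 256, 7)

def niea_compensate(rgb, level):
    """NIEA sketch: scale (R, G, B) by a contrast gain k, per (4). Scaling the
    three channels equally leaves H and S unchanged ((5), (6)) and multiplies
    V by k (7). The gain-versus-level model below is a placeholder."""
    k = 1.0 + 0.1 * level
    return np.clip(rgb.astype(np.float32) * k, 0, 255).astype(np.uint8)
```

`CURRENT_LEVELS_MA[nbda_select_level(gray)]` would then give the drive current: a brighter image maps to a higher level index (dimmer backlight) and a larger compensating gain.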
JOURNAL OF DISPLAY TECHNOLOGY, VOL. 7, NO. 10, OCTOBER 2011
Fig. 4. Piecewise-linear method of NIEA.

Fig. 5. New image enhancement algorithm (NIEA).

Consequently, NIEA can improve the image quality. The formula for the gain k is defined in (8), where i = 1, …, 16, x represents the original image pixels (0–255), and y represents the enhanced image pixels (0–255).

C. Image Quality Assessment Using the SSIM Index

Usually, the mean square error (MSE) and the peak signal-to-noise ratio (PSNR) are adopted to evaluate image quality. However, as they are sometimes not well matched to perceived visual quality, the structural similarity index metric (SSIM) was adopted according to [16]. The HVS is highly adapted for extracting structural information. The formulas are represented as follows:

  SSIM(x, y) = l(x, y) · c(x, y) · s(x, y)   (9)

  l(x, y) = (2·μ_x·μ_y + C_1) / (μ_x² + μ_y² + C_1),   c(x, y) = (2·σ_x·σ_y + C_2) / (σ_x² + σ_y² + C_2)   (10)

  s(x, y) = (ρ·σ_x·σ_y + C_3) / (σ_x·σ_y + C_3)   (11)

where l(x, y), c(x, y), and s(x, y) are the luminance, contrast, and structural similarity terms, respectively. Fig. 6 shows the block diagram of the SSIM measurement system. In order to calculate the value of SSIM, (9) is evaluated through (10) and (11), where μ, σ, and ρ stand for the mean, the standard deviation, and the correlation coefficient, respectively. The final value of SSIM is between 0 and 1. When the value is closer to 1, it signifies, from the HVS perspective, that the extracted structural information of the two images is almost the same.

Fig. 6. Block diagram of the SSIM measurement system.

III. IMPLEMENTATION AND PERFORMANCE ANALYSIS

The proposed algorithms are implemented on a field-programmable gate array (FPGA) platform. The block diagram of the FPGA platform is shown in Fig. 7. An external flash memory (a USB flash disk) and an on-board SDRAM module are required to store the original image data and the modified image data, respectively. Fig. 8 shows our proposed architecture, which consists of three parts: the color transformation module, the NBDA module, and the NIEA module. The color-space transformation module from RGB is implemented using a canonical-signed-digit (CSD) fixed-coefficient multiplier. The NBDA and NIEA modules are implemented in a hardware description language (HDL), Verilog, according to the NBDA and NIEA algorithms (Figs. 3 and 5). Fig. 9 shows a photo of the display platform, with a 3-in TFT LCD panel with a resolution of 960 × 240 pixels to display the modified image. The maximum voltage the platform can support is 9.6 V, and the corresponding current is 25 mA. The circuits on the FPGA read the image data from the flash memory and perform the proposed NBDA and NIEA algorithms.

Fig. 7. Block diagram of the FPGA platform.

Fig. 8. Proposed architecture.

Fig. 9. Display platform.

From the experimental results, the upper parts of Figs. 10–13 show the original test images without the proposed algorithms having been applied. Thus, the backlight controller does not change the backlight current; the default current setting is around 22 mA, as measured by a current meter. Next, the middle parts of Figs. 10–13 show the modified test images using the proposed algorithms, NBDA and NIEA. The backlight controller lowers the backlight current to reduce power dissipation. Moreover, the lower parts of Figs. 10–13 show the histogram analysis used to determine the values of the mean, median, standard deviation, and correlation coefficient to evaluate the image quality.

Furthermore, NBDA and NIEA select the suitable backlight current by using the image histogram and enhance the image contrast to compensate for the image brightness. For example, the values of the mean and median in Fig. 10 are C (hex) and 5 (hex), respectively. Because the difference between the two values is less than 60 (decimal), NBDA selects current level 0 to drive the LCD panel. Then, the (R, G, B) data are input to NIEA, which adopts the piecewise-linear method to compensate for the image brightness and obtain the final current (9.2 mA).

Fig. 10. Test Image 1.

Fig. 11. Test Image 2.

Fig. 12. Test Image 3.

Fig. 13. Test Image 4.

In order to compare backlight power savings at the same comparison level, [5] proposed a backlight dimming algorithm using a backlight dimming gray (BDG) level at 75% of the histogram, which is defined as the characteristic of the image data. This backlight-dimming ratio (BDR) is calculated as … ; for example, the BDR of Fig. 10 is equal to … . The backlight current of [5] is … mA, so the power saving of the backlight in [5] is … , whereas the power saving of our NBDA is … . Hence, the proposed algorithm saves more power than [5].

TABLE I. COMPARISON OF BACKLIGHT POWER SAVING RATIO

TABLE II. RATIO OF IMAGE ENHANCEMENT

From the experimental results in Table I, the backlight current selected by NBDA, on average, reduces power consumption by 47%. This is superior to [5]. Moreover, NIEA not only increases the image contrast but also sustains the image quality. In Tables II and III, the ratio of image enhancement and the PSNR value are, on average, 6.8233% and 93.116 dB, respectively. In order to obtain a good match with HVS quality, the SSIM method is used to evaluate the images. As Table IV shows, the values are close to 1; thus, our proposed algorithms can sustain the original image quality.
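The SSIM scores in Table IV can be approximated with a single-window (global) version of the index from [16]. This is a sketch, not the paper's measurement system: Wang et al. compute SSIM over local windows and average the results, and the constants K1 = 0.01, K2 = 0.03 are the defaults from [16], not values stated here. It uses the common special case C3 = C2/2, which collapses the product in (9) to one fraction.

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM per Wang et al. [16], with C3 = C2 / 2 so that
    l(x, y) * c(x, y) * s(x, y) reduces to a single fraction."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    # Covariance sigma_xy equals rho * sigma_x * sigma_y in (11).
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den
```

Identical images score 1; dimming an image without compensation lowers mainly the luminance term, which is why NIEA's k-times scaling keeps the SSIM of the modified images near 1.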
IV. CONCLUSION

In this paper, we have proposed two algorithms to realize lower power consumption and image contrast enhancement: the NBDA and the new image enhancement algorithm (NIEA). The NBDA adopts content-based histogram analysis to select the corresponding TFT LCD backlight current and decreases the power consumption. Moreover, the NIEA increases the image contrast level, which compensates for the brightness of the image, so that the user can identify no conspicuous changes in the image in terms of HVS quality. The experimental results show that the proposed NBDA algorithm, on average, reduces power consumption by 47%, while the proposed NIEA algorithm enhances the image contrast ratio and sustains the image quality. Finally, SSIM is used to measure the image quality, which proves to be very close to that of the original image.

TABLE III. PSNR TO EVALUATE IMAGE QUALITY

TABLE IV. SSIM TO EVALUATE IMAGE QUALITY

REFERENCES

[1] G. Z. Wang, F. C. Lin, and Y. P. Huang, "Delta-color adjustment for spatial modulated color backlight algorithm on high dynamic range LCD TVs," J. Display Technol., vol. 6, no. 6, pp. 215–220, Jun. 2010.
[2] C. H. Chen and H. P. D. Shieh, "Effects of backlight profiles on perceived image quality for high dynamic range LCDs," J. Display Technol., vol. 4, no. 2, pp. 153–159, Jun. 2008.
[3] W. S. Oh, D. Cho, K. M. Cho, G. W. Moon, B. Yang, and T. Jang, "A novel two-dimensional adaptive dimming technique of X-Y channel drivers for LED backlight system in LCD TVs," J. Display Technol., vol. 5, no. 1, pp. 20–26, Jan. 2009.
[4] F. C. Lin, Y. P. Huang, L. Y. Liao, C. Y. Liao, H.-P. D. Shieh, T. M. Wang, and S. C. Yeh, "Dynamic backlight gamma on high dynamic range LCD TVs," J. Display Technol., vol. 4, no. 2, pp. 139–146, Jun. 2008.
[5] C.-C. Lai and C.-C. Tsai, "Backlight power reduction and image contrast enhancement using adaptive dimming for global backlight applications," IEEE Trans. Consumer Electron., vol. 54, no. 2, pp. 669–674, May 2008.
[6] T. Shirai, S. Shimizukawa, T. Shiga, and S. Mikoshiba, "RGB-LED backlights for LCD-TVs with 0D, 1D, and 2D adaptive dimming," in SID 2006 Dig. Tech. Papers, 2006, pp. 1520–1523.
[7] H. Chen, J. Sung, T. Ha, and Y. Park, "Locally pixel-compensated backlight dimming for improving static contrast on LED backlight LCDs," in SID 2007 Dig. Tech. Papers, 2007, pp. 1339–1342.
[8] D. Yeo, Y. Kwon, E. Kang, S. Park, B. Yang, G. Kim, and T. Jang, "Smart algorithms for local dimming LED backlight," in SID Int. Symp. Dig. Tech. Papers, 2008, pp. 986–989.
[9] S. Lee, K. Um, and B. Choi, "A power reduction method for LCD backlight based on human visual characteristics," in Proc. Int. Conf. Consumer Electron., 2008, pp. 197–198.
[10] N. Raman and G. J. Hekstra, "Content based contrast enhancement for liquid crystal displays with backlight modulation," IEEE Trans. Consumer Electron., vol. 51, no. 1, pp. 18–21, Feb. 2005.
[11] E. Y. Oh, S. H. Baik, M. H. Sohn, K. D. Kim, H. J. Hong, J. Y. Bang, K. J. Kwon, M. H. Kim, H. Jang, J. K. Yoon, and I. J. Chung, "IPS-mode dynamic LCD-TV realization with low black luminance and high contrast by adaptive dynamic image control technology," J. Soc. Inf. Display, vol. 13, pp. 215–219, 2005.
[12] C.-C. Sun, S.-J. Ruan, M.-C. Shie, and T.-W. Pa, "Dynamic contrast enhancement based on histogram specification," IEEE Trans. Consumer Electron., vol. 51, no. 4, pp. 1300–1305, Nov. 2005.
[13] H. Cho and O. Kwon, "A backlight dimming algorithm for low power and high image quality LCD applications," IEEE Trans. Consumer Electron., vol. 55, no. 4, pp. 839–844, May 2009.
[14] A. Bartolini, M. Ruggiero, and L. Benini, "Visual quality analysis for dynamic backlight scaling in LCD systems," in Proc. IEEE Des. Autom. & Test in Eur. Conf. & Exhib., 2009, pp. 1428–1433.
[15] M. Ruggiero, A. Bartolini, and L. Benini, "DBS4video: Dynamic luminance backlight scaling based on multi-histogram frame characterization for video streaming application," in Proc. 8th ACM EMSOFT, Atlanta, GA, 2008, pp. 109–118.
[16] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[17] S. Lee, K. Um, and B. Choi, "A power reduction method for LCD backlight based on human visual characteristics," in Proc. Int. Conf. Consumer Electron., 2008, pp. 1–2.

Yeong-Kang Lai (M'94) was born in Taipei, Taiwan, in 1966. He received the B.S. degree in electrical engineering from Tamkang University, Taipei, Taiwan, in 1988, and the M.S. and Ph.D. degrees from National Taiwan University in 1990 and 1997, respectively.
From 1992 to 1993, he was with the Institute of Information Science, Academia Sinica, Taiwan, where he worked on video conference systems. In 1997, he joined the Electrical Engineering Department, Chang Gung University, Taoyuan, Taiwan, as an Assistant Professor. From 1998 to 2001, he was an Assistant Professor in the Information Engineering Department at National Dong Hwa University, Hualien, Taiwan. Currently, he is an Associate Professor in the Department of Electrical Engineering, National Chung Hsing University, Taichung, Taiwan. He is also a member of the honor society Phi Tau Phi. His research interests include video compression, DSP architecture design, video signal processor design, and VLSI signal processing.

Yu-Fan Lai (S'06) was born in Taichung, Taiwan, on June 14, 1978. He received the B.S. degree in automatic control engineering from Feng Chia University, Taichung, Taiwan, in 2000, and the M.S. degree in electrical engineering from Chung Hwa University, Hsinchu, Taiwan, in 2003. From 2003 to 2007, he worked at Ritek Corporation, Hsinchu, Taiwan. He is currently pursuing the Ph.D. degree in the Department of Electrical Engineering at National Chung Hsing University.
His major research interests include image and video processing, VLSI architecture design of image and video coding, and VLSI design for digital signal processing.

Peng-Yu Chen received the M.S. degree in electrical engineering from National Chung Hsing University, Taichung, Taiwan.
His major research interests include image and video processing, FPGA architecture design of image and video coding, and FPGA design for digital signal processing.