ABSTRACT: Digital Imaging and Communications in Medicine (DICOM) is now widely used for viewing, distributing and storing medical images from different modalities. Although images can be processed by photographic, optical and electronic means, digital methods are precise, fast and flexible, so processing with digital computers is the most common approach. Image processing can extract information and modify pictures to improve them or change their structure (image editing, composition, compression, etc.). Image compression is a central component of storage and communication systems: it mitigates the costs of data transmission and image storage by reducing data redundancy. Medical images must be stored for future reference to a patient's history and hospital findings, so they should be compressed before storage. Because medical images are so important to diagnosis, compression is necessary both for the huge databases maintained by medical centres and for the transfer of medical data. At present, the discrete cosine transform (DCT), lossless run-length encoding, and the discrete wavelet transform (DWT) are the most widely used and accepted approaches to compression. We present a new DICOM-oriented lossless image compression method based on the discrete wavelet transform. In the proposed method, each DICOM image in the data set is compressed along its vertical, horizontal and diagonal detail components. We analyse the results for all DICOM images in the data set using two quality measures, PSNR and RMSE, and compare performance image by image. This work reports the performance comparison between the input (uncompressed) images and the compressed results for each image in the data set using the DWT method. Further, the performance of the DWT method with the Haar wavelet is compared against the 2D-DWT method using PSNR and RMSE as quality metrics. The performance of these compression methods has been simulated in MATLAB.
Keywords: JPEG, DCT, DWT, SPIHT, DICOM, VQ, Lossless Compression, Wavelet Transform, Image Compression, PSNR, RMSE
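The abstract above evaluates compression quality with RMSE and PSNR. As a minimal sketch of how these two metrics are computed (pure Python, images treated as flat lists of 8-bit pixel values; the function names are illustrative, not taken from the paper):

```python
import math

def rmse(original, reconstructed):
    """Root-mean-square error between two equal-length pixel sequences."""
    n = len(original)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n)

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(original, reconstructed)
    if e == 0:
        return float("inf")
    return 20 * math.log10(max_val / e)

# A lossless codec reproduces the input exactly, so RMSE is 0 and PSNR is infinite.
img = [10, 200, 45, 128]
print(rmse(img, img))                   # 0.0
print(psnr(img, [11, 199, 45, 128]))    # finite PSNR for a lossy reconstruction
```

Higher PSNR (lower RMSE) means the reconstruction is closer to the input, which is how the paper ranks its compression variants.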
4 ijaems jun-2015-5-hybrid algorithmic approach for medical image compression... (INFOGAIN PUBLICATION)
This document summarizes a research paper that proposes a hybrid algorithm for medical image compression using discrete wavelet transform (DWT) and Huffman coding techniques. The algorithm performs multilevel decomposition of medical images using DWT, quantizes the coefficients, assigns Huffman codes, and compresses the images. Simulation results on test medical images showed that the algorithm achieved excellent reconstruction quality with better compression ratios compared to other techniques. The algorithm is well-suited for compressing and transmitting large volumes of medical images over cloud platforms.
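The Huffman stage of the hybrid algorithm summarized above can be sketched generically: after DWT and quantization, frequent coefficient values (mostly zeros) get short codes. This is a standard Huffman build over quantized symbols, not the paper's exact implementation:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix-free code table from symbol frequencies."""
    freq = Counter(symbols)
    # Heap items: (frequency, tiebreak, node). Leaves are symbols; internal nodes are pairs.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate case: one distinct symbol
        return {heap[0][2]: "0"}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
        tiebreak += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse into children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                               # leaf: record the accumulated code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

# Quantized DWT coefficients are dominated by zeros, so zero gets the shortest code.
coeffs = [0, 0, 0, 0, 0, 0, 3, 0, -1, 0, 2, 0]
codes = huffman_codes(coeffs)
bitstream = "".join(codes[c] for c in coeffs)
```

Because zero dominates, the 12 coefficients here pack into well under the 96 bits a fixed 8-bit code would need.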
A spatial image compression algorithm based on run length encoding (journalBEEI)
Image compression is vital for many areas such as communication and storage of data that is rapidly growing nowadays. In this paper, a spatial lossy compression algorithm for gray scale images is presented. It exploits the inter-pixel and the psycho-visual data redundancies in images. The proposed technique finds paths of connected pixels that fluctuate in value within some small threshold. The path is calculated by looking at the 4-neighbors of a pixel then choosing the best one based on two conditions; the first is that the selected pixel must not be included in another path and the second is that the difference between the first pixel in the path and the selected pixel is within the specified threshold value. A path starts with a given pixel and consists of the locations of the subsequently selected pixels. Run-length encoding scheme is applied on paths to harvest the inter-pixel redundancy. After applying the proposed algorithm on several test images, a promising quality vs. compression ratio results have been achieved.
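The run-length stage that the paper applies along its pixel paths can be illustrated in generic form. This sketch encodes a flat sequence as (value, run) pairs; it is a simplified stand-in for the paper's path-based variant, not its actual code:

```python
def rle_encode(seq):
    """Encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([v, 1])     # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

# A path of connected pixels with near-constant values compresses to a few runs.
path = [7, 7, 7, 7, 120, 120, 3]
encoded = rle_encode(path)          # [(7, 4), (120, 2), (3, 1)]
assert rle_decode(encoded) == path  # lossless round trip
```

The paper's threshold-based paths exist precisely to create long runs like these for the encoder to harvest.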
International Journal on Soft Computing (IJSC)
This document provides a summary of various wavelet-based image coding schemes. It discusses the basics of image compression including transformation, quantization, and encoding. It then reviews the wavelet transform approach and several popular wavelet coding techniques such as EZW, SPIHT, SPECK, EBCOT, WDR, ASWDR, SFQ, EPWIC, CREW, SR and GW coding. These techniques exploit the multi-resolution properties of wavelets for superior energy compaction and compression performance compared to DCT-based methods like JPEG. The document provides details on how each coding scheme works and compares their features.
This document discusses and compares different image compression techniques. It proposes a hybrid technique combining discrete wavelet transform (DWT), discrete cosine transform (DCT), and Huffman encoding. DWT is applied to decompose images into frequency subbands, then DCT is applied to the low-frequency subbands. Huffman encoding is also used. This hybrid technique achieves higher compression ratios than standalone DWT and DCT, with better peak signal-to-noise ratios to measure reconstructed image quality. Simulation results on medical and other images demonstrate the effectiveness of the proposed hybrid compression method.
MULTI WAVELET BASED IMAGE COMPRESSION FOR TELE MEDICAL APPLICATION (prj_publication)
Analysis and compression of medical images is an important area of biomedical engineering. Medical image analysis and data compression are rapidly evolving fields with growing applications in teleradiology, biomedicine, telemedicine and medical data analysis. Wavelet-based techniques are the most recent development in the field of medical image compression. The region of interest (ROI) must be compressed by a lossless or near-lossless algorithm.
Wavelet multi-resolution decomposition of images has shown its efficiency in many image-processing areas, and specifically in compression. Transformed coefficients are obtained by expanding a signal on a wavelet basis. The transformed signal is a different representation of the same underlying data. Such a representation is efficient if a relevant part of the original information is concentrated in a relatively small number of coefficients. In this sense, wavelets are near-optimal bases for a wide class of signals with some smoothness, which is the reason they compress well.
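The claim above — that most of a smooth signal's information lands in a few coefficients — can be seen with one level of the orthonormal Haar transform, the simplest wavelet. This is a generic sketch, not the paper's code:

```python
import math

def haar_1d(signal):
    """One level of the orthonormal 1-D Haar transform.

    Returns (approximation, detail) coefficients; for a smooth signal
    nearly all the energy concentrates in the approximation half.
    """
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse: the transform is orthonormal, so nothing is lost."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

smooth = [10, 11, 12, 13, 14, 15, 16, 17]
approx, detail = haar_1d(smooth)
energy = lambda xs: sum(x * x for x in xs)
# Neighbouring samples are similar, so the detail coefficients carry a tiny
# fraction of the total energy; that is what makes wavelet coding effective.
print(energy(detail) / energy(smooth))
```

A coder can quantize or drop the near-zero detail coefficients aggressively while the approximation half preserves the signal's shape; a 2-D image version applies the same filters along rows and columns, yielding the horizontal, vertical and diagonal subbands used in the paper.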
Keywords: Image compression, Integer Multiwavelet Transform.
1. INTRODUCTION
Image compression is used to reduce the number of bits required to represent an image or a video sequence. A compression algorithm takes an input X and generates compressed information that requires fewer bits; the decompression algorithm reconstructs the original (or an approximation of it) from that compressed information.
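For a lossless scheme, decompress(compress(X)) equals X exactly while using fewer bits. A one-line demonstration with Python's standard zlib codec, used here only as a stand-in for the algorithms discussed, not as the paper's method:

```python
import zlib

# Redundant data (long zero run plus a ramp) compresses well.
raw = bytes([0] * 500 + list(range(256)))
packed = zlib.compress(raw, level=9)
assert zlib.decompress(packed) == raw   # lossless: exact reconstruction
assert len(packed) < len(raw)           # fewer bits than the input
print(len(raw), "->", len(packed), "bytes")
```

Lossy schemes relax the first assertion (the reconstruction only approximates X) in exchange for much smaller outputs.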
Compression of medical images is an important area of biomedical engineering and telemedicine.
IMAGE COMPRESSION AND DECOMPRESSION SYSTEM (Vishesh Banga)
Image compression is the application of data compression to digital images. In effect, the objective is to reduce redundancy in the image data in order to store or transmit the data in an efficient form.
An Algorithm for Improving the Quality of Compacted JPEG Image by Minimizes t... (ijcga)
Block-transform-coded JPEG, a lossy image compression format, has been used to keep the storage and bandwidth requirements of digital images at practical levels. However, JPEG compression may cause unwanted artifacts to appear, such as the 'blocky' artifact found in smooth/monotone areas of an image, caused by the coarse quantization of DCT coefficients. A number of filtering approaches in the literature incorporate value-averaging filters to smooth out the discontinuities that appear across DCT block boundaries. Although some of these approaches decrease the severity of the artifacts to some extent, others have limitations that cause excessive blurring of high-contrast edges in the image. The deblocking algorithm presented in this paper aims to filter the blocked boundaries. It does so by smoothing, detecting blocked edges, and then filtering the difference between the pixels that contain the blocked edge. The algorithm succeeds in reducing blocky artifacts and therefore increases both the subjective and objective quality of the reconstructed image.
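The coarse quantization blamed above for blocking can be shown on a single coefficient: JPEG divides each DCT coefficient by a quantization step and rounds, so nearly identical values in adjacent blocks can snap to different reconstruction levels. A minimal sketch with a uniform step Q (illustrative; real JPEG uses per-frequency quantization tables):

```python
def quantize(coeff, q):
    """JPEG-style uniform quantization: divide by the step, round, re-scale."""
    return round(coeff / q) * q

# Two adjacent blocks with almost identical DC (average brightness) values...
dc_left, dc_right = 119.0, 121.0
q = 16  # a coarse step, as used at low JPEG quality settings
# ...land on different reconstruction levels, producing a visible brightness
# step at the block boundary: the 'blocky' artifact.
print(quantize(dc_left, q), quantize(dc_right, q))   # 112 128
```

With a finer step (higher quality) both values round to nearby levels and the boundary step shrinks, which is why blocking appears mainly at aggressive compression settings.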
REVIEW ON TRANSFORM BASED MEDICAL IMAGE COMPRESSION (cscpconf)
Advanced medical imaging requires the storage of large quantities of digitized clinical data. Due to bandwidth and storage limitations, medical images must be compressed before transmission and storage. Diagnosis is effective only when compression techniques preserve all the relevant and important image information. There are basically two types of image compression: lossless and lossy. Lossless coding does not permit high compression ratios, whereas lossy coding achieves them. Among the existing lossy compression schemes, transform coding is one of the most effective strategies. In this paper, a review is made of different compression techniques for medical images based on transforms such as the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), hybrid DCT-DWT, and the Contourlet transform. The analysis shows that the Contourlet transform has superior overall performance in terms of PSNR.
Thesis on Image compression (Manish Myst)
The document discusses using neural networks for image compression. It describes how previous neural network methods divided images into blocks and achieved limited compression. The proposed method applies edge detection, thresholding, and thinning to images first to reduce their size. It then uses a single-hidden layer feedforward neural network with an adaptive number of hidden neurons based on the image's distinct gray levels. The network is trained to compress the preprocessed image block and reconstruct the original image at the receiving end. This adaptive approach aims to achieve higher compression ratios than previous neural network methods.
In this technical article, we present a novel algorithm for lossy compression, whose performance and storage requirements have been evaluated using a hardware description language (HDL).
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run length encoding and Huffman encoding, and lossy techniques like transformation coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and applications of JPEG 2000. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (VQ) for image compression.
Image processing techniques can be used for face recognition applications. The process involves decomposing face images into subbands using discrete wavelet transform. The mid-frequency subband is selected and principal component analysis is applied to extract representational bases. These bases are stored for training images and used to translate probe images into representations which are classified to identify faces by matching with training representations. This approach segments discriminatory facial features to recognize identities despite variations in illumination, pose, expression and other factors.
Matlab Based Image Compression Using Various Algorithm (ijtsrd)
Image compression is of great interest because it deals with real-world problems. It plays a critical part in the transfer of data, such as an image, from one user to another. This paper demonstrates the use of MATLAB to implement code that takes an image from the user and returns its compressed form as output. The WCOMPRESS function is used, which incorporates wavelet transform and entropy coding concepts. The paper presents work done on various image types, including JPEG (Joint Photographic Experts Group) and PNG, and analyses the output. Common compression techniques such as EZW, WDR, ASWDR and SPIHT are used. Beenish Khan, Ms. Poonam, Mr. Mohammad Talib, "Matlab Based Image Compression Using Various Algorithm", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN 2456-6470, Volume 2, Issue 4, June 2018, URL: http://www.ijtsrd.com/papers/ijtsrd14394.pdf http://www.ijtsrd.com/engineering/electronics-and-communication-engineering/14394/matlab-based-image-compression-using-various-algorithm/beenish-khan
1. The document discusses different image fusion techniques, specifically wavelet transform and curvelet transform based fusion.
2. Wavelet transform is commonly used for image fusion due to its simplicity and ability to preserve time-frequency details. Curvelet transform is better for fusing images with curved edges.
3. The paper compares fusion results of medical images like MR and CT using wavelet and curvelet transforms, finding that curvelet transform provides superior results in metrics like entropy and peak signal-to-noise ratio.
1. Digital image processing focuses on improving images for human interpretation and machine perception. It involves digitizing an image using sensors and processors then displaying the digital image.
2. The key stages of digital image processing are enhancement, restoration, compression, and registration. Registration involves mapping image frames for tasks like object recognition.
3. Common processing techniques include contrast intensification to improve poor contrast, smoothing to reduce noise, and sharpening to enhance blurred details.
Discrete cosine transform for image com... (www.iiste.org call for paper, Alexander Decker)
This document summarizes a research paper on 3D discrete cosine transform (DCT) for image compression. It discusses how 3D-DCT video compression works by dividing video streams into groups of 8 frames treated as 3D images with 2 spatial and 1 temporal component. Each frame is divided into 8x8 blocks and each 8x8x8 cube is independently encoded using 3D-DCT, quantization, and entropy encoding. It achieves better compression ratios than 2D JPEG by exploiting correlations across multiple frames. The document provides details on the 3D-DCT compression and decompression process. It reports that testing on a set of 8 images achieved a compression ratio of around 27 with this technique.
Comparative Study between DCT and Wavelet Transform Based Image Compression A... (IOSR Journals)
This document compares DCT and wavelet transform based image compression algorithms. It finds that wavelet transforms provide better compression ratios and lower mean square errors than DCT. As the level of the wavelet transform increases, the compression ratio increases while the mean square error initially decreases for wavelet levels 1-3. While DCT has faster encoding, it produces blocking artifacts, whereas wavelet transforms maintain good visual quality at higher compression ratios by considering correlations across blocks. Overall, the study shows that wavelet transforms enable higher compression with better visual quality than DCT.
Quality assessment of resultant images after processing (Alexander Decker)
This document discusses quality assessment of images after processing. It provides an overview of traditional perceptual image quality assessment approaches, which are based on measuring errors between distorted and reference images. These methods involve channel decomposition, error normalization based on visual sensitivity, and error pooling. The document also discusses information theoretic approaches to quality assessment, which view it as an information fidelity problem rather than just a signal fidelity problem. These approaches relate visual quality to the mutual information shared between the reference and test images. However, these methods make assumptions that are difficult to validate.
3D discrete cosine transform for image compression (Alexander Decker)
1. The document discusses 3D discrete cosine transform (DCT) for image compression. 3D-DCT takes a sequence of frames and divides them into 8x8x8 cubes.
2. Each cube is independently encoded using 3D-DCT, quantization, and entropy encoding. This concentrates information in low frequencies.
3. The technique achieves a compression ratio of around 27 for a set of 8 frames, better than 2D JPEG. By exploiting correlations in both spatial and temporal dimensions, 3D-DCT allows for improved compression over 2D transforms.
Quality Compression for Medical Big Data X-Ray Image using Biorthogonal 5.5 W... (IJERA Editor)
Medical Big Data (MBD) contains very useful information; it is important for a physician's decision making and for the treatments used to cure the patient. For accurate diagnosis, data availability is the most important factor. Transferring MBD over a network needs intelligent compression schemes so that it reaches its destination within the available bandwidth. The Biorthogonal 5.5 wavelet compression scheme compresses the MBD without losing important information, making it reliable and smaller in size for efficient, bandwidth-aware transfer from source to destination.
Enhanced Image Compression Using Wavelets (IJRES Journal)
Data compression, which can be lossy or lossless, is required to decrease storage requirements and improve data transfer rates. One of the best image compression techniques uses the wavelet transform; it is comparatively new and has many advantages over others. The wavelet transform can draw on a large variety of wavelets for decomposing images, and state-of-the-art coding techniques such as Haar and SPIHT (set partitioning in hierarchical trees) use it as a basic, common step before their own refinements, so the results depend on the type of wavelet used. In this thesis, different wavelets are used to transform a test image, and the results are discussed and analysed. Haar and SPIHT have been applied to an image and the results compared through qualitative and quantitative analysis in terms of PSNR values and compression ratios. Elapsed times for compressing the image with different wavelets have also been computed to identify the fastest compression method. The analysis is carried out in terms of the PSNR obtained and the time taken for decomposition and reconstruction.
Digital Image Compression using Hybrid Scheme using DWT and Quantization wit... (IRJET Journal)
This document discusses a hybrid image compression technique using both the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). It begins with an introduction to image compression and its goal of reducing file size while maintaining quality. It then outlines the proposed hybrid method, which applies the DWT to blocks of the image and then the DCT to the approximation coefficients from the DWT. This is intended to achieve higher compression ratios than DCT or DWT alone, with fewer blocking artifacts and false contours. Simulation results on various test images show that the hybrid method provides higher PSNR and lower MSE than the individual transforms, demonstrating that it outperforms them in both quality and compression. The document concludes that the hybrid approach is more suitable for image compression.
Wavelet Transform based Medical Image Fusion With different fusion methods (IJERA Editor)
This paper proposes a wavelet-transform-based image fusion algorithm, after studying the principles and characteristics of the discrete wavelet transform. Medical image fusion is used to derive useful information from multimodality medical images. The idea is to improve the image content by fusing images such as computed tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and to the clinical treatment planning system. The wavelet-based fusion algorithms are applied to CT and MRI medical images using the MIN, MAX and MEAN fusion rules, and the results are reported. With more multimodality medical images available in clinical applications, combining images from different modalities has become very important, and medical image fusion has emerged as a promising new research field.
Medical Image Fusion Using Discrete Wavelet Transform (IJERA Editor)
Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multimodal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. The domain where image fusion is readily used nowadays is in medical diagnostics to fuse medical images such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and MRA. This paper aims to present a new algorithm to improve the quality of multimodality medical image fusion using Discrete Wavelet Transform (DWT) approach. Discrete Wavelet transform has been implemented using different fusion techniques including pixel averaging, maximum minimum and minimum maximum methods for medical image fusion. Performance of fusion is calculated on the basis of PSNR, MSE and the total processing time and the results demonstrate the effectiveness of fusion scheme based on wavelet transform.
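The averaging, maximum and minimum fusion rules mentioned in these fusion papers operate element-wise on the two images (or their transform coefficients). A generic pixel-wise sketch, purely illustrative and not taken from either paper:

```python
def fuse(img_a, img_b, rule="mean"):
    """Pixel-wise fusion of two equal-size images given as flat lists."""
    ops = {
        "mean": lambda a, b: (a + b) / 2,   # averaging rule
        "max":  max,                        # keep the stronger response
        "min":  min,                        # keep the weaker response
    }
    op = ops[rule]
    return [op(a, b) for a, b in zip(img_a, img_b)]

# Toy example: bone is bright in CT, soft tissue is bright in MRI, so the
# MAX rule keeps the salient structure from each modality.
ct  = [200, 10, 40]
mri = [20, 180, 40]
print(fuse(ct, mri, "max"))    # [200, 180, 40]
print(fuse(ct, mri, "mean"))   # [110.0, 95.0, 40.0]
```

In wavelet-domain fusion the same rules are applied to the DWT coefficients of each image before inverting the transform, which tends to preserve edges better than fusing raw pixels.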
Javanese Traditional Market Cultural Value in Accounting WorldIOSR Journals
This study explores the accounting values of Javanese traditional market and the potential extinction of those values that now are being marginalized by modern accounting. Modern accounting notions that only focus on economic value have lack meaning to a society whose need not only economic but also non economic value. This study uses grounded theory method to explore the Javanese traditional market values. This study found that tolerance which is consisted of hesitate and trust among traders in Javanese traditional market are the values that can be used in accounting as a concept to challenge the marginalization and domination of modern accounting notions.
An EOQ Model for Weibull Deteriorating Items With Price Dependent DemandIOSR Journals
In the present paper we developed an economic order quantity model for Weibull deteriorating items with price dependent demand rate together with a replenishment policy for profit maximization. The demand rate is a continuous and differentiable function of price. The variable items deteriorate with time shortages are allowed and completely back-ordered. Further it is illustrated with the help of numerical examples.
The document describes a proposed smartdust network system for tactical border surveillance using multiple sensor signatures. The system consists of two smartdust sensor motes equipped with seismic, magnetic, acoustic, and thermal sensors and a central monitoring mote. The smartdust motes are designed using an ARM microcontroller, IEEE 802.15.4 radio transceiver, and various MEMS sensors. They form an ad-hoc wireless network to detect intrusions based on sensor signatures and relay information to the central monitoring mote. The central mote also displays the intrusion tracking history. The system is intended to address limitations of existing border surveillance methods like aircraft, armed forces, and ground RADAR systems by providing a low-cost, low
The efficacy of strategic management processes: An empirical study of Nationa...IOSR Journals
The purpose of this study was to appraise the adequacy of strategic management processes of National Sports Associations (NSAs) in Zimbabwe. Most of Zimbabwe NSAs perform poorly as evidenced by the perennial failure of national teams to qualify for major regional and international tournament. Their strategic management processes appear to be inadequate to meet the modern day environmental challenges. The study used the descriptive survey as the design. A sample of fifty three National Sports Association and Provincial Sports Association chairpersons were randomly selected from a population of seventy nine chairpersons. Hand delivered questionnaires with both open ended and closed questions were used to collect data. Results from the study indicate that most NSAs have core values, corporate objectives, policies and make use of physical structures. However the results also show that the majority of NSAs do not have strategic plans, do not practice strategic management and do not conduct strategic reviews. The results led to the conclusion that the strategic management processes in NSAs are inadequate. The study recommended that NSAs should prioritize strategic management and that both the Sports and Recreation Commission and Zimbabwe Olympic Committee should assist NSAs technically and financially to facilitate the adoption of strategic management principles
Organizational Learning in Nigerian Institutions: Constraints and Challenges - IOSR Journals
Organisations are recognised as legal and corporate entities. They have an image, and they can sue and be sued. To that extent, this paper posits that organisations can also learn, and asserts that the absence or dearth of an adequate organisational learning culture denies organisations the competitive edge needed to survive and remain viable in the contemporary, highly globalised world. The paper examines the technical and social perspectives of organisational learning and argues that the learning process is context- and culture-driven. Conceding that effective organisational learning practice enhances performance, it highlights some hindrances to the learning process, suggests remedies and concludes with a case study of the organisational learning problems of a Nigerian public sector organisation.
Thesis on Image Compression by Manish Myst
The document discusses using neural networks for image compression. It describes how previous neural network methods divided images into blocks and achieved limited compression. The proposed method applies edge detection, thresholding, and thinning to images first to reduce their size. It then uses a single-hidden layer feedforward neural network with an adaptive number of hidden neurons based on the image's distinct gray levels. The network is trained to compress the preprocessed image block and reconstruct the original image at the receiving end. This adaptive approach aims to achieve higher compression ratios than previous neural network methods.
In this technical article, we present a novel algorithm for lossy compression, in which performance and storage have been prescribed with a hardware description language (HDL).
This document discusses various image compression methods and algorithms. It begins by explaining the need for image compression in applications like transmission, storage, and databases. It then reviews different types of compression, including lossless techniques like run length encoding and Huffman encoding, and lossy techniques like transformation coding, vector quantization, fractal coding, and subband coding. The document also describes the JPEG 2000 image compression algorithm and applications of JPEG 2000. Finally, it discusses self-organizing feature maps (SOM) and learning vector quantization (VQ) for image compression.
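Run length encoding, the simplest of the lossless techniques reviewed here, can be sketched in a few lines of Python (helper names are illustrative, not from the paper):

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    if not data:
        return []
    out = []
    value, count = data[0], 1
    for x in data[1:]:
        if x == value:
            count += 1
        else:
            out.append((value, count))
            value, count = x, 1
    out.append((value, count))
    return out

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [v for v, c in pairs for _ in range(c)]

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]        # one row of pixel values
encoded = rle_encode(row)
print(encoded)                                # [(0, 3), (255, 2), (0, 4)]
assert rle_decode(encoded) == row             # lossless round trip
```

Long runs of identical pixels (common in binary and medical images) compress well; highly textured regions may even expand, which is why RLE is usually paired with a transform or an entropy coder such as Huffman coding.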
Image processing techniques can be used for face recognition applications. The process involves decomposing face images into subbands using discrete wavelet transform. The mid-frequency subband is selected and principal component analysis is applied to extract representational bases. These bases are stored for training images and used to translate probe images into representations which are classified to identify faces by matching with training representations. This approach segments discriminatory facial features to recognize identities despite variations in illumination, pose, expression and other factors.
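As a rough sketch of the PCA step described above — assuming the mid-frequency subbands have already been extracted with a DWT, and using hypothetical helper names — the representational bases can be obtained from an SVD of the flattened, centred subbands:

```python
import numpy as np

def pca_bases(subbands, k):
    """Top-k principal axes (representational bases) of the flattened,
    centred training subband images, via singular value decomposition."""
    X = np.stack([s.ravel() for s in subbands])   # one row per training image
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return Vt[:k], mean

def represent(subband, bases, mean):
    """Project a probe subband into the learned representation space;
    identities are then matched by nearest training representation."""
    return bases @ (subband.ravel() - mean)

rng = np.random.default_rng(0)
train = [rng.random((8, 8)) for _ in range(5)]   # stand-in mid-frequency subbands
bases, mean = pca_bases(train, k=3)
print(represent(train[0], bases, mean).shape)    # (3,)
```

Working on a DWT subband rather than raw pixels is what gives the method its robustness: the selected band suppresses illumination (low-frequency) and noise (high-frequency) variation before PCA is applied.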
Matlab Based Image Compression Using Various Algorithm - ijtsrd
Image compression is an interesting topic because it addresses real-world problems: it plays a critical part in the transfer of data, such as an image, from one user to another. This paper demonstrates the use of MATLAB to implement a program that takes an image from the user and returns its compressed form as output. The WCOMPRESS function is used, which incorporates wavelet transform and entropy coding concepts. The paper presents work done on various types of images, including JPEG (Joint Photographic Experts Group) and PNG, and analyses their outputs. Various compression schemes that are very common in image processing, such as EZW, WDR, ASWDR and SPIHT, are used. Beenish Khan | Ms. Poonam | Mr. Mohammad Talib, "Matlab Based Image Compression Using Various Algorithm", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2, Issue-4, June 2018, URL: http://www.ijtsrd.com/papers/ijtsrd14394.pdf http://www.ijtsrd.com/engineering/electronics-and-communication-engineering/14394/matlab-based-image-compression-using-various-algorithm/beenish-khan
Inflammatory Conditions Mimicking Tumours In Calabar: A 30 Year Study (1978-2... - IOSR Journals
1. The document discusses different image fusion techniques, specifically wavelet transform and curvelet transform based fusion.
2. Wavelet transform is commonly used for image fusion due to its simplicity and ability to preserve time-frequency details. Curvelet transform is better for fusing images with curved edges.
3. The paper compares fusion results of medical images like MR and CT using wavelet and curvelet transforms, finding that curvelet transform provides superior results in metrics like entropy and peak signal-to-noise ratio.
1. Digital image processing focuses on improving images for human interpretation and machine perception. It involves digitizing an image using sensors and processors then displaying the digital image.
2. The key stages of digital image processing are enhancement, restoration, compression, and registration. Registration involves mapping image frames for tasks like object recognition.
3. Common processing techniques include contrast intensification to improve poor contrast, smoothing to reduce noise, and sharpening to enhance blurred details.
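Each of the point operations above reduces to a small-kernel convolution. A minimal dependency-free sketch (valid-mode only; the helper name is hypothetical) with a box filter for smoothing and a Laplacian-based kernel for sharpening:

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution with a small kernel (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

smooth = np.full((3, 3), 1 / 9.0)              # box filter: averages out noise
sharpen = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], float)        # boosts edges, sums to 1
```

Because the sharpening kernel sums to 1, flat regions pass through unchanged while intensity discontinuities (blurred details) are amplified.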
Discrete cosine transform for image com... (www.iiste.org call for paper) - Alexander Decker
This document summarizes a research paper on 3D discrete cosine transform (DCT) for image compression. It discusses how 3D-DCT video compression works by dividing video streams into groups of 8 frames treated as 3D images with 2 spatial and 1 temporal component. Each frame is divided into 8x8 blocks and each 8x8x8 cube is independently encoded using 3D-DCT, quantization, and entropy encoding. It achieves better compression ratios than 2D JPEG by exploiting correlations across multiple frames. The document provides details on the 3D-DCT compression and decompression process. It reports that testing on a set of 8 images achieved a compression ratio of around 27 with this technique.
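The transform stage described above can be sketched compactly: build an orthonormal DCT-II matrix and apply it along each of the three axes of an 8x8x8 cube (function names are illustrative; the quantization and entropy-encoding stages are omitted):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    i = np.arange(n)
    C = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def dct3(cube):
    """3-D DCT of a cube: 1-D DCTs along each of the three axes."""
    C = dct_matrix(cube.shape[0])
    return np.einsum('ai,bj,ck,ijk->abc', C, C, C, cube)

cube = np.random.default_rng(1).random((8, 8, 8))   # 8x8 blocks from 8 frames
coeffs = dct3(cube)
# The transform is orthonormal, so total energy is preserved; after the
# transform it concentrates in the low-frequency corner of the cube.
print(np.allclose(np.sum(coeffs ** 2), np.sum(cube ** 2)))  # True
```

Quantizing the high-frequency corner aggressively is what exploits the temporal correlation across the 8 frames that 2-D JPEG cannot reach.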
Comparative Study between DCT and Wavelet Transform Based Image Compression A... - IOSR Journals
This document compares DCT and wavelet transform based image compression algorithms. It finds that wavelet transforms provide better compression ratios and lower mean square errors than DCT. As the level of the wavelet transform increases, the compression ratio increases while the mean square error initially decreases for wavelet levels 1-3. While DCT has faster encoding, it produces blocking artifacts, whereas wavelet transforms maintain good visual quality at higher compression ratios by considering correlations across blocks. Overall, the study shows that wavelet transforms enable higher compression with better visual quality than DCT.
Quality assessment of resultant images after processing - Alexander Decker
This document discusses quality assessment of images after processing. It provides an overview of traditional perceptual image quality assessment approaches, which are based on measuring errors between distorted and reference images. These methods involve channel decomposition, error normalization based on visual sensitivity, and error pooling. The document also discusses information theoretic approaches to quality assessment, which view it as an information fidelity problem rather than just a signal fidelity problem. These approaches relate visual quality to the mutual information shared between the reference and test images. However, these methods make assumptions that are difficult to validate.
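The simplest signal-fidelity measures of the kind discussed here — and the two used throughout these papers, PSNR and RMSE — are direct to compute for 8-bit images; a minimal sketch:

```python
import numpy as np

def rmse(ref, test):
    """Root-mean-square error between a reference and a processed image."""
    return np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = rmse(ref, test)
    return float('inf') if e == 0 else 20 * np.log10(peak / e)
```

These are pure error measures between distorted and reference images; the perceptual and information-theoretic approaches the paper surveys go further by weighting errors according to visual sensitivity or mutual information.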
3D discrete cosine transform for image compression - Alexander Decker
1. The document discusses 3D discrete cosine transform (DCT) for image compression. 3D-DCT takes a sequence of frames and divides them into 8x8x8 cubes.
2. Each cube is independently encoded using 3D-DCT, quantization, and entropy encoding. This concentrates information in low frequencies.
3. The technique achieves a compression ratio of around 27 for a set of 8 frames, better than 2D JPEG. By exploiting correlations in both spatial and temporal dimensions, 3D-DCT allows for improved compression over 2D transforms.
Quality Compression for Medical Big Data X-Ray Image using Biorthogonal 5.5 W... - IJERA Editor
Medical Big Data (MBD) contains very useful information: it is important for a physician's decision making and for treatments to cure the patient. For accurate diagnosis, data availability is the most important factor. MBD transferred over a network needs intelligent compression schemes so that it reaches the destination within the available bandwidth. The Biorthogonal 5.5 wavelet compression scheme compresses the MBD without losing the important information, making the data reliable and smaller in size for efficient bandwidth utilization during transfer from source to destination.
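The wavelet-thresholding idea behind such schemes can be sketched in a few lines. For a dependency-free illustration the sketch below substitutes a one-level Haar transform for the biorthogonal 5.5 wavelet the paper actually uses (available as 'bior5.5' in libraries such as PyWavelets); small detail coefficients are zeroed before reconstruction, and all helper names are hypothetical:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar analysis: approximation + 3 detail subbands."""
    a = (img[::2] + img[1::2]) / 2.0          # vertical averages
    d = (img[::2] - img[1::2]) / 2.0          # vertical differences
    return ((a[:, ::2] + a[:, 1::2]) / 2, (a[:, ::2] - a[:, 1::2]) / 2,
            (d[:, ::2] + d[:, 1::2]) / 2, (d[:, ::2] - d[:, 1::2]) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of the analysis step above."""
    h, w = LL.shape
    a = np.zeros((h, 2 * w))
    d = np.zeros((h, 2 * w))
    a[:, ::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, ::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.zeros((2 * h, 2 * w))
    img[::2], img[1::2] = a + d, a - d
    return img

def compress(img, t):
    """Zero detail coefficients below threshold t, then reconstruct."""
    LL, LH, HL, HH = haar_dwt2(img)
    keep = lambda s: np.where(np.abs(s) >= t, s, 0.0)
    return haar_idwt2(LL, keep(LH), keep(HL), keep(HH))
```

With t = 0 the round trip is exact; raising t discards the weakest detail coefficients first, trading size against fidelity while the clinically important low-frequency content in LL is kept untouched.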
Enhanced Image Compression Using Wavelets - IJRES Journal
Data compression, which can be lossy or lossless, is required to decrease storage requirements and improve data transfer rates. One of the best image compression techniques uses the wavelet transform; it is comparatively new and has many advantages over others. The wavelet transform can use a large variety of wavelets for the decomposition of images, and state-of-the-art coding techniques such as SPIHT (set partitioning in hierarchical trees) use the wavelet transform as a basic and common step before adding their own technical refinements. The results of the wavelet transform therefore depend on the type of wavelet used. In this thesis we applied different wavelets to transform a test image, and the results are discussed and analyzed. Haar and SPIHT-coded wavelets have been applied to an image and the results compared through qualitative and quantitative analysis in terms of PSNR values and compression ratios. Elapsed times for compressing the image with different wavelets have also been computed to identify the fastest compression method. The analysis has been carried out in terms of the PSNR (peak signal-to-noise ratio) obtained and the time taken for decomposition and reconstruction.
Digital Image Compression using Hybrid Scheme using DWT and Quantization wit... - IRJET Journal
This document discusses a hybrid image compression technique using both discrete cosine transform (DCT) and discrete wavelet transform (DWT). It begins with an introduction to image compression and its goals of reducing file size while maintaining quality. Next, it outlines the proposed hybrid compression method, which applies DWT to blocks of the image, then DCT to the approximation coefficients from DWT. This is intended to achieve higher compression ratios than DCT or DWT alone, with fewer blocking artifacts and false contours. Simulation results on various test images show the hybrid method provides higher PSNR and lower MSE than the individual transforms, demonstrating it outperforms them in terms of both quality and compression. The document concludes the hybrid approach is more suitable for
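A toy version of the hybrid pipeline can make the two stages concrete: here a one-level Haar approximation stands in for the DWT stage, followed by an orthonormal 2-D DCT whose high-frequency coefficients are discarded (helper names are hypothetical, and quantization is omitted):

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II of a square block."""
    n = x.shape[0]
    i = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ x @ C.T

def hybrid_compress(block, keep=4):
    """DWT stage: Haar approximation (2x2 averages) halves each dimension.
    DCT stage: transform the approximation and retain only the top-left
    keep x keep low-frequency coefficients."""
    approx = (block[::2, ::2] + block[1::2, ::2]
              + block[::2, 1::2] + block[1::2, 1::2]) / 4.0
    coeffs = dct2(approx)
    out = np.zeros_like(coeffs)
    out[:keep, :keep] = coeffs[:keep, :keep]
    return out

block = np.random.default_rng(3).random((16, 16))
print(hybrid_compress(block).shape)   # (8, 8)
```

Applying the DCT to the wavelet approximation rather than to raw 8x8 blocks is what reduces the blocking artifacts and false contours the paper reports, since the wavelet stage has already decorrelated neighbouring blocks.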
Wavelet Transform based Medical Image Fusion With different fusion methods - IJERA Editor
This paper proposes a wavelet transform based image fusion algorithm, after studying the principles and characteristics of the discrete wavelet transform. Medical image fusion is used to derive useful information from multimodality medical images. The idea is to improve the image content by fusing images such as computed tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and the clinical treatment planning system. The paper uses the wavelet transform to fuse medical CT and MRI images, performing the fusion with the MIN, MAX and MEAN methods, and presents the results obtained. With more multimodality medical images available in clinical applications, the idea of combining images from different modalities has become very important, and medical image fusion has emerged as a promising new research field.
Medical Image Fusion Using Discrete Wavelet Transform - IJERA Editor
Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multimodal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. The domain where image fusion is readily used nowadays is in medical diagnostics to fuse medical images such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and MRA. This paper aims to present a new algorithm to improve the quality of multimodality medical image fusion using Discrete Wavelet Transform (DWT) approach. Discrete Wavelet transform has been implemented using different fusion techniques including pixel averaging, maximum minimum and minimum maximum methods for medical image fusion. Performance of fusion is calculated on the basis of PSNR, MSE and the total processing time and the results demonstrate the effectiveness of fusion scheme based on wavelet transform.
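The fusion rules named above (pixel averaging, maximum and minimum selection) reduce to a few lines when applied to corresponding wavelet coefficient arrays from the two modalities; a minimal sketch with a hypothetical fuse helper:

```python
import numpy as np

def fuse(c1, c2, rule='max'):
    """Combine corresponding wavelet coefficients from two source images
    (e.g. one CT and one MRI subband) under a chosen fusion rule."""
    if rule == 'max':
        # keep whichever source has the stronger (larger-magnitude) detail
        return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    if rule == 'min':
        return np.where(np.abs(c1) <= np.abs(c2), c1, c2)
    if rule == 'mean':
        return (c1 + c2) / 2.0                  # pixel/coefficient averaging
    raise ValueError(rule)
```

In a full pipeline both images are decomposed with the DWT, the rule is applied subband by subband, and the fused coefficients are inverse-transformed; the max rule tends to preserve the sharpest structures from each modality, while averaging suppresses noise at the cost of contrast.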
Javanese Traditional Market Cultural Value in Accounting World - IOSR Journals
This study explores the accounting values of the Javanese traditional market and the potential extinction of those values, which are now being marginalized by modern accounting. Modern accounting notions that focus only on economic value hold little meaning for a society whose needs are not only economic but also non-economic. This study uses the grounded theory method to explore Javanese traditional market values. It found that tolerance, consisting of hesitancy and trust among traders in the Javanese traditional market, comprises values that can be used in accounting as a concept to challenge the marginalization and domination of modern accounting notions.
An EOQ Model for Weibull Deteriorating Items With Price Dependent Demand - IOSR Journals
In the present paper we develop an economic order quantity model for Weibull-deteriorating items with a price-dependent demand rate, together with a replenishment policy for profit maximization. The demand rate is a continuous and differentiable function of price. The items deteriorate with time; shortages are allowed and completely back-ordered. The model is further illustrated with the help of numerical examples.
Innovation and Sustainable Development: The Question of Energy Efficiency - IOSR Journals
This document discusses the relationship between innovation and energy efficiency as it relates to sustainable development. It begins by defining key concepts like sustainable development, renewable energy, and energy efficiency. It then examines how technological innovation and the use of renewable energy can help foster sustainable development by reducing environmental impacts and promoting socioeconomic development. Specifically, it explores how renewable energy and energy efficiency in agriculture can contribute to the environmental, social and economic dimensions of sustainability in Tunisia.
Organizational citizenship Behavior as Attitude Integrity in Measurement of I... - IOSR Journals
The quality of human resources is one factor that increases the performance and productivity of an institution or organization. Human resources with high commitment are therefore needed, because such commitment supports improvements in employee performance. In general, government institutions have lacked adequately committed officers, as shown by low officer productivity and the difficulty of measuring officer performance within the scope of a government institution. A performance management system, in the modern concept of human resource management, is an objective and transparent performance measurement model of Organizational Citizenship Behavior (OCB) that rewards an individual's sacrifice for the organization. The three main elements of individual sacrifice expressed in OCB are compliance, loyalty and participation. The organization should appreciate these attitudes by giving a clear job description and clear reward-system criteria to encourage individual job motivation. Combined with the individual assessment of the job description, job grading is used to compile a correct Key Performance Index and a precise salary component. The aim of this action research is to give a comprehensive solution for Hospital X, in order to determine a Key Performance Index model, in response to problems such as job motivation, work stress and performance. Interviews with the hospital's director and Human Resources section were conducted to compile the KPI. The results of this research recommend that the hospital make a comprehensive performance assessment consisting of a review of employees' job descriptions, Key Performance Indicators (KPI), job grading, specification of fundamental salary based on work, a bonus scheme and a score summary.
Energy Conservation in Wireless Sensor Networks: A Review - IOSR Journals
Abstract: A wireless sensor network consists of a large number of sensor nodes deployed over an area to perform local computations based on information gathered from the surroundings. Each node in the network runs on a battery, but it is very difficult to change or recharge batteries. The question, then, is how to extend the lifetime of the network: to maximize it, energy consumption must be minimized. This is an important challenge in sensor networks, as sensors cannot easily be replaced or recharged due to their ad-hoc deployment in hazardous environments. In this paper, the main techniques used for energy conservation in sensor networks are discussed, including duty cycling schemes, data-driven approaches, mobility-based schemes, energy-efficient MAC protocols and node self-scheduling schemes. These schemes can be used to improve the energy efficiency of the wireless sensor network so that it can operate with greater efficiency and longer battery lifetime.
Keywords: Energy conservation, energy consumption, sensor nodes, wireless sensor networks
An Analysis of the Relationship between Fiscal Deficits and Selected Macroeco... - IOSR Journals
This study investigates the relationship between Government Deficit Spending and selected macroeconomic variables such as Gross Domestic Product (GDP), Exchange Rate, Inflation, Money Supply and Lending Interest Rate. The period covered is 1970 (when the civil war ended) to 2011. The Ordinary Least Squares (OLS) technique was adopted to analyze the relationships. The study concludes that Government Deficit Spending (GDS) has a positive significant relationship with GDP. Government Deficit Spending also has positive significant relationships with Exchange Rate, Inflation and Money Supply. Government Deficit has a negative significant relationship with Lending Interest Rate and most likely crowds out the private sector by raising the cost of funds. Deficit spending has been known to have adverse effects on the economy, and government is advised to curtail excessive deficit spending. It is recommended that further research be done to establish other variables that are affected by government deficit spending.
A Study to Evaluate Spatial Gait Parameters in Patients of Osteoarthritic Knee - IOSR Journals
Abstract: Osteoarthritis is a slowly evolving articular disease which appears to originate in the cartilage and affects the underlying bone, soft tissues and synovial fluid. Quantitative gait analysis is an important clinical tool for quantifying normal and pathological patterns of locomotion, and has also been shown to be useful for the prescription of treatment as well as in the evaluation of its results. The objective of this study was to evaluate variations in spatial gait parameters in patients with osteoarthritic knee compared to normal healthy individuals. 100 subjects were included in the study: 50 normal healthy subjects and 50 subjects with bilateral knee osteoarthritis, aged 40-60 years, based on the inclusion and exclusion criteria. Before gait analysis, all subjects gave informed consent as advised by the Institutional Ethical Committee. Gait analysis was done for both groups with the same protocol. The following gait parameters were measured individually for each subject: step length [m], stride length [m], normalised stride length [%], step width [m] and foot angle [degrees]. Paired and unpaired t-tests were used for the analysis of the data. The study concluded that there was a significant difference in spatial gait parameters in patients with osteoarthritic knee compared to normal healthy individuals.
Keywords: Osteoarthritis, Gait parameters, Spatial gait parameters
Analysis of the Demand for Eggs in City of Malang - IOSR Journals
This research aimed to determine the factors that influence the demand for eggs in the City of Malang and to establish the elasticity of demand in relation to changes in the price of eggs there. Data collection was conducted from November 2012 to December 2012 among consumers who purchase eggs at the traditional markets in the City of Malang (Dinoyo market and Pasar Besar market). The research method employed in this study was a survey method, with sampling conducted through purposive sampling. The data collected included primary data from 200 respondents, gathered through direct observations and interviews, and secondary data obtained from relevant agencies. Data were then analyzed using multiple linear regression in logarithms. The regression analysis showed that the independent variables together significantly affected (P < 0.01) the dependent variable, with an R² of 0.731. Individually, the price of eggs, household income, the number of family members, and education each affected the demand for eggs in the City of Malang. The price elasticity of demand for eggs is elastic, with a value of -2.824. The income elasticity of demand for eggs was 0.022, which is inelastic, meaning that eggs are a normal commodity. The cross-price elasticity of demand for eggs with respect to broiler meat was -4.451, which means that broiler meat is not a substitute commodity for the eggs of egg-laying chickens.
This document reports on a study of arbuscular mycorrhizal (AM) fungi associated with Pandanus fascicularis, a plant species found along coastal regions in Maharashtra, India. The key findings are:
1) All P. fascicularis root samples were colonized by AM fungi, with colonization rates ranging from 39-74%.
2) A total of 13 AM fungal morphospecies were identified among the study sites. Kuklospora colombiana was the most widely distributed.
3) Based on spore density and relative abundance, the dominant AM fungal species associated with P. fascicularis were Acaulospora biretic
Parametric Variation Based Analysis and Effective Design of Rectangular Patch... - IOSR Journals
Abstract: This paper develops an understanding of creating and improving the design of a microstrip antenna through performance analysis of results from various configurations of a rectangular patch microstrip antenna. Furthermore, it presents a simulated patch antenna with effective results for Bluetooth applications at a frequency of 2.4 GHz. The proposed antenna is not only designed from the formulated calculations but also analyzed over different sizes, positions and orientations of the substrate, feed point and slots respectively. Propagation parameters are greatly improved by amendments suggested by the analysis of the variation-based studies provided in this paper. The initial results obtained using formula-based designs are compared with the improved results to illustrate the effects of such variations on antenna parameters. The final antenna shows significantly improved return loss of -46.7 dB, VSWR of 1.0093, bandwidth of 180 MHz and a far-field radiation pattern with a gain of 2.2782 dB. The antenna designed is optimized and interpreted with the Ansoft HFSS 13.0 simulator. Keywords: Bluetooth, rectangular patch antenna, feed point, trial and error method, slot orientation, wide bandwidth
1) The study investigated the antioxidant effects of Emblica officinalis (amla) aqueous extract in delaying cataract formation in hyperglycemic goat lenses.
2) Lenses incubated with glucose developed mature cataracts with decreased soluble proteins and increased lipid peroxidation, while lenses incubated with glucose and amla extract showed maintained transparency, higher soluble proteins, and lower lipid peroxidation.
3) Lenses incubated with glucose and amla extract also demonstrated significantly higher activities of antioxidant enzymes like superoxide dismutase, glutathione peroxidase, and glutathione reductase compared to lenses incubated with glucose alone.
An Effective Model for Evaluating Organizational Risk and Cost in ERP Impleme... - IOSR Journals
Enterprise Resource Planning (ERP) implementation in the context of small and medium-sized enterprises is discussed in this paper. It is essential for small businesses to implement an ERP system successfully in order to maintain control of their risks, since ERP implementation is costly and risky for small and medium enterprises. The paper identifies, from the archived literature, the risks involved in selecting and implementing an ERP system. The authors hypothesize a relationship between two elements of organizational risk factors: an adequate system and business process re-engineering. The study also examines the relationship between ERP implementation cost and ERP project success, in order to better manage enterprise ERP projects in the context of SMEs.
Evaluating Perceived Quality of B-School Websites - IOSR Journals
Websites are a window on the world for finding most information and a gateway for many activities. Websites are not only a necessity but are mandatory for B-Schools. A B-School website serves as a portal for all stakeholders, in addition to being an information placeholder; it serves as everything from an administrative tool to a learning management system. The utility and effectiveness of a website depend on the quality of the service it provides to the surfer. A study was undertaken to survey the various features of a B-School website that could serve as Quality Function Deployment (QFD) touchstones. In addition, the influence of perceived quality on user satisfaction was also studied. A focus group of B-School students evaluated the websites and scored a checklist-cum-questionnaire. The results highlight the most and least frequently found features of a B-School website, and identify the factors important for creating a quality website. This study will help B-School administrators and website designers create quality, satisfying websites.
Performance Analysis of OFDM in Combating Multipath Fading - IOSR Journals
Abstract: Mobile communication systems face rising demand for high data-rate transmission over the wireless medium, with various challenges caused by the transmission channel. OFDM has been adopted in recent years to deal with these problems because of its ability to cope elegantly with multipath interference. This paper investigates the performance of different modulation schemes using M-ary Phase Shift Keying (M-PSK) and M-ary Quadrature Amplitude Modulation (M-QAM) for information transmission with the OFDM technique over the ideal AWGN channel and the worst-case Rayleigh fading channel, in terms of Bit Error Rate (BER). Analysis was made for different modulation schemes: BPSK, QPSK, 4-QAM and 16-QAM with Gray-coded bit mapping. The feasibility of using OFDM to combat multipath fading was also analyzed by comparing a single-carrier technique against the OFDM multicarrier technique. SNR is plotted against BER to show the trade-offs between the modulation schemes, with the results showing that OFDM allows data transmission with less error over a fading channel than a single carrier. Keywords: OFDM, Single Carrier, AWGN, Rayleigh fading, BER, M-ary PSK, M-ary QAM
Reduce Evaporation Losses from Water Reservoirs - IOSR Journals
Evaporation suppression is the reduction of evaporation by controlling the rate at which water vapor escapes from water surfaces. The need for water saving is greatest in areas of little rainfall and low runoff. Water losses by evaporation from storage reservoirs must be minimized for the greatest utility of limited supplies. Using polyethylene trash of different densities (800, 875 and 900 kg/m3) as a floating cover on water filling a cylindrical container of 8 cm diameter reduced the evaporation rate. A trash density of 800 kg/m3 gave a 57% reduction in evaporation rate relative to the theoretical results calculated using equation (4), which is a good result compared with previous research.
This document provides a summary of Nobel Laureates in Physics from 1901 to 2013. It lists each laureate's name, birthplace, years of life, and a brief description of their award-winning work. Some key highlights include Wilhelm Rontgen being awarded the first Nobel Prize in Physics in 1901 for his discovery of X-rays, Albert Einstein winning in 1921 for his work on the photoelectric effect and theory of relativity, and Indian physicist C.V. Raman winning in 1930 for his discovery of the Raman effect. The document continues in chronological order through the 20th century, covering major discoveries such as the neutron, quantum mechanics, and the transistor effect.
The contribution of Sport and Physical activity towards the achievement of co... — IOSR Journals
Abstract: In the last two decades Zimbabwe suffered a severe socioeconomic and political crisis. The crisis was characterised by unprecedented rates of inflation, exacerbated by political instability and economic sanctions. The economic challenges led to a severe brain drain of Zimbabwean health professionals. The elements of a previously well-maintained health care system severely deteriorated, and community health was seriously compromised. Now that Zimbabwe seems to be on the recovery path, this article critiques, reviews and justifies the potential contribution which sport and physical activity can make towards the achievement of community health objectives in Zimbabwe in line with Millennium Development Goal number six. The content analysis and review identified that sport and physical activity can play a significant role in improving the health of members of the Zimbabwean community. It is apparent from the review that sport and physical activity can help reduce the incidence of obesity, cardiovascular diseases, HIV/AIDS, diabetes, hypertension and many other health problems bedevilling Zimbabwean society today, and hence contribute towards the achievement of Millennium Development Goal number six.
Keywords: Community health, Physical activity, Sport, MDGs, Zimbabwe
Wavelet based Image Coding Schemes: A Recent Survey — ijsc
A variety of new and powerful algorithms have been developed for image compression over the years. Among them, wavelet-based image compression schemes have gained much popularity due to their overlapping nature, which reduces the blocking artifacts common in JPEG compression, and their multiresolution character, which leads to superior energy compaction and high-quality reconstructed images. This paper provides a detailed survey of some popular wavelet coding techniques: Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Trees (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques, such as the Wavelet Difference Reduction (WDR) and Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelets (CREW), Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding, are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
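All of the coders surveyed start from a multiresolution wavelet decomposition. The simplest member of that family, a one-level 2-D Haar transform in a common unnormalized (average/difference) convention, can be sketched in a few lines; this toy version is illustrative only:

```python
def haar2d(img):
    """One level of a 2-D Haar wavelet transform.

    Returns the four sub-bands (LL, LH, HL, HH); LL carries most of the
    energy, which is the 'energy compaction' the survey refers to.
    img is a list of equal-length rows with even dimensions.
    """
    # 1-D Haar step along rows: pairwise averages (low-pass) and differences (high-pass)
    def rows_pass(m):
        lo, hi = [], []
        for row in m:
            lo.append([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
            hi.append([(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)])
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    lo, hi = rows_pass(img)
    # Apply the same step along the columns of each half
    ll, lh = rows_pass(transpose(lo))
    hl, hh = rows_pass(transpose(hi))
    return transpose(ll), transpose(lh), transpose(hl), transpose(hh)

ll, lh, hl, hh = haar2d([[1, 1, 2, 2],
                         [1, 1, 2, 2],
                         [3, 3, 4, 4],
                         [3, 3, 4, 4]])
print(ll)  # [[1.0, 2.0], [3.0, 4.0]] -- low-frequency approximation
print(hh)  # [[0.0, 0.0], [0.0, 0.0]] -- diagonal detail, zero for this smooth image
```

Coders like EZW and SPIHT then exploit the fact that the detail sub-bands of natural images are mostly near zero.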
This document summarizes a research paper that proposes a hybrid image compression algorithm using discrete wavelet transform (DWT), discrete cosine transform (DCT), and Huffman encoding. The algorithm first applies 2D-DWT to blocks of the input image, discarding high-frequency coefficients. It then applies 4x4 2D-DCT to the low-frequency DWT coefficients. Huffman coding is then used to assign codes to the DCT coefficients. Experimental results on medical and other images show the hybrid algorithm achieves higher compression ratios and peak signal-to-noise ratios than using DWT or DCT alone.
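The Huffman stage of such a hybrid coder assigns shorter bit strings to more frequent quantized coefficients. A minimal generic sketch of code-table construction (not the paper's implementation) follows:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table for a sequence of symbols (e.g. quantized
    DCT coefficients): frequent symbols get shorter, prefix-free bit strings."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, tie-break counter, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        # Prefix the two lightest subtrees with 0 and 1, then merge them
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes)  # 'a' (most frequent) gets the shortest code
```

In the compression pipeline described above, the symbols would be the quantized DCT coefficients rather than characters.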
An approach for color image compression of BMP and TIFF images using DCT and DWT — IAEME Publication
This document summarizes a research paper that compares image compression using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT). It finds that DWT performs better than DCT in terms of Mean Square Error and Peak Signal to Noise Ratio. The paper analyzes compression of BMP and TIFF color images using DCT and DWT. It converts color images to grayscale, then compresses the grayscale images using DWT. DWT decomposes images into different frequency components and scales, allowing for better image compression compared to DCT.
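The two quality measures used in that comparison are standard and easy to state in code; a small sketch for 8-bit grayscale images given as lists of rows:

```python
import math

def mse(original, compressed):
    """Mean squared error between two equal-sized 8-bit grayscale images."""
    total, n = 0.0, 0
    for row_o, row_c in zip(original, compressed):
        for a, b in zip(row_o, row_c):
            total += (a - b) ** 2
            n += 1
    return total / n

def psnr(original, compressed, peak=255):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = mse(original, compressed)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

a = [[52, 55], [61, 59]]
b = [[52, 54], [61, 58]]
print(mse(a, b))   # 0.5
print(psnr(a, b))  # ~51.14 dB
```

Lower MSE and higher PSNR after reconstruction is exactly the sense in which the paper reports DWT outperforming DCT.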
Image compression techniques by using wavelet transform — Alexander Decker
This document discusses image compression techniques using wavelet transforms. It begins with an introduction to image compression and discusses lossless and lossy compression methods. It then focuses on wavelet transforms, which decompose images into different frequency components, allowing for better compression. The document describes how wavelet-based compression avoids blocking artifacts seen in other methods like DCT. It details an image compression program called MinImage that implements various wavelet types and the embedded zerotree wavelet coding algorithm to achieve good compression ratios while maintaining image quality. In conclusion, wavelet transforms combined with entropy coding provide effective lossy compression of digital images.
REGION OF INTEREST BASED COMPRESSION OF MEDICAL IMAGE USING DISCRETE WAVELET ... — ijcsa
Image compression is used to reduce the size of a file without degrading the quality of the image to an objectionable level. The reduction in file size permits more images to be stored in a given amount of space and minimizes the time necessary for images to be transferred. There are different ways of compressing image files. On the Internet, the two most common compressed graphic image formats are JPEG and GIF. The JPEG format is more often used for photographs, while the GIF format is commonly used for logos, symbols and icons, though it is limited to 256 colors. Other approaches to image compression include the use of fractals and wavelets; these have not gained widespread acceptance for use on the Internet. Compressing an image is remarkably different from compressing raw binary data. General-purpose compression techniques can be used to compress images, but the result is less than optimal, because images have statistical properties that can be exploited by encoders specifically designed for them. Also, some of the finer details of the image can be sacrificed to save a little more bandwidth or storage space. In this paper, compression is performed on a medical image using the discrete wavelet transform and the discrete cosine transform, which compress the data efficiently without reducing the quality of the image.
An efficient image compression algorithm using dct biorthogonal wavelet trans... — eSAT Journals
Abstract
Recently, digital imaging applications have increased significantly, which creates a requirement for effective image compression techniques. Image compression removes redundant information from an image; using it, we can store only the necessary information, which helps to reduce the transmission bandwidth, transmission time and storage size of an image. This paper proposes a new image compression technique using the DCT and biorthogonal wavelet transform with arithmetic coding to improve the visual quality of an image. It is a simple technique for obtaining better compression results. In this new algorithm, the biorthogonal wavelet transform is applied first, and then the 2D DCT is applied on each block of the low-frequency sub-band. Finally, all values from each transformed block are split and arithmetic coding is applied for image compression.
Keywords: Arithmetic coding, Biorthogonal wavelet Transform, DCT, Image Compression.
A Review on Image Compression using DCT and DWT — IJSRD
This document reviews image compression techniques using discrete cosine transform (DCT) and discrete wavelet transform (DWT). It discusses how DCT transforms images from spatial to frequency domains, allowing for energy compaction and efficient encoding. DWT is a multi-resolution technique that represents images at different frequency bands. The document analyzes various studies that have used DCT and DWT for compression and compares their performance in terms of metrics like peak signal-to-noise ratio and compression ratio. It finds that DWT generally provides better compression performance than DCT, though DCT requires less computational resources. A hybrid DCT-DWT technique is also proposed to combine the advantages of both methods.
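The energy compaction the review attributes to the DCT is easy to demonstrate with a direct (O(N²)) implementation of the orthonormal 1-D DCT-II; this is an illustrative sketch, not production transform code:

```python
import math

def dct2_1d(x):
    """Orthonormal 1-D DCT-II, the transform family used in JPEG-style coders."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

# A smooth 8-sample signal: after the DCT almost all of the energy sits in
# the first (low-frequency) coefficients -- the energy compaction that makes
# coarse quantization of high-frequency coefficients cheap.
signal = [10, 11, 12, 13, 14, 15, 16, 17]
coeffs = dct2_1d(signal)
print([round(c, 3) for c in coeffs])  # first coefficient (~38.18) dominates
```

Because the transform is orthonormal, the total energy is preserved (Parseval), so the dominance of the leading coefficients directly measures how much can be discarded.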
Image compression and reconstruction using improved Stockwell transform for q... — IJECEIAES
Image compression is an important stage in image processing since it reduces the data extent and speeds image transmission and storage, whereas image reconstruction helps to recover the original information that was communicated. Wavelets are commonly cited as a novel technique for image compression, although their behaviour on smooth areas of an image remains unsatisfactory. Stockwell transforms have recently entered the arena of image compression and reconstruction. As a result, a new technique for image compression based on an improved Stockwell transform is proposed. The discrete cosine transform, which involves bandwidth partitioning, is also investigated in this work to verify the experimental results. Wavelet-based techniques such as the multilevel Haar wavelet, generic multiwavelet transform, Shearlet transform, and Stockwell transform were examined in this paper. The MATLAB technical computing language is used to implement the existing approaches as well as the suggested improved Stockwell transform. The standard images mostly used in digital image processing applications, such as Lena, Cameraman and Barbara, are investigated. To evaluate the approaches, quality measures such as mean square error (MSE), normalized cross-correlation (NCC), structural content (SC), peak signal-to-noise ratio, average difference (AD), normalized absolute error (NAE) and maximum difference are computed and presented in tabular and graphical form.
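Several of the quality measures listed can be computed directly; the sketch below uses common textbook definitions (the paper's exact formulas, especially for NCC and SC, may differ slightly):

```python
def quality_metrics(original, processed):
    """A few full-reference quality measures for two equal-sized grayscale
    images given as flat lists of pixel values (textbook definitions)."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / n
    ncc = (sum(a * b for a, b in zip(original, processed))
           / sum(a * a for a in original))                    # normalized cross-correlation
    ad = sum(a - b for a, b in zip(original, processed)) / n  # average difference
    nae = (sum(abs(a - b) for a, b in zip(original, processed))
           / sum(abs(a) for a in original))                   # normalized absolute error
    md = max(abs(a - b) for a, b in zip(original, processed)) # maximum difference
    return {"MSE": mse, "NCC": ncc, "AD": ad, "NAE": nae, "MD": md}

print(quality_metrics([100, 120, 140], [100, 118, 141]))
```

An NCC near 1 and small MSE/NAE/MD indicate the reconstruction closely matches the original, which is how such tables are read.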
AN EXQUISITE APPROACH FOR IMAGE COMPRESSION TECHNIQUE USING LOSS... — ijcsitcejournal
The imminent evolution in the field of medical imaging, telehealth and teleradiology services has been on a significant rise, with a dire need for a proficient structure for the compression of a DICOM (Digital Imaging and Communications in Medicine) standard medical image obtained through various modalities, with clinical relevance and digitized clinical data and various other diagnostic phenomena, and for the progressive transmission of such a medical image over varying bandwidths. The data loss redundancy during the process of compression is to be maintained below the alarming level, meaning it is to be under scanner without the loss of data/information. In this paper we present an efficient time-bound algorithm that utilizes a process flow wherein multiple ROI sectors as well as the non-ROI sector of the DICOM image are considered in the algorithmic machine, and the compression is done based upon a hybrid compression algorithm by LZW & SPIHT encoder & decoder machines. The paper provides a magnitude of the overall compression ratio involved in thus compressing the DICOM standard image. It also provides a brief description of the PSNR values obtained after suitably compressing the image. We analyze the various encoder scenarios and have projected a suitable hybrid lossless compression algorithm that helps in the retrieval of the data/information related to the image.
The document summarizes an efficient image compression technique using the overlapped (modified) Discrete Cosine Transform (MDCT) combined with adaptive thinning. In the first phase, the MDCT, which is based on DCT-IV but uses overlapping blocks, is applied, enabling robust compression. In the second phase, adaptive thinning recursively removes points from the image based on Delaunay triangulations, further compressing it. Simulation results showed over 80% pixel reduction at 30 dB PSNR, requiring fewer points for the compressed image. The technique combines MDCT for frequency-domain compression with adaptive thinning for spatial-domain compression.
Hybrid algorithmic approach for medical image compression... — INFOGAIN PUBLICATION
As medical imaging facilities move towards completely filmless imaging and generate large volumes of image data through various advanced medical modalities, the ability to store, share and transfer images on a cloud-based system is essential for maximizing efficiency. The major issue that arises in teleradiology is the difficulty of transmitting a large volume of medical data over relatively low bandwidth. Image compression techniques have increased viability by reducing the bandwidth requirement and enabling cost-effective delivery of medical images for primary diagnosis. Wavelet transforms are widely used in image compression because they allow analysis of images at various levels of resolution and have good characteristics. The algorithm discussed in this paper employs the wavelet toolbox of MATLAB. Multilevel decomposition of the original image is performed using the Haar wavelet transform, and the image is then quantized and coded with the Huffman technique. The wavelet packet is applied for reconstruction of the compressed image. The simulation results show that the algorithm has excellent image-reconstruction effects and a better compression ratio, and the study shows it is valuable for medical image compression on a cloud platform.
Review On Fractal Image Compression Techniques — IRJET Journal
This document reviews different techniques for fractal image compression. It discusses how fractal image compression works by dividing an image into range and domain blocks. It then summarizes several papers that propose fractal image compression techniques using the discrete cosine transform (DCT) or discrete wavelet transform (DWT); these techniques aim to improve the compression ratio and reduce encoding time. Finally, the document proposes a new method that combines wavelets with fractal image compression to further increase the compression ratio while keeping image-quality losses during decompression low.
A REVIEW OF IMAGE COMPRESSION TECHNIQUES — Arlene Smith
This document reviews various image compression techniques used for medical images. It begins by discussing the need for compressing large volumes of medical images generated for storage and transmission purposes. It then summarizes several key lossless and lossy compression techniques that have been proposed in other research papers, including techniques using wavelet transforms, DCT, and Huffman encoding. The techniques are evaluated based on their advantages like preserving image quality, and limitations like being slow or expensive. Results showed compression ratios from 2.5% to over 40% were achieved without significantly degrading image quality. Overall the document provides an overview of different medical image compression methods and their performance.
Near Reversible Data Hiding Scheme for images using DCT — IJERA Editor
This document presents a near-reversible data hiding scheme for images using discrete cosine transform (DCT). In the proposed scheme, data is embedded in the non-zero AC coefficients of DCT blocks in a way that minimizes modifications to the original coefficients, improving visual quality. During embedding, two mathematical functions are used to modify coefficients by amounts closer to their original values compared to other methods. Experimental results on test images show the proposed scheme achieves better visual quality than existing schemes while maintaining data hiding capacity and reversibility.
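The paper's two embedding functions are not reproduced here; as a generic illustration of DCT-domain hiding, the hypothetical sketch below stores one bit in the parity of each non-zero AC coefficient, changing each coefficient by at most 1:

```python
def embed_bits(ac_coeffs, bits):
    """Illustrative only (not the paper's scheme): hide one bit in the parity
    of each non-zero AC coefficient, moving it by at most 1 to limit distortion."""
    out, it = [], iter(bits)
    for c in ac_coeffs:
        if c != 0:
            b = next(it, None)
            if b is not None and (abs(c) % 2) != b:
                c += 1 if c > 0 else -1  # smallest change that flips the parity
        out.append(c)
    return out

def extract_bits(ac_coeffs, count):
    """Read the hidden bits back from the parities of non-zero AC coefficients."""
    return [abs(c) % 2 for c in ac_coeffs if c != 0][:count]

stego = embed_bits([5, 0, -3, 2, 0, 7], [0, 1, 1])
print(stego)                   # [6, 0, -3, 3, 0, 7]
print(extract_bits(stego, 3))  # [0, 1, 1]
```

Moving away from zero keeps the set of non-zero coefficients stable, so the extractor visits the same positions the embedder used; the paper's actual functions aim for even smaller average modification.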
Performance analysis of discrete cosine — Iaetsd Iaetsd
The document discusses image compression using the discrete cosine transform (DCT). It provides background on image compression and outlines the DCT technique. The DCT transforms an image into elementary frequency components, removing spatial redundancy. The document analyzes the performance of compressing different images using DCT in Matlab by measuring metrics like PSNR. Compression using DCT with different window sizes achieved significant PSNR values.
This document summarizes a research paper that analyzed medical image compression using the discrete cosine transform (DCT) with entropy encoding and Huffman coding on MRI brain images. The paper implemented the DCT to compress images at varying quality levels and block sizes. It also used Huffman encoding to assign shorter bit codes to more frequent symbols. The research tested the algorithms on a set of brain MRI images. Results showed that the compression ratio increased with higher quality levels, while the peak signal-to-noise ratio and mean squared error varied with the technique and block size used. The paper concluded that DCT with entropy encoding can effectively compress MRI images with little quality loss.
dFuse: An Optimized Compression Algorithm for DICOM-Format Image Archive — CSCJournals
Medical images are useful for knowing the details of the human body for health science or remedial reasons. DICOM is structured as a multi-part document in order to facilitate extension of these images. Additionally, DICOM-defined information objects cover not only images but also patients, studies, reports, and other data groupings. The greater information detail in DICOM results in large file sizes, and transferring or communicating these files takes a long time. To solve this, files can be compressed before transfer. Efficient compression solutions are available, and they are becoming more critical with the recent intensive growth of data and medical imaging. In order to receive the original image at a smaller size, an effective compression algorithm is needed. There are different algorithms for compression, such as DCT, Haar and Daubechies, which have their roots in cosine and wavelet transforms. In this paper, we propose a new compression algorithm called “dFuse”. It uses a cosine-based three-dimensional transform to compress DICOM files. The following parameters are used to check the efficiency of the proposed algorithm: i) file size, ii) PSNR, iii) compression percentage and iv) compression ratio. The experimental results obtained show that the proposed algorithm works well for compressing medical images.
Similar to A Comprehensive lossless modified compression in medical application on DICOM CT images (20)
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
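The Urdhva Tiryakbhyam ("vertically and crosswise") idea, forming all cross-product columns at once and only then rippling carries, can be illustrated in software; the actual design above is VHDL hardware, so this is just a behavioural sketch:

```python
def urdhva_multiply(a_bits, b_bits):
    """Urdhva Tiryakbhyam multiplication on little-endian bit lists of equal
    length: every partial-product column is formed independently (the part a
    hardware design computes concurrently), then carries are propagated."""
    n = len(a_bits)
    # Column sums: all cross products a_i * b_j contributing to position i + j
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_bits[i] * b_bits[j]
    # Ripple the carries (the only inherently sequential step)
    result, carry = [], 0
    for c in cols:
        total = c + carry
        result.append(total % 2)
        carry = total // 2
    while carry:
        result.append(carry % 2)
        carry //= 2
    return result  # little-endian product bits

# 13 (1011 little-endian) x 11 (1101 little-endian) = 143
prod = urdhva_multiply([1, 0, 1, 1], [1, 1, 0, 1])
print(sum(b << i for i, b in enumerate(prod)))  # 143
```

Because the column sums are independent, a hardware implementation can generate them all in parallel, which is the source of the delay reduction reported over the sequential Booth recoding.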
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
Sri Guru Hargobind Ji - Bandi Chor Guru.pdfBalvir Singh
Sri Guru Hargobind Ji (19 June 1595 - 3 March 1644) is revered as the Sixth Nanak.
• On 25 May 1606 Guru Arjan nominated his son Sri Hargobind Ji as his successor. Shortly
afterwards, Guru Arjan was arrested, tortured and killed by order of the Mogul Emperor
Jahangir.
• Guru Hargobind's succession ceremony took place on 24 June 1606. He was barely
eleven years old when he became 6th Guru.
• As ordered by Guru Arjan Dev Ji, he put on two swords, one indicated his spiritual
authority (PIRI) and the other, his temporal authority (MIRI). He thus for the first time
initiated military tradition in the Sikh faith to resist religious persecution, protect
people’s freedom and independence to practice religion by choice. He transformed
Sikhs to be Saints and Soldier.
• He had a long tenure as Guru, lasting 37 years, 9 months and 3 days
Determination of Equivalent Circuit parameters and performance characteristic...pvpriya2
Includes the testing of induction motor to draw the circle diagram of induction motor with step wise procedure and calculation for the same. Also explains the working and application of Induction generator
A high-Speed Communication System is based on the Design of a Bi-NoC Router, ...DharmaBanothu
The Network on Chip (NoC) has emerged as an effective
solution for intercommunication infrastructure within System on
Chip (SoC) designs, overcoming the limitations of traditional
methods that face significant bottlenecks. However, the complexity
of NoC design presents numerous challenges related to
performance metrics such as scalability, latency, power
consumption, and signal integrity. This project addresses the
issues within the router's memory unit and proposes an enhanced
memory structure. To achieve efficient data transfer, FIFO buffers
are implemented in distributed RAM and virtual channels for
FPGA-based NoC. The project introduces advanced FIFO-based
memory units within the NoC router, assessing their performance
in a Bi-directional NoC (Bi-NoC) configuration. The primary
objective is to reduce the router's workload while enhancing the
FIFO internal structure. To further improve data transfer speed,
a Bi-NoC with a self-configurable intercommunication channel is
suggested. Simulation and synthesis results demonstrate
guaranteed throughput, predictable latency, and equitable
network access, showing significant improvement over previous
designs
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Height and depth gauge linear metrology.pdfq30122000
Height gauges may also be used to measure the height of an object by using the underside of the scriber as the datum. The datum may be permanently fixed or the height gauge may have provision to adjust the scale, this is done by sliding the scale vertically along the body of the height gauge by turning a fine feed screw at the top of the gauge; then with the scriber set to the same level as the base, the scale can be matched to it. This adjustment allows different scribers or probes to be used, as well as adjusting for any errors in a damaged or resharpened probe.
A Comprehensive lossless modified compression in medical application on DICOM CT images
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 15, Issue 3 (Nov. - Dec. 2013), PP 01-07
www.iosrjournals.org
A Comprehensive lossless modified compression in medical application on DICOM CT images
Alka Sharma 1, Gagangeet Singh Aujla 2, Jagbir Singh Gill 3
1 (Computer Science & Engineering, CEC, Landran, Mohali, India)
2 (Computer Science & Engineering, CEC, Landran, Mohali, India)
3 (Computer Science & Engineering, CGC, COE, Landran, Mohali, India)
__________________________________________________________________________________
ABSTRACT: Digital Imaging and Communications in Medicine (DICOM) is now widely used for viewing, distributing and storing medical images from different modalities. Image processing can be carried out by photographic, optical and electronic means, but because digital methods are precise, fast and flexible, processing on digital computers is the most common approach. Image processing can extract information from pictures and modify them to improve or change their structure (image editing, composition, image compression, etc.). Image compression is a major component of storage and communication systems: it mitigates the disadvantages of data transmission and image storage by reducing data redundancy. Medical images must be stored for future reference of patients and their hospital findings, so they need to be compressed before storage. Because medical images are so important in the field of medicine, compression is necessary both for the huge databases maintained in medical centres and for the transfer of medical data for diagnosis. At present, the Discrete Cosine Transform (DCT), lossless Run-Length Encoding and the Discrete Wavelet Transform (DWT) are the most useful and widely accepted approaches to compression. Based on the discrete wavelet transform, we present a new DICOM-based lossless image compression method. In the proposed method, each DICOM image in the data set is compressed on the basis of its vertical, horizontal and diagonal details. We analyse the results for all DICOM images in the data set using two quality measures, PSNR and RMSE, and the performance comparison is made over each image stored in the data set. This work presents the performance comparison between the input images (without compression) and the results after compression for each image in the data set using the DWT method. Further, the performance of the DWT method with the HAAR process is compared with the 2D-DWT method using the PSNR and RMSE quality metrics. The performance of these methods for image compression has been simulated using MATLAB.
Keywords: JPEG, DCT, DWT, SPIHT, DICOM, VQ, Lossless Compression, Wavelet Transform, Image Compression, PSNR, RMSE.
I. INTRODUCTION
The main motive of image compression is to achieve a very low bit-rate representation while preserving a high visual quality standard for the decompressed image. Compression reduces storage requirements, and each technique has its own potency and limitations. Without significantly losing entropy, compression reduces the burden of transmitting raw information by removing its ubiquitous redundancy [4]. Two classes of technique exist: lossy compression, in which some data may be lost or some identified loss or change may be incurred, and lossless compression, in which the reconstructed image is exactly the same as the original. In multimedia and digital communication, image compression is considered an essential technology. The challenge is to obtain a high compression ratio without loss of image quality [5]. Using specialised software, which is described in more detail later in the article, DICOM images can be compressed by converting the data into a smaller image file [10]. As compression techniques are very helpful for mass storage and transmission, many lossless techniques such as the discrete wavelet transform, Huffman coding, run-length encoding and arithmetic coding are used without compromising quality or diagnostic accuracy. The underlying idea behind most lossy compression algorithms is to transform the image and project it onto an orthogonal function basis in order to distribute the energy of the signal among de-correlated components. Progress in the theory of multi-resolution decomposition has allowed compact coding of wavelet image representations; these representations diverge in the choice of wavelet. The principle behind the wavelet transform is to decompose an original signal into a series of low-resolution signals associated with detail signals, which together contain all the information required to rebuild the signal at each resolution level and at the next higher level [1][9]. In the DICOM format, all the information about the image, including the image modality and information about the patient, is contained in the header. Over and above these, the header contains a field called ”Transfer
Syntax Unique Identification” that specifies the type of compression applied to the image data; the image data then follows the header. Currently, the DICOM standard supports run-length encoding (RLE), lossy JPEG, lossless JPEG and JPEG 2000 for compression of the image data [11]. In this paper, DICOM images are compressed using the DWT method through the HAAR process. The Discrete Wavelet Transform (DWT) is a proficient tool for multi-resolution sub-band decomposition of signals, useful in numerous signal and image processing applications, with less loss of visual quality than previous techniques such as the Discrete Cosine Transform (DCT) and the Discrete Fourier Transform (DFT) [6]. The wavelet transform divides the information of an image into an approximation sub-signal and detail sub-signals. The approximation sub-signal shows the general trend of pixel values, while the other three detail sub-signals show the vertical, horizontal and diagonal details or changes in the image [8].
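The split into approximation and detail sub-signals can be sketched in a few lines. The following is an illustrative pure-Python stand-in (not the authors' MATLAB code), using one unnormalised 2D Haar step on an image whose sides are even:

```python
def haar2d(img):
    """One-level 2D Haar decomposition of an image (list of lists with
    even dimensions). Returns (A, H, V, D): the approximation and the
    horizontal, vertical and diagonal detail sub-signals, each half-size."""
    A, H, V, D = [], [], [], []
    for i in range(0, len(img), 2):
        ar, hr, vr, dr = [], [], [], []
        for j in range(0, len(img[0]), 2):
            p, q = img[i][j], img[i][j + 1]          # top row of 2x2 block
            r, s = img[i + 1][j], img[i + 1][j + 1]  # bottom row of block
            ar.append((p + q + r + s) / 2.0)  # A: general trend of pixels
            hr.append((p + q - r - s) / 2.0)  # H: change between rows
            vr.append((p - q + r - s) / 2.0)  # V: change between columns
            dr.append((p - q - r + s) / 2.0)  # D: diagonal change
        A.append(ar); H.append(hr); V.append(vr); D.append(dr)
    return A, H, V, D
```

On a region with a purely horizontal edge, only the H sub-signal is non-zero, matching the description of the detail sub-signals above.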
II. RELATED WORK
Ramakrishnan et al. (2005) proposed a methodology for compression of DICOM images based on wavelets and SPIHT for telemedicine applications. They proposed a method for compressing DICOM images for different entities based on the well-known Set Partitioning In Hierarchical Trees (SPIHT) algorithm. SPIHT has progressive transmission capabilities, which are quite useful for telemedicine: the DICOM image header is transmitted first, followed progressively by the image details in various stages. SPIHT produces embedded bit streams of the image and works on the principle of the spatial relationship among the wavelet coefficients at different levels and frequency sub-bands in the pyramid structure of the wavelet decomposition. They compared their results on the PSNR and MSSIM metrics for different compression schemes such as JPEG 2000, JPEG-CS and JPEG against the proposed scheme (SPIHT). Finally, they concluded that SPIHT gives better results than the other schemes, consuming less bandwidth and saving time [11].
Kumar et al. (2012) evaluated the performance of image compression techniques using the DCT (Discrete Cosine Transform), SVD (Singular Value Decomposition), BTC (Block Truncation Coding) and STD (Standard Deviation) methods, depending upon the image properties and requirements. In their paper, the two main compression techniques, SVD and BTC, were associated with DCT in JPEG baseline coding to find their potential and limitations in removing data redundancies. The advantages of SVD were due to its energy compaction property and its ability to adapt to the local statistical variations of an image; moreover, BTC generally preserves the first and second moments of each block. Finally, they concluded that the obtained results clearly depict that incorporating SVD and BTC into image compression along with DCT in an adaptive manner enhanced the compression performance significantly, with results that were quite reliable in terms of PSNR and MSE [4].
Bairagi et al. (2009) proposed a methodology for the selection of wavelets for medical image compression. They mainly worked with various wavelet transform methods to analyse DICOM CT images over several metrics such as MSE (Mean Square Error), PSNR (Peak Signal-to-Noise Ratio), structural content and Normalised Absolute Error, and determined quality measures for different wavelet types on DICOM CT images. They noted a limitation of the SPIHT technique, namely that it does not specify a particular wavelet type to be used for image compression. Furthermore, they analysed the entire 'db' wavelet series and compared the results over several metrics. In their paper, the comparison and analysis covered the orthogonal and bi-orthogonal classes of wavelets, and the authors concluded that the bi-orthogonal 4.4 wavelet was the most suitable for medical image compression applications [1].
Dhawan (2011) reviewed and discussed image compression: the need for it, its principles, its classes and various image compression algorithms. The work mainly used grayscale Lena and fingerprint images. The author discussed various compression algorithms based on wavelets, JPEG/DCT, VQ and fractals, and compared their results on the basis of PSNR values and CPU time for encoding and decoding. Finally, the author concluded that wavelet-based compression algorithms are strongly recommended, that the discrete cosine transform should use an adaptive quantisation table, that the VQ approach is not appropriate for low bit-rate compression, and that the fractal approach should utilise its resolution-free decoding property for low bit-rate compression [7].
Morales et al. (2012) presented a method for fingerprint verification by correlation using wavelet compression of pre-processed digital images. In their paper, they implemented a practical digital security system that combines digital correlation and wavelet compression with digital image pre-processing. The proposed work implemented a biometric system that uses digital fingerprint recognition as its pattern, satisfying the characteristics of uniqueness, universality, performance and collectability. The biometric fingerprint system mainly takes two inputs, a password and a fingerprint. The fingerprint was captured by the “APCBIOPOD” biometric sensor; the image was then pre-processed, skeletonised and stored in a database for making the comparison and authenticating the user. The authors used the Fourier transform and the wavelet transform to perform correlation, compression, compressed filtering, transformation and, finally, the user interface implementation [3].
Dubey et al. (2012) presented a new JPEG 2000 based lossy image compression method built on the 2D discrete wavelet transform. They mainly worked on 3D medical image compression using Huffman encoding. In the proposed method, the 3D image was divided into smaller non-overlapping tiles, the 2D DWT was applied to each tile, and hard thresholding and Huffman coding were then applied to each tile to obtain the compressed image. Moreover, the authors explained that the performance of the proposed 3D JPEG 2000 image compression algorithm was measured over various images, and they concluded that the proposed algorithm performed better than JPEG compression at both low and high bit rates [2].
III. PROPOSED WORK
In the proposed method, an advanced approach to image compression is used that improves performance, minimises data loss and allows the image to be reconstructed: the discrete wavelet transform applied to DICOM (.dcm) images. We use a lossless compression technique, and one of the methods used is the HAAR process. This process is based on a sequence of square-shaped functions that together form a wavelet; wavelet analysis of this kind is related to the Fourier transform. The HAAR functions form an orthogonal system on the unit interval. The technical, directional and anatomical aspects of an image are compressed using the HAAR wavelet transformation. HAAR wavelet decomposition is one of the best-known methods of image decomposition.
Development of Discrete Wavelet Coding Algorithm
1. Develop a representative data set of DICOM CT images.
2. Study various existing image compression techniques.
3. Select an input image from the data set.
4. Develop a storage schema which maps the discrete wavelet transformation techniques for image compression using the HAAR process.
5. Develop an interface for storing the step-wise compression procedure for the wavelet transformation.
6. Analyse the results for approximation A1 with respect to the input image for metrics such as RMSE and PSNR.
7. Reschedule the image compression procedure after approximation A1.
8. Analyse the results for approximation A2 with respect to the input image for the requisite metrics.
9. Save the relevant information about the input images according to their modality in the data set, then analyse and compare the results for all inputs in the data set.
10. Compare the performance of the DWT method with the HAAR process against the 2D-DWT method using the PSNR and RMSE quality metrics.
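Steps 3-8 above can be sketched end to end. The following is an illustrative Python stand-in for the authors' MATLAB workflow (the function names and the unnormalised Haar variant are assumptions): it computes approximation A1, decomposes A1 again to obtain A2, and then inverts both levels to reconstruct the image exactly, demonstrating losslessness.

```python
def haar2d(img):
    """One-level 2D Haar decomposition; returns (A, H, V, D) sub-bands."""
    A, H, V, D = [], [], [], []
    for i in range(0, len(img), 2):
        ar, hr, vr, dr = [], [], [], []
        for j in range(0, len(img[0]), 2):
            p, q = img[i][j], img[i][j + 1]
            r, s = img[i + 1][j], img[i + 1][j + 1]
            ar.append((p + q + r + s) / 2.0)
            hr.append((p + q - r - s) / 2.0)
            vr.append((p - q + r - s) / 2.0)
            dr.append((p - q - r + s) / 2.0)
        A.append(ar); H.append(hr); V.append(vr); D.append(dr)
    return A, H, V, D

def ihaar2d(A, H, V, D):
    """Exact inverse of haar2d (lossless reconstruction)."""
    img = [[0.0] * (len(A[0]) * 2) for _ in range(len(A) * 2)]
    for i in range(len(A)):
        for j in range(len(A[0])):
            a, h, v, d = A[i][j], H[i][j], V[i][j], D[i][j]
            img[2 * i][2 * j] = (a + h + v + d) / 2.0
            img[2 * i][2 * j + 1] = (a + h - v - d) / 2.0
            img[2 * i + 1][2 * j] = (a - h + v - d) / 2.0
            img[2 * i + 1][2 * j + 1] = (a - h - v + d) / 2.0
    return img

def two_level_roundtrip(img):
    """Two-level decomposition (A1, then A2) followed by full inversion."""
    A1, H1, V1, D1 = haar2d(img)   # step 6: approximation A1
    A2, H2, V2, D2 = haar2d(A1)    # steps 7-8: approximation A2
    # A real codec would entropy-code the sub-bands here; this sketch
    # goes straight to reconstruction to show the transform is lossless.
    A1_rec = ihaar2d(A2, H2, V2, D2)
    return ihaar2d(A1_rec, H1, V1, D1)
```

For any image of integers with sides divisible by 4, `two_level_roundtrip` returns the original pixel values exactly, which is the lossless property the proposed method relies on.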
IV. RESULTS AND DISCUSSION
HAAR wavelet approximation is one of the best methods for discrete images. The input approximated image is passed to the HAAR process for compression, and this process measures best among the compression methods for DICOM CT images. The input can be any image from the data set, for example a brain image; the results are then analysed for approximation A1 and approximation A2 with respect to the input image. The performance of the compressed images is measured by PSNR and RMSE. MATLAB provides graphics tools and reference-standard algorithms for image processing and analysis, for examining regions of pixels, detecting and measuring features, analysing shapes and textures, visualisation and algorithm development. HAAR proved to be the best compared with the other processes. The original image, in DICOM format with 8-bit resolution, is input to the software. The 'compressed image' is the image generated at the decoder side after the reconstruction process, and the results show that our HAAR process performs better. The Haar transform is the simplest of the wavelet transforms: it cross-multiplies a function against the Haar wavelet with various shifts and stretches, much as the Fourier transform cross-multiplies a function against a sine wave with two phases and many stretches. The performance of compression on any medical DICOM CT image can be evaluated by parameters such as RMSE and PSNR. Root Mean Square Error (RMSE) is a frequently used measure of the differences between values predicted by a model or an estimator and the values actually observed.
PSNR is generally used to analyse the quality of image, sound and video files in dB (decibels); in other words, PSNR measures the quality of reconstructed images that have been compressed. For an image with peak value MAX (255 for 8-bit images) it is defined as

PSNR = 10 * log10(MAX^2 / MSE)

where MSE represents the mean squared error between the original image I and the reconstructed image K, defined for M x N images as

MSE = (1 / (M*N)) * sum over i,j of (I(i,j) - K(i,j))^2, with RMSE = sqrt(MSE).
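As a minimal sketch of these two quality measures (an illustrative Python version, not the authors' MATLAB code, assuming 8-bit images with peak value 255):

```python
import math

def rmse(original, reconstructed):
    """Root mean square error between two equally sized images
    (lists of lists of pixel values)."""
    n, total = 0, 0.0
    for row_o, row_r in zip(original, reconstructed):
        for o, r in zip(row_o, row_r):
            total += (o - r) ** 2
            n += 1
    return math.sqrt(total / n)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(original, reconstructed)
    if e == 0:
        return float('inf')
    # 20*log10(peak/RMSE) is equivalent to 10*log10(peak^2/MSE).
    return 20.0 * math.log10(peak / e)
```

A higher PSNR (equivalently, a lower RMSE) indicates a reconstruction closer to the original, which is how the charts later in this section are read.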
Figure 1: Input DICOM brain image at client side
The above image is the input image taken for the simulation, after which we proceed with the further compression process. This is the very first result: the input image at the client end.
Figure 2: Converting the input image into gray scale
The figure depicts the very first step of image compression, in which we convert the input image into grayscale and adjust further properties such as size and image contrast. This is the basic step of image compression, performed according to the needs of the requisite process.
Figure 3: Results for the input image after approximation A1 (first decomposition level)
The figure shows the result of the first-level decomposition of the image by approximation A1. In this step we obtain the results of compression according to the horizontal detail, the vertical detail and the diagonal detail.
Figure 4: Decomposition level-1 reconstructed image
The figure depicts the reconstructed image after first-level decomposition, in which the image is compressed diagonally, vertically and horizontally. For further compression, the image must first be reconstructed by applying the inverse mechanism. From the reconstructed level-1 decomposition we are now in a position to decompose the input image further.
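The inverse mechanism referred to above can be sketched as follows (an illustrative Python stand-in, not the authors' MATLAB code, assuming the unnormalised Haar analysis in which the approximation is the half-sum of each 2x2 block and the details are the corresponding half-differences):

```python
def ihaar2d(A, H, V, D):
    """Rebuild an image from one level of Haar sub-bands: approximation A
    and horizontal (H), vertical (V) and diagonal (D) details."""
    rows, cols = len(A) * 2, len(A[0]) * 2
    img = [[0.0] * cols for _ in range(rows)]
    for i in range(len(A)):
        for j in range(len(A[0])):
            a, h, v, d = A[i][j], H[i][j], V[i][j], D[i][j]
            img[2 * i][2 * j] = (a + h + v + d) / 2.0          # top-left
            img[2 * i][2 * j + 1] = (a + h - v - d) / 2.0      # top-right
            img[2 * i + 1][2 * j] = (a - h + v - d) / 2.0      # bottom-left
            img[2 * i + 1][2 * j + 1] = (a - h - v + d) / 2.0  # bottom-right
    return img
```

Because this inversion is exact, the reconstructed level-1 image can be decomposed again without any accumulated error.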
Figure 5: Comparison snapshot for both levels of decomposition of the image
The figure depicts the comparison analysis for both levels of decomposition, approximation A1 and approximation A2. The results signify that the two-level decomposition results are quite good compared with decomposition level 1. The analysis and comparison are based entirely on the wavelet transform for the proposed process over different coefficients and labels. Furthermore, the evaluated results compare the horizontal, vertical and diagonal details.
Figure 6: 2-level reconstructed image for wavelet decomposition
The snapshot above clearly signifies that two-level decomposition for image compression is quite reliable; the result can be viewed and processed for the input images during the compression process. The two-level decomposition results are found by applying the inverse wavelet transform for the requisite parameters under the taken process.
Figure 7: Comparison chart of RMSE values for different images for wavelet HAAR
The figure depicts the comparison chart of root mean square error for all the different images, taken and evaluated one by one. The results are based on the before-compression and after-compression values for the input images.
Figure 8: Comparison chart of PSNR values for different input images for wavelet HAAR
The figure depicts the comparison chart of peak signal-to-noise ratio for all the different images, taken and evaluated one by one. The results are based on the before-compression and after-compression values for the input images, and they clearly show that the PSNR value after compression for each image is considerably higher than before compression, which is quite reliable and good for the entire simulation process.
Figure 9: Comparison of Wavelet2d and the proposed (wavelet HAAR) RMSE values for different input images
The figure is the comparison chart of RMSE values for the Wavelet2d technique and the proposed wavelet HAAR technique. As seen from the graph, the RMSE values for the different data sets are quite reliable and robust for the image compression process.
Figure 10: Comparison of PSNR values for both Wavelet2d and the proposed method
The figure depicts the comparison of both techniques on the basis of PSNR values. The results show that the PSNR values of the wavelet HAAR transform are quite reliable and favourable compared with the Wavelet2d transform.
V. CONCLUSIONS
In this work, we have focused on the compression of DICOM images using the DWT. Digital Imaging and Communications in Medicine (DICOM) is a standard image format for the distribution and viewing of medical images from different modalities. Nowadays, due to medical advancement, the use of medical images has become necessary for the diagnosis of patients. DICOM is the recommended format for medical images, providing information pertaining to the image, the imaging modality and the patient in a header. The performance comparison was made over each image stored in the data set of DICOM images, using two metrics, PSNR and RMSE. The results clearly show that the PSNR value for every image in the data set is higher after compression than before, which is good, reliable and scalable for data storage and removes data redundancy. The performance of our proposed method of DWT using the HAAR process was measured over various images, and the results were observed for both decomposition levels. The proposed scheme gives superior image compression results for DICOM images.
REFERENCES
[1] Bairagi, V. K. and Sampkal, A. M., (2012-13), “ROI-based DICOM Image Compression for Telemedicine”, IEEE, Vol 38, pp.123-131.
[2] Dubey, V. G. (2012), “3D Medical Image Compression Using Huffman Encoding Technique”, IJSRC, Vol 2, Issue no 9, pp.1-3.
[3] Morales, Y. and Diaz, F. V, (2012), “Fingerprint verification by correlation using wavelet compression of preprocessing digital
images”, IEEE, Vol 2, no 12, pp.1517-1520.
[4] Kumar, V., Kumar, A. and Bhardwaj, A., (2012), “Performance Evaluation of Image Compression Techniques”, IEEE, pp.447-450.
[5] Kaur, P. and Singh, S., (2012), “Image Compression Using Multithresholding Technique to Enhance Huffman Technique”, Vol 3,
Issue 8, pp.1-5
[6] Ramola, E. and Manoharan, J.S., (2011), “An Area Efficient VLSI Realization of Discrete Wavelet Transform for Multiresolution
Analysis”, IEEE, Vol 2, no 11, pp.377-381.
[7] Dhawan, S., (2011), “A review of image compression and comparison of its algorithms,” International Journal of electronics &
Communication, Vol 2, Issue 1, pp.22-26.
[8] Talukder, K. H. and Harada, K., (2009), “A Scheme of Model Verification of the Concurrent Discrete Wavelet Transform (DWT) for Image Compression”, International Journal of Signal Processing, pp.1-10.
[9] S. S. Devi and K. Vidhya, “Development of Medical Image Compression Techniques,” Conference on Computational Intelligence
and Multimedia Applications, IEEE vol. 3, pp. 97–101, 2007.
[10] Graham, R. N. J., Perriss, R. W. and Scarsbrook, A. F., (2005), “DICOM demystified: A review of digital file formats and their use in radiological practice”, Elsevier, pp.1133-1140.
[11] Ramakrishnan, B. and Sriraam, N. (2005), “Compression of DICOM Images Based on Wavelets and SPIHT for Telemedicine
Applications,” Digital signal processing, pp.1-5.