This document summarizes a research paper on compressing uncompressed images retrieved from the cloud using k-means clustering and Lempel-Ziv-Welch (LZW) compression. It begins by introducing cloud computing and k-means clustering, then describes using k-means to group uncompressed images and LZW coding to reduce their file sizes while maintaining image quality. It discusses advantages of LZW compression, such as compression ratios around 5:1, and walks through worked examples of k-means clustering and of LZW compression and decompression.
Data Mining Un-Compressed Images from cloud with Clustering Compression technique using Lempel-Ziv-Welch

International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 4, No. 4, July 2013
DOI: 10.5121/ijaia.2013.4413

C. Parthasarathy, K. Srinivasan and R. Saravanan
Assistant Professors, Dept. of I.T, SCSVMV University, Enathur, Kanchipuram, Pin 631 561
sarathy286089@rediffmail.com, kadamsrini21@gmail.com, saravanan_kpm1983@yahoo.co.in
ABSTRACT
Cloud computing is a highly discussed topic in the technical and economic world, and many of the big players of the software industry have entered the development of cloud services. Several companies and organizations want to explore the possibilities and benefits of incorporating such cloud computing services in their business, as well as the possibility of offering their own cloud services. We mine uncompressed images from the cloud, group them using k-means clustering, and compress each group with the Lempel-Ziv-Welch (LZW) coding technique, so that the uncompressed images are compressed without loss while removing spatial redundancies.
KEYWORDS
Cloud, Compression, Image processing, Clustering
1. INTRODUCTION
In cloud computing, the word cloud is used as a metaphor for "the Internet", so the phrase cloud computing means a type of Internet-based computing, where different services -- such as servers, storage and applications -- are delivered to an organization's computers and devices through the Internet.
Figure 1. Cloud Computing Model (cloud service, e.g. a queue; cloud infrastructure, e.g. billing VMs; cloud platform, e.g. a web front end; cloud storage, e.g. a database)
Clustering can be considered the most important unsupervised learning technique; as with every other problem of this kind, it deals with finding structure in a collection of unlabeled data. Clustering is "the process of organizing objects into groups whose members are similar in some way". A cluster is therefore a collection of objects which are "similar" to each other and "dissimilar" to the objects belonging to other clusters. The typical requirements of clustering in data mining are: (1) scalability; (2) the ability to deal with different types of attributes; (3) discovery of clusters with arbitrary shape; (4) minimal requirements for domain knowledge to determine input parameters; (5) the ability to deal with noisy data; (6) incremental clustering and insensitivity to the order of input records; and (7) the ability to handle high dimensionality. Clustering is also called data segmentation because it partitions large data sets into groups according to their similarity.
2. K-MEANS METHOD
In the cloud we cluster the uncompressed images using a centroid-based technique. The k-means algorithm takes an input parameter k and partitions a set of n objects into k clusters so that the resulting intra-cluster similarity is high but the inter-cluster similarity is low. The algorithm proceeds by randomly selecting k of the objects; each object is assigned to the cluster to which it is most similar, based on the distance between the object and the cluster mean. It then computes the new mean for each cluster and iterates until the criterion function converges.
The square-error criterion is

    E = Σ_{i=1}^{k} Σ_{p ∈ C_i} |p − m_i|²

where E is the sum of the square error for all objects in the data set, p is the point in space representing a given object, and m_i is the mean of cluster C_i.
Algorithm: the k-means algorithm for partitioning, where each cluster's center is represented by the mean value of the objects in the cluster.

Input:
    k: the number of clusters
    D: a data set containing n objects

Method:
    (1) arbitrarily choose k objects from D as the initial cluster centers;
    (2) repeat:
        (a) assign each object to the cluster to which the object is the most similar, based on the mean value of the objects in the cluster;
        (b) update the cluster means, i.e., calculate the mean value of the objects for each cluster;
        until no change.
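A minimal sketch of this method in Python (NumPy-based; the data layout, the seeding, and the convergence test are our assumptions, not part of the paper):

import numpy as np

def kmeans(D, k, max_iters=100, seed=0):
    """Partition the n objects in D (an n x d array) into k clusters."""
    rng = np.random.default_rng(seed)
    # (1) arbitrarily choose k objects from D as the initial cluster centers
    centers = D[rng.choice(len(D), size=k, replace=False)].astype(float)
    for _ in range(max_iters):
        # Assign each object to the cluster whose mean is nearest.
        dists = np.linalg.norm(D[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update the cluster means (keep the old center if a cluster is empty).
        new_centers = centers.copy()
        for i in range(k):
            members = D[labels == i]
            if len(members) > 0:
                new_centers[i] = members.mean(axis=0)
        if np.allclose(new_centers, centers):  # until no change
            break
        centers = new_centers
    # Square-error criterion: E = sum_i sum_{p in C_i} |p - m_i|^2
    E = sum(((D[labels == i] - centers[i]) ** 2).sum() for i in range(k))
    return labels, centers, E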
Figure 2 shows the means m1 and m2 far away from the final boundary.
Figure 3 shows how the means m1 and m2 move closer to the final boundary.
Figure 4 shows different types of clusters.
After the algorithm finishes, it produces these outputs:
• A label for each data point
• The center for each label
A label can be thought of as the assignment of a group. For example, in the image above you can see four "labels", each displayed with a different colour: all yellow points could have the label 0, orange could have label 1, etc.
Figure 5. Grouping different sets of uncompressed images
Step 0: Get the dataset
As an example, we will be using the data points on the left. We assume K = 5, and it is apparent that this dataset has 5 clusters: three spaced out and two almost merging.

Step 1: Assign random centers
The first step is to randomly assign K centers, marked as red points in the image. Note how they are all concentrated in the two "almost merging" clusters. These centers are just an initial guess; the algorithm will iteratively correct itself until the centers coincide with the actual center of each cluster.

Step 2: "Own" data points
Each data point checks which center it is closest to, and "belongs" to that particular center. Thus, all centers "own" some number of points.

Step 3: Shift the centers
Each center uses the points it "owns" to calculate a new center, then shifts itself to that center. If the centers actually shifted, we go back to Step 2; if not, the centers are the final result. Once the centers no longer move, they can be used to label the data, as in the example below.
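To make Steps 0-3 concrete, here is a hypothetical run of the kmeans sketch above on synthetic data (the five Gaussian blobs are illustrative, not the paper's dataset):

import numpy as np

rng = np.random.default_rng(1)
# Step 0: five clusters -- three spaced out and two almost merging.
blob_centers = [(0, 0), (10, 10), (20, 0), (9, -9), (10.5, -8.5)]
D = np.vstack([rng.normal(loc=c, scale=0.8, size=(50, 2)) for c in blob_centers])

# Steps 1-3 (random centers, owning points, shifting) run inside kmeans().
labels, centers, E = kmeans(D, k=5)
print(centers)  # final centers, close to the true blob centers
print(E)        # square-error criterion of the final partition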
3. SCOPE OF THE RESEARCH
We know about the term compression ratio, defined as:

    Compression Ratio Cr = (data size of original message) / (data size of encoded message)

Nowadays the compression ratio is a great factor in the transmission of data. Through this research we can find a better solution for making the compression ratio higher, because data transmission largely depends on it. For images transmitted over the World Wide Web, data compression is important. Suppose we need to download a digitized color photograph over a computer's 33.6 kbps modem. If the image is not compressed (a TIFF file, for example), it will contain about 600 kbytes of data. If it has been compressed with a lossless LZW coding technique (such as used in the GIF format), it will be about one-half this size, or 300 kbytes. If lossy compression has been used (a JPEG file), it will be about 50 kbytes. The point is, the download times for these three equivalent files are about 142 seconds, 71 seconds, and 12 seconds, respectively. JPEG is best suited to digitized photographs, while GIF is used with drawn images, such as company logos that have large areas of a single color.
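The arithmetic behind those figures is easy to check; a small Python sketch (the 33.6 kbps rate and file sizes are the ones quoted above):

def compression_ratio(original_kbytes, encoded_kbytes):
    # Cr = (data size of original message) / (data size of encoded message)
    return original_kbytes / encoded_kbytes

def download_seconds(kbytes, modem_kbps=33.6):
    # kbytes -> kbits, divided by the modem rate in kbits per second
    return kbytes * 8 / modem_kbps

for size in (600, 300, 50):  # TIFF (uncompressed), GIF (LZW), JPEG (lossy)
    print(size, "kbytes:", round(download_seconds(size), 1), "s,",
          "Cr =", round(compression_ratio(600, size), 1))
# Prints roughly 142.9 s, 71.4 s and 11.9 s -- the download times quoted above.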
Disadvantage of Lossless Compression
Lossless compression refers to the process of encoding data more efficiently so that it occupies fewer bits or bytes, but in such a way that the original data can be reconstructed, bit-for-bit, when the data is decompressed. The advantage of lossless encoding techniques is that they produce an exact duplicate of the original data, but they also have some disadvantages when compared to lossy encoding techniques.
4. COMPRESSION RATIO
Lossless encoding techniques cannot achieve high levels of compression. Few lossless encoding techniques can achieve a compression ratio higher than 8:1, which compares unfavorably with so-called lossy encoding techniques. Lossy encoding techniques -- which achieve compression by discarding some of the original data -- can achieve compression ratios of 10:1 for audio and 300:1 for video with little or no perceptible loss of quality. According to the New Biggin Photography Group, a 1,943 by 1,702 pixel 24-bit RGB color image with an original size of 9.9 megabytes can only be reduced to 6.5 megabytes using the lossless PNG format, but can be reduced to just 1 megabyte using the lossy JPEG format.
5. TRANSFER TIME
Any application that involves storing or distributing digital images, or both, presupposes that these operations can be completed in a reasonable length of time. The time needed to transfer a digital image depends on the size of the compressed image, and since the compression ratios that can be achieved by lossless encoding techniques are far lower than those of lossy encoding techniques, lossless encoding techniques are unsuitable for these applications.
5.1. Lempel-Ziv-Welch (LZW) Coding Technique
LZW compression is named after its developers, A. Lempel and J. Ziv, with later modifications by Terry A. Welch. It is the foremost technique for general purpose data compression due to its simplicity and versatility. Typically, you can expect LZW to compress text, executable code, and similar data files to about one-half their original size. LZW also performs well when presented with extremely redundant data files, such as tabulated numbers, computer source code, and acquired signals. Compression ratios of 5:1 are common for these cases.

LZW compression uses a code table, as illustrated in Fig. 6. A common choice is to provide 4096 entries in the table. In this case, the LZW encoded data consists entirely of 12-bit codes, each referring to one of the entries in the code table. Decompression is achieved by taking each code from the compressed file and translating it through the code table to find what character or characters it represents. Codes 0-255 in the code table are always assigned to represent single bytes from the input file. For example, if only these first 256 codes were used, each byte in the original file would be converted into 12 bits in the LZW encoded file, resulting in a 50% larger file size. During decompression, each 12-bit code would be translated via the code table back into the single bytes. Of course, this wouldn't be a useful situation; a sketch of the table layout follows.
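A small sketch of such a code table in Python (the 4096-entry size and 12-bit codes follow the description above; representing the table as a dict is our own choice):

TABLE_SIZE = 4096   # 2**12 entries, so every code fits in 12 bits
CODE_BITS = 12

def initial_code_table():
    # Codes 0-255 are always assigned to the 256 possible single bytes.
    return {bytes([b]): b for b in range(256)}

table = initial_code_table()
next_code = 256     # first free code for multi-byte strings
# If only codes 0-255 were ever used, each 8-bit byte would be written
# as a 12-bit code: 12/8 = 1.5, i.e. the 50% larger file noted above.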
6. ADVANTAGE OF LZW COMPRESSION
The LZW compression can compress executable code, text, and similar data files to almost one-half of their original size. It usually uses single codes to replace strings of characters, thereby compressing the data. LZW also gives a good performance when extremely redundant data files are presented to it, like computer source code, tabulated numbers and acquired signals. The common compression ratio for these cases is almost in the range of 5:1.
Figure 6 – Flow chart for LZW coding compression
Many methods of image compression have been developed; among them, a family of techniques called transform compression has proven the most valuable, the best example being the popular JPEG standard of image encoding. JPEG is named after its origin, the Joint Photographic Experts Group. LZW compression is different from other common compression algorithms (e.g. Huffman encoding) in that it does not require an initial processing of the data file to determine the codes to be used. Instead, the code table is generated "on the fly" at the same time that the file is compressed. The algorithm operates so that at each basic step, either the code for a sequence of values already exists in the table, or the code for a new sequence is generated and inserted into the table. Another interesting characteristic of LZW compression is that it is not necessary to send a full code table along with the compressed file. Only a table of the individual colors that exist in the image is needed; codes for sequences of colors are then generated during the decompression process, such that the codes are always available in the table when they are needed.
[Figure 6 flowchart, in words: read the first byte into STRING; read the next byte into CHAR; if STRING+CHAR is in the table, set STRING = STRING+CHAR; otherwise output the code for STRING, add a table entry for STRING+CHAR, and set STRING = CHAR; while more bytes remain, repeat from reading the next byte; finally output the code for STRING and stop.]
The first step in LZW is initialization of the code table to include all the colors in the image file.
For an 8-bit image, this would be up to 256 colors. If all colors were used, the codes would be
the values from 0 through 255, each code requiring eight bits. The algorithm proceeds by
processing the pixels in the image file from left to right and top to bottom of the image. The basic
idea of the algorithm is that strings of colors are put into the code table as they are encountered.
Grouping strings of colors together into a single code results in compression.
The basic algorithm for LZW compression is given below, written as runnable Python rather than loose pseudocode. pixelString holds a sequence of pixel values (represented as a tuple); "the next pixel value" means the next pixel read out of the image file; pixelString + (pixel,) concatenates pixel onto the end of pixelString. The optional alphabet parameter is an addition made here for the string example that follows: it lets the table be initialized with all 256 byte values instead of only the colors actually present in the image.
def LZW(pixels, alphabet=None):
    # Input: a bitmap image as a sequence of pixel values, scanned left to
    # right and top to bottom.  Output: the list of LZW codes.  The decoder
    # needs only the same table of individual colors, not the full table.
    if alphabet is None:
        alphabet = sorted(set(pixels))            # individual colors in bitmap
    table = {(c,): i for i, c in enumerate(alphabet)}
    codes = []
    pixelString = (pixels[0],)                    # pixelString = first pixel value
    for pixel in pixels[1:]:                      # while there are still pixels
        stringSoFar = pixelString + (pixel,)      # pixelString + pixel
        if stringSoFar in table:
            pixelString = stringSoFar
        else:
            codes.append(table[pixelString])      # output the code for pixelString
            table[stringSoFar] = len(table)       # add stringSoFar to the table
            pixelString = (pixel,)                # pixelString = pixel
    codes.append(table[pixelString])              # output the code for pixelString
    return codes
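Running this on the string example worked through next, with the table initialized to all 256 byte values so that 'A' and 'B' receive their ASCII codes 65 and 66 (this initialization is an assumption made to match the example, not stated by the paper):

    ascii_alphabet = [chr(i) for i in range(256)]
    print(LZW("BABAABAAA", ascii_alphabet))   # -> [66, 65, 256, 257, 65, 260]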
Consider a simplified problem: compressing the string BABAABAAA with the LZW algorithm. Table 1 traces the encoder step by step.
TABLE-1 EXAMPLE WORK FOR COMPRESSION

Current string   Next symbol   Output code   New table entry
B                A             66 (B)        256 = BA
A                B             65 (A)        257 = AB
BA               A             256 (BA)      258 = BAA
AB               A             257 (AB)      259 = ABA
A                A             65 (A)        260 = AA
AA               (end)         260 (AA)      -
def LZW_decompress(codes, alphabet):
    # Input: the compressed code stream and the table of individual colors
    # in the image.  Output: the decompressed sequence of pixel values.
    table = {i: (c,) for i, c in enumerate(alphabet)}   # initialize table
    output = []
    stringSoFar = None
    for code in codes:                     # while there are still codes
        colors = table.get(code)           # the colors corresponding to code
        if colors is None:
            # Case where code is not yet in the table: the string must be
            # stringSoFar plus the first color of stringSoFar.
            colors = stringSoFar + (stringSoFar[0],)
        output.extend(colors)              # output colors
        if stringSoFar is not None:
            table[len(table)] = stringSoFar + (colors[0],)   # new table entry
        stringSoFar = colors
    return output
Basic LZW Decompression Algorithm
A simplified example: for the string BABAABAAA the compressed code words are <66><65><256><257><65><260>. Using these code words we now decompress the information; Table 2 traces the steps, and a round-trip check in code follows the table.
TABLE-2 EXAMPLE WORK FOR DE-COMPRESSION

Input code   Output string   New table entry
66           B               -
65           A               256 = BA
256          BA              257 = AB
257          AB              258 = BAA
65           A               259 = ABA
260          AA              260 = AA (code not yet in table)
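As a quick check, the two routines above round-trip the example (using the same assumed 256-byte initial table as before):

    ascii_alphabet = [chr(i) for i in range(256)]
    codes = LZW("BABAABAAA", ascii_alphabet)
    decoded = LZW_decompress(codes, ascii_alphabet)
    print(codes)              # [66, 65, 256, 257, 65, 260]
    print("".join(decoded))   # BABAABAAA

Note that the final code, 260, reaches the decoder before the decoder has created entry 260, which is exactly the "code not yet in the table" case handled in LZW_decompress.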
Figure 7. JPEG image division. JPEG transform compression starts by breaking the image into 8×8 groups, each containing 64 pixels. Three of these 8×8 groups are enlarged in this figure, showing the values of the individual pixels, each a single byte value between 0 and 255.
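A minimal NumPy sketch of this 8×8 blocking step (the toy 64×64 image and the array shapes are assumptions for illustration, not the paper's data or code):

    import numpy as np

    image = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)  # toy 64x64 image
    h, w = image.shape
    # Reshape into an (h/8) x (w/8) grid of 8x8 pixel groups.
    blocks = image.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    print(blocks.shape)    # (8, 8, 8, 8): 64 groups of 64 pixels each
    print(blocks[0, 0])    # top-left 8x8 group; each value is a byte, 0-255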
Figure 8. Example of JPEG distortion. Panel (a) shows the original image, while (b) and (c) show restored images using compression ratios of 10:1 and 45:1, respectively. The high compression ratio used in (c) results in each 8×8 pixel group being represented by fewer than 12 bits (each group holds 64 pixels × 8 bits = 512 bits uncompressed, and 512/45 ≈ 11 bits).
Figure 9. A plot of the intensity data of line 266 of the original (uncompressed) image
Figure 10. A plot of the intensity data of line 266 of the 1.0 bpp NSI compressed image.
7. CONCLUSION
Image compression is a topic of much importance and is employed in many applications. Methods of image compression have been studied for almost four decades. An enhanced scaling algorithm was devised; this algorithm used the proximity of the chosen sample points to one another to detect edges. This paper has provided an overview of image compression methods of general utility. The algorithms have been evaluated in terms of the amount of compression they provide, algorithm efficiency, and susceptibility to error. While algorithm efficiency and susceptibility to error are relatively independent of the characteristics of the source ensemble, the amount of compression achieved depends to a great extent on the characteristics of the source.
REFERENCES
[1] R. C. Gonzalez and R. E. Woods, "Digital Image Processing", 2nd Edition, 2006, pp. 411, 440-442, 459.
[2] P. Karlsson, "Lecture on Data Compression", patrick.karlsson@cb.uu.se.
[3] Y. Wu, S. Lonardi and W. Szpankowski, "Error-Resilient LZW Data Compression", Proceedings of the IEEE Data Compression Conference (DCC'06), 0-7695-2545-8/06, 2006.
[4] M. Nelson, "LZW Data Compression", Dr. Dobb's Journal, October 1989.
[5] M. J. Holt, "A fast binary template matching algorithm for document image data compression", in Pattern Recognition (Proc. Int. Conf., Cambridge), J. Kittler (ed.), Springer-Verlag, Berlin, 1988.
[6] G. G. Langdon and J. Rissanen, "Compression of black-white images with arithmetic coding", IEEE Trans. Communications, COM-29(6): 858-867, June 1981.
[7] A. Moffat, "Two level context based compression of binary images", in Proc. DCC'91, J. A. Storer and J. H. Reif (eds.), IEEE Computer Society Press, Los Alamitos, 1991, pp. 382-391.
[8] W. K. Pratt, P. J. Capitant, W. H. Chen, E. R. Hamilton and R. H. Wallis, "Combined symbol matching facsimile data compression system", Proc. IEEE, 68(7): 786-796, July 1980.
[9] I. H. Witten, Bell, Harrison, James and A. Moffat, "Textual Image Compression", in Proc. DCC'92, J. A. Storer and M. Cohn (eds.), IEEE Computer Society Press, Los Alamitos, CA, 1992, pp. 42-51.
[10] J. Ziv and A. Lempel, "A universal algorithm for sequential data compression", IEEE Trans. Inform. Theory, 23(3): 337-343, 1977.
[11] T. C. Bell, J. G. Cleary and I. H. Witten, "Text Compression", Prentice-Hall, Englewood Cliffs, NJ, 1990.
[12] Kinsner and R. H. Greenfield, "The Lempel-Ziv-Welch (LZW) Data Compression Algorithm for Packet Radio", IEEE.
[13] P. M. Nishad and N. Nalayini, "Enhanced LZW (Lempel-Ziv-Welch) Algorithm with Binary Search to Reduce Time Complexity for Dictionary Creation in Encoding and Decoding", Proceedings of the International Conference ICICIC 2012, PSG Tech, 5 January 2012.
[14] M. Effros, "PPM performance with BWT Complexity: A fast and effective data compression algorithm", Proceedings of the IEEE, 88(11): 1703-1712, 2000.
[15] A. Lempel and J. Ziv, "Compression of two-dimensional data", IEEE Transactions on Information Theory, 32(1): 2-8, 1986.