This document summarizes a research paper that proposes using 2D scaling transformations to compress grayscale images. It begins by defining scaling and the different types of scaling transformations, then shows how to represent 2D scaling mathematically using transformation matrices. The paper applies 2D scaling with different factors to compress the Lena test image and evaluates the compressed images using the PSNR and MSE metrics. Scaling by factors of 2, 4, and 8 is tested; higher scaling factors achieve better compression but lower image quality. In conclusion, the paper finds that the proposed 2D scaling technique provides comparable or better performance than other image transformation methods for compression.
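The evaluation loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: downscaling here is plain block averaging, upscaling is pixel repetition, and the test image is random rather than Lena (image dimensions are assumed divisible by the factor).

```python
import numpy as np

def compress_by_scaling(img, factor):
    """Downscale by block averaging, then upscale by pixel repetition."""
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
for f in (2, 4, 8):
    rec = compress_by_scaling(img, f)
    print(f"factor {f}: MSE={mse(img, rec):.1f}, PSNR={psnr(img, rec):.2f} dB")
```

Higher factors discard more detail, so MSE rises and PSNR falls, matching the trade-off the paper reports.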
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES (cscpconf)
In the first study [1], a combination of K-means clustering, the watershed segmentation method, and a Difference In Strength (DIS) map is used for image segmentation and edge detection. An initial segmentation is obtained with the K-means clustering technique. Starting from this, two techniques are applied: the first is the watershed technique with new merging procedures based on mean intensity value, which segments the image regions and detects their boundaries; the second is an edge-strength technique that obtains accurate edge maps without using the watershed method. This second technique avoids the undesirable over-segmentation that the watershed algorithm produces when applied directly to raw image data, and the resulting edge maps have no broken lines across the image. In the second study, level-set methods are used to implement curve/interface evolution under various forces. In the third study, the main idea is to detect region (object) boundaries in order to isolate and extract individual components from a medical image. This is done with active contours, based on techniques of curve evolution, the Mumford-Shah functional for segmentation, and level sets. Images are first classified into intensity regions using a Markov Random Field; regions whose boundaries are not necessarily defined by gradient are then detected by minimizing an energy of Mumford-Shah type, which in the level-set formulation becomes a mean-curvature flow that stops on the desired boundary. The stopping term does not depend on the gradient of the image, as it does in the classical active contour. The initial level-set curve can be placed anywhere in the image, interior contours are detected automatically, and the final segmentation yields one closed boundary per actual region in the image.
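The K-means initial segmentation described above can be sketched as a 1-D clustering over pixel intensities. This is a simplified stand-in for the study's pipeline; the watershed merging and the DIS map are not shown:

```python
import numpy as np

def kmeans_segment(img, k=3, iters=20):
    """Cluster pixel intensities with k-means and return a label map.

    A minimal 1-D k-means over intensities; the paper's pipeline adds
    watershed merging on top of such an initial segmentation.
    """
    pixels = img.reshape(-1).astype(float)
    centers = np.quantile(pixels, np.linspace(0, 1, k))  # spread initial centers
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute the means
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(img.shape), centers
```

On an image with two flat intensity regions, the two clusters recover the regions exactly.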
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion Ratio (CSCJournals)
We build a 3D model from a stereo pair of images using a novel method of local matching in the pixel domain to compute horizontal disparities. We also find the occlusion ratio from the stereo pair, followed by the use of the Edge Detection and Image SegmentatiON (EDISON) system on one of the images; EDISON provides a complete toolbox for discontinuity-preserving filtering, segmentation, and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities back to the original image to obtain the final 3D viewing model.
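Local matching for horizontal disparities can be sketched with a sum-of-absolute-differences (SAD) window search, one common form of local matching; this is a generic illustration, not necessarily the authors' exact method, and the segment-wise disparity planes and occlusion ratio are not shown:

```python
import numpy as np

def disparity_map(left, right, max_disp=8, win=3):
    """Per-pixel horizontal disparity by SAD block matching.

    For each left-image pixel, search candidate shifts d in the right
    image and keep the one with the smallest window SAD.
    """
    h, w = left.shape
    pad = win // 2
    L = np.pad(left.astype(float), pad, mode='edge')
    R = np.pad(right.astype(float), pad, mode='edge')
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):  # shift search window leftwards
                cand = R[y:y + win, x - d:x - d + win]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

On a synthetic gradient pair where every scene point shifts by 2 pixels, the interior of the recovered map is uniformly 2.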
STATE SPACE GENERATION FRAMEWORK BASED ON BINARY DECISION DIAGRAM FOR DISTRIB... (csandit)
This paper proposes a new framework based on Binary Decision Diagrams (BDDs) for the graph distribution problem in the context of explicit model checking. BDDs are already used to represent the state space in symbolic model checking; here we take advantage of their high compression ratio to encode not only the state space but also the node where each state will be placed. A fitness function is used to achieve a good load balance of states over the nodes of a homogeneous network. Furthermore, a detailed explanation of how to calculate the inter-site edges between different nodes, based on the adapted data structure, is presented.
Remotely Sensed Image (RSI) Analysis for feature extraction using Color map I... (ijdmtaiir)
Remote sensing is the science and art of acquiring information (spectral, spatial, and temporal) about material objects, areas, or phenomena without coming into physical contact with them, and it plays a significant role in feature extraction. In the present paper, an implementation of the color mapping index method is analyzed to extract features from RSI in the spectral domain. Color indexing is applied after fixing the index value of the pixels in a selected ROI (Region of Interest) of the RSI, and clustering is then performed on these index values. Color mapping, also called tone mapping, can be used to apply color transformations to the final image colors of the ROI. The process of color map indexing is a color map approximation approach to RSI feature extraction that includes designing an appropriate algorithm, implementing it, and discussing the results of that implementation on the ROI.
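The index-then-cluster idea can be sketched as follows; the quantization scheme here is a hypothetical one chosen for illustration, and the paper's exact index assignment may differ:

```python
import numpy as np

def color_index_map(roi, levels=4):
    """Quantize each RGB channel to `levels` bins and fuse into one index.

    Pixels that share an index form one spectral cluster of the ROI.
    """
    q = (roi.astype(int) * levels) // 256          # per-channel bin 0..levels-1
    q = np.clip(q, 0, levels - 1)
    return q[..., 0] * levels**2 + q[..., 1] * levels + q[..., 2]
```

Pixels with the same quantized color receive the same index, so clustering reduces to grouping equal index values.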
Wavelet-Based Warping Technique for Mobile Devices (csandit)
The role of digital images is increasing rapidly in mobile devices. They are used in many applications including virtual tours, virtual reality, e-commerce, etc. Such applications synthesize realistic-looking novel views of the reference images on mobile devices using techniques like image-based rendering (IBR). However, with this increasing role of digital images comes the serious issue of processing large images, which requires considerable time. Hence, methods to compress these large images are very important. Wavelets are excellent data compression tools that can be used with IBR algorithms to generate novel views from compressed image data. This paper proposes a framework that uses a wavelet-based warping technique to render novel views of compressed images on mobile/handheld devices. The experiments are performed using the Android Development Tools (ADT) and show that the proposed framework gives better results for large images in terms of rendering time.
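Wavelet compression of this kind can be illustrated with a single-level Haar transform: keeping only the low-pass (LL) band quarters the data while retaining a coarse view of the image. This is a generic sketch of the wavelet step, not the paper's rendering framework:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH bands."""
    a = img.astype(float)
    # average/difference pairs of columns, then pairs of rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    LL = (lo[0::2, :] + lo[1::2, :]) / 2
    LH = (lo[0::2, :] - lo[1::2, :]) / 2
    HL = (hi[0::2, :] + hi[1::2, :]) / 2
    HH = (hi[0::2, :] - hi[1::2, :]) / 2
    return LL, LH, HL, HH

def inv_haar2d(LL, LH, HL, HH):
    """Invert the transform above: sums/differences undo the averages."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    lo[0::2, :], lo[1::2, :] = LL + LH, LL - LH
    hi = np.empty_like(lo)
    hi[0::2, :], hi[1::2, :] = HL + HH, HL - HH
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```

Zeroing the three detail bands before inverting gives the compressed approximation on which warping would then operate.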
Performance Comparison of Density Distribution and Sector mean of sal and cal... (CSCJournals)
In this paper we propose two different approaches to feature vector generation, with absolute difference as the similarity measure: sal-cal vector density distribution, and individual sector means of the complex Walsh transform. The crossover-point performance of the overall average precision and recall for both approaches is compared across all applicable sector sizes. The complex Walsh transform is obtained by multiplying the sal components by j = √-1. The density distribution of the real (cal) and imaginary (sal) values, and the individual means of the Walsh sectors in all three color planes, are used to design the feature vector. The proposed algorithm is evaluated on a database of 270 images spread over 11 classes. Overall average precision and recall are calculated for performance evaluation and comparison of 4, 8, 12, and 16 Walsh sectors. The overall averages of the precision-recall crossover points of all methods are compared for both approaches. Using absolute difference as the similarity measure always gives lower computational complexity, and the individual-sector-mean feature vector approach gives the best retrieval.
A new algorithm is presented which determines the dimensionality and signature of a measured space. The algorithm generalizes the Map Maker's algorithm from 2D to n dimensions and works the same for 2D measured spaces as the Map Maker's algorithm, but with better efficiency. The difficulty of generalizing the geometric approach of the Map Maker's algorithm from 2D to 3D and then to higher dimensions is avoided by this new approach. The new algorithm preserves all distances of the distance matrix and also leads to a method for building the curved space as a subset of the (N-1)-dimensional embedding space. This algorithm has direct application to scientific visualization, for data viewing and searching based on computational geometry.
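A distance-preserving embedding of this kind can be sketched with classical multidimensional scaling: the positive and negative eigenvalues of the doubly centered Gram matrix indicate the dimensionality and signature. This is a standard technique offered as an analogy, not the Map Maker's algorithm or the paper's generalization of it:

```python
import numpy as np

def embed_from_distances(D, tol=1e-9):
    """Classical MDS: recover coordinates, dimensionality and signature
    from an N x N distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the points
    vals, vecs = np.linalg.eigh(B)
    pos = vals > tol * vals.max()                # significant positive eigenvalues
    neg = vals < -tol * vals.max()               # negative ones signal non-Euclidean signature
    coords = vecs[:, pos] * np.sqrt(vals[pos])   # Euclidean coordinates
    return coords, int(pos.sum()), int(neg.sum())
```

For points sampled from a flat 2D space, the method reports dimensionality 2 with no negative part, and the recovered coordinates reproduce every entry of the distance matrix.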
3D Graphics & Rendering in Computer Graphics (Faraz Akhtar)
Computer graphics, 3D rendering, 3D graphics, Components of a 3D Graphic System, 3D Modeling, 3D Rendering, Illumination for scan-line renderers, 3D Graphics and Physics
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD (editorijcres)
AKHILESH KUMAR YADAV, DEENBANDHU SINGH, VIVEK KUMAR
Department of Computer Science and Engineering
Babu Banarasi Das University, Lucknow
akhi2232232@gmail.com, deenbandhusingh85@gmail.com, vivek.kumar0091@gmail.com
ABSTRACT- Digital images can be easily modified using powerful image editing software. Distinguishing innocent manipulations, such as sharpening, from malicious ones, such as removing or adding parts of an image, is the topic of this paper. We focus on detecting a special type of forgery, the copy-move forgery, in which part of the original image is copied and pasted to a desired location in the same image. The proposed method compresses the image using the DWT (discrete wavelet transform), divides it into blocks, computes a feature vector for each block, sorts the vectors lexicographically, and identifies duplicated blocks after sorting. The method is robust to several manipulations and attacks, such as scaling, rotation, Gaussian noise, smoothing, and JPEG compression.
INDEX TERMS- Copy-Move forgery, Wavelet Transform, Lexicographical Sorting, Region Duplication Detection.
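The block pipeline in the abstract (block features, lexicographic sorting, duplicate detection) can be sketched as below. For brevity this compares raw pixel blocks with exact matching, skipping the DWT compression and the robust features of the actual method:

```python
import numpy as np

def find_duplicated_blocks(img, b=4):
    """Slide a b x b window, sort the flattened blocks lexicographically,
    and report positions whose contents match exactly."""
    h, w = img.shape
    blocks, pos = [], []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            blocks.append(img[y:y + b, x:x + b].reshape(-1))
            pos.append((y, x))
    blocks = np.array(blocks)
    order = np.lexsort(blocks.T[::-1])           # lexicographic sort of block rows
    pairs = []
    for i, j in zip(order[:-1], order[1:]):      # equal blocks end up adjacent
        if np.array_equal(blocks[i], blocks[j]):
            pairs.append((pos[i], pos[j]))
    return pairs
```

Planting a copied region in a random image makes the source and destination blocks adjacent after sorting, so the forged pair is reported.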
Face Recognition using PCA-Principal Component Analysis using MATLAB (Sindhi Madhuri)
It describes a biometric technique for recognizing people in a particular environment using MATLAB. It forms eigenfaces and compares principal components instead of comparing every pixel of an image.
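The eigenfaces idea can be sketched in a few lines, shown here in Python with NumPy rather than MATLAB; face images are flattened to rows, and the small-matrix trick keeps the eigenproblem at n x n for n images:

```python
import numpy as np

def eigenfaces(faces, k=2):
    """faces: (n_images, n_pixels). Returns the mean face, the top-k
    eigenfaces, and each image's k-dim projection (its principal components)."""
    mean = faces.mean(axis=0)
    X = faces - mean
    # eigenvectors of the small n x n matrix, then mapped back to pixel space
    vals, vecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(vals)[::-1][:k]
    comps = X.T @ vecs[:, order]
    comps /= np.linalg.norm(comps, axis=0)       # orthonormal eigenfaces
    return mean, comps, X @ comps
```

Recognition then compares the k-dimensional projections of a probe face against the stored projections, rather than comparing pixels directly.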
ICVG: Practical Constructive Volume Geometry for Indirect Visualization (ijcga)
The task of creating detailed three-dimensional virtual worlds for interactive entertainment software can be simplified by using Constructive Solid Geometry (CSG) techniques. CSG allows artists to combine primitive shapes, visualized through polygons, into complex and believable scenery. Constructive Volume Geometry (CVG) is a superset of CSG that operates on volumetric data, which consists of values recorded at constant intervals in three dimensions of space. To allow volumetric data to be integrated into existing frameworks, indirect visualization is performed by constructing and visualizing polygon meshes corresponding to the implicit surfaces in the volumetric data. The Indirect CVG (ICVG) algebra, which provides constructive volume geometry operators appropriate to volumetric data that will be indirectly visualized, is introduced. ICVG includes operations analogous to the union, difference, and intersection operators in the standard CVG algebra, as well as new operations. Additionally, a series of volumetric primitives well suited to indirect visualization is defined.
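The union, intersection, and difference operators mentioned above have a standard formulation on scalar opacity fields: point-wise max, min, and min with the complement. A minimal sketch of that idea, not the ICVG algebra itself, which adds further operations and indirect-visualization-specific behavior:

```python
import numpy as np

# Opacity fields sampled on a regular 3-D grid (1 = solid, 0 = empty).
def cvg_union(a, b):        return np.maximum(a, b)
def cvg_intersection(a, b): return np.minimum(a, b)
def cvg_difference(a, b):   return np.minimum(a, 1.0 - b)

# Example: two overlapping spheres as binary scalar fields.
n = 16
z, y, x = np.mgrid[:n, :n, :n]
sphere1 = ((x - 6) ** 2 + (y - 8) ** 2 + (z - 8) ** 2 <= 16).astype(float)
sphere2 = ((x - 10) ** 2 + (y - 8) ** 2 + (z - 8) ** 2 <= 16).astype(float)
u = cvg_union(sphere1, sphere2)
i = cvg_intersection(sphere1, sphere2)
d = cvg_difference(sphere1, sphere2)
```

On binary fields these reduce to set OR, AND, and AND-NOT, so voxel counts obey inclusion-exclusion exactly.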
Computer graphics are pictures and movies created using computers, usually referring to image data created by a computer with help from specialized graphical hardware and software. It is a vast and rapidly developing area of computer science. The phrase was coined by computer graphics researchers Verne Hudson and William Fetter of Boeing in 1960. Another name for the field is computer-generated imagery, or simply CGI.
Important topics in computer graphics include user interface design, sprite graphics, vector graphics, 3D modeling, shaders, GPU design, and computer vision, among others. The overall methodology depends heavily on the underlying sciences of geometry, optics, and physics. Computer graphics is responsible for displaying art and image data effectively and beautifully to the user, and for processing image data received from the physical world. It has made interaction with computers and the interpretation of data easier. The development of computer graphics has had a significant impact on many types of media and has revolutionized animation, movies, advertising, video games, and graphic design.
Scaling Transform Methods For Compressing a 2D Graphical image (acijjournal)
Transformation is the process of converting the original picture coordinates into different picture coordinates, either by adding values to the original coordinates (translation) or by multiplying the original coordinates by values (linear transformations such as rotation, reflection, scaling, and shearing). In this paper, we compress a two-dimensional picture using a 2D scaling transformation. In several scenarios, using this technique for image compression gave comparable or better performance than other modes of image transformation. We present new code for compressing a 2D grayscale image using scaling transform methods; MATLAB is used to implement the compression. We plan to apply these techniques and develop code for compressing 3D images.
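The 2D scaling transformation the paper builds on can be written as a homogeneous matrix; a minimal sketch (compressing by factor 2 corresponds to scaling coordinates by 0.5 per axis):

```python
import numpy as np

def scale_matrix(sx, sy):
    """Homogeneous 2-D scaling transformation."""
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0,  0, 1]], dtype=float)

# Scaling a point: multiply its homogeneous coordinates by the matrix.
p = np.array([4.0, 6.0, 1.0])
half = scale_matrix(0.5, 0.5) @ p     # compression by factor 2 per axis
```

Translation would instead add values via the third column, which is why the homogeneous form unifies both kinds of transformation described above.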
ONE-DIMENSIONAL SIGNATURE REPRESENTATION FOR THREE-DIMENSIONAL CONVEX OBJECT ... (ijcga)
A simple method to represent three-dimensional (3-D) convex objects is proposed, in which a one-dimensional signature based on the discrete Fourier transform is used to efficiently describe the shape of a convex object. It has position-, orientation-, and scale-invariant properties. Experimental results with synthesized 3-D simple convex objects are given to show the effectiveness of the proposed simple signature representation.
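A 2-D analogue of such a signature (the paper works with 3-D convex objects) takes the centroid-distance signal of a contour and normalizes its DFT magnitudes, which yields position, rotation, and scale invariance. A hedged 2-D sketch, not the paper's 3-D construction:

```python
import numpy as np

def shape_signature(boundary, n_coeff=8):
    """boundary: (N, 2) contour points. Returns normalized DFT magnitudes
    of the centroid-distance signal: translation drops out via the
    centroid, rotation/starting point is a cyclic shift (|DFT| is shift
    invariant), and scale cancels in the ratio to the DC term."""
    c = boundary.mean(axis=0)
    r = np.linalg.norm(boundary - c, axis=1)     # radial distance signal
    F = np.abs(np.fft.fft(r))
    return F[1:n_coeff + 1] / F[0]               # drop DC, normalize by it
```

Translating, rotating, rescaling, and re-indexing the same contour leaves the signature unchanged.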
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
A Hybrid SVD Method Using Interpolation Algorithms for Image Compression (CSCJournals)
In this paper the standard SVD method is used for image processing and is combined with interpolation methods, such as linear and quadratic interpolation, for reconstruction of the compressed image. The main idea of the proposed method is to select a particular submatrix of the main image matrix, compress it with the SVD method, and then reconstruct an approximation of the original image by interpolation. Numerical experiments illustrate the performance and efficiency of the proposed methods.
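The SVD half of the method can be sketched as a rank-k truncation; the submatrix selection and interpolation stages are omitted, so this is a generic illustration rather than the authors' hybrid code:

```python
import numpy as np

def svd_compress(img, k):
    """Keep the k largest singular triplets: a rank-k approximation,
    which is optimal in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]
```

Storing `U[:, :k]`, `s[:k]`, and `Vt[:k]` needs k(m + n + 1) numbers instead of mn, and the approximation error shrinks monotonically as k grows.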
QCCE: Quality Constrained Co-saliency Estimation for Common Object Detection (Koteswar Rao Jerripothula)
Despite recent advances in joint processing of images, it may sometimes be less effective than single-image processing for object discovery problems. In this paper, aiming at common object detection, we attempt to address this problem by proposing QCCE, a novel Quality Constrained Co-saliency Estimation method. The approach is to iteratively update the saliency maps through co-saliency estimation depending upon quality scores, which indicate the degree of separation between foreground and background likelihoods (the easier the separation, the higher the quality of the saliency map). In this way, joint processing is automatically constrained by the quality of the saliency maps. Moreover, the proposed method can be applied to both unsupervised and supervised scenarios, unlike other methods that are designed for one scenario only. Experimental results demonstrate superior performance of the proposed method compared to state-of-the-art methods.
Object Shape Representation by Kernel Density Feature Points Estimator (cscpconf)
This paper introduces an object shape representation using a Kernel Density Feature Points Estimator (KDFPE). In this method we obtain the density of feature points within defined rings around the centroid of the image, and then apply the kernel density estimator to the resulting vector. KDFPE is invariant to translation, scale, and rotation. This method of image representation shows an improved retrieval rate compared to the Density Histogram of Feature Points (DHFP) method. An analytic analysis is given to justify the method, and results are compared with DHFP to demonstrate its robustness.
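The ring-density idea behind KDFPE can be sketched as a normalized histogram of feature points over concentric rings around the centroid; this is a simplified stand-in in which the kernel density smoothing step is omitted:

```python
import numpy as np

def ring_density(points, centroid, n_rings=4):
    """Fraction of feature points falling in each concentric ring.

    Normalizing radii by the outermost radius gives scale invariance,
    and ring membership ignores angle, giving rotation invariance.
    """
    r = np.linalg.norm(points - centroid, axis=1)
    r = r / r.max()                                # scale normalization
    bins = np.minimum((r * n_rings).astype(int), n_rings - 1)
    return np.bincount(bins, minlength=n_rings) / len(points)
```

Two clusters of points at different radii fall into different rings, and rotating the whole point set leaves the descriptor unchanged.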
Image segmentation is a computer vision task that involves dividing an image into multiple segments or regions, where each segment corresponds to a distinct object, region, or feature within the image. The goal of image segmentation is to simplify and analyze an image by partitioning it into meaningful and semantically relevant parts. This is a crucial step in various applications, including object recognition, medical imaging, autonomous driving, and more.
Key points about image segmentation:
Semantic Segmentation: This type of segmentation assigns each pixel in an image to a specific class, essentially labeling each pixel with the object or region it belongs to. It's commonly used for object detection and scene understanding.
Instance Segmentation: Here, individual instances of objects are separated and labeled separately. This is especially useful when multiple objects of the same class are present in the image.
Boundary Detection: Some segmentation methods focus on identifying the boundaries that separate different objects or regions in an image.
Methods: Image segmentation can be achieved through various techniques, including traditional methods like thresholding, clustering, and region growing, as well as more advanced techniques involving deep learning, such as using convolutional neural networks (CNNs) and fully convolutional networks (FCNs).
Challenges: Image segmentation can be challenging due to variations in lighting, color, texture, and object shape. Overlapping objects and unclear boundaries further complicate the task.
Applications: Image segmentation is used in diverse fields. For example, in medical imaging, it helps identify organs or abnormalities. In autonomous vehicles, it aids in identifying pedestrians, other vehicles, and obstacles.
Evaluation: Measuring the accuracy of segmentation methods can be complex. Metrics like Intersection over Union (IoU) and Dice coefficient are often used to compare segmented results to ground truth.
Data Annotation: Creating ground truth annotations for segmentation can be labor-intensive, as each pixel must be labeled. This has led to the development of datasets and tools to facilitate annotation.
Semantic Segmentation Networks: Deep learning architectures like U-Net, Mask R-CNN, and Deeplab have significantly improved the accuracy of image segmentation by effectively learning complex patterns and features.
Image segmentation plays a fundamental role in understanding and processing images, enabling computers to "see" and interpret visual information in ways that mimic human perception.
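For the evaluation metrics mentioned above, the following is a minimal illustrative sketch (the mask arrays are invented examples, not drawn from any dataset) of Intersection over Union and the Dice coefficient for binary segmentation masks:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union for binary segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

# Tiny hypothetical predicted and ground-truth masks
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(iou(pred, gt), dice(pred, gt))  # 0.5 and 2/3
```

Both metrics reward overlap between prediction and ground truth; Dice weights the intersection more heavily, which is why it is popular for small structures in medical imaging.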
WAVELET BASED AUTHENTICATION/SECRET TRANSMISSION THROUGH IMAGE RESIZING (WASTIR)
The paper presents a wavelet-based steganographic/watermarking technique in the frequency domain,
termed WASTIR, for secret message/image transmission and image authentication. Number system
conversion of the secret image, changing the radix from decimal to quaternary, is the pre-processing
step of the technique. The cover image is scaled through inverse discrete wavelet transformation, and the false horizontal and
vertical coefficients are embedded with the quaternary digits through a hash function and a secret key.
Experimental results are computed and compared with existing steganographic techniques such as WTSIC,
Yuancheng Li's method and Region-Based in terms of Mean Square Error (MSE), Peak Signal-to-Noise
Ratio (PSNR) and Image Fidelity (IF), showing better performance for WASTIR.
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION
Texture is the term used to characterize the surface of a given object or phenomenon and is an
important feature used in image processing and pattern recognition. Our aim is to compare
various texture-analysis methods based on time complexity and classification accuracy. The
project describes texture classification using the Wavelet Transform and the Co-occurrence
Matrix. Features of a sample texture are compared against a database of different textures. For
the wavelet transform we use the Haar, Symlets and Daubechies wavelets. We find that the
'Haar' wavelet proves to be the most efficient method in terms of the performance assessment
parameters mentioned above. A comparison of the Haar wavelet and the Co-occurrence Matrix
method of classification also favours Haar. Though the time requirement is higher in the latter
method, it gives excellent classification accuracy except when the image is rotated.
Similar to Scaling Transform Methods For Compressing a 2D Graphical image (20)
Advanced Computing: An International Journal (ACIJ) is an open access peer-reviewed journal that publishes articles which contribute new results in all areas of advanced computing. The journal focuses on all technical and practical aspects of high performance computing, green computing, pervasive computing, cloud computing, etc. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding advances in computing and establishing new collaborations in these areas.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of computing.
Call for Papers - Advanced Computing: An International Journal (ACIJ)
Submit your Research Papers!!!
Advanced Computing: An International Journal ( ACIJ )
ISSN: 2229 -6727 [Online] ; 2229 - 726X [Print]
Webpage URL: http://airccse.org/journal/acij/acij.html
Submission URL: http://coneco2009.com/submissions/imagination/home.html
Submission Deadline : April 08, 2023
Here's where you can reach us: acijjournal@yahoo.com or acij@aircconline.com
7th International Conference on Data Mining & Knowledge Management (DaKM 2022)
7th International Conference on Data Mining & Knowledge Management (DaKM 2022) provides a forum for researchers who address this issue to present their work in a peer-reviewed setting.
4th International Conference on Machine Learning & Applications (CMLA 2022)
4th International Conference on Machine Learning & Applications (CMLA 2022) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Machine Learning. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
3rd International Conference on Natural Language Processing and Applications (NLPA 2022)
3rd International Conference on Natural Language Processing and Applications (NLPA 2022) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Natural Language Computing. The conference looks for significant contributions to all major fields of Natural Language Processing, in both theoretical and practical aspects.
Graduate School Cyber Portfolio: The Innovative Menu For Sustainable Development
In today's milieu, new demands and trends emerge in the field of education, giving teachers of Higher Education Institutions (HEIs) no choice but to be innovative to cope with fast-changing technology. To be naturally innovative, a graduate school teacher needs to be technologically and pedagogically competent. One way to reach this level is by creating a cyber portfolio to support students' e-portfolios for lifelong learning. A cyber portfolio is an innovative menu for teachers who seek strategies to integrate technology into their lessons. This paper presents a straightforward preparation of a cyber portfolio as a practical, breakthrough alternative to the expensive and inflexible vended software that often saddles many universities. Additionally, this cyber portfolio is free, and it addresses the 21st-century skills of graduate students, blended with higher-order thinking skills, multiple intelligences, technology and multimedia.
Genetic Algorithms and Programming - An Evolutionary Methodology
Genetic programming (GP) is an automated method for creating a working computer program from a high-level statement of a problem: it starts from a description of "what needs to be done" and automatically creates a computer program to solve it. In artificial intelligence, GP is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. It is a specialization of genetic algorithms (GA) in which each individual is a computer program, and a machine learning technique used to optimize a population of computer programs according to a fitness measure determined by a program's ability to perform a given computational task. This paper presents the various principles of genetic programming, including the relative effectiveness of mutation, crossover, the breeding of computer programs, and fitness testing. The literature on traditional genetic algorithms contains related studies, but GP saves time by freeing the human from having to design complex algorithms, and can create algorithms that give better solutions than traditional counterparts in noteworthy ways.
Data Transformation Technique for Protecting Private Information in Privacy P...
Data mining is the process of extracting patterns from data. It is seen as an increasingly important tool by modern business to transform data into an informational advantage, and it can be utilized in any organization that needs to find patterns or relationships in its data. In many situations the extracted patterns are highly private and should not be disclosed. To maintain the secrecy of data, several techniques and algorithms are needed for modifying the original data so as to limit the extraction of confidential patterns. There are two types of privacy in data mining: in the first, the data is altered so that the mining result preserves certain privacy; in the second, the data is manipulated so that the mining result is not affected, or only minimally affected. The aim of privacy-preserving data mining research is to develop data mining techniques that can be applied to databases without violating the privacy of individuals. Many techniques for privacy-preserving data mining have emerged over the last decade, among them statistical and cryptographic methods, randomization methods, the k-anonymity model, and l-diversity. In this work, we propose a new perturbative masking technique, known as the data transformation technique, that can be used for protecting sensitive information. Experimental results show that the proposed technique gives better results than the existing technique.
E-Maintenance: Impact Over Industrial Processes, Its Dimensions & Principles
During the Industry 4.0 era, companies have developed exponentially and have digitized almost
their whole business systems to meet their performance targets and to keep, or even
enlarge, their market share. The maintenance function has followed this trend, as it is considered one
of the most important processes in every enterprise: it impacts a group of the most critical performance
indicators, such as cost, reliability, availability, safety and productivity. E-maintenance emerged in the early
2000s and is now a common term in the maintenance literature, representing the digitalized side of maintenance
whereby assets are monitored and controlled over the internet. According to the literature, e-maintenance has
a remarkable impact on maintenance KPIs and aims at ambitious objectives like zero downtime.
10th International Conference on Software Engineering and Applications (SEAPP 2021)
10th International Conference on Software Engineering and Applications (SEAPP 2021) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Software Engineering. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding modern software engineering concepts and establishing new collaborations in these areas.
10th International Conference on Parallel, Distributed Computing and Applications (IPDCA 2021)
10th International Conference on Parallel, Distributed Computing and Applications (IPDCA 2021) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of parallel and distributed computing. Original papers are invited on algorithms and applications, computer networks, cyber trust and security, wireless networks and mobile computing, and bioinformatics. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
DETECTION OF FORGERY AND FABRICATION IN PASSPORTS AND VISAS USING CRYPTOGRAPHY AND QR CODES
In this paper, we present a novel solution to detect forgery and fabrication in passports and visas using
cryptography and QR codes. The solution requires that the passport and visa issuing authorities obtain a
cryptographic key pair and publish their public key on their website. Further, they are required to encrypt
the passport or visa information with their private key, encode the ciphertext in a QR code and print it on
the passport or visa they issue to the applicant.
The issuing authorities are also required to create a mobile or desktop QR code scanning app and place it
for download on their website or on Google Play Store and the iPhone App Store. Any individual or immigration
authority that needs to check a passport or visa for forgery and fabrication can scan its QR code; the app
will decrypt the ciphertext encoded in the QR code using the public key stored in its memory and
display the passport or visa information on the screen. The details on the app screen can be
compared with the actual details printed on the passport or visa. Any mismatch between the two is a clear
indication of forgery or fabrication.
The paper also discusses the need for a universal desktop and mobile app that can be used by immigration
authorities and consulates all over the world to enable fast checking of passports and visas at ports of
entry for forgery and fabrication.
Scaling Transform Methods For Compressing a 2D Graphical image
Advanced Computing: An International Journal ( ACIJ ), Vol.4, No.2, March 2013
Scaling Transform Methods For Compressing a 2D Graphical Image
Ms. A. J. Rajeswari Joe
Research Scholar, Bharathiyar University
Assistant Professor, Department of MCA,
GSS Jain College for Women, Chennai
rajoes@yahoo.com
Dr. N. Rama
Research Supervisor, Bharathiyar University
Assistant Professor, Department of Computer Science,
Presidency College, Chennai
ABSTRACT
Transformation is the process of converting the original picture coordinates into different picture
coordinates, either by adding values to the original coordinates (translation) or by multiplying the
original coordinates by values (linear transformations such as rotation, reflection, scaling, and
shearing). In this paper, we compress a two-dimensional picture using a 2D scaling transformation. In
several scenarios, this technique for image compression gave comparable or better performance
than other modes of image transformation. We developed new code for compressing a 2D grayscale
image using scaling transform methods; Matlab is used to implement the compression. We plan to
extend these techniques and develop code for compressing a 3D image.
KEYWORDS
Scaling factors (compression factors), two-dimensional scaling transformation, decompression,
experimental results.
1. Introduction
Scaling is the process of expanding or compressing the dimensions (i.e., size) of an object. An
important application of scaling is in the development of the viewing transformation, which is a
mapping from a window used to clip the scene to a viewport for displaying the clipped scene on
the screen. In Euclidean geometry, changing the size of an object is called a scale. We scale an
object by scaling the x and y coordinates of each vertex in the object. Uniform scaling (or isotropic
scaling) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a
scale factor that is the same in all directions. The result of uniform scaling is similar (in the
DOI : 10.5121/acij.2013.4204
geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes
are also classed as similar. More general is scaling with a separate scale factor for each axis
direction. Non-uniform scaling (anisotropic scaling) is obtained when at least one of the scaling
factors is different from the others; a special case is directional scaling or stretching (in one
direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a
rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the
angles between lines parallel to the axes are preserved, but not all angles). In this paper we have
compressed Lena's image, a grayscale 2D image, using Matlab code. The 2-D Discrete Cosine
Transform (DCT) is applied to Lena's image (352*352) as the test image, with scaling factor 2,
scaling factor 4 and scaling factor 8, and the results are analysed. The results are then compared
with various compression methods. We use the Peak Signal-to-Noise Ratio (PSNR) and the Mean
Square Error (MSE) to assess the quality of the compressed image relative to the original image.
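The compress-then-evaluate pipeline above can be sketched in a few lines. The paper's experiments were done in Matlab on the Lena image; the following is an illustrative Python/NumPy sketch on a synthetic grayscale image (the `downscale`/`upscale` helpers and the random test image are assumptions for illustration, not the authors' code):

```python
import numpy as np

def downscale(img, factor):
    """Compress a grayscale image by keeping every `factor`-th pixel
    (scaling with Sx = Sy = 1/factor)."""
    return img[::factor, ::factor]

def upscale(img, factor):
    """Decompress by pixel replication (scaling with Sx = Sy = factor)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * np.log10(peak ** 2 / e)

# A synthetic 352x352 test image standing in for Lena
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(352, 352), dtype=np.uint8)

for factor in (2, 4, 8):
    rec = upscale(downscale(img, factor), factor)
    print(factor, round(mse(img, rec), 1), round(psnr(img, rec), 2))
```

Higher scaling factors discard more pixels, so MSE rises and PSNR falls, matching the compression-versus-quality trade-off the paper reports.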
2. Scaling Factors (Compression Factors)
A scaling transformation alters the size of an object. The operation is accomplished by
multiplying each coordinate by the scaling factors Sx, Sy. When Sx, Sy < 1, the object or image is
compressed. Here Sx, Sy are the scaling factors along each axis with respect to the local
coordinate system of the model. The scaling transformation allows a transformation matrix to
change the dimensions of an object by shrinking or stretching along the major axes centred on
the origin. It should be noted that scaling is always about the origin along each dimension
with the respective scaling factors. This means that if the object being scaled does not overlap the
origin, it will move farther away if it is scaled up, and closer if it is scaled down. Scaling with
respect to a selected fixed point (xf, yf) can be represented with the following transformation
sequence:
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin.
3. Translate the fixed point back to its original position.
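The three-step sequence above amounts to the matrix product T(xf, yf) · S(Sx, Sy) · T(-xf, -yf). A minimal sketch, using the column-vector convention (the row-vector form used later in the paper is the transpose):

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0, 0, 1]], dtype=float)

def scale_about(sx, sy, xf, yf):
    # 1) translate the fixed point to the origin,
    # 2) scale about the origin,
    # 3) translate the fixed point back
    return translate(xf, yf) @ scale(sx, sy) @ translate(-xf, -yf)

# Scaling by 1/2 about the hypothetical fixed point (2, 2)
M = scale_about(0.5, 0.5, 2, 2)
print(M @ np.array([2, 2, 1]))  # the fixed point (2, 2) maps to itself
```

Composing the three matrices once and reusing the result is cheaper than applying the three steps to every vertex separately.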
3. Two Dimensional Scaling
Let Sx, Sy be the scale factors in the positive x and y directions respectively. Then the scaled
vertex (x′, y′) of a vertex (x, y) is given by

x′ = x · Sx,  y′ = y · Sy

If Sx = Sy = s, the transformation is said to be homogeneous or uniform scaling [4], which
maintains the relative proportions of the scaled objects. The magnification factor is |s|: all points
move |s| times away from the origin. If |s| < 1, all points move towards the origin, i.e. the object
is demagnified.
Figure 1: Scaling the original picture with scale factors Sx = -1, Sy = 2
If Sx ≠ Sy then the scaling is called differential scaling; when either Sx or Sy equals one, we
obtain the simplest case of differential scaling. Two-dimensional scaling in matrix form is given by

                      Sx  0  0
[x′ y′ 1] = [x y 1] ·  0 Sy  0
                       0  0  1

In the case where Sx = Sy = k, scaling increases the area of any surface by a factor of k²
(and would increase the volume of a solid object by k³ in three dimensions). Such a scaling
changes the diameter of an object by a factor between the two scale factors, and the area by
the product of the two scale factors.
We can represent a triangle in matrix form [5], using homogeneous coordinates of the
vertices, as:

A  0 0 1
B  1 1 1
C  5 2 1

By choosing the scaling factor s = 1/2, the scaling matrix for compressing an image is:

             1/2  0   0
S(1/2,1/2) =  0  1/2  0
              0   0   1

Multiplying the original coordinate matrix by the scaling matrix gives the coordinates of the
compressed image (or of the decompressed image, depending on the scaling factors selected).
So the new coordinates A′B′C′ of the scaled triangle ABC can be found as:

A  0 0 1     1/2  0   0     A′  0   0   1
B  1 1 1  ·   0  1/2  0  =  B′ 1/2 1/2  1
C  5 2 1      0   0   1     C′ 5/2  1   1
Thus, the new coordinates are A’=(0,0), B’=(1/2, 1/2), C’= (5/2, 1)
Figure 2: Object after scaling with Sx = Sy = 1/2 (compressed picture)
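The matrix product above can be checked directly. A small sketch using the paper's row-vector homogeneous coordinates:

```python
import numpy as np

# Triangle vertices in homogeneous coordinates (one row per vertex)
ABC = np.array([[0, 0, 1],
                [1, 1, 1],
                [5, 2, 1]], dtype=float)

# Scaling matrix S(1/2, 1/2) for compression
S = np.array([[0.5, 0,   0],
              [0,   0.5, 0],
              [0,   0,   1]], dtype=float)

scaled = ABC @ S  # row-vector convention, as in the paper
print(scaled)     # rows: A'=(0,0), B'=(1/2,1/2), C'=(5/2,1)
```

The output rows reproduce the scaled coordinates computed in the text.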
4. Decompression
The inverse scaling matrix [4] is obtained by replacing Sx and Sy with 1/Sx and 1/Sy.
The scaling discussed so far has the origin (0, 0) as the fixed point, and scaling is about the
origin: when an object is scaled, it is moved Sx and Sy times away from the origin. It is also
possible to take any arbitrary point as the fixed point and scale about that point.
We can represent a triangle, shown in matrix form, using homogeneous coordinates of the vertices A(0, 0), B(1, 1), C(5, 2) as:

    A | 0 0 1 |
    B | 1 1 1 |
    C | 5 2 1 |
Choosing scaling factor s = 2, the matrix of scaling for decompression is:

    S(2, 2) = | 2 0 0 |
              | 0 2 0 |
              | 0 0 1 |
So the new coordinates A'B'C' of the scaled triangle ABC can be found as:

    A | 0 0 1 |   | 2 0 0 |   A' |  0 0 1 |
    B | 1 1 1 | × | 0 2 0 | = B' |  2 2 1 |
    C | 5 2 1 |   | 0 0 1 |   C' | 10 4 1 |
Thus, A’=(0,0), B’=(2,2), C’= (10, 4)
The following Figure 3 shows the effect of scaling with Sx = Sy = 2.
Figure 3: Object after scaling with 1/Sx and 1/Sy, i.e. Sx = Sy = 2 (decompressed picture)
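Since S(2, 2) is the inverse of S(1/2, 1/2), compression followed by decompression restores the original vertices exactly. A quick sketch (the helper name is illustrative):

```python
import numpy as np

def scale_matrix(sx, sy):
    """Homogeneous 2-D scaling matrix S(sx, sy) about the origin."""
    return np.diag([sx, sy, 1.0])

# Triangle ABC as rows of homogeneous coordinates.
ABC = np.array([[0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0],
                [5.0, 2.0, 1.0]])

compressed = ABC @ scale_matrix(0.5, 0.5)   # compression step
restored = compressed @ scale_matrix(2.0, 2.0)  # decompression step
# restored equals the original ABC, because S(2, 2) = S(1/2, 1/2)^-1.
```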
5. Related Work
5.1 The Discrete Cosine Transform
The DCT attempts to decorrelate the image data; after decorrelation, each transform coefficient can be encoded independently without sacrificing compression efficiency [7]. The DCT separates an image into parts of different frequencies: less important frequencies are discarded through quantization, and the important frequencies are used to retrieve the image during decompression. Compared to other input-dependent transforms, the DCT has many advantages: (1) it has been implemented in a single integrated circuit; (2) it can pack the most information into the fewest coefficients; (3) it minimizes the block-like appearance, called blocking artifact, that results when boundaries between sub-images become visible.
5.2 The One-Dimensional DCT
The DCT of a list of N real numbers s(x), x = 0, 1, …, N-1, is the list S(u) of length N given by [8]:

    S(u) = C(u) √(2/N) SUM_{x=0..N-1} s(x) cos[(2x+1)uπ / 2N],   for u = 0, 1, …, N-1

where C(0) = 1/√2 and C(u) = 1 for u > 0. Similarly, the inverse transform is defined as [5]:

    s(x) = √(2/N) SUM_{u=0..N-1} C(u) S(u) cos[(2x+1)uπ / 2N],   for x = 0, 1, …, N-1
Thus, the first transform coefficient represents the average value of the sample sequence and is called the DC coefficient.
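One common normalization of the 1-D DCT (the orthonormal one, with C(0) = 1/√2) can be sketched directly; note that with this particular scaling S(0) comes out as √N times the average rather than the average itself:

```python
import numpy as np

def dct_1d(s):
    """1-D DCT-II with orthonormal scaling: C(0) = 1/sqrt(2), C(u) = 1 otherwise."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    x = np.arange(N)
    out = np.empty(N)
    for u in range(N):
        c = 1.0 / np.sqrt(2.0) if u == 0 else 1.0
        out[u] = c * np.sqrt(2.0 / N) * np.sum(
            s * np.cos((2 * x + 1) * u * np.pi / (2 * N)))
    return out

# A constant signal puts all its energy into the first (DC) coefficient:
# the remaining coefficients vanish up to rounding error.
coeffs = dct_1d([3, 3, 3, 3])
```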
5.3 The Two-Dimensional DCT
The Discrete Cosine Transform (DCT) is one of many transforms that takes its input and transforms it into a linear combination of weighted basis functions. These basis functions [11] commonly correspond to frequencies. The 2-D Discrete Cosine Transform is just a one-dimensional DCT applied twice: once in the x direction, and again in the y direction. One can imagine the computational complexity of doing so for a large image; thus, many fast algorithms, analogous to the Fast Fourier Transform (FFT), have been created to speed up the computation.

The following equation computes the (i, j)th entry of the DCT of an image block p, where N is the size of the block that the DCT is applied to:

    D(i, j) = (2/N) C(i) C(j) SUM_{x=0..N-1} SUM_{y=0..N-1} p(x, y) cos[(2x+1)iπ / 2N] cos[(2y+1)jπ / 2N]

with C(0) = 1/√2 and C(u) = 1 for u > 0. The equation [7] calculates one entry (i, j) of the transformed image from the pixel values of the original image matrix. For the standard 8×8 block that JPEG compression uses, N equals 8, x and y range from 0 to 7, and the leading factor 2/N becomes 1/4.
Because the DCT uses cosine functions, the resulting matrix depends on the horizontal and vertical frequencies. An image block with little variation therefore yields a resulting matrix with a large value for the first (DC) element and near-zero values for the other elements, whereas a block with a lot of change yields a random-looking matrix.
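This behaviour is easy to demonstrate with the separable form D = T·M·Tᵀ, where T is an orthonormal 8×8 DCT matrix (a sketch under that convention):

```python
import numpy as np

N = 8
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
# Orthonormal DCT matrix: row 0 is 1/sqrt(N); row i (i > 0), column j is
# sqrt(2/N) * cos((2j+1) i pi / (2N)).
T = np.where(i == 0, np.sqrt(1.0 / N),
             np.sqrt(2.0 / N) * np.cos((2 * j + 1) * i * np.pi / (2 * N)))

flat = np.full((N, N), 100.0)  # block with no variation at all
D = T @ flat @ T.T             # 2-D DCT as two 1-D passes (rows, then columns)
# All energy lands in D[0, 0] (here N * 100 = 800); every other entry is ~0.
```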
6. Experimental Result
• The original image is divided into blocks of 8 × 8.
• Pixel values of a black-and-white image range from 0 to 255, but the DCT is designed to work on pixel values ranging from -128 to 127. Therefore each block is shifted into that range before transformation.
• DCT is applied to each block by multiplying the modified block with DCT matrix on the
left and transpose of DCT matrix on its right.
• Each block is then compressed through quantization.
• Quantized matrix is then entropy encoded.
• Compressed image is reconstructed through reverse process.
• Inverse DCT is used for decompression.
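The steps above can be sketched for a single 8×8 block. T is an orthonormal DCT matrix, Q50 is the standard JPEG quality-50 luminance quantization table, and entropy coding is omitted; the level shift by 128 follows the second bullet:

```python
import numpy as np

N = 8
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
# Orthonormal 8x8 DCT matrix.
T = np.where(i == 0, np.sqrt(1.0 / N),
             np.sqrt(2.0 / N) * np.cos((2 * j + 1) * i * np.pi / (2 * N)))

# Standard JPEG luminance quantization table (quality level 50).
Q50 = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
                [12, 12, 14, 19, 26, 58, 60, 55],
                [14, 13, 16, 24, 40, 57, 69, 56],
                [14, 17, 22, 29, 51, 87, 80, 62],
                [18, 22, 37, 56, 68, 109, 103, 77],
                [24, 35, 55, 64, 81, 104, 113, 92],
                [49, 64, 78, 87, 103, 121, 120, 101],
                [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def compress_block(block):
    shifted = block - 128.0        # shift 0..255 into -128..127
    coeffs = T @ shifted @ T.T     # 2-D DCT: T on the left, T transpose on the right
    return np.round(coeffs / Q50)  # quantization: most AC terms become 0

def decompress_block(q):
    coeffs = q * Q50               # dequantize
    shifted = T.T @ coeffs @ T     # inverse 2-D DCT
    return np.clip(np.round(shifted + 128.0), 0, 255)
```

A perfectly flat block survives the round trip unchanged; textured blocks lose only high-frequency detail.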
6.1 Quantization
Quantization is achieved by compressing a range of values to a single quantum value. When the number of discrete symbols in a given stream is reduced, the stream becomes more compressible. A quantization matrix is used in combination with the DCT coefficient matrix to carry out the transformation. Quantization is the step where most of the compression takes place; the DCT itself hardly compresses the image, because it is almost lossless. Quantization makes use of the fact that higher-frequency components are less important than low-frequency components. It allows varying levels of image compression and quality through selection of specific quantization matrices: quality levels ranging from 1 to 100 can be selected, where 1 gives the poorest image quality and highest compression, while 100 gives the best quality and lowest compression. As a result, the quality-to-compression trade-off can be selected to meet different needs. The JPEG committee suggests the matrix with quality level 50 as the standard matrix; quantization matrices for other quality levels are obtained by scalar multiplications of the standard matrix. Quantization is performed by dividing the transformed image matrix by the quantization matrix and rounding off the values of the resultant matrix. In the resultant matrix, coefficients situated near the upper-left corner have lower frequencies. The human eye is more sensitive to lower frequencies, so higher frequencies are discarded and the lower frequencies are used to reconstruct the image [18].
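The scalar-multiplication rule for other quality levels is not spelled out here; one widely used recipe (the one in the IJG JPEG library) scales the quality-50 table as follows (a sketch, assuming that recipe):

```python
import numpy as np

def quant_matrix(q50, quality):
    """Scale a quality-50 quantization table to another 1..100 quality level
    (IJG-style recipe; lower quality -> larger divisors -> more compression)."""
    quality = min(100, max(1, quality))
    scale = 5000.0 / quality if quality < 50 else 200.0 - 2.0 * quality
    q = np.floor((q50 * scale + 50.0) / 100.0)
    return np.clip(q, 1, 255)  # divisors stay in a sensible 8-bit range
```

With this recipe quality 50 returns the table unchanged, while quality 100 yields an all-ones table (least quantization, lowest compression).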
6.2. Entropy Encoding
After quantization, most of the high-frequency coefficients are zero. To exploit this large number of zeros, a zig-zag scan of the matrix is used, yielding long strings of zeros [18].
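The zig-zag order can be generated by sorting the block indices by anti-diagonal, alternating direction on odd and even diagonals (a small illustrative sketch):

```python
def zigzag_order(n=8):
    """Index pairs of an n x n block in zig-zag order (low frequencies first)."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],                      # anti-diagonal
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

# Scanning a quantized block in this order groups the surviving low-frequency
# coefficients at the front and the trailing zeros into one long run.
order = zigzag_order(4)
# order begins (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ...
```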
To evaluate the performance of the proposed scheme [13], the 2-D DCT is applied to the Lena image (352×352) as a test image. The DCT is applied to the Lena image with scaling factors 2, 4, and 8, and the results are observed and compared with various compression methods. We used the Peak Signal-to-Noise Ratio (PSNR) and the Mean Square Error (MSE) of the compressed image; PSNR is often used as a quality measurement between the original and compressed image.
To compute the PSNR, first calculate the mean squared error using the following equation:

    MSE = (1 / (M·N)) SUM_{m} SUM_{n} [x(m, n) - y(m, n)]²

where x(m, n) and y(m, n) are the two images of size M×N; in this case x is the original image and y is the compressed image. The PSNR is then obtained as

    PSNR = 10 log10(255² / MSE) dB
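Both metrics can be computed directly from these definitions (a minimal sketch; 255 is the peak value for 8-bit grayscale):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two equal-size grayscale images."""
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    e = mse(x, y)
    return float("inf") if e == 0 else 10.0 * np.log10(peak * peak / e)
```

For example, two images differing by a constant 10 everywhere give MSE = 100 and PSNR ≈ 28.1 dB.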
Figure 4: Original image
Figure 5: Scaling factor = 2
Figure 6: Scaling factor = 4
Figure 7: Scaling factor = 8
Table 1: MSE and PSNR for the tested scaling factors

              Scaling factor = 2    Scaling factor = 4    Scaling factor = 8
    MSE             48.6                  43.3                  39.0
    PSNR            0.89                  3.03                  8.2
7. Conclusion
From the above experiments, a high compression ratio and better image quality were achieved, improving on existing methods. This paper has concentrated on the development of an efficient and effective algorithm for still image compression: a fast compression algorithm using the 2-D DCT. From the results it is observed that the encoding time is reduced, with little degradation in image quality, compared to existing methods. The compression ratio is also increased when comparing the proposed method with other methods. Our future work involves improving image quality by increasing the PSNR value and lowering the MSE value.
References
[1] Introduction to Data Compression, Khalid Sayood, Third Edition, Morgan Kaufmann Series in Multimedia Information and Systems.
[2] W. Yan, P. Hao, C. Xu, ”Matrix factorization for fast DCT algorithms”,IEEE International
Conference on Acoustics, Speech, and Signal Processing, vol. 3, 2006.
[3] http://vedyadhara.ignou.ac.in/wiki/images/5/58/B2U1mcs-053.pdf
[4] Mathematical Elements for Computer Graphics, David F. Rogers, J. Alan Adams, McGraw-Hill Science/Engineering/Math; 2nd edition (August 1, 1989).
[5] Computer Graphics, Donald Hearn, M. Pauline Baker, Prentice-Hall, 1986.
[6] http://en.wikipedia.org/wiki/Data_compression
[7] Andrew B. Watson, NASA Ames Research, “Image Compression Using the Discrete Cosine
Transform”, Mathematica Journal, 4(1),1994, p. 81-88.
[8] Nageswara Rao Thota and Srinivasa Kumar Devireddy, "Image Compression Using Discrete Cosine Transform", Georgian Electronic Scientific Journal: Computer Science and Telecommunications, 2008, No.3(17).
[9] Swastik Das and Rashmi Ranjan Sethy, "A Thesis on Image Compression using Discrete Cosine Transform and Discrete Wavelet Transform", guided by Prof. R. Baliarsingh, Dept. of Computer Science & Engineering, National Institute of Technology, Rourkela.
[10] E. Feig, S. Winograd, "Fast algorithms for the discrete cosine transform", IEEE Transactions on Signal Processing, vol. 40, no. 9, September 1992.
[11] N. Ahmed, T. Natarajan, and K.R. Rao,”Discrete Cosine Transform”, IEEE Trans. Computers, 90-93,
Jan 1974.
[12] Wallace, G. 1991. The JPEG still picture compression standard. Communications of the ACM 34(4): 30-44.
[13] Chih-chang chen, Oscal T-C. Chen “A Low complexity computation scheme of discrete cosine
transform and quantization with adaptation to block content”, Dept of Electrical Engineering, 2000
IEEE.
[14] Chih-chang chen, Oscal T-C. Chen “signal and Media Laboratories”, Dept of Electrical Engineering,
2000 IEEE.
[15] Yung-Gi Wu, “Medical Image compression by sampling DCT coefficient”, 2002 IEEE.
[16] V. Kober, “Fast algorithm for the computation of sliding discrete Hartley transform”, IEEE
Transactions on Signal Processing, vol.55, Issue 6, pp.2937-2944, June, 2007.
[17] Rafael C. Gonzalez and Richard E. Woods, “Digital Image Processing”, 2nd Edition, Pearson
Education, ISBN-81-7808-629-8, 2002.
[18] Ken Cabeen and Peter Gent, "Image Compression and the Discrete Cosine Transform", Math 45, College of the Redwoods.