Design and Implementation of VLSI Architecture for Image Scaling Processor (IJMER)
In many digital image processing applications, images must be scaled up or down, either for processing or for display on screens of different sizes. In this paper, we implement a low-cost, seven-stage VLSI architecture based on an edge-oriented area-pixel scaling technique, which achieves better visual quality than previous methods. Synthesized in TSMC 0.18-µm technology, the image scaling processor reaches a processing rate of about 200 MHz.
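As a point of reference for area-pixel scaling, the core averaging operation can be sketched in a few lines. This is a plain box-average downscaler for illustration only, not the paper's edge-oriented method; the function name and the integer-factor restriction are assumptions:

```python
import numpy as np

def area_downscale(img, factor):
    """Downscale a grayscale image by an integer factor using
    area-pixel (box) averaging: each output pixel is the mean of a
    factor x factor block of input pixels."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    # Crop so the dimensions divide evenly, then average each block.
    blocks = img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = area_downscale(img, 2)   # 2x2 output of 2x2 block means
```

The edge-oriented technique in the paper refines this by weighting pixels according to local edge direction, which plain block averaging ignores.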
DTAM: Dense Tracking and Mapping in Real-Time, Robot Vision Group (Lihang Li)
These are the slides from my group-meeting report on DTAM; I hope they help anyone who wants to implement DTAM and needs to understand it deeply.
An efficient VLSI architecture for barrel distortion correction in surveillance camera images is presented. The distortion correction model is based on the least-squares estimation method. To reduce computing complexity, an odd-order polynomial is used to approximate the back-mapping expansion polynomial. By algebraic transformation, the approximated polynomial becomes a monomial form that can be solved by Horner's algorithm. The proposed VLSI architecture achieves a frequency of 218 MHz with 1490 logic elements in 0.18 µm technology. Compared with previous techniques, the circuit reduces both the hardware cost and the memory usage.
Time Multiplexed VLSI Architecture for Real-Time Barrel Distortion Correction... (ijsrd.com)
A low-cost very large scale integration (VLSI) implementation of real-time correction of barrel distortion for video-endoscopic images is presented in this paper. The correction model is based on least-squares estimation. To decrease computing complexity, we use an odd-order polynomial to approximate the back-mapping expansion polynomial. By algebraic transformation, the approximated polynomial becomes a monomial form that can be solved by Horner's algorithm. Exploiting the iterative character of Horner's algorithm, hardware cost and memory requirements are conserved through a time-multiplexed design. In addition, a simplified linear-interpolation architecture further reduces computing resources and silicon area. The VLSI architecture contains 13.9 K gates in a 0.18 µm CMOS process. Compared with some existing distortion correction techniques, this design reduces hardware cost by at least 69% and memory requirements by 75%.
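The monomial evaluation step that both barrel-correction abstracts describe can be sketched with Horner's rule. The coefficient values below are made up for illustration; only the general shape of an odd-order back-mapping polynomial and its evaluation by Horner's algorithm are taken from the abstracts:

```python
def horner(coeffs, x):
    """Evaluate a polynomial with Horner's rule.
    coeffs are ordered highest degree first: c[0]*x^n + ... + c[n],
    so each step needs only one multiply and one add."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# An odd-order back-mapping polynomial a1*r + a3*r^3 + a5*r^5 can be
# factored as r * (a1 + a3*u + a5*u^2) with u = r*r, so only the even
# part is run through Horner's loop (coefficient values are made up).
a1, a3, a5 = 1.0, 0.02, 0.001
r = 2.0
u = r * r
corrected = r * horner([a5, a3, a1], u)   # = a1*r + a3*r^3 + a5*r^5
```

The iterative multiply-accumulate structure is what the time-multiplexed hardware reuses: one multiplier-adder pair evaluates the whole polynomial over successive cycles.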
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION (ecij)
Histogram equalization is a nonlinear technique for adjusting the contrast of an image using its histogram. It can shift the brightness of a grayscale image away from the mean brightness of the original. There are various histogram equalization techniques, such as Histogram Equalization, Contrast Limited Adaptive Histogram Equalization, Brightness Preserving Bi-Histogram Equalization, Dualistic Sub-Image Histogram Equalization, Minimum Mean Brightness Error Bi-Histogram Equalization, Recursive Mean Separate Histogram Equalization, and Recursive Sub-Image Histogram Equalization. In this paper, the histogram equalization approach for gray-level images is extended to colour images. The acquired image is converted to HSV (Hue, Saturation, Value) and decomposed into two parts by an exposure threshold, and the two parts are equalized independently. Over-enhancement is controlled by a clipping threshold. The performance of the enhanced image is measured by its entropy and contrast.
A Comparative Study of Histogram Equalization Based Image Enhancement Techniq... (Shahbaz Alam)
Four widely used histogram equalization techniques for image enhancement, namely GHE, BBHE, DSIHE, and RMSHE, are discussed. Some basic definitions and notation are also included. All analysis is done in MATLAB. Pictures are taken from the book "Digital Image Processing" by Rafael C. Gonzalez and Richard E. Woods. The slides were prepared for my B.Sc. project.
At the end of this lesson, you should be able to:
describe spatial resolution
describe intensity resolution
identify the effect of aliasing
describe image interpolation
describe relationships among the pixels
Paper introduction: "DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real..." (Ken Sakurada)
An introduction to the CVPR 2015 Best Paper Award winner:
"DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-Time"
Richard A. Newcombe, Dieter Fox, Steven M. Seitz
If you notice any issues with the content, please contact the email address given in the slides.
Inpainting refers to the art of restoring lost parts of an image and reconstructing them from background information; that is, image inpainting is the process of reconstructing lost or deteriorated parts of images using information from the surrounding areas. In fine art museums, inpainting of degraded paintings is traditionally carried out by professional artists and is usually very time-consuming. The purpose of inpainting is to reconstruct missing regions in a visually plausible manner, so that the result seems reasonable to the human eye. Several approaches have been proposed.
This paper gives an overview of different image inpainting techniques. The work covers PDE-based and texture-synthesis-based inpainting algorithms and presents a brief comparative survey of these two techniques.
In this project we have implemented a tool to inpaint selected regions of an image. Inpainting refers to the art of restoring lost parts of an image and reconstructing them from background information. The tool provides a user interface in which the user can open an image and select the parts to reconstruct; the tool then automatically inpaints the selected area from the background information, and the result can be saved. The inpainting is based on the exemplar-based approach, whose basic aim is to find examples (i.e., patches) in the image and replace the lost data with them. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text such as dates and subtitles; and the removal of entire objects from the image, such as microphones or wires, in special effects.
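The exemplar search step can be sketched as a brute-force sum-of-squared-differences scan over candidate patches. This is a toy illustration, not the project's actual code; the function name, the grayscale array, and the boolean mask of the missing region are all assumptions:

```python
import numpy as np

def best_patch(img, mask, target_tl, size):
    """Find the source patch that best matches the patch at top-left
    corner target_tl, comparing only pixels known (unmasked) in the
    target -- a toy version of the exemplar search in exemplar-based
    inpainting."""
    ty, tx = target_tl
    tgt = img[ty:ty+size, tx:tx+size].astype(float)
    known = ~mask[ty:ty+size, tx:tx+size]        # pixels we can compare
    best, best_ssd = None, np.inf
    h, w = img.shape
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            # A candidate must lie entirely outside the missing region.
            if mask[y:y+size, x:x+size].any():
                continue
            cand = img[y:y+size, x:x+size].astype(float)
            ssd = ((cand - tgt)[known] ** 2).sum()
            if ssd < best_ssd:
                best, best_ssd = (y, x), ssd
    return best
```

A full exemplar-based inpainter would additionally prioritize patches on the fill front and copy the winning patch's pixels into the masked region, repeating until the hole is filled.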
Histogram equalization is a contrast-adjustment method in image processing that uses the image's histogram. It can be used to improve the visual appearance of an image: peaks in the histogram (indicating commonly used grey levels) are widened, while the valleys are compressed.
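The mechanism can be sketched for an 8-bit grayscale image. This is a minimal reference implementation of classical histogram equalization, not tied to any particular paper above:

```python
import numpy as np

def equalize(img):
    """Classical histogram equalization for an 8-bit grayscale image:
    map each grey level through the normalized cumulative histogram,
    which widens heavily used levels and compresses sparse ones."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first nonzero CDF value
    # Standard mapping: scale the CDF to the full [0, 255] range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

img = np.array([[50, 50], [51, 200]], dtype=np.uint8)
out = equalize(img)   # grey levels spread to 0, 128, 255
```

Because the mapping follows the cumulative histogram, frequently occurring grey levels end up far apart in the output range while rarely used levels are pushed together.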
The International Journal of Engineering and Science (The IJES), theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
A digital image forensic approach to detect whether an image has been seam carved is investigated herein. Seam carving is a content-aware image retargeting technique that preserves the semantically important content of an image while resizing it; the same technique, however, can be used for malicious tampering. Eighteen energy-, seam-, and noise-related features defined by Ryu [1] are produced using Sobel's [2] gradient filter and Rubinstein's [3] forward-energy criterion enhanced with image gradients. An extreme gradient boosting classifier [4] is trained to make the final decision. Experimental results show that the proposed approach improves detection accuracy by 5 to 10% for seam-carved images with different scaling ratios when compared with other state-of-the-art methods.
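The Sobel gradient energy that such features build on can be sketched as follows. This is a minimal e1-style energy map with zero padding, an illustration rather than the paper's feature extractor:

```python
import numpy as np

def sobel_energy(img):
    """Per-pixel gradient-magnitude energy map, the e1 energy used by
    seam carving: |dI/dx| + |dI/dy| with 3x3 Sobel kernels (image
    borders are handled by zero padding for brevity)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                     # standard vertical Sobel kernel
    pad = np.pad(img.astype(float), 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Correlate by accumulating shifted windows (sign flips from true
    # convolution do not matter under the absolute value).
    for i in range(3):
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.abs(gx) + np.abs(gy)
```

Seam carving removes the connected path of pixels with the lowest total energy; a forensic detector looks for the statistical traces that repeated removals leave in maps like this one.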
Object Shape Representation by Kernel Density Feature Points Estimator (cscpconf)
This paper introduces an object shape representation using the Kernel Density Feature Points Estimator (KDFPE). In this method, we obtain the density of feature points within defined rings around the centroid of the image, and the kernel density estimator is then applied to the resulting vector. KDFPE is invariant to translation, scale, and rotation. The representation shows an improved retrieval rate compared with the Density Histogram of Feature Points (DHFP) method; an analytic justification of the method is given, and results are compared with DHFP to demonstrate its robustness.
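The ring-density idea can be sketched as follows. This is a toy illustration of counting feature points in concentric rings around the centroid; the function name and normalization are assumptions, and KDFPE itself additionally smooths the result with kernel density estimation:

```python
import numpy as np

def ring_density(points, centroid, n_rings, r_max):
    """Count feature points falling in concentric rings around the
    centroid and normalize by the point count, giving a vector that is
    unchanged by rotating the points about the centroid."""
    d = np.linalg.norm(points - centroid, axis=1)   # radial distances
    edges = np.linspace(0, r_max, n_rings + 1)      # ring boundaries
    hist, _ = np.histogram(d, bins=edges)
    return hist / max(len(points), 1)
```

Rotation invariance follows because only radial distances enter the descriptor; scale invariance would additionally require normalizing distances, e.g. by the maximum radius.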
Optimized linear spatial filters implemented in FPGA (IOSRJVSP)
Linear spatial filters (LSF) are used to filter digital images for blurring, noise reduction, detail enhancement, and similar purposes. Realizing an LSF faces the key problem of the large number of operations required for its computation. This paper describes an approach to optimizing LSF by means of parallel algorithms and their hardware implementation on FPGA. A model and an algorithm based on partial sums for calculating the filtered pixels are presented, and criteria for comparing the different types of linear filters are defined. A schematic diagram of an FPGA-based DSP operational block is shown; VHDL is used for the hardware design. Studies comparing the partial-sums and non-partial-sums methods of filtering show that the methods employing partial sums reduce the number of operations to the size of the window (3, 5, 7, ...). These FPGA-based LSF are suitable for applications using threshold detection, edge detection, or image detail enhancement.
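The partial-sums idea can be illustrated in software with a summed-area table, which likewise replaces the k×k additions per pixel with a constant number of operations. This is a sketch assuming an odd window size and zero padding, not the paper's VHDL design:

```python
import numpy as np

def box_filter_partial_sums(img, k):
    """k x k mean filter (k odd) built from running partial sums: after
    the summed-area table is formed, each window sum costs only four
    table lookups instead of k*k additions."""
    pad = k // 2
    # Zero-pad by `pad` on each side, plus one extra leading row and
    # column of zeros so the inclusion-exclusion indices line up.
    p = np.pad(img.astype(float), ((pad + 1, pad), (pad + 1, pad)))
    sat = p.cumsum(0).cumsum(1)           # summed-area table
    # Window sum via inclusion-exclusion: D - B - C + A.
    out = sat[k:, k:] - sat[:-k, k:] - sat[k:, :-k] + sat[:-k, :-k]
    return out / (k * k)
```

The hardware version in the paper keeps running row sums in registers for the same effect: each new output pixel updates the previous sum instead of recomputing the whole window.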
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Many applications such as robot navigation, defense, medical imaging, and remote sensing perform processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, at both large and small scales, from the source images; guided filtering is employed for this extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method achieves better fusion performance on all sets of images, clearly indicating the feasibility of the approach.
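A one-level version of the base/detail fusion described above can be sketched as follows. A box blur stands in for the Gaussian low-pass and the choose-max detail rule stands in for the Laplacian-pyramid merge; this is an illustration, not the paper's guided-filter method:

```python
import numpy as np

def box_blur(img):
    """3x3 mean blur (zero-padded borders), standing in for the
    Gaussian low-pass step of a one-level pyramid."""
    p = np.pad(img.astype(float), 1)
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse(a, b):
    """One-level large/small-scale fusion: average the base (low-pass)
    layers and, per pixel, keep the detail layer with the larger
    magnitude -- the choose-max rule of Laplacian-pyramid fusion."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a, det_b = a - base_a, b - base_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return (base_a + base_b) / 2.0 + detail
```

A multi-level pyramid repeats this decomposition on downsampled base layers, so coarse structure and fine detail are fused at their own scales.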
Modified adaptive bilateral filter for image contrast enhancement (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial Domains (CSCJournals)
Noise is one of the most widespread problems present in nearly all imaging applications. In spite of the sophistication of the recently proposed methods, most denoising algorithms have not yet attained a desirable level of applicability. This paper proposes a two-stage algorithm for speckle noise reduction jointly in the wavelet and spatial domains. At the first stage, the optimal parameter value of the spatial speckle reduction filter is estimated, based on edge pixel statistics and noise variance. Then the optimized filter is used at the second stage to additionally smooth the approximation image of the wavelet sub-band. A complexity reduction algorithm for wavelet decomposition is also proposed. The obtained results are highly encouraging in terms of image quality which paves the way towards the reinforcement of the proposed algorithm for the performance enhancement of the Block Matching and 3D Filtering algorithm tackling multiplicative speckle noise.
A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general this task is impossible, because there is no way to reconstruct a signal during the times it is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to reconstruct it perfectly from a series of measurements, and over time engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem: if the signal's highest frequency is less than half of the sampling rate, the signal can be reconstructed perfectly. The main idea is that, with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct it. Sparse sampling (also known as compressive sampling or compressed sensing) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than the Shannon–Nyquist sampling theorem requires. Recovery is possible under two conditions [1]: sparsity, which requires the signal to be sparse in some domain, and incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals. This opens the possibility of compressed data-acquisition protocols that directly acquire just the important information. Sparse sampling (CS) is a fast-growing area of research: it sidesteps an extravagant acquisition process by measuring fewer values from which to reconstruct the image or signal. Sparse sampling has been adopted successfully in various fields of image processing and has proved its efficiency. Some image processing applications, such as face recognition, video encoding, image encryption, and reconstruction, are presented here.
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUEScscpconf
In the first study [1], a combination of K-means, watershed segmentation method, and Difference In Strength (DIS) map were used to perform image segmentation and edge detection
tasks. We obtained an initial segmentation based on K-means clustering technique. Starting from this, we used two techniques; the first is watershed technique with new merging
procedures based on mean intensity value to segment the image regions and to detect their boundaries. The second is edge strength technique to obtain accurate edge maps of our images without using watershed method. In this technique: We solved the problem of undesirable over segmentation results produced by the watershed algorithm, when used directly with raw data images. Also, the edge maps we obtained have no broken lines on entire image. In the 2nd study level set methods are used for the implementation of curve/interface evolution under various forces. In the third study the main idea is to detect regions (objects) boundaries, to isolate and extract individual components from a medical image. This is done using an active contours to detect regions in a given image, based on techniques of curve evolution, Mumford–Shah functional for segmentation and level sets. Once we classified our images into different intensity regions based on Markov Random Field. Then we detect regions whose boundaries are not necessarily defined by gradient by minimize an energy of Mumford–Shah functional forsegmentation, where in the level set formulation, the problem becomes a mean-curvature which will stop on the desired boundary. The stopping term does not depend on the gradient of the image as in the classical active contour. The initial curve of level set can be anywhere in the image, and interior contours are automatically detected. The final image segmentation is one
closed boundary per actual region in the image.
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Seam Carving Approach for Multirate Processing of Digital ImagesIJLT EMAS
This paper presents a new approach called S eam Carving for multirate signal processing of digital images. Increasing and decreasing the sizes of digital images are common place in day-today image processing. These methods involve long procedures and sometimes consume more time for getting implemented. Whereas, using the S eam Carving method eradicates the excess time involved in upsampling and downsampling a digital image considerably as this method is straightforward and simple to implement. This method comes in handy when we are dealing with large medical images and remotely sensed images. This technique is applied on standard images and its performance is analyzed. The entire work was implemented using Matlab R2017a software package.
The International Journal of Engineering and Science (IJES)
||Volume|| 2 ||Issue|| 5 ||Pages|| 46-56 ||2013||
ISSN(e): 2319-1813  ISSN(p): 2319-1805
www.theijes.com  The IJES  Page 46

A New JPEG Image Scaling Algorithm Based on the Area-Pixel Model

Sourabh Jain, M.Tech, SET, Jain University
Under the guidance of Prof. D. Suresh, Dept. of ECE, SET, Jain University
--------------------------------------------------ABSTRACT--------------------------------------------------------
Image scaling is a very important technique and has been widely used in many image processing applications. In this paper, we present an edge-oriented area-pixel scaling processor. To achieve the goal of low cost, the area-pixel scaling technique is implemented with a low-complexity VLSI architecture in our design. A simple edge-catching technique is adopted to preserve the image edge features effectively so as to achieve better image quality. Compared with the previous low-complexity techniques, our method performs better in terms of both quantitative evaluation and visual quality. The seven-stage VLSI architecture of our image scaling processor contains 10.4-K gate counts and yields a processing rate of about 200 MHz by using TSMC 0.18-μm technology.
--------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: 1 May 2013          Date of Publication: 5 June 2013
--------------------------------------------------------------------------------------------------------------------------------------
I. INTRODUCTION
Image scaling is widely used in many fields, ranging from consumer electronics to medical imaging. It is indispensable when the resolution of an image generated by a source device is different from the screen resolution of a target display. For example, we have to enlarge images to fit HDTV or to scale them down to fit the mini-size portable LCD panel. The simplest and most widely used scaling methods are the nearest-neighbor and bilinear techniques. In recent years, many efficient scaling methods have been proposed in the literature. According to the required computations and memory space, we can divide the existing scaling methods into two classes: lower-complexity and higher-complexity scaling techniques. The complexity of the former is very low and comparable to the conventional bilinear method. The latter yields visually pleasing images by utilizing more advanced scaling methods. In many practical real-time applications, the scaling process is included in end-user equipment, so a good lower-complexity scaling technique, which is simple and suitable for low-cost VLSI implementation, is needed. In this paper, we consider the lower-complexity scaling techniques only.
Kim et al. presented a simple area-pixel scaling method. It uses an area-pixel model instead of the common point-pixel model and takes a maximum of four pixels of the original image to calculate one pixel of the scaled image. By using the area coverage of the source pixels from the applied mask in combination with the difference of luminosity among the source pixels, Andreadis et al. [8] proposed a modified area-pixel scaling algorithm and its circuit to obtain better edge preservation. Both methods obtain better edge preservation but require about twice as many computations as the bilinear method. To achieve the goal of lower cost, we present an edge-oriented area-pixel scaling processor in this paper. The area-pixel scaling technique is approximated and implemented with a proper, low-cost VLSI circuit in our design. The proposed scaling processor can support floating-point magnification factors and preserves the edge features efficiently by taking into account the local characteristics of the available source pixels around the target pixel. Furthermore, it handles streaming data directly and requires only a small amount of memory: one line buffer rather than a full frame buffer. The experimental results demonstrate that the proposed design performs better than other lower-complexity image scaling methods in terms of both quantitative evaluation and visual quality. The seven-stage VLSI architecture for the proposed design was implemented and synthesized by using Verilog HDL and the Synopsys Design Compiler, respectively. In our simulation, the circuit can achieve 200 MHz with 10.4-K gate counts by using TSMC 0.18-μm technology. Since it can process one pixel per clock cycle, it is quick enough to process a video resolution of WQSXGA (3200 × 2048) at 30 f/s in real time.
This paper is organized as follows. In Section II, the area-pixel scaling technique is introduced briefly. Our method is presented in Section III. Section IV describes the proposed VLSI architecture in detail. Section V illustrates the simulation results and chip implementation. The conclusion is provided in Section VI.
II. AREA-PIXEL SCALING TECHNIQUE
A. An Overview of Area-Pixel Scaling Technique
Assume that the source image represents the original image to be scaled up/down and the target image represents the scaled image. The area-pixel scaling technique performs the scale-up/scale-down transformation by using the area-pixel model instead of the common point model. Each pixel is treated as one small rectangle rather than a point; its intensity is evenly distributed over the rectangular area. Fig. 1 shows an example of the image scale-up process of the area-pixel model, where a source image of 4×4 pixels is scaled up to a target image of 5×5 pixels. Obviously, the area of a target pixel is less than that of a source pixel. A window is placed on the current target pixel to calculate its estimated luminance value. As shown in Fig. 1(c), the number of source pixels overlapped by the current target pixel window is one, two, or a maximum of four. Let the luminance values of the four source pixels overlapped by the window of the current target pixel at coordinate (k,l) be denoted as FS(m,n), FS(m+1,n), FS(m,n+1), and FS(m+1,n+1), respectively. The estimated value of the current target pixel, denoted FT(k,l), can be calculated by weighted-averaging the luminance values of the four source pixels with their area coverage ratios as

FT(k,l) = Σ_{i=0..1} Σ_{j=0..1} [FS(m+i,n+j) × W(m+i,n+j)]   (1)

Fig. 1. (a) A source image of 4×4 pixels. (b) A target image of 5×5 pixels. (c) Relations of the target pixel and source pixels.
where W(m,n), W(m+1,n), W(m,n+1), and W(m+1,n+1) represent the weight factors of the neighboring source pixels for the current target pixel at (k,l). Assume that the regions of the four source pixels overlapped by the current target pixel window are denoted as A(m,n), A(m+1,n), A(m,n+1), and A(m+1,n+1), respectively, and the area of the target pixel window is denoted as Asum. The weight factors of the four source pixels can be given as

[W(m,n), W(m+1,n), W(m,n+1), W(m+1,n+1)] = [A(m,n)/Asum, A(m+1,n)/Asum, A(m,n+1)/Asum, A(m+1,n+1)/Asum]   (2)

where Asum = A(m,n) + A(m+1,n) + A(m,n+1) + A(m+1,n+1).

Let the width and height of the overlapped region A(m,n) be denoted as left(k,l) and top(k,l), and the width and height of A(m+1,n+1) be denoted as right(k,l) and bottom(k,l), respectively, as shown in Fig. 1(c). Then, the areas of the overlapped regions can be calculated as

[A(m,n), A(m+1,n), A(m,n+1), A(m+1,n+1)] = [left(k,l)×top(k,l), right(k,l)×top(k,l), left(k,l)×bottom(k,l), right(k,l)×bottom(k,l)]   (3)

Obviously, many floating-point operations are needed to determine the four parameters left(k,l), top(k,l), right(k,l), and bottom(k,l) if the direct area-pixel implementation is adopted.
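The weighted-average computation of (1)–(3) can be sketched in a few lines of Python. The function name and argument layout are illustrative only, not the paper's hardware interface:

```python
# Sketch of Eqs. (1)-(3): one target pixel's value is the area-weighted
# average of the (up to) four source pixels under its window.
def area_pixel_value(fs, m, n, left, top, winw, winh):
    """fs: 2-D source image (row n, column m); (m, n): top-left overlapped
    source pixel; left/top: overlap width/height with pixel (m, n), in grids;
    winw/winh: window width/height in grids."""
    right = winw - left          # Eq. (7), see Section III
    bottom = winh - top          # Eq. (8), see Section III
    # Eq. (3): areas of the four overlapped regions
    areas = [left * top, right * top, left * bottom, right * bottom]
    a_sum = float(sum(areas))    # Asum = winw * winh
    # Eq. (2) weights combined with the Eq. (1) weighted average
    pixels = [fs[n][m], fs[n][m + 1], fs[n + 1][m], fs[n + 1][m + 1]]
    return sum(p * a / a_sum for p, a in zip(pixels, areas))
```

When the window lies entirely inside one source pixel (left = winw, top = winh), three weights collapse to zero and the target pixel simply copies that source pixel, matching the one-pixel-overlap case of Fig. 1(c).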
III. THE PROPOSED LOW-COMPLEXITY ALGORITHM
Observing (1)–(3), we know that the direct implementation of area-pixel scaling requires extensive floating-point computations for the current target pixel at (k,l) to determine the four parameters left(k,l), top(k,l), right(k,l), and bottom(k,l). In the proposed processor, we use an approximate technique suitable for low-cost VLSI implementation to achieve that goal properly. We modify (3) and implement the calculation of the areas of the overlapped regions as

[A′(m,n), A′(m+1,n), A′(m,n+1), A′(m+1,n+1)] = [left′(k,l)×top′(k,l), right′(k,l)×top′(k,l), left′(k,l)×bottom′(k,l), right′(k,l)×bottom′(k,l)]   (4)

where left′(k,l), top′(k,l), right′(k,l), and bottom′(k,l) are all 6-bit integers, given as

[left′(k,l), top′(k,l), right′(k,l), bottom′(k,l)] = Appr[left(k,l), top(k,l), right(k,l), bottom(k,l)]   (5)

where Appr represents the approximate operator adopted in our design, which will be explained in detail later. To obtain better visual quality, a simple low-cost edge-catching technique is employed to preserve the edge features effectively by taking into account the local characteristics of the available source pixels around the target pixel. The final areas of the overlapped regions are given as

[A″(m,n), A″(m+1,n), A″(m,n+1), A″(m+1,n+1)] = Tune([A′(m,n), A′(m+1,n), A′(m,n+1), A′(m+1,n+1)])   (6)

where Tune is the tuning operator that adjusts the areas of the four overlapped regions according to the edge features obtained by our edge-catching technique. By applying (6) to (1) and (2), we can determine the estimated luminance value of the current target pixel. Fig. 2 shows the pseudo code of our scaling method. In the rest of this section, the approximate technique adopted in our design is introduced first. Then we describe the low-cost edge-catching technique in detail.
Fig. 3. Example of image enlargement for our method. (a) A source image of SW×SH pixels. (b) A target image of TW×TH pixels. (c) Relations of the target pixels and source pixels.

A. The Approximate Technique
Fig. 3 shows an example of our image scale-up process, where a source image of SW×SH pixels is scaled up to a target image of TW×TH pixels and every pixel is treated as one rectangle. As shown in Fig. 3(c), we align the centers of the four corner rectangles (pixels) in the source and target images. For simple hardware implementation, each rectangular target pixel is treated as 2^n × 2^n grids of uniform size (n is 3 for Fig. 3). Assume that the width and the height of the target pixel window are denoted as winw and winh; then the area of the current target pixel window (Asum) can be calculated as winw × winh. In our design, the values of winw and winh are determined based on the current magnification factors, mf_w for the x direction and mf_h for the y direction, where mf_w = TW/SW and mf_h = TH/SH. In the case of image enlargement, winw = 2^n when 100% < mf_w < 200%. When 200% < mf_w < 400%, winw is enlarged to 2^(n+1), and so on. In the case of image reduction, winw is reduced to 2^(n-1) when 50% < mf_w < 100%. When 25% < mf_w < 50%, winw is 2^(n-2), and so on. In a similar way, we can determine the value of winh by using mf_h. In the design, the division operation in (2) can thus be implemented simply with a shifter. As shown in Fig. 3(c), the relationships among left′(k,l), top′(k,l), right′(k,l), and bottom′(k,l) can be denoted as follows:

right′(k,l) = winw − left′(k,l)   (7)
bottom′(k,l) = winh − top′(k,l)   (8)
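The window-sizing rule above can be sketched as follows, assuming (as the text suggests) that winw doubles each time mf_w crosses a power of two; the behavior at exact powers of two is not specified in the text, and the function name is illustrative:

```python
import math

# Sketch of the winw selection rule: for 2^i < mf_w < 2^(i+1),
# winw = 2^(n+i), so the window halves/doubles with the magnification.
def window_width(mf_w, n=3):
    """mf_w = TW / SW; each target pixel is a 2^n x 2^n grid (n = 3 in Fig. 3)."""
    return 2 ** (n + math.floor(math.log2(mf_w)))
```

Because winw (and likewise winh) is always a power of two, the division by Asum = winw × winh in (2) reduces to a right shift, which is the point of this rule.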
As soon as left′(k,l) and top′(k,l) are determined, right′(k,l) and bottom′(k,l) can be calculated easily. Thus, we focus on finding the values of left′(k,l) and top′(k,l) only. In our design, left′(k,l) is calculated as

left′(k,l) = min(srcright(m,n) − winleft(k,l), winw)   (9)

where min represents the minimum operation, and winleft(k,l) represents the horizontal displacement (in the unit of grids) from the left boundary of the source image to the left side of the current pixel window at coordinate (k,l), which can be calculated as

winleft(k,l) = winleft(k−1,l) + 2^n   (10)

srcright(m,n) represents the horizontal displacement (in the unit of grids) from the left boundary of the source image to the right side of the top-left source pixel overlapped by the current pixel window at (k,l), which can be calculated as

srcright(m,n) = srcright(m−1,n) + sw + Tw   (11)

Fig. 4. Two possible cases for left′(k,l). (a) left′(k,l) = srcright(m,n) − winleft(k,l). (b) left′(k,l) = winw.
where sw is the width of a source pixel relative to a 2^n × 2^n target pixel and Tw is the regulating value used to reduce the accumulated error caused by rounding sw. Both sw and Tw are in the unit of grids. As shown in Fig. 4(a), if srcright(m,n) − winleft(k,l) is smaller than winw, the current pixel's left′(k,l) is equal to srcright(m,n) − winleft(k,l); otherwise, left′(k,l) is equal to winw, as shown in Fig. 4(b).
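The integer bookkeeping of (9)–(11) for one row can be sketched in software. This is an interpretation, not the paper's RTL: the regulation term Tw is omitted for clarity, and the rule used here for advancing srcright to the next source pixel is our assumption; all names are illustrative:

```python
# Simplified per-row sketch of Eqs. (9)-(11): winleft advances by 2^n per
# target pixel (Eq. 10), srcright advances by sw per source pixel
# (Eq. 11, with Tw = 0), and left' = min(srcright - winleft, winw) (Eq. 9).
def left_widths(TW, SW, n=3):
    winw = 2 ** n                                # enlargement, 100% < mf_w < 200%
    sw = round((TW - 1) / (SW - 1) * 2 ** n)     # Eq. (15), rounded to integer
    winleft = (sw - winw) // 2                   # initial winleft(0,0)
    srcright = sw                                # initial srcright(0,0)
    lefts = []
    for _ in range(TW):
        # move to the source pixel whose right edge lies past the window's left edge
        while srcright <= winleft:
            srcright += sw
        lefts.append(min(srcright - winleft, winw))  # Eq. (9)
        winleft += 2 ** n                            # Eq. (10)
    return lefts
```

Only integer additions, subtractions, and a comparison appear in the loop, which is what makes the hardware version cheap.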
Similarly, top′(k,l) is given as

top′(k,l) = min(srcbtm(m,n) − wintop(k,l), winh)   (12)

where wintop(k,l) represents the vertical displacement from the top boundary of the source image to the top side of the target pixel window at (k,l), which can be calculated as

wintop(k,l) = wintop(k,l−1) + 2^n   (13)

srcbtm(m,n) represents the vertical displacement from the top boundary of the source image to the bottom side of the top-left source pixel overlapped by the target pixel window at coordinate (k,l), which can be calculated as

srcbtm(m,n) = srcbtm(m,n−1) + sh + Th   (14)

where sh is the height of a source pixel in the unit of grids, and Th is the regulating value used to reduce the accumulated error caused by rounding sh. Initially, winleft(0,0) = (sw − winw)/2, srcright(0,0) = sw, wintop(0,0) = (sh − winh)/2, and srcbtm(0,0) = sh. All variables among (9)–(14) are integers. We use a few low-cost integer addition/subtraction operations rather than extensive floating-point multiplication/division computations to obtain the approximated values of left′(k,l) and top′(k,l). In the following paragraphs, the steps to determine Tw and Th are described. Since we set each target pixel as 2^n × 2^n grids, sw and sh can be denoted and calculated as follows:

sw = [(TW−1)/(SW−1)] × 2^n   (15)
sh = [(TH−1)/(SH−1)] × 2^n   (16)
In the design, both sw and sh are rounded to integers. The rule is that a fractional part of less than 0.5 is dropped, while a fractional part of 0.5 or more leads to rounding up to the next bigger integer. The former produces a rounding-down error and the latter produces a rounding-up error. Each kind of error accumulates and might cause the values of left′(k,l) or top′(k,l) to be incorrect. To reduce the accumulated rounding error, we adopt Tw and Th to regulate left′(k,l) and top′(k,l), respectively. There are two working modes in our processor. In normal mode, the accumulated rounding-up/down error of left′(k,l) is less than one grid, so no regulation is needed and Tw is set to zero. As soon as the accumulated rounding-up/down error of left′(k,l) is greater than or equal to one grid, the processor enters regulating mode and sets Tw to +1 or −1 accordingly. The same idea is applied to top′(k,l) and Th.
Let rw represent the number of regulating steps required for each row; it can be given as

rw = sw×(SW−1) − 2^n×(TW−1), if sw is rounded up to an integer
rw = 2^n×(TW−1) − sw×(SW−1), if sw is rounded down to an integer   (17)

In other words, there are rw times that Tw is set to +1 or −1 in each row. If sw is rounded down, the total sum of grids in the x direction in the target image is larger than that in the source image without regulation. Therefore, it is necessary to “compress” the target image by overlapping grids. We choose rw pixels in a row of the target image at regular intervals and shift each of them left by one grid to finish aligning. Fig. 5 shows an example of the image scale-up process where a source image of 8×8 pixels is scaled up to a target image of 11×11 pixels. According to (15), sw is [(11−1)/(8−1)] × 2^3 ≈ 11.43. Then sw is rounded down to the integer 11, and rw is 3. Therefore, we overlap three grids via shifting three target pixels left by one grid in this row to do the job of aligning. The accumulation effect of rounding errors can thus be reduced. On the contrary, if sw is rounded up, we need to “expand” the target image by inserting grids. We choose rw pixels in a row of the target image at regular intervals and shift each of them right by one grid to do aligning. Another example of the image scale-up process is one where a source image of 8×8 pixels is scaled up to a target image of 13×13 pixels. According to (15), sw is [(13−1)/(8−1)] × 2^3 ≈ 13.71. In this example, sw is rounded up to the integer 14, and rw is 2. Therefore, two grids are inserted via shifting two target pixels right by one grid in this row to do aligning. The same method is also applied to the vertical-direction process. Using (7)–(17), we can realize the approximate operator Appr in (5) with low-complexity computations.
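The two worked examples above (8×8 to 11×11 and 8×8 to 13×13) can be checked with a short sketch of (15) and (17); the function name and the returned mode label are illustrative, not from the paper:

```python
# Sketch of Eqs. (15) and (17): the rounding direction of sw decides whether
# grids must be overlapped (sw rounded down) or inserted (sw rounded up),
# rw times per row.
def regulation_count(TW, SW, n=3):
    exact = (TW - 1) / (SW - 1) * 2 ** n          # exact sw from Eq. (15)
    sw = int(exact + 0.5)                         # round half up, as in the paper
    rw = abs(2 ** n * (TW - 1) - sw * (SW - 1))   # Eq. (17), both cases via abs
    mode = "insert" if sw > exact else "overlap"
    return sw, rw, mode
```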
B. The Low-Cost Edge-Catching Technique
In the design, we take the sigmoidal signal [15] as the image edge model for image scaling. Fig. 7(a)
shows an example of the 1-D sigmoidal signal. Assume that the pixel to be interpolated is located at coordinate
k and its nearest available neighbors in the image are located at coordinate m for the left and m+1 for the right.
Let s = k − m and let E(k) represent the luminance value of the pixel at coordinate k. If the estimated value Ê(k) of
the pixel to be interpolated is determined by linear interpolation, it can be calculated as

Ê(k) = (1 − s) × E(m) + s × E(m+1)    (18)
Fig 5 Local characteristic of the data in the neighbourhood of k. (a) An image edge model. (b) Local characteristics.
As shown in Fig. 5(a), Ê(k) and E(k) might not match well. To solve the problem, we modify the distance s
to make Ê(k) approach E(k) for a better estimation.
Assume that the coordinates of the four nearest available neighbors around the current pixel are m−1, m, m+1, and m+2, respectively, as shown in Fig. 5(b). In our design, we define an evaluating parameter L to
estimate the local characteristic of the data in the neighbourhood of k. It is given as

L = │E(m+1) − E(m−1)│ − │E(m+2) − E(m)│    (19)
If the image data are sufficiently smooth and the luminance changes at object edges can be approximated by
sigmoidal functions, we can come to the following conclusion. L = 0 indicates symmetry, so s is unchanged. L > 0
indicates that the variation between E(m+1) and E(m−1) is quicker than that between E(m+2) and E(m). It
means that the edge is more homogeneous on the right side, so the pixel located at coordinate m+1 should affect the
interpolated value more than the pixel located at coordinate m does. Hence, we can increase s in order to
make the estimated value closer to the expected value. On the contrary, L < 0 indicates that the edge is more homogeneous
on the left side. Thus, we must decrease s to obtain a better estimation. Based on the above idea, we modify (18)
and calculate the estimated value of the current pixel as
Ê(k) = (1 − s′) × E(m) + s′ × E(m+1)    (20)

where s′ is calculated in a simple way and given as

s′ = s + L × (1 − s)/2^8, if L > 0
s′ = s + L × s/2^8,      if L < 0    (21)
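As a concrete illustration, the 1-D edge-catching interpolation of (18)–(21) can be sketched in software. The function name and array indexing below are illustrative choices, not part of the original design:

```python
def edge_catching_interp(E, m, s):
    """Estimate the pixel at k = m + s (0 <= s < 1) from the four
    neighbors E[m-1], E[m], E[m+1], E[m+2], following (19)-(21)."""
    # Evaluating parameter L of (19): compares the variation on the
    # right-hand side of k against that on the left-hand side.
    L = abs(E[m + 1] - E[m - 1]) - abs(E[m + 2] - E[m])
    # Modified distance s' of (21); dividing by 2^8 keeps the
    # adjustment small and maps to a simple shift in hardware.
    if L > 0:        # edge more homogeneous on the right: increase s
        s2 = s + L * (1 - s) / 2**8
    elif L < 0:      # edge more homogeneous on the left: decrease s
        s2 = s + L * s / 2**8
    else:            # symmetric neighborhood: s is unchanged
        s2 = s
    # Interpolation (20) with the modified distance
    return (1 - s2) * E[m] + s2 * E[m + 1]
```

On a purely linear ramp L = 0 and the result coincides with plain linear interpolation; near a sigmoid-like edge the estimate is pulled toward the more homogeneous side.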
Only a small amount of computation is required to catch the local characteristic of the current pixel. By using the
concept of the 1-D edge-catching technique shown in (19)–(21), we can tune the areas of the four overlapped regions
adaptively in the proposed 2-D scaling processor to obtain better image quality. Let LA represent the evaluating
parameter used to estimate the local characteristic of the current pixel at coordinate (k,l). If top′(k,l) is greater than or
equal to winh/2, it means that A′(m,n) is bigger than or equal to A′(m,n+1). Hence, the upper row (n) is more important
than the lower row (n+1) for catching edge features. Thus, LA is given as
LA = │E(m+1,n)-E(m-1,n)│-│E(m+2,n)-E(m,n)│ (22)
LA = 0 indicates symmetry, so A′(m,n) is unchanged. LA > 0 indicates that the variation between E(m+1,n) and
E(m−1,n) is quicker than that between E(m+2,n) and E(m,n). It means that the edge is more homogeneous on the
right-hand side, so we can increase A′(m+1,n) in order to make the estimated value close to the expected one.
On the contrary, LA < 0 indicates that the edge is more homogeneous on the left-hand side, thus we decrease
A′(m+1,n) to obtain a better estimate. Applying the above idea to (6), we can calculate the final areas of the
overlapped regions as
overlapped regions as
[A″(m,n), A″(m+1,n), A″(m,n+1), A″(m+1,n+1)]
= [A′(m,n) − LA×AC/2^8, A′(m+1,n) + LA×AC/2^8, A′(m,n+1), A′(m+1,n+1)]    (23)

where AC = A′(m,n) if LA > 0 and AC = A′(m+1,n) if LA < 0.
On the contrary, if top′(k,l) is less than winh/2, it means that A′(m,n) is smaller than A′(m,n+1). Hence, the
lower row (n+1) is more important than the upper row (n) for catching edge features. Thus, LA is given as

LA = │E(m+1,n+1) − E(m−1,n+1)│ − │E(m+2,n+1) − E(m,n+1)│    (24)
The final areas of the overlapped regions are given as
[A″(m,n), A″(m+1,n), A″(m,n+1), A″(m+1,n+1)]
= [A′(m,n), A′(m+1,n), A′(m,n+1) − LA×AC/2^8, A′(m+1,n+1) + LA×AC/2^8]    (25)
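Assuming that AC in (25) is defined analogously to (23), using the lower-row areas (an assumption, as the text does not restate it), the 2-D area-tuning step of (22)–(25) can be sketched as follows; the function and variable names are illustrative:

```python
def tune_areas(A, E, m, n, upper_dominant):
    """Tune the four overlapped areas A = [A'(m,n), A'(m+1,n),
    A'(m,n+1), A'(m+1,n+1)] using the edge parameter of (22)/(24).
    E is a 2-D luminance array indexed E[x][y]; upper_dominant
    corresponds to top'(k,l) >= winh/2."""
    A0, A1, A2, A3 = A
    row = n if upper_dominant else n + 1    # row used to catch the edge
    LA = abs(E[m + 1][row] - E[m - 1][row]) - abs(E[m + 2][row] - E[m][row])
    if upper_dominant:
        AC = A0 if LA > 0 else A1           # A_C as defined after (23)
        d = LA * AC / 2**8
        return [A0 - d, A1 + d, A2, A3]     # (23)
    else:
        AC = A2 if LA > 0 else A3           # assumed analogue for (25)
        d = LA * AC / 2**8
        return [A0, A1, A2 - d, A3 + d]     # (25)
```

Note that the adjustment d is added to one area and subtracted from its horizontal neighbor, so the total area Asum is preserved; this keeps the shift-based division in (2) valid.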
IV. VLSI ARCHITECTURE
Our scaling method requires low computational complexity and only one line memory buffer, so it is
suitable for low-cost VLSI implementation. Fig. 6 shows the block diagram of the seven-stage VLSI architecture for
our scaling method. The architecture consists of seven main blocks: the approximate module (AM), register bank
(RB), area generator (AG), edge catcher (EC), area tuner (AT), target generator (TG), and the controller. Each
of them is described briefly in the following subsections.
Fig 6 Block diagram of the VLSI architecture for our scaling method
A. Approximate Module
When a source image of SW×SH pixels is scaled up or down to a target image of TW×TH pixels, the
AM performs (7)–(17) mentioned in Section III-A and generates left′(k,l), top′(k,l), right′(k,l), and bottom′(k,l),
respectively, for each target pixel from left to right and from top to bottom. In our VLSI implementation, n is set
to 3, so each rectangular target pixel is treated as 2^3×2^3 uniform-sized grids. winw and winh are both 6-bit
integers whose values are restricted to powers of 2, so winw, winh ∈ {1, 2, 4, 8, 16, 32}. Based on the
approximate technique mentioned in Section III-A, the minimum and the maximum magnification factors
(mf_w and mf_h) supported by the design are 0.125 and 8, respectively. Hence, the minimum and the maximum
overall magnification factor (mf = mf_w × mf_h) supported by the design are 1/64 and 64, respectively.
The AM is composed of a two-stage pipelined architecture. In the first stage, the coordinate (k,l) of the
current target pixel and the coordinate (m,n) of the top-left source pixel overlapped by the current window are
determined. In the second stage, the AM first calculates winleft(k,l), srcright(m,n), wintop(k,l), and srcbtm(m,n)
according to (10)–(11) and (13)–(14), and then generates left′(k,l), right′(k,l), top′(k,l), and
bottom′(k,l) according to (7)–(9) and (12).
B. Register Bank
In our design, the estimated value FT(k,l) of the current target pixel is calculated by using the
luminance values of 2×4 neighboring source pixels FS(m−1,n), FS(m,n), FS(m+1,n), FS(m+2,n), FS(m−1,n+1),
FS(m,n+1), FS(m+1,n+1), and FS(m+2,n+1). The register bank, consisting of eight registers, is used to provide
those source luminance values at the exact time for the estimation process of the current target pixel. Fig. 7 shows the
internal connections of RB, where every four registers are connected serially in a chain to provide the four pixel
values of a row in the current pixel window, and the line buffer is used to store the pixel values of one row of the
source image. When the controller enables the shift operation in RB, two new values are read into RB (Reg3 and
Reg7) and the remaining six pixel values are shifted to their right registers one by one. The eight pixel values stored in RB
will be used by EC for edge catching and by TG for target pixel estimation.
Fig 7 Architecture of register bank
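The shift behavior of RB can be modeled at a behavioral level as below. The class and the exact fill policy of the line buffer are assumptions made for illustration; this is not the paper's RTL:

```python
from collections import deque

class RegisterBank:
    """Behavioral sketch of the 2x4 register bank: two 4-register
    chains (rows n and n+1) plus a one-row line buffer."""
    def __init__(self, width):
        self.upper = deque([0] * 4, maxlen=4)   # Reg0..Reg3 (row n)
        self.lower = deque([0] * 4, maxlen=4)   # Reg4..Reg7 (row n+1)
        self.line_buffer = deque([0] * width)   # one stored source row

    def shift(self, new_pixel):
        """One enabled shift: two new values enter at Reg3 and Reg7
        while the other six move one register along the chains."""
        # Row n is refilled from the line buffer (the previous row)...
        self.upper.append(self.line_buffer.popleft())
        # ...row n+1 takes the fresh source pixel, which is also written
        # back to the line buffer for the next row's pass.
        self.lower.append(new_pixel)
        self.line_buffer.append(new_pixel)
```

Each call to shift thus advances the 2×4 pixel window by one column while recycling the incoming row through the single line buffer.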
C. Area Generator
For each target pixel, AG calculates the areas of the overlapped regions A′(m,n), A′(m+1,n), A′(m,n+1),
and A′(m+1,n+1) according to (4). Fig. 8 shows the architecture of AG, in which pipeline registers separate the
stages and MULT is a 4×4 integer multiplier.
Fig 8 Architecture of area generator.
D. Edge Catcher
The EC implements the proposed low-cost edge-catching technique and outputs the evaluating parameter
LA, which represents the local edge characteristic of the current pixel at coordinate (k,l). Fig. 9 shows the
architecture of EC, where the SUB unit generates the difference of two inputs and the │SUB│ unit generates the
absolute value of the difference of its two inputs. The comparator CMP outputs logic 1 if the input value is greater than or
equal to winh/2. The binary comparison result, denoted U_GE, is used to decide whether the upper row (row n)
in the current pixel window is more important than the lower row (row n+1) with regard to catching edge features.
According to (22) and (24), EC produces the final result LA and sends it to the following AT.
Fig 9 Architecture of edge catcher.
E. Area Tuner
The AT is used to modify the areas of the four overlapped regions based on the current local edge
information (LA and U_GE provided by EC). Fig. 10 shows the two-stage pipeline architecture of AT. If U_GE
is equal to 1, the upper row (row n) in the current pixel window is more important, so A′(m,n) and A′(m+1,n) are
modified according to (23). On the contrary, if U_GE is equal to 0, the lower row (row n+1) is more important,
so A′(m,n+1) and A′(m+1,n+1) are modified according to (25). Finally, the tuned areas A″(m,n), A″(m+1,n),
A″(m,n+1), and A″(m+1,n+1) are sent to TG.
Fig 10 Architecture of area tuner
F. Target Generator
By weighted averaging of the luminance values of four source pixels with the tuned-area coverage ratios, TG
implements (1) and (2) to determine the estimated value FT(k,l). Fig. 11 shows the two-stage pipeline
architecture of TG. Four MULT units and three ADD units are used to perform (1). Since the value of Asum is
equal to a power of 2, the division operation in (2) can be implemented easily with a shifter.
Fig 11 Architecture of target generator
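The TG datapath amounts to a multiply-accumulate followed by a shift. A minimal integer sketch (function name ours), assuming Asum is a power of two as stated above:

```python
def target_value(areas, pixels, asum):
    """Weighted average of four source pixels by their tuned area
    coverage, per (1)-(2). asum = winw * winh is a power of two,
    so the division reduces to a right shift."""
    acc = sum(a * p for a, p in zip(areas, pixels))  # 4 MULTs, 3 ADDs
    shift = asum.bit_length() - 1                    # log2(asum)
    return acc >> shift                              # shifter, not divider
```

For example, with equal areas [16, 16, 16, 16] over pixels [10, 20, 30, 40] and Asum = 64, the result is the plain average 25.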
G. Controller
The controller, realized with a finite-state machine, monitors the data flow and sends the proper control
signals to all other components. In the design, AM, AT, and TG each require two clock cycles to complete their
functions. Both AG and EC need one clock cycle to finish their tasks, and they work in parallel
because no data dependency exists between them. For each target pixel, seven clock cycles are needed to output
the estimated value FT(k,l).
V. SIMULATION RESULTS
To evaluate the performance of our image-scaling algorithm, we use six gray-scale test images.
For each test image, we reduce/enlarge the original image by using the well-known bilinear method,
and then employ various approaches to scale the bilinear-scaled image up/down back to the size of the original
test image. Thus, we can compare the image quality of the reconstructed images for the various scaling methods.
Three well-known scaling methods, nearest neighbor (NN), bilinear (BL) [6], and bicubic (BC) [9], two area-
pixel scaling methods, Win (winscale in [7]) and M_Win (the modified winscale in [8]), and our method are
compared in terms of computational complexity, objective testing (quantitative evaluation), and subjective
testing (visual quality), respectively. To reduce hardware cost, we adopt the low-cost technique suitable for VLSI
implementation to perform area-pixel scaling.
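The objective metric reported in the tables below is the standard PSNR between the original image and its scaled-then-restored reconstruction. A reference implementation for flattened 8-bit pixel data (the helper name and interface are ours) might look like:

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio between an original image and its
    reconstruction, both given as flat lists of pixel values."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float('inf')              # identical images
    return 10 * math.log10(peak ** 2 / mse)
```

A higher PSNR indicates a reconstruction closer to the original; identical images give infinite PSNR.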
Table I shows the computing time (in seconds) of enlarging the image "Lena" for the two
processors. Obviously, our method requires less computing time than [7]–[9], and BC needs much longer
due to its extensive computations. To explore the performance of quantitative evaluation for image enlargement and
reduction, we first scale the test images to different sizes by using the bilinear method. Then, we
scale these images up/down back to the original size and show the peak signal-to-noise ratio (PSNR) results
in Tables II, III, and IV, respectively. Here, the output images of our scaling method are generated by the
proposed VLSI circuit after post-layout transistor-level simulation. Simulation results show that our design
achieves better quantitative quality than the previous low-complexity scaling methods [5]–[8]. However, the
exact degree of improvement depends on the content of the different images processed.
The proposed VLSI architecture was implemented in Verilog HDL. We
used SYNOPSYS Design Vision to synthesize the design with TSMC's 0.18-μm cell library. The layout for the
design was generated with SYNOPSYS Astro (for automatic placement and routing) and verified by MENTOR
GRAPHICS Calibre (for DRC and LVS checks), respectively. ModelSim was used for post-layout transistor-level
simulation. Finally, SYNOPSYS PrimePower was employed to measure the total power consumption.
Synthesis results show that the scaling processor contains 10.4K gate counts and that its core size is about
532 × 521 μm². It works with a clock period of 5 ns.
Features of some scaling methods:

                   Win        M_Win     Bicubic    Ours
Line buffer        1          1         6          1
Area               29K gates  NA        890 CLBs   10.4K gates
Max frequency      65 MHz     55 MHz    100 MHz    200 MHz
Computation time   4.74 ms    5.60 ms   3.50 ms    1.54 ms

Fig.: Area method input image (64 bit)
Fig.: Area method scale-up image (128 bit)
VI. CONCLUSION
A low-cost image scaling processor is proposed in this paper. The experimental results demonstrate
that our design achieves better performance in both objective and subjective image quality than other low-
complexity scaling methods. Furthermore, an efficient VLSI architecture for the proposed method is presented.
In our simulation, it operates with a clock period of 5 ns and achieves a processing rate of 200
megapixels/second. The architecture works with monochromatic images, but it can easily be extended to work with
RGB color images.
REFERENCES
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Reading, MA: Addison-Wesley, 1992.
[2] W. K. Pratt, Digital Image Processing. New York: Wiley-Interscience, 1991.
[3] T. M. Lehmann, C. Gonner, and K. Spitzer, "Survey: Interpolation methods in medical image processing," IEEE Trans. Med. Imag., vol. 18, no. 11, pp. 1049–1075, Nov. 1999.
[4] C. Weerasinghe, M. Nilsson, S. Lichman, and I. Kharitonenko, "Digital zoom camera with image sharpening and suppression," IEEE Trans. Consumer Electron., vol. 50, no. 3, pp. 777–786, Aug. 2004.
[5] S. Fifman, "Digital rectification of ERTS multispectral imagery," in Proc. Significant Results Obtained from Earth Resources Technology Satellite-1, 1973, vol. 1, pp. 1131–1142.
[6] J. A. Parker, R. V. Kenyon, and D. E. Troxel, "Comparison of interpolating methods for image resampling," IEEE Trans. Med. Imag., vol. MI-2, no. 3, pp. 31–39, Sep. 1983.
[7] C. Kim, S. M. Seong, J. A. Lee, and L. S. Kim, "Winscale: An image scaling algorithm using an area pixel model," IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 6, pp. 549–553, Jun. 2003.
[8] I. Andreadis and A. Amanatiadis, "Digital image scaling," in Proc. IEEE Instrum. Meas. Technol. Conf., May 2005, vol. 3, pp. 2028–2032.
[9] H. S. Hou and H. C. Andrews, "Cubic splines for image interpolation and digital filtering," IEEE Trans. Acoust. Speech Signal Process., vol. ASSP-26, no. 6, pp. 508–517, Dec. 1978.
[10] J. K. Han and S. U. Baek, "Parametric cubic convolution scaler for enlargement and reduction of image," IEEE Trans. Consumer Electron., vol. 46, no. 2, pp. 247–256, May 2000.
[11] L. J. Wang, W. S. Hsieh, and T. K. Truong, "A fast computation of 2-D cubic-spline interpolation," IEEE Signal Process. Lett., vol. 11, no. 9, pp. 768–771, Sep. 2004.
[12] H. A. Aly and E. Dubois, "Image up-sampling using total-variation regularization with a new observation model," IEEE Trans. Image Process., vol. 14, no. 10, pp. 1647–1659, Oct. 2005.
[13] T. Feng, W. L. Xie, and L. X. Yang, "An architecture and implementation of image scaling conversion," in Proc. IEEE Int. Conf. Appl. Specific Integr. Circuits, 2001, pp. 409–410.
[14] M. A. Nuno-Maganda and M. O. Arias-Estrada, "Real-time FPGA-based architecture for bicubic interpolation: An application for digital image scaling," in Proc. IEEE Int. Conf. Reconfigurable Computing FPGAs, 2005, pp. 8–11.
[15] G. Ramponi, "Warped distance for space-variant linear image interpolation," IEEE Trans. Image Process., vol. 8, no. 5, pp. 629–639, May 1999.