The document provides an introduction to digital image processing. It defines a digital image as a finite set of digital values representing a two-dimensional image. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The document outlines the history of digital image processing and provides examples of its use in applications such as image enhancement, medical imaging, satellite imagery, and industrial inspection. It also describes common stages in digital image processing like image acquisition, enhancement, restoration, segmentation, and compression.
Lecture 1 for Digital Image Processing (2nd Edition), by Moe Moe Myint
-What is Digital Image Processing?
-The Origins of Digital Image Processing
-Examples of Fields that Use Digital Image Processing
-Fundamental Steps in Digital Image Processing
-Components of an Image Processing System
This document provides an overview of key concepts in digital image processing, including:
1. Fundamental steps such as image acquisition, enhancement, color image processing, and wavelets and multiresolution processing.
2. Image enhancement techniques, which process images to make them more suitable for specific applications.
3. Color image processing, which has grown in importance with the spread of digital images on the internet, and wavelets, which allow images to be represented at multiple resolution levels.
This document provides an overview of various image enhancement techniques. It begins with an introduction to image enhancement and its objectives. It then outlines and describes several categories of enhancement methods, including spatial-frequency domain methods, point operations, histogram operations, spatial operations, and transform operations. Specific techniques discussed in detail include contrast stretching, clipping, thresholding, median filtering, unsharp masking, and principal component analysis for multispectral images. The document also covers color image enhancement and techniques for pseudocoloring.
Digital image processing involves performing operations on digital images using computer algorithms. Its main functional categories include image restoration to remove noise and distortions, enhancement to modify visual impact, and information extraction to analyze images. The main steps are acquisition, enhancement, restoration, color processing, compression, segmentation, and filtering, using techniques such as pixelization, principal component analysis, and neural networks. Applications include medical imaging, film, transmission, sensing, and robotics. The advantages are noise removal, flexibility of format and manipulation, and easy storage and retrieval; the disadvantages can include high initial costs and potential data loss if storage devices fail.
This document discusses image restoration techniques for noise removal, including:
- Spatial domain filtering techniques like mean, median, and order statistics filters to remove random noise.
- Frequency domain filtering like band reject filters to remove periodic noise.
- Adaptive filtering techniques where the filter size changes depending on image characteristics within the filter region to better handle impulse noise.
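To make the spatial-domain idea above concrete, here is a minimal plain-Python sketch of a 3x3 median filter, the classic order-statistics filter for impulse noise. The function name and the border-copying policy are illustrative choices, not taken from the slides.

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D list of pixel values.

    Border pixels are copied unchanged; this is a minimal sketch,
    not an optimized implementation.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # start from a copy; borders stay as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # the median of 9 values
    return out
```

Running this on an image with a single impulse (one 255 pixel in a field of 10s) replaces the impulse with the neighborhood median, which is why median filtering handles salt-and-pepper noise better than averaging.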
1) Digital image processing involves processing digital images using computer software and algorithms. It includes techniques like image enhancement, restoration, compression, and segmentation.
2) The key stages in digital image processing are image acquisition, enhancement, restoration, morphological processing, segmentation, object recognition, representation and description, compression, and color image processing.
3) Digital image processing has various applications including medical imaging, space exploration, document processing, photography, remote sensing, and video/film special effects. It covers almost the entire electromagnetic spectrum from gamma to radio waves.
This document discusses various spatial filters used for image processing, including smoothing and sharpening filters. Smoothing filters are used to reduce noise and blur images, with linear filters performing averaging and nonlinear filters using order statistics like the median. Sharpening filters aim to enhance edges and details by using derivatives, with first derivatives calculated via gradient magnitude and second derivatives using the Laplacian operator. Specific filters covered include averaging, median, Sobel, and unsharp masking.
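The Laplacian-based sharpening mentioned above can be sketched in a few lines. This uses the common 4-neighbour Laplacian kernel and subtracts the result from the original (equivalent to adding the detail back); the clamping to [0, 255] and border handling are illustrative assumptions.

```python
def laplacian_sharpen(img):
    """Sharpen a 2-D list of gray levels via the Laplacian:
    g = f - lap(f), with the 4-neighbour kernel [[0,1,0],[1,-4,1],[0,1,0]].
    Borders are left unchanged; outputs are clamped to [0, 255].
    Minimal sketch, not production code."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            out[y][x] = max(0, min(255, img[y][x] - lap))
    return out
```

On a flat region the Laplacian is zero and the image is unchanged; at an intensity transition the response is large, which is what makes the filter an edge enhancer.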
The document discusses digital image processing. It begins by defining an image and describing how images are represented digitally. It then outlines the main steps in digital image processing, including acquisition, enhancement, restoration, segmentation, representation, and recognition. It also discusses the key components of an image processing system, including hardware, software, storage, displays, and networking. Finally, it provides examples of application areas for digital image processing such as medical imaging, satellite imaging, and industrial inspection.
This document outlines the syllabus for a digital image processing course. It introduces key concepts like what a digital image is, areas of digital image processing like low-level, mid-level and high-level processes, a brief history of the field, applications in different domains, and fundamental steps involved. The course will cover topics in digital image fundamentals and processing techniques like enhancement, restoration, compression and segmentation. It will be taught using MATLAB and C# in the labs. Assessment will include homework, exams, labs and a final project.
The document discusses the fundamental steps in digital image processing. It describes 7 key steps: (1) image acquisition, (2) image enhancement, (3) image restoration, (4) color image processing, (5) wavelets and multiresolution processing, (6) image compression, and (7) morphological processing. For each step, it provides brief explanations of the techniques and purposes involved in digital image processing.
This document provides information about a digital image processing lecture given by Dr. Moe Moe Myint from Technological University in Kyaukse, Myanmar. It includes the lecture schedule and contact information for Dr. Myint. The document also provides an overview of Chapter 2 which discusses elements of visual perception, light and the electromagnetic spectrum, image sensing and acquisition, image sampling and quantization, and basic relationships between pixels. It provides examples of different types of digital images including intensity, RGB, binary, and index images. It also discusses the effects of spatial and intensity level resolution on images.
This document discusses various digital image processing techniques. It covers connected component labeling, intensity transformations including linear, logarithmic and power law functions. It also describes spatial domain vs transform domain processing and examples of enhancement techniques like contrast stretching and intensity-level slicing. Finally, it discusses geometric transformations and image registration to align images.
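The logarithmic and power-law intensity transformations mentioned above follow the standard forms s = c * log(1 + r) and s = c * r^gamma. A small sketch (the normalization choices here are illustrative assumptions):

```python
import math

def power_law(r, gamma, c=1.0):
    """Power-law (gamma) transformation s = c * r**gamma,
    with r assumed normalized to [0, 1]."""
    return c * (r ** gamma)

def log_transform(r, max_val=255):
    """Log transformation s = c * log(1 + r), with c chosen so that
    r = max_val maps back to max_val."""
    c = max_val / math.log(1 + max_val)
    return c * math.log(1 + r)
```

Gamma values below 1 brighten dark regions (e.g. a pixel at 0.25 maps to 0.5 with gamma = 0.5), while the log transform compresses a wide dynamic range into a displayable one.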
Presentation on Digital Image Processing, by Salim Hosen
Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing.
This document provides an overview of digital image processing. It defines what an image is, noting that an image is a spatial representation of a scene represented as an array of pixels. Digital image processing refers to processing digital images on a computer. The key steps in digital image processing are image acquisition, enhancement, restoration, compression, morphological processing, segmentation, representation, and recognition. Digital image processing has many applications including medical imaging, traffic monitoring, biometrics, and computer vision.
Introduction to Digital Image Processing, by Nikesh Gadare
The document provides an overview of the key concepts and stages involved in digital image processing. It discusses image acquisition, preprocessing such as enhancement and restoration, and post-processing which includes tasks like segmentation, description and recognition. The goal is to introduce fundamental concepts and classical methods of digital image processing. Various applications are also highlighted including medical imaging, surveillance, and industrial inspection.
These slides give a basic understanding of digital image compression.
Please note: this is a classroom teaching PPT; more detailed topics were covered in the classroom.
In the past two decades, image processing has made its way into nearly every aspect of today’s tech-savvy society. Its applications span a wide variety of specialized disciplines, including medical imaging, machine vision, remote sensing, and astronomy. Personal images captured by digital cameras can easily be manipulated by a variety of dedicated image processing algorithms. Image restoration is an important part of image processing; its basic objective is to enhance the quality of an image by removing defects and making it look pleasing. The project was carried out in MATLAB: mathematical algorithms were programmed and tested to obtain the required output, with mathematical analysis at its core. Spatial and frequency domain methods are both important and applicable across different technologies, so this project compares the two approaches and their respective advantages and disadvantages. It also suggests that further research is needed in other image processing applications to demonstrate the importance of these methods.
Image enhancement techniques can be divided into spatial and frequency domain methods. Spatial domain methods operate directly on pixel values using techniques like basic gray level transformations, contrast stretching and thresholding. These manipulations are used to accentuate image features, improve display quality or aid machine analysis by modifying pixel intensities within an image.
Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
Here are some useful examples and methods of image enhancement:
Filtering with morphological operators, histogram equalization, noise removal using a Wiener filter, linear contrast adjustment, median filtering, unsharp mask filtering, contrast-limited adaptive histogram equalization (CLAHE), and decorrelation stretch.
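Linear contrast adjustment, one of the enhancement methods listed above, can be sketched as a simple min-max stretch. This is a minimal plain-Python illustration (the function name and the flat-image handling are assumptions, not from the original material):

```python
def contrast_stretch(img, lo=0, hi=255):
    """Linearly stretch pixel values so the darkest pixel maps to `lo`
    and the brightest to `hi`. Flat images are returned unchanged."""
    flat = [p for row in img for p in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:
        return [row[:] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[lo + (p - mn) * scale for p in row] for row in img]
```

An image whose values span only [50, 200] is remapped to use the full [0, 255] range, which is exactly what makes low-contrast detail visible.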
This document summarizes a course on embedded systems presented by Md Nazmul Hossain Mir to Sraboni Dhar; the course code is CSE 423 and the course name is Embedded System. It discusses biomedical image processing, which involves gathering biomedical signals, image formation, processing pictures, and image display for medical diagnosis by extracting features from images. It also discusses the need for image processing in medicine, such as changing density ranges, restoring images, registering multiple images, constructing 3D images, and removing artifacts. Finally, it covers the main areas of image processing: image formation, visualization, image analysis, and management of acquired information.
Histogram Processing
Histogram Equalization
Histogram Matching
Local Histogram Processing
Using histogram statistics for image enhancement
Uses for Histogram Processing
Histogram Equalization
Histogram Matching
Local Histogram Processing
Basics of Spatial Filtering
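The histogram equalization step listed in the outline above maps each gray level through the scaled cumulative distribution function. A minimal sketch for integer gray levels (function name and rounding choice are illustrative assumptions):

```python
def histogram_equalize(img, levels=256):
    """Histogram equalization for a 2-D list of integer gray levels.

    Each level k is mapped to round((levels - 1) * cdf(k)), where cdf is
    the normalized cumulative histogram of the input image."""
    h, w = len(img), len(img[0])
    n = h * w
    hist = [0] * levels
    for row in img:
        for p in row:
            hist[p] += 1
    # cumulative histogram
    cdf = []
    running = 0
    for count in hist:
        running += count
        cdf.append(running)
    # lookup table: spread the cumulative distribution over the full range
    table = [round((levels - 1) * c / n) for c in cdf]
    return [[table[p] for p in row] for row in img]
```

Because the mapping follows the cumulative distribution, frequently occurring gray levels get pushed apart, which flattens the histogram and raises overall contrast.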
Morphology fundamentals consist of erosion and dilation, which are basic morphological operations. Erosion removes pixels from object boundaries, shrinking object sizes and enlarging holes. Dilation adds pixels to boundaries, enlarging object sizes and shrinking holes. Both operations use a structuring element to determine how many pixels are added or removed. Erosion compares the structuring element to the image, removing pixels where it is not contained. Dilation compares overlaps, adding pixels where the structuring element and image overlap by at least one element.
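The erosion and dilation described above can be sketched for binary images with a fixed 3x3 square structuring element (the choice of element and border policy here are illustrative assumptions, not from the original slides):

```python
def erode(img):
    """Binary erosion with a 3x3 square structuring element:
    a pixel survives only if all 9 pixels in its neighbourhood are 1.
    Border pixels become 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    """Binary dilation with a 3x3 square structuring element:
    a pixel is set if any in-bounds neighbour (or itself) is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out
```

A single isolated pixel is removed entirely by erosion but grows into a 3x3 block under dilation, matching the shrink/enlarge behaviour described above.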
It's very useful for students.
Sharpening process in the spatial domain: direct manipulation of image pixels.
The objective of sharpening is to highlight transitions in intensity.
Image blurring is accomplished by pixel averaging in a neighborhood; since averaging is analogous to integration, sharpening can be accomplished by spatial differentiation.
Prepared by
M. Sahaya Pretha
Department of Computer Science and Engineering,
MS University, Tirunelveli Dist, Tamilnadu.
The Hough transform is a feature extraction technique used in image analysis and computer vision to detect shapes within images. It works by detecting imperfect instances of objects of a certain class of shapes via a voting procedure. Specifically, the Hough transform can be used to detect lines, circles, and other shapes in an image if their parametric equations are known, and it provides robust detection even under noise and partial occlusion. It works by quantizing the parameter space that describes the shape and counting the number of votes each parametric description receives from edge points in the image.
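The voting procedure described above can be sketched for line detection using the normal parameterization rho = x*cos(theta) + y*sin(theta). This minimal version quantizes rho to integer bins and theta to `n_theta` steps; the accumulator-as-dict representation and the coarse quantization are illustrative simplifications.

```python
import math

def hough_lines(points, n_theta=180):
    """Vote in (rho, theta) space for lines through the given edge points.

    Each point (x, y) votes once per quantized angle; collinear points
    accumulate votes in the same (rho, theta) bin. Returns the accumulator
    as a dict {(rho_bin, theta_index): votes}."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho), t)
            acc[key] = acc.get(key, 0) + 1
    return acc
```

Peaks in the accumulator correspond to lines supported by the most edge points, which is why the method tolerates noise and partial occlusion: missing points only lower a peak, they do not destroy it.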
This document discusses image thresholding techniques for image segmentation. It describes thresholding as the basic first step for segmentation that partitions an image into foreground and background pixels based on intensity value. Simple thresholding uses a single cutoff value but can fail for complex histograms. Adaptive thresholding divides an image into sub-images and thresholds each individually to handle varying intensities better than simple thresholding. The document provides examples and algorithms to illustrate thresholding and its limitations and adaptations.
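Simple global thresholding, and the basic iterative scheme for choosing the cutoff automatically, can be sketched as follows. The iterative function implements the classic "midpoint of the two class means" update; the function names and the convergence tolerance are illustrative assumptions.

```python
def threshold(img, t):
    """Global thresholding: pixels >= t become 1 (foreground), else 0."""
    return [[1 if p >= t else 0 for p in row] for row in img]

def iterative_threshold(img, eps=0.5):
    """Basic iterative threshold selection: start from the global mean,
    then repeatedly set t to the midpoint of the means of the two classes
    it separates, until t stops moving by more than eps."""
    flat = [p for row in img for p in row]
    t = sum(flat) / len(flat)
    while True:
        lo = [p for p in flat if p < t]
        hi = [p for p in flat if p >= t]
        if not lo or not hi:
            return t
        new_t = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
        if abs(new_t - t) < eps:
            return new_t
        t = new_t
```

For a bimodal image this converges to a cutoff between the two modes; for the complex histograms mentioned above, no single global cutoff works, which motivates the adaptive, sub-image approach.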
A basic introduction to image restoration (order-statistics filters):
-Median filter
-Max and min filters
-Midpoint filter
-Alpha-trimmed mean filter
It also gives a brief introduction to periodic noise.
For any questions, contact kalyan.acharjya@gmail.com
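Of the order-statistics filters listed above, the alpha-trimmed mean is the most general: it interpolates between the mean and the median. A minimal 3x3 sketch (function name and border policy are illustrative assumptions):

```python
def alpha_trimmed_mean_3x3(img, d=2):
    """Alpha-trimmed mean filter: in each 3x3 window, sort the 9 values,
    discard the d/2 lowest and d/2 highest, and average the remainder.

    d = 0 gives the arithmetic mean filter; d = 8 reduces to the median.
    Border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    trim = d // 2
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            kept = window[trim:len(window) - trim]
            out[y][x] = sum(kept) / len(kept)
    return out
```

Trimming the extremes before averaging is what lets this filter handle mixed noise: the discarded order statistics absorb salt-and-pepper outliers, while the averaging of the remainder still smooths Gaussian noise.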
SOC Application Studies: Image Compression, by A B Shinde
This document discusses application studies of AES encryption and JPEG image compression on SOC designs. It provides an overview of the AES algorithm and requirements, describing the encryption process. An initial SOC design for AES is proposed using an ARM7 processor, and performance is evaluated. JPEG compression is also summarized, outlining the color space transformation, discrete cosine transform, and entropy coding steps. Finally, an example JPEG system for a digital still camera is presented using a TMS320C54x processor to implement the imaging pipeline and compression.
This presentation discusses digital image processing. It begins with definitions of digital images and digital image processing. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The history of digital image processing is then reviewed from the 1920s to today. Key examples of applications like medical imaging, satellite imagery, and industrial inspection are provided. The main stages of digital image processing are outlined, including image acquisition, enhancement, restoration, segmentation, and compression. The document concludes with an overview of a system for automatic face recognition using color-based segmentation.
Digital Image Processing and Edge Detection, by Seda Yalçın
This presentation is an introduction to digital image processing and edge detection. It covers four topics: examples of fields that use digital image processing; visibility, which depends on human perception; the fundamental definition of an image; and an analysis of edge detection algorithms such as Roberts, Prewitt, Sobel, and the Laplacian of Gaussian.
This document provides an introduction to digital image processing. It defines a digital image as a finite set of pixels representing attributes like color or brightness. Digital image processing involves improving images for human interpretation or machine perception. The history of digital image processing is traced from early applications in newspapers to modern uses in medicine, satellites, and law enforcement. Key stages of digital image processing include acquisition, enhancement, restoration, segmentation, and compression.
Digital image processing involves techniques to improve and analyze digital images. It focuses on tasks like enhancing images for human interpretation, processing images for machine applications, and processing image data for storage and transmission. Key stages in digital image processing include image acquisition, enhancement, restoration, segmentation, and representation. Digital image processing has a long history and is now widely used in applications like medical imaging, satellite imagery analysis, industrial inspection, and law enforcement.
This document provides an introduction to digital image processing. It defines a digital image as a finite set of digital values representing a 2D image. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The document traces the history of digital image processing from the 1920s to its widespread use today. It provides examples of applications in fields like enhancement, medicine, mapping, inspection, law enforcement and human-computer interfaces. Finally, it outlines the key stages of digital image processing systems including acquisition, restoration, processing, analysis and compression.
The document is an introduction to a course on digital image processing. It begins with definitions of digital images and digital image processing. It then provides a brief history of digital image processing, highlighting early applications in newspapers and space exploration. It also gives examples of current applications in areas like medicine, mapping, industrial inspection, and human-computer interfaces. Finally, it outlines some key stages in digital image processing pipelines like image acquisition, enhancement, restoration, segmentation, and compression.
This document discusses digital image processing. It defines a digital image and digital image processing. The history of digital image processing is covered from the 1920s to today. Examples of applications are given, including image enhancement, medical imaging, industrial inspection, and more. The key stages of digital image processing are outlined, such as image acquisition, enhancement, restoration, segmentation, and others.
Digital image processing involves representing images as arrays of pixels and then processing those pixels to improve or analyze the image. It has applications in fields like medicine, mapping, law enforcement, and human-computer interfaces. The key stages of digital image processing include image acquisition, enhancement, restoration, morphological processing, segmentation, object recognition, representation and description, compression, and color image processing.
This document provides an introduction to a course on digital image processing. It discusses what a digital image is, defines digital image processing, and outlines the history and key applications of the field. The lecture will cover the definition of a digital image, the tasks of digital image processing, the history and evolution of the field from the 1920s to today, examples of applications in areas like medicine, satellite imagery, industrial inspection, and human-computer interfaces, and the main stages of digital image processing work including image acquisition, enhancement, restoration, and recognition.
This document provides an introduction to digital image processing. It defines what a digital image is as a finite set of pixels representing a two-dimensional scene. Digital image processing is described as focusing on improving images for human interpretation and processing images for machine perception. The history of digital image processing is outlined from early applications in newspapers to current uses in fields like medicine, astronomy, and industrial inspection. Key stages of digital image processing are identified as image acquisition, enhancement, restoration, morphological processing, segmentation, representation, object recognition, color processing, and compression.
This document provides an introduction to digital image processing. It defines what a digital image is as a finite set of pixels representing a two-dimensional scene. Digital image processing is described as focusing on improving images for human interpretation and processing images for machine perception. The history of digital image processing is outlined from early applications in newspapers to current uses in fields like medicine, space exploration, and more. Key stages of digital image processing are identified as image acquisition, enhancement, restoration, morphological processing, segmentation, representation, object recognition, compression, and color processing.
This document provides an overview of digital image processing, including:
- It defines what a digital image is and how images are digitized through sampling and quantization.
- It discusses the history of digital image processing from the 1920s to today, highlighting early applications and key advances like CAT scans.
- It gives examples of current uses like image enhancement, medical imaging, industrial inspection, and computer vision tasks like face and object recognition.
- It outlines the main stages of digital image processing pipelines including image acquisition, enhancement, restoration, segmentation, and compression.
- It provides context on the related field of computer vision and its goals of interpreting and understanding images.
Digital images are representations of images using discrete pixel values. Vision is a complex natural process, and digital image processing aims to perform tasks like improving images for human interpretation and machine perception. Key stages in processing include acquisition, enhancement, restoration, morphological operations, segmentation, and representation. Digital image processing has a long history and is now widely used in applications such as medicine, geospatial analysis, industrial inspection, and law enforcement. Examples demonstrate how techniques are applied to tasks like medical imaging, satellite imagery analysis, and printed circuit board inspection.
This document outlines the syllabus for the course IT6005 - Digital Image Processing. The syllabus is divided into 5 units that cover digital image fundamentals, image enhancement, image restoration and segmentation, wavelets and image compression, and image representation and recognition. Unit 1 introduces key concepts in digital image processing such as pixels, gray levels, sampling and quantization. It also provides a brief history of the origin and development of digital image processing.
The document provides a history of digital image processing from the early 1920s to present day. It discusses some of the earliest applications including transmitting newspaper images via submarine cable. Major developments occurred in the 1960s with improved computing enabling enhanced images from space missions. Digital image processing began being used for medical applications in the 1970s. The field has since expanded significantly with uses in areas like astronomy, art, medicine, law enforcement, and more. The document also defines digital images and digital image processing, and outlines some key stages in processing including acquisition, restoration, segmentation, and representation.
Digital images are represented as arrays of numbers called pixels. Each pixel value corresponds to attributes like intensity, color, or height at that location. Digital image processing involves techniques to enhance, analyze, and extract information from digital images for tasks like interpretation, transmission, and machine perception. It has evolved from early applications processing images from space missions and medical scans to now being used widely across fields such as entertainment, surveillance, and industrial inspection. Key stages in digital image processing typically involve image acquisition, enhancement, analysis through techniques like segmentation and recognition, and output of processed results.
This document provides an overview of digital image processing. It begins with definitions of key terms like digital image, pixels, and image file formats. It then outlines the main stages of digital image processing including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also discusses the history and applications of digital image processing in fields like medicine, astronomy, law enforcement, and more. Finally, it describes the typical components of an image processing system such as image sensors, specialized hardware, computer, software, storage, displays, and networking.
This document presents a student's presentation on digital image processing. It begins with definitions of digital images and digital image processing. It then provides a history of digital image processing from the 1920s to today. Key examples of digital image processing applications are discussed, including image enhancement, medical imaging, geographic information systems, industrial inspection, and human-computer interfaces. The main stages of a digital image processing system are outlined, including image acquisition, enhancement, restoration, segmentation, and object recognition. Finally, the document summarizes the student's work on a basic color-space based face detection system.
The document discusses digital image processing and provides an overview of key concepts. It defines digital and analog images and explains how digital images are represented by pixels. It outlines fundamental steps in digital image processing like image acquisition, enhancement, restoration, morphological processing, segmentation, representation, compression and object recognition. It also discusses applications in areas like remote sensing, medical imaging, film and video effects.
This document discusses digital image processing. It begins by defining a digital image and digital image processing. It then provides a brief history of digital image processing from the 1920s to today. Examples of digital image processing applications are given in various domains like medicine, geography, industrial inspection, law enforcement, human-computer interfaces, and art. Key stages of digital image processing like enhancement, segmentation, and understanding are also mentioned.
Similar to Digital Image Processing_ ch1 introduction-2003 (20)
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
3. References (slide 3 of 36)
“Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods, Addison-Wesley, 2002
Support reference:
“Machine Vision: Automated Visual Inspection and Robot Vision”, David Vernon, Prentice Hall, 1991
– Available online at: homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/
– Google.com
4. Contents
This lecture will cover:
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State-of-the-art examples of digital image processing
– Key stages in digital image processing
5. What is a Digital Image?
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
6. What is a Digital Image? (cont…)
Pixel values typically represent gray levels, colours, heights, etc.
Remember that digitisation implies that a digital image is an approximation of a real scene.
[Figure: a photograph magnified until a single pixel is visible]
7. What is a Digital Image? (cont…)
Common image formats include:
– 1 sample per point (B&W or Grayscale)
– 3 samples per point (Red, Green, and Blue)
– 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. Opacity)
For most of this course we will focus on grey-scale images.
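The three formats above differ only in the number of samples stored per pixel, which in array terms is just the trailing dimension. A minimal sketch (the variable names are illustrative, not from the lecture):

```python
import numpy as np

# Hypothetical blank test images illustrating the sample-per-point formats.
gray = np.zeros((4, 4), dtype=np.uint8)        # 1 sample per point (grayscale)
rgb  = np.zeros((4, 4, 3), dtype=np.uint8)     # 3 samples per point (R, G, B)
rgba = np.zeros((4, 4, 4), dtype=np.uint8)     # 4 samples per point (R, G, B, alpha)

print(gray.shape, rgb.shape, rgba.shape)       # (4, 4) (4, 4, 3) (4, 4, 4)
```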
8. What is Digital Image Processing?
Digital image processing focuses on two major tasks:
– Improvement of pictorial information for human interpretation
– Processing of image data for storage, transmission and representation for autonomous machine perception
There is some argument about where image processing ends and fields such as image analysis and computer vision start.
9. What is DIP? (cont…)
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes:
– Low-level process: input image, output image. Examples: noise removal, image sharpening.
– Mid-level process: input image, output attributes. Examples: object recognition, segmentation. (In this course we will stop here.)
– High-level process: input attributes, output understanding. Examples: scene understanding, autonomous navigation.
10. History of Digital Image Processing
Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
[Figure: an early digital image]
11. History of DIP (cont…)
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images
[Figures: an early 15-tone digital image, and an improved digital image]
12. History of DIP (cont…)
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
– 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space missions including the Apollo landings
[Figure: a picture of the moon taken by the Ranger 7 probe minutes before impact]
13. History of DIP (cont…)
1970s: Digital image processing begins to be used in medical applications
– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
[Figure: a typical head-slice CAT image]
14. History of DIP (cont…)
1980s – Today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human computer interfaces
16. Examples: The Hubble Telescope
Launched in 1990, the Hubble telescope can take images of very distant objects.
However, a flaw in its mirror made many of Hubble’s images useless; image processing techniques were used to fix this.
18. Examples: Medicine
Take a slice from an MRI scan of a canine heart, and find the boundaries between types of tissue:
– Image with gray levels representing tissue density
– Use a suitable filter to highlight edges
[Figures: original MRI image of a dog heart; edge-detection image]
19. Examples: GIS
Geographic Information Systems
– Digital image processing techniques are used extensively to manipulate satellite imagery
– Terrain classification
– Meteorology
20. Examples: GIS (cont…)
Night-Time Lights of the World data set
– Global inventory of human settlement
– Not hard to imagine the kind of analysis that might be done using this data
21. Examples: Industrial Inspection
Human operators are expensive, slow and unreliable, so make machines do the job instead.
Industrial vision systems are used in all kinds of industries.
Can we trust them?
22. Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
– Machine inspection is used to determine that all components are present and that all solder joints are acceptable
– Both conventional imaging and x-ray imaging are used
23. Examples: Law Enforcement
Image processing techniques are used extensively by law enforcers
– Number plate recognition for speed cameras/automated toll systems
– Fingerprint recognition
– Enhancement of CCTV images
24. Examples: HCI
Try to make human–computer interfaces more natural
– Face recognition
– Gesture recognition
These tasks can be extremely difficult.
25. Key Stages in Digital Image Processing
[Diagram: starting from the problem domain, the stages are Image Acquisition, Image Enhancement, Image Restoration, Morphological Processing, Segmentation, Representation & Description and Object Recognition, together with Colour Image Processing and Image Compression]
26. Key Stages in Digital Image Processing: Image Acquisition
[The same stage diagram, with Image Acquisition highlighted]
27. Key Stages in Digital Image Processing: Image Enhancement
[The same stage diagram, with Image Enhancement highlighted]
28. Key Stages in Digital Image Processing: Image Restoration
[The same stage diagram, with Image Restoration highlighted]
37. Key Stages in Digital Image Processing: Colour Image Processing
[The same stage diagram, with Colour Image Processing highlighted]
38. Digitising an Image
To convert the continuous function f(x,y) to digital form, we need to sample the continuous sensed data in both coordinates and in amplitude, using finite and discrete sets of values.
– Digitising the coordinate values is called sampling.
– Digitising the amplitude values is called quantisation.
The number of selected values in the sampling process is known as the image spatial resolution. This is simply the number of pixels relative to the given image area.
The number of selected values in the quantisation process is called the grey-level (colour-level) resolution. This is expressed in terms of the number of bits allocated to the colour levels.
The quality of a digitised image depends on the resolution parameters of both processes.
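The two digitisation steps can be sketched on a synthetic 8-bit image; the helper names `subsample` and `quantise` are illustrative, not from the lecture:

```python
import numpy as np

# Synthetic 256x256 "sensed" image with 8-bit amplitudes.
rng = np.random.default_rng(0)
f = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

def subsample(img, step):
    """Spatial sampling: keep every step-th pixel in each coordinate."""
    return img[::step, ::step]

def quantise(img, k):
    """Amplitude quantisation: reduce 8-bit values to 2**k grey levels."""
    step = 256 // (2 ** k)
    return (img // step) * step

g = quantise(subsample(f, 4), 3)   # 64x64 image with at most 8 grey levels
print(g.shape)                     # (64, 64)
print(len(np.unique(g)) <= 8)      # True
```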
39. Digital Image Representation – Revised
A monochrome digital image is a 2-dimensional light intensity function f(x,y) whose independent variables (x,y) are digitised through spatial sampling, and whose intensity values are quantised to a finite set of uniformly spread grey levels, i.e. an image f can be represented as a 2-dimensional array:

      f(1,1)  f(1,2)  f(1,3)  …  f(1,n)
      f(2,1)  f(2,2)  f(2,3)  …  f(2,n)
f =   f(3,1)  f(3,2)  f(3,3)  …  f(3,n)
        :       :       :          :
      f(m,1)  f(m,2)  f(m,3)  …  f(m,n)

Usually m = n, and the number of grey levels is g = 2^k for some k. The spatial resolution is m×n and g is the grey-level resolution.
RGB-based colour images are represented similarly, except that f(i,j) is a 3D vector representing the intensity of the three primary colours at the (i,j) pixel position.
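In array terms, both resolution parameters fall straight out of the array's shape and bit depth. A small sketch (the variable names are my own, chosen to match the slide's notation):

```python
import numpy as np

# An m-by-n image with k bits per pixel: spatial resolution is m*n,
# grey-level resolution is g = 2**k.
f = np.zeros((4, 4), dtype=np.uint8)   # m = n = 4
m, n = f.shape
k = f.dtype.itemsize * 8               # bits per pixel (8 for uint8)
g = 2 ** k                             # number of grey levels
print(m * n, g)                        # 16 256
```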
40. Spatial Resolution
The spatial resolution of a digital image reflects the amount of detail that one can see in the image (i.e. the ratio of pixel “area” to the area of the image display).
If an image is spatially sampled at m×n pixels, then the larger mn is, the finer the observed details.
For a fixed image area, the noticeable image quality is directly proportional to the value of mn.
Reduced spatial resolution, within the same area, may result in what is known as the checkerboard pattern.
However, beyond a certain fine spatial resolution, the human eye may not be able to notice improved quality.
41. Spatial Resolution vs Image Quality
Decreasing spatial resolution reduces image quality proportionally, down to the checkerboard pattern.
† Images extracted from DIP, 2nd Edition, Gonzalez & Woods, PH.
42. Spatial Resolution vs Image Quality – continued
The checkerboard effect is not visible if a lower-resolution image is displayed in a proportionately small window.
44. Effect of Grey-Level Resolution
[Figure: the same image quantised at 8, 7, 6, 5, 4, 3, 2, 1 and 0 (!) bits per pixel]
45. Zooming and Resizing
Zooming is the scaling of an image area A of w×h pixels by a factor s while maintaining spatial resolution (i.e. the output has sw×sh pixels).
First we need a linear scaling function S to map the coordinates of new pixels onto the original pixel grid of A.
For each (x,y) in the resized area, we need to interpolate the grey value sf(x,y) in terms of the pixel values in A that neighbour the point S(x,y). Different models of approximation are used.
[Figure: scaling A by a factor s = 1.5]
46. Zooming and Resizing – Continued
Interpolation schemes include:
– Nearest neighbour: sf(x,y) is the grey value of its nearest pixel in A (checkerboard effect).
– Bilinear: sf(x,y) is the weighted average grey value of its 4 neighbouring pixels (blurring effect).
[Figure: images zoomed from 128×128, 64×64 and 32×32 sizes to 1024×1024. Top row uses nearest-neighbour interpolation; bottom row uses bilinear interpolation.]
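The two interpolation schemes can be sketched directly on a small array. This is an illustrative implementation, not the lecture's code; the function names are my own, and edge handling is simplified:

```python
import numpy as np

def zoom_nearest(img, s):
    """Nearest neighbour: each output pixel copies its closest source pixel."""
    h, w = img.shape
    ys = (np.arange(int(h * s)) / s).astype(int).clip(0, h - 1)
    xs = (np.arange(int(w * s)) / s).astype(int).clip(0, w - 1)
    return img[np.ix_(ys, xs)]

def zoom_bilinear(img, s):
    """Bilinear: weighted average of the 4 neighbouring source pixels."""
    h, w = img.shape
    y = np.arange(int(h * s)) / s
    x = np.arange(int(w * s)) / s
    y0 = np.floor(y).astype(int).clip(0, h - 2)
    x0 = np.floor(x).astype(int).clip(0, w - 2)
    wy = (y - y0)[:, None]            # vertical interpolation weights
    wx = (x - x0)[None, :]            # horizontal interpolation weights
    f = img.astype(float)
    top = f[np.ix_(y0, x0)] * (1 - wx) + f[np.ix_(y0, x0 + 1)] * wx
    bot = f[np.ix_(y0 + 1, x0)] * (1 - wx) + f[np.ix_(y0 + 1, x0 + 1)] * wx
    return top * (1 - wy) + bot * wy

img = np.array([[0, 100], [100, 200]], dtype=np.uint8)
print(zoom_nearest(img, 2).shape)   # (4, 4)
print(zoom_bilinear(img, 2).shape)  # (4, 4)
```

Nearest neighbour simply replicates pixels, which is what produces the blocky checkerboard look; bilinear averages neighbours, which smooths blocks at the cost of blurring.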
47. Image File Formats
Image files consist of two parts:
A header, found at the start of the file and consisting of parameters regarding:
– Number of rows (height)
– Number of columns (width)
– Number of bands (i.e. colours)
– Number of bits per pixel (bpp)
– File type
Image data, which lists all pixel values (vectors) on the first row, followed by the 2nd row, and so on.
Common image file formats include:
BIN, RAW, BMP, JPEG, TIFF, GIF, PPM, PBM, PGM, …
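The header-then-pixel-data layout can be made concrete with the plain-text PGM (“P2”) variant, whose header holds the file type, width, height and maximum grey value. A minimal round-trip sketch (the helper names are my own):

```python
import numpy as np

def write_pgm(img):
    """Serialise a 2D uint8 array as a plain-text PGM string."""
    h, w = img.shape
    header = f"P2\n{w} {h}\n255\n"    # file type, width, height, max grey value
    body = "\n".join(" ".join(str(v) for v in row) for row in img)
    return header + body + "\n"

def read_pgm(text):
    """Parse a plain-text PGM string back into a 2D uint8 array."""
    tokens = text.split()
    assert tokens[0] == "P2"                       # file type
    w, h = int(tokens[1]), int(tokens[2])          # width, height
    # tokens[3] is the maximum grey value; pixel data follows.
    return np.array(tokens[4:], dtype=np.uint8).reshape(h, w)

img = np.arange(6, dtype=np.uint8).reshape(2, 3)
assert np.array_equal(read_pgm(write_pgm(img)), img)
print("round trip OK")
```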
Speaker notes:
The real world is continuous – an image is simply a digital approximation of this.
Give the analogy of the character recognition system:
– Low level: cleaning up the image of some text
– Mid level: segmenting the text from the background and recognising individual characters
– High level: understanding what the text says