Pixel transforms,
Color transforms,
Histogram processing & equalization,
Filtering,
Convolution,
Fourier transformation and its applications in sharpening,
Blurring and noise removal
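The histogram processing and equalization topic above can be sketched in NumPy (a minimal sketch assuming 8-bit grayscale input; the function name is mine, and the mapping is the standard CDF-based equalization formula):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image (NumPy array).

    Maps each gray level through the normalized cumulative
    distribution so the output levels spread over [0, 255].
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]            # first nonzero CDF value
    n = img.size
    # s_k = round((cdf(r_k) - cdf_min) / (n - cdf_min) * 255)
    lut = np.round((cdf - cdf_min) / (n - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

Applied to a low-contrast image whose intensities cluster in a narrow band, the output uses the full 0-255 range.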
Feature Detection
Edge Detection
Corner Detection
Line and Curve Detection
Active Contours
SIFT and HOG Descriptors
Shape Context Descriptors
Morphological Operations
Segmentation
Active Contours
Split and Merge
Watershed
Region Splitting and Merging
Graph-based Segmentation
Mean shift and Model finding
Normalized Cut
The document discusses various morphological image processing techniques including binary morphology, grayscale morphology, dilation, erosion, opening, closing, boundary extraction, region filling, connected components, hit-or-miss, thinning, thickening, and skeletonization. Morphological operations can be used for tasks like edge detection, noise removal, image enhancement, and image segmentation. The key morphological operations of dilation and erosion expand and shrink binary images using a structuring element, while opening and closing combine these operations to remove noise or fill holes.
Mathematical morphology is a framework for image analysis using set theory operations. It is used for tasks like noise filtering, shape analysis, and segmentation. Basic operations include erosion, dilation, opening, and closing using a structuring element. Erosion shrinks objects while dilation expands them. Opening eliminates small objects and closing fills small holes. Together these operations can filter images while preserving overall shapes. Morphological operations also enable extracting object boundaries, thinning images to skeletons, and finding connected components.
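The erosion, dilation, opening, and closing operations described in the two summaries above can be sketched in plain NumPy (a minimal binary-morphology sketch; the function names are mine, and the explicit loops favor clarity over speed):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: output pixel is 1 if the structuring element,
    centered there, hits any foreground pixel (se assumed symmetric)."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2))
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.any(pad[i:i + h, j:j + w] & se)
    return out

def erode(img, se):
    """Binary erosion: output pixel is 1 only if the structuring element
    fits entirely inside the foreground when centered there."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2))
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.all(pad[i:i + h, j:j + w][se.astype(bool)])
    return out

def opening(img, se):
    return dilate(erode(img, se), se)  # erosion then dilation: removes small specks

def closing(img, se):
    return erode(dilate(img, se), se)  # dilation then erosion: fills small holes
```

As the summaries note, opening eliminates objects smaller than the structuring element (a single foreground pixel opened with a 3x3 element vanishes), while closing fills holes smaller than it.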
An image texture is a set of metrics calculated in image processing designed to quantify the perceived texture of an image. Image texture gives us information about the spatial arrangement of color or intensities in an image or a selected region of an image. This presentation covers its types, uses, methods, and approaches.
This document discusses image segmentation techniques. It describes how segmentation partitions an image into meaningful regions based on discontinuities or similarities in pixel intensity. The key methods covered are thresholding, edge detection using gradient and Laplacian operators, and the Hough transform for global line detection. Adaptive thresholding is also introduced as a technique to handle uneven illumination.
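The adaptive thresholding mentioned in the summary above can be sketched as a local-mean threshold (a minimal sketch; the block size, the offset constant c, and the edge-replication padding are my assumptions, not details from the listed slides):

```python
import numpy as np

def adaptive_threshold(img, block=15, c=5):
    """Adaptive (local-mean) thresholding.

    Each pixel is compared against the mean of its block x block
    neighborhood minus a small constant c, so the threshold tracks
    uneven illumination instead of using one global value.
    """
    r = block // 2
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    out = np.zeros(img.shape, dtype=np.uint8)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            local_mean = pad[i:i + block, j:j + block].mean()
            out[i, j] = 255 if img[i, j] > local_mean - c else 0
    return out
```

Because the threshold is recomputed per neighborhood, text on a page with a brightness gradient is binarized correctly where a single global threshold would fail on one side of the image.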
The document discusses computer vision with deep learning. It provides an overview of convolutional neural networks and their use in computer vision applications like image classification and object detection. Specifically, it discusses how CNNs use convolutional layers to learn visual features from images and provide examples of CNNs being used for pipeline defect classification and filler cap quality control.
The document discusses various techniques for image segmentation including discontinuity-based approaches, similarity-based approaches, thresholding methods, region-based segmentation using region growing and region splitting/merging. Key techniques covered include edge detection using gradient operators, the Hough transform for edge linking, optimal thresholding, and split-and-merge segmentation using quadtrees.
Image segmentation refers to partitioning a digital image into multiple regions or sets of pixels based on characteristics like color or texture. The goal is to simplify the image representation to make it easier to analyze. Some applications in medical imaging include locating tumors, measuring tissue volumes, and computer-guided surgery. Common segmentation techniques include thresholding, edge detection, region growing, and split-and-merge approaches.
The document discusses image restoration techniques. It describes how images can become degraded through phenomena like motion, improper camera focusing, and noise. The goal of image restoration is to recover the original high quality image from its degraded version using knowledge about the degradation process and types of noise. Common noise models include Gaussian, Rayleigh, Erlang, exponential, and impulse noise. Filtering techniques like mean, order statistics, and adaptive filters can be used for restoration by smoothing the image while preserving edges. The adaptive filters change based on local image statistics to better reduce noise with less blurring than regular filters.
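The order-statistics filtering described in the restoration summary above can be illustrated with a median filter (a minimal sketch, assuming 8-bit grayscale input; the loop-based form is for clarity):

```python
import numpy as np

def median_filter(img, k=3):
    """Order-statistics (median) filter: replaces each pixel with the
    median of its k x k neighborhood. It removes impulse
    (salt-and-pepper) noise with less edge blurring than a mean filter,
    because an outlier never dominates the median the way it skews a mean.
    """
    r = k // 2
    pad = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(pad[i:i + k, j:j + k])
    return out
```

A single saturated "salt" pixel in a flat region is removed completely, whereas a 3x3 mean filter would smear it over nine pixels.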
The document discusses image segmentation techniques. It begins by defining segmentation as partitioning an image into distinct regions that correlate with objects or features of interest. The goal of segmentation is to find meaningful groups of pixels. Several segmentation techniques are described, including region growing/shrinking, clustering methods, and boundary detection. Region growing uses homogeneity tests to merge neighboring regions, while clustering divides space based on similarity within groups. Boundary detection finds boundaries between objects. The document provides examples and details of applying these segmentation methods.
Lec10: Medical Image Segmentation as an Energy Minimization Problem, by Ulaş Bağcı
Enhancement, Noise Reduction, and Signal Processing • Medical Image Registration • Medical Image Segmentation • Medical Image Visualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images • Deep Learning in Radiology
Fuzzy Connectivity (FC) – Affinity functions
• Absolute FC
• Relative FC (and Iterative Relative FC)
• Successful example applications of FC in medical imaging
• Segmentation of Airway and Airway Walls using an RFC-based method
Energy functional
– Data and Smoothness terms
• Graph Cut – Min cut
– Max Flow
• Applications in Radiology Images
Presentation for the Berlin Computer Vision Group, December 2020 on deep learning methods for image segmentation: Instance segmentation, semantic segmentation, and panoptic segmentation.
This document discusses image enhancement techniques in the spatial domain. It begins by introducing intensity transformations and spatial filtering as the two principal categories of spatial domain processing. It then describes the basics of intensity transformations, including how they directly manipulate pixel values in an image. The document focuses on different types of basic intensity transformation functions such as image negation, log transformations, power law transformations, and piecewise linear transformations. It provides examples of how these transformations can be used to enhance images. Finally, it discusses histogram processing and how the histogram of an image provides information about the distribution of pixel intensities.
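The basic intensity transformations listed in the summary above (negation, log, and power-law) can be sketched directly (a minimal sketch assuming 8-bit input; the normalization constants are the usual textbook choices):

```python
import numpy as np

def negative(img):
    """Image negation: s = 255 - r, reversing the intensity scale."""
    return (255 - img.astype(np.int32)).astype(np.uint8)

def log_transform(img):
    """Log transformation s = c * log(1 + r), with c chosen so the
    maximum input maps near 255; it expands dark intensity ranges."""
    c = 255.0 / np.log(256.0)
    return (c * np.log1p(img.astype(np.float64))).astype(np.uint8)

def gamma_transform(img, gamma):
    """Power-law (gamma) transformation s = 255 * (r / 255) ** gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens bright ones."""
    r = img.astype(np.float64) / 255.0
    return np.clip(255.0 * r ** gamma, 0, 255).astype(np.uint8)
```

With gamma = 0.5 a mid-dark pixel is lifted toward the middle of the range; with gamma = 2.0 the same pixel is pushed toward black, which matches the brighten/darken behavior described above.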
This is a basic introduction to how images are captured and converted from analog to digital format using the sampling and quantization processes, after which further algorithms can be applied to the digitized image.
This document discusses various types of image filtering techniques including low pass filters, high pass filters, and predefined filters in MATLAB. Low pass filters blur images by averaging pixel values, while high pass filters emphasize edges by removing slowly varying intensities. Common predefined filters in MATLAB include averaging, Gaussian, Laplacian, Sobel, and unsharp filters for blurring, smoothing, edge detection, and contrast enhancement respectively. Larger filter sizes produce blurrier low pass and thicker edged high pass filtered images.
Digital Image Processing: Image Enhancement in the Frequency Domain, by Mostafa G. M. Mostafa
This document is a chapter from a textbook on digital image processing. It discusses the discrete Fourier transform (DFT) and its properties. It also covers various filtering techniques that can be performed in the frequency domain, including low-pass, high-pass, band-pass, and homomorphic filters using approaches like Gaussian, Butterworth, and ideal filters. Homework problems 4.9 and 4.12 are also mentioned at the end.
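Frequency-domain filtering of the kind covered in the chapter summary above can be sketched with NumPy's FFT (a Gaussian low-pass sketch; the cutoff parameter sigma and function name are my assumptions):

```python
import numpy as np

def gaussian_lowpass(img, sigma=20.0):
    """Frequency-domain smoothing: take the 2-D DFT, multiply by a
    Gaussian low-pass transfer function H(u, v) = exp(-D^2 / (2 sigma^2))
    centered on the zero frequency, and invert the transform."""
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))       # move DC term to the center
    u = np.arange(H) - H // 2
    v = np.arange(W) - W // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2      # squared distance from center
    Hf = np.exp(-D2 / (2.0 * sigma ** 2))
    out = np.fft.ifft2(np.fft.ifftshift(F * Hf))
    return np.real(out)                          # imaginary part is roundoff
```

A high-pass filter is the complement (1 - Hf) of the same transfer function, which is how sharpening is done in this framework: attenuate the low frequencies instead of the high ones.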
This document discusses various techniques for image segmentation. It describes two main approaches to segmentation: discontinuity-based methods that detect edges or boundaries, and region-based methods that partition an image into uniform regions. Specific techniques discussed include thresholding, gradient operators, edge detection, the Hough transform, region growing, region splitting and merging, and morphological watershed transforms. Motion can also be used for segmentation by analyzing differences between frames in a video.
This document discusses techniques for super resolution image reconstruction from multiple low resolution images. There are three main approaches: interpolation-based, example-learning-based, and multi-image based. Multi-image based super resolution attempts to reconstruct the original high resolution image by using information from a set of observed low resolution images. Key steps involved in multi-image super resolution include image registration to determine displacements between images, modeling the imaging process, and using techniques like the Irani and Peleg algorithm to estimate the blurring function and reconstruct the high resolution image.
This document discusses morphological operations in image processing. It describes how morphological operations like erosion, dilation, opening, and closing can be used to extract shapes and boundaries from binary and grayscale images. Erosion shrinks foreground regions while dilation expands them. Opening performs erosion followed by dilation to remove noise, and closing does the opposite to join broken parts. The hit-and-miss transform is also introduced to detect patterns in binary images using a structuring element containing foreground and background pixels. Examples are provided to illustrate each morphological operation.
Image segmentation is an important image processing step, used wherever we want to analyze what is inside an image. Image segmentation essentially extracts the meaningful objects of the image.
Lec8: Medical Image Segmentation (II) (Region Growing/Merging), by Ulaş Bağcı
• Region Growing algorithm
• Homogeneity Criteria
• Split/Merge algorithm
• Examples from CT, MRI, PET
• Limitations
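The region-growing algorithm with a homogeneity criterion, as outlined in the list above, can be sketched as a breadth-first flood fill (a minimal sketch; the fixed seed-value criterion, the tolerance parameter, and 4-connectivity are simplifying assumptions):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Region growing: starting from a seed pixel, repeatedly absorb
    4-connected neighbors whose intensity differs from the seed value
    by at most tol (a simple homogeneity criterion). Returns a boolean
    mask of the grown region."""
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    seed_val = float(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        i, j = q.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < H and 0 <= nj < W and not mask[ni, nj]
                    and abs(float(img[ni, nj]) - seed_val) <= tol):
                mask[ni, nj] = True
                q.append((ni, nj))
    return mask
```

One limitation noted in such lectures is visible even here: the result depends on the seed choice and on the tolerance, and a fixed seed-value criterion leaks across smooth intensity ramps.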
Image Filtering, Enhancement, Noise Reduction, and Signal Processing • Medical Image Registration • Medical Image Segmentation • Medical Image Visualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images • Deep Learning in Radiology
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing.
This document provides an introduction to digital image processing. It defines what a digital image is as a finite set of pixels representing a two-dimensional scene. Digital image processing is described as focusing on improving images for human interpretation and processing images for machine perception. The history of digital image processing is outlined from early applications in newspapers to current uses in fields like medicine, astronomy, and industrial inspection. Key stages of digital image processing are identified as image acquisition, enhancement, restoration, morphological processing, segmentation, representation, object recognition, color processing, and compression.
This document provides an introduction to digital image processing. It defines what a digital image is as a finite set of pixels representing a two-dimensional scene. Digital image processing is described as focusing on improving images for human interpretation and processing images for machine perception. The history of digital image processing is outlined from early applications in newspapers to current uses in fields like medicine, space exploration, and more. Key stages of digital image processing are identified as image acquisition, enhancement, restoration, morphological processing, segmentation, representation, object recognition, compression, and color processing.
This presentation discusses digital image processing. It begins with definitions of digital images and digital image processing. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The history of digital image processing is then reviewed from the 1920s to today. Key examples of applications like medical imaging, satellite imagery, and industrial inspection are provided. The main stages of digital image processing are outlined, including image acquisition, enhancement, restoration, segmentation, and compression. The document concludes with an overview of a system for automatic face recognition using color-based segmentation.
The document is an introduction to a course on digital image processing. It begins with definitions of digital images and digital image processing. It then provides a brief history of digital image processing, highlighting early applications in newspapers and space exploration. It also gives examples of current applications in areas like medicine, mapping, industrial inspection, and human-computer interfaces. Finally, it outlines some key stages in digital image processing pipelines like image acquisition, enhancement, restoration, segmentation, and compression.
Digital Image Processing_ ch1 introduction-2003, by Malik obeisat
The document provides an introduction to digital image processing. It defines a digital image as a finite set of digital values representing a two-dimensional image. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The document outlines the history of digital image processing and provides examples of its use in applications such as image enhancement, medical imaging, satellite imagery, and industrial inspection. It also describes common stages in digital image processing like image acquisition, enhancement, restoration, segmentation, and compression.
Digital image processing involves techniques to improve and analyze digital images. It focuses on tasks like enhancing images for human interpretation, processing images for machine applications, and processing image data for storage and transmission. Key stages in digital image processing include image acquisition, enhancement, restoration, segmentation, and representation. Digital image processing has a long history and is now widely used in applications like medical imaging, satellite imagery analysis, industrial inspection, and law enforcement.
The document provides a history of digital image processing from the early 1920s to present day. It discusses some of the earliest applications including transmitting newspaper images via submarine cable. Major developments occurred in the 1960s with improved computing enabling enhanced images from space missions. Digital image processing began being used for medical applications in the 1970s. The field has since expanded significantly with uses in areas like astronomy, art, medicine, law enforcement, and more. The document also defines digital images and digital image processing, and outlines some key stages in processing including acquisition, restoration, segmentation, and representation.
The document provides an introduction to digital image processing. It discusses that digital image processing deals with analyzing and manipulating digital images using computer algorithms and software. Digital images are represented as a matrix of pixels, where each pixel has a numeric value representing characteristics like brightness or color. Some key applications of digital image processing mentioned include medical imaging, remote sensing, machine vision, and image enhancement.
Digital image processing involves representing images as arrays of pixels and then processing those pixels to improve or analyze the image. It has applications in fields like medicine, mapping, law enforcement, and human-computer interfaces. The key stages of digital image processing include image acquisition, enhancement, restoration, morphological processing, segmentation, object recognition, representation and description, compression, and color image processing.
This document discusses digital image processing. It defines a digital image and digital image processing. The history of digital image processing is covered from the 1920s to today. Examples of applications are given, including image enhancement, medical imaging, industrial inspection, and more. The key stages of digital image processing are outlined, such as image acquisition, enhancement, restoration, segmentation, and others.
This document provides an introduction to a course on digital image processing. It discusses what a digital image is, defines digital image processing, and outlines the history and key applications of the field. The lecture will cover the definition of a digital image, the tasks of digital image processing, the history and evolution of the field from the 1920s to today, examples of applications in areas like medicine, satellite imagery, industrial inspection, and human-computer interfaces, and the main stages of digital image processing work including image acquisition, enhancement, restoration, and recognition.
CV_2 Image Processing
1. Computer Vision
Chap.2 Image Processing
SUB CODE: 3171614
SEMESTER: 7TH IT
PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
2. Outline
• Pixel transforms
• Color transforms
• Histogram processing & equalization
• Filtering
• Convolution
• Fourier transformation and its applications in sharpening
• Blurring and noise removal
Prepared by: Prof. Khushali B Kathiriya
2
3. What is Image Processing?
4. “One picture is worth more than ten thousand words”
Anonymous
5. What is Digital Image Processing?
• Digital image processing focuses on two major tasks
• Improvement of pictorial information for human interpretation
• Processing of image data for storage, transmission and representation for autonomous machine perception
6. What is DIP? (cont…)
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes
• Low Level Process: Input: Image. Output: Image. Examples: noise removal, image sharpening
• Mid Level Process: Input: Image. Output: Attributes. Examples: object recognition, segmentation
• High Level Process: Input: Attributes. Output: Understanding. Examples: scene understanding, autonomous navigation
7. History of Digital Image Processing
Early 1920s: One of the first applications of digital imaging was in the newspaper industry
• The Bartlane cable picture transmission service
• Images were transferred by submarine cable between London and New York
• Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
Early digital image
8. History of DIP (cont…)
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
• New reproduction processes based on photographic techniques
• Increased number of tones in reproduced images
(Figures: improved digital image; early 15-tone digital image.)
9. History of DIP (cont…)
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
• 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
• Such techniques were used in other space missions including the Apollo landings
(Figure: a picture of the moon taken by the Ranger 7 probe minutes before landing.)
10. History of DIP (cont…)
1970s: Digital image processing begins to be used in medical applications
• 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
(Figure: typical head slice CAT image.)
11. History of DIP (cont…)
1980s - Today: The use of digital image processing techniques has exploded and they are now used for all kinds of tasks in all kinds of areas
• Image enhancement/restoration
• Artistic effects
• Medical visualisation
• Industrial inspection
• Law enforcement
• Human computer interfaces
12. Examples: Image Enhancement
One of the most common uses of DIP techniques: improve quality, remove noise, etc.
13. Examples: The Hubble Telescope
• Launched in 1990, the Hubble telescope can take images of very distant objects
• However, an incorrect mirror made many of Hubble's images useless
• Image processing techniques were used to fix this
14. Examples: Artistic Effects
Artistic effects are used to make images more visually appealing, to add special effects and to make composite images
15. Examples: Medicine
Take a slice from an MRI scan of a canine heart, and find the boundaries between types of tissue
• Image with gray levels representing tissue density
• Use a suitable filter to highlight edges
(Figures: original MRI image of a dog heart; edge detection image.)
16. Examples: GIS
Geographic Information Systems
• Digital image processing techniques are used extensively to manipulate satellite imagery
• Terrain classification
• Meteorology
17. Examples: GIS (cont…)
Night-Time Lights of the World data set
• Global inventory of human settlement
• Not hard to imagine the kind of analysis that might be done using this data
18. Examples: Industrial Inspection
• Human operators are expensive, slow and unreliable
• Make machines do the job instead
• Industrial vision systems are used in all kinds of industries
19. Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
• Machine inspection is used to determine that all components are present and that all solder joints are acceptable
• Both conventional imaging and x-ray imaging are used
20. Examples: Law Enforcement
Image processing techniques are used extensively by law enforcers
• Number plate recognition for speed cameras/automated toll systems
• Fingerprint recognition
• Enhancement of CCTV images
21. Examples: HCI
Try to make human computer interfaces more natural
• Face recognition
• Gesture recognition
22. Key Stages in Digital Image Processing
23. Key Stages in Digital Image Processing
Starting from the problem domain, the stages are:
• Image Acquisition
• Image Enhancement
• Image Restoration
• Colour Image Processing
• Image Compression
• Morphological Processing
• Segmentation
• Representation & Description
• Object Recognition
24. Key Stages in Digital Image Processing: Image Acquisition
25. Key Stages in Digital Image Processing: Image Enhancement
26. Key Stages in Digital Image Processing: Image Restoration
27. Key Stages in Digital Image Processing: Morphological Processing
28. Key Stages in Digital Image Processing: Segmentation
29. Key Stages in Digital Image Processing: Object Recognition
30. Key Stages in Digital Image Processing: Representation & Description
31. Key Stages in Digital Image Processing: Image Compression
32. Key Stages in Digital Image Processing: Colour Image Processing
34. Key Stages in Digital Image Processing: Image Acquisition
35. Key Stages in Digital Image Processing: Image Enhancement
(Slides 24–35 each repeat the stage diagram from slide 23 with the named stage highlighted.)
36. A Note About Grey Levels
• So far when we have spoken about image grey level values we have said they are in the range [0, 255], where 0 is black and 255 is white
• There is no reason why we have to use this range; the range [0, 255] stems from display technologies
• For many image processing operations, grey levels are assumed to be given in the range [0.0, 1.0]
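As a quick illustration of switching between the two conventions, here is a minimal Python/NumPy sketch (the helper names `to_unit_range` and `to_byte_range` are ours, not from the slides; the course labs use MATLAB and C#, but the idea carries over directly):

```python
import numpy as np

def to_unit_range(img_u8):
    # Map 8-bit grey levels [0, 255] to floats in [0.0, 1.0]
    return img_u8.astype(np.float64) / 255.0

def to_byte_range(img_f):
    # Map floats in [0.0, 1.0] back to 8-bit grey levels [0, 255]
    return np.clip(np.round(img_f * 255.0), 0, 255).astype(np.uint8)

img = np.array([[0, 128, 255]], dtype=np.uint8)
unit = to_unit_range(img)       # 0 -> 0.0, 255 -> 1.0
restored = to_byte_range(unit)  # round-trips back to the original 8-bit values
```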
37. What is a Digital Image? (Cont.)
Common image formats include:
• 1 sample per point (B&W or Grayscale)
• 3 samples per point (Red, Green, and Blue)
• 4 samples per point (Red, Green, Blue, and "Alpha", a.k.a. Opacity)
38. What Is Image Enhancement?
Image enhancement is the process of making images more useful
The reasons for doing this include:
• Highlighting interesting detail in images
• Removing noise from images
• Making images more visually appealing
43. Spatial & Frequency Domains
There are two broad categories of image enhancement techniques
• Spatial domain techniques
• Direct manipulation of image pixels
• Frequency domain techniques
• Manipulation of Fourier transform or wavelet transform of an image
45. Image Histograms
• The histogram of an image shows us the distribution of grey levels in the image
• Massively useful in image processing, especially in segmentation
(Figure: histogram plot of frequencies against grey levels.)
56. Histogram Examples (cont…)
• A selection of images and their histograms
• Notice the relationships between the images and their histograms
• Note that the high contrast image has the most evenly spaced histogram
57. Histogram Example
• Plot the histogram for the following data.
Grey Level Number of Pixels
0 40
1 20
2 10
3 15
4 10
5 3
6 2
58–59. Histogram Example
• Calculate the probability of occurrence of each grey level (the normalized histogram, or probability density function): P(rk) = nk/n

Grey Level   Number of Pixels (nk)   P(rk) = nk/n
0            40                      0.40
1            20                      0.20
2            10                      0.10
3            15                      0.15
4            10                      0.10
5            3                       0.03
6            2                       0.02
                         n = 100
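The table above can be reproduced with a few lines of Python; the `pixels` list below is a hypothetical flat image built to match the slide's counts:

```python
from collections import Counter

# A hypothetical flat "image" whose grey-level counts match the slide:
# level 0 x40, 1 x20, 2 x10, 3 x15, 4 x10, 5 x3, 6 x2  (n = 100)
pixels = [0] * 40 + [1] * 20 + [2] * 10 + [3] * 15 + [4] * 10 + [5] * 3 + [6] * 2

counts = Counter(pixels)            # histogram: n_k pixels at grey level k
n = len(pixels)                     # total number of pixels
pdf = {k: nk / n for k, nk in sorted(counts.items())}   # P(r_k) = n_k / n
```

The probabilities necessarily sum to 1, which is a handy sanity check on any normalized histogram.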
61. Linear Stretching
• One way to increase the dynamic range is to use a technique known as histogram stretching
• In this method, we do not alter the basic shape of the histogram, but we spread it so as to cover the entire dynamic range:

s = T(r) = ((Smax - Smin) / (rmax - rmin)) × (r - rmin) + Smin

• Smax -> maximum grey level of the output image
• Smin -> minimum grey level of the output image
• rmax -> maximum grey level of the input image
• rmin -> minimum grey level of the input image
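A small Python/NumPy sketch of the stretching formula (the `stretch` helper is our own name, not from the slides; rounding follows NumPy's round-half-to-even, which happens to reproduce the worked example on the following slides):

```python
import numpy as np

def stretch(img, s_min=0, s_max=7):
    # s = T(r) = ((s_max - s_min) / (r_max - r_min)) * (r - r_min) + s_min
    r_min, r_max = int(img.min()), int(img.max())
    scale = (s_max - s_min) / (r_max - r_min)
    return np.round((img - r_min) * scale + s_min).astype(int)

levels = np.array([2, 3, 4, 5, 6])   # occupied grey levels of the slide-62 image
stretched = stretch(levels)          # -> [0, 2, 4, 5, 7]
```

For the slide-62 data (levels 2–6 stretched to [0, 7]) this maps 2, 3, 4, 5, 6 to 0, 2, 4, 5, 7, matching the slide-66 result.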
62. Histogram Stretching : Example
• Perform histogram stretching so that the new image has a dynamic range of [0,7]
Grey Level Number of Pixels
0 0
1 0
2 50
3 60
4 50
5 20
6 10
7 0
66. Histogram Stretching
• Modified histogram in the dynamic range of [0,7]
Grey Level Number of Pixels
0 50
1 0
2 60
3 0
4 50
5 20
6 0
7 10
67. Try it out yourself: Histogram Stretching
• Modified histogram in the dynamic range of [0,7]
Grey Level Number of Pixels
0 100
1 90
2 85
3 70
4 0
5 0
6 0
7 0
68. Histogram Stretching
• Modified histogram in the dynamic range of [0,7]
Grey Level Number of Pixels
0 100
1 0
2 90
3 0
4 0
5 85
6 0
7 70
70. Histogram Equalisation
• Spreading out the frequencies in an image (or equalising the image) is a simple way to improve dark or washed out images
• The formula for histogram equalisation is given where
• rk: input intensity
• sk: processed intensity
• k: the intensity range (e.g. 0.0 – 1.0)
• nj: the frequency of intensity j
• n: the sum of all frequencies
sk = T(rk) = Σ_(j=1..k) pr(rj) = Σ_(j=1..k) nj/n
75. Histogram Equalization
• Histogram ---> PDF ---> CDF ---> Equalized histogram
• CDF: cumulative distribution function

Sk = T(rk) = Σ_(r=0..k) pr(r)
76–79. Equalize the given histogram
(L = 8 grey levels, n = 4096 pixels; Sk = Σ_(r=0..k) P(r) is the CDF)

Grey Level   nk     PDF P(rk) = nk/n   CDF Sk   (L-1)*Sk   Round off
0            790    0.19               0.19     1.33       1
1            1023   0.25               0.44     3.08       3
2            850    0.21               0.65     4.55       5
3            656    0.16               0.81     5.67       6
4            329    0.08               0.89     6.23       6
5            245    0.06               0.95     6.65       7
6            122    0.03               0.98     6.86       7
7            81     0.02               1.00     7.00       7
             4096
80. Equalized Grey Level

Grey Level   Round off
0            1
1            3
2            5
3            6
4            6
5            7
6            7
7            7

Grey Level   Old Number of Pixels   New Number of Pixels
0            790                    0
1            1023                   790
2            850                    0
3            656                    1023
4            329                    0
5            245                    850
6            122                    656+329 = 985
7            81                     245+122+81 = 448
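The whole PDF → CDF → round-off pipeline is a few lines of NumPy; this sketch reproduces the worked example (variable names are ours, not from the slides):

```python
import numpy as np

# Grey-level counts from the worked example: L = 8 levels, n = 4096 pixels
nk = np.array([790, 1023, 850, 656, 329, 245, 122, 81])
L = 8
n = nk.sum()

pdf = nk / n                                   # P(r_k) = n_k / n
cdf = np.cumsum(pdf)                           # S_k, the CDF
mapping = np.round((L - 1) * cdf).astype(int)  # new grey level for each old level

# Redistribute the old counts onto the new grey levels
new_counts = np.bincount(mapping, weights=nk, minlength=L).astype(int)
```

Running this gives the mapping 1, 3, 5, 6, 6, 7, 7, 7 and the redistributed counts shown in the slide-80 table.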
84. Try it yourself (Equalize the given histogram)
Grey Level Number of Pixels
0 100
1 90
2 50
3 20
4 0
5 0
6 0
7 0
85. Try it yourself (Equalize the given histogram)
87. What is Image Restoration?
• Image restoration attempts to restore images that have been degraded
• Identify the degradation process and attempt to reverse it
• Similar to image enhancement, but more objective
88. Image restoration vs. image enhancement
• Enhancement:
• largely a subjective process
• a priori knowledge about the degradation is not a must (sometimes no degradation is involved)
• procedures are heuristic and take advantage of the psychophysical aspects of the human visual system
• Restoration:
• more of an objective process
• images are degraded
• tries to recover the images by using knowledge about the degradation
89. Noise and Images
• The sources of noise in digital images arise during image acquisition (digitization) and transmission
• Imaging sensors can be affected by ambient conditions
• Interference can be added to an image during transmission
90. Noise Model
• We can consider a noisy image to be modelled as follows:

g(x, y) = f(x, y) + η(x, y)

• where f(x, y) is the original image pixel, η(x, y) is the noise term and g(x, y) is the resulting noisy pixel
• If we can estimate the model that the noise in an image is based on, this will help us to figure out how to restore the image
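A Python/NumPy sketch of the additive model g(x, y) = f(x, y) + η(x, y), using a synthetic flat image; the noise parameters and the 5% impulse fraction below are illustrative choices of ours, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)            # synthetic "original" image f(x, y)

# Additive Gaussian noise: g(x, y) = f(x, y) + eta(x, y)
eta = rng.normal(loc=0.0, scale=10.0, size=f.shape)
g = f + eta

# Impulse (salt and pepper) noise: corrupt a random ~5% of pixels each way
sp = f.copy()
u = rng.random(f.shape)
sp[u < 0.05] = 0.0       # pepper (black impulses)
sp[u > 0.95] = 255.0     # salt (white impulses)
```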
91. Noise Models
There are many different models for the image noise term η(x, y):
(Figure: plots of the Gaussian, Rayleigh, Erlang, exponential, uniform and impulse PDFs.)
• Gaussian
• Most common model
• Rayleigh
• Erlang
• Exponential
• Uniform
• Impulse
• Salt and pepper noise
92. Noise Example
• The test pattern to the right is ideal for demonstrating the addition of noise
• The following slides will show the result of adding noise based on various models to this image
(Figure: test pattern image and its histogram.)
96. Filtering to Remove Noise: Arithmetic Mean Filter
• We can use spatial filters of different kinds to remove different kinds of noise
• The arithmetic mean filter is a very simple one and is calculated as follows:
• This is implemented as the simple smoothing filter; it blurs the image to remove noise
f̂(x, y) = (1/mn) Σ_(s,t)∈Sxy g(s, t)

For a 3×3 neighbourhood this is the smoothing kernel with every weight equal to 1/9:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
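A straightforward (unoptimised) Python/NumPy sketch of the arithmetic mean filter; handling edges by replicating border pixels is our choice, since the slides do not specify boundary handling:

```python
import numpy as np

def arithmetic_mean_filter(g, size=3):
    # f_hat(x, y) = (1/mn) * sum of g(s, t) over the size-by-size window S_xy
    pad = size // 2
    padded = np.pad(g, pad, mode='edge')   # replicate border pixels (our choice)
    out = np.zeros(g.shape, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            out[x, y] = padded[x:x + size, y:y + size].mean()
    return out

g = np.zeros((5, 5))
g[2, 2] = 9.0                          # a single bright "noise" spike
smoothed = arithmetic_mean_filter(g)   # the spike is blurred over its 3x3 window
```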
97. Other Means
There are different kinds of mean filters all of which exhibit slightly different behaviour:
• Geometric Mean
• Harmonic Mean
• Contraharmonic Mean
98. Other Means (cont…)
• There are other variants on the mean which can give different performance
• Geometric Mean: achieves similar smoothing to the arithmetic mean, but tends to lose less image detail
f̂(x, y) = [ Π_(s,t)∈Sxy g(s, t) ]^(1/mn)
99. Other Means (cont…)
Harmonic Mean:
• Works well for salt noise, but fails for pepper noise
• Also does well for other kinds of noise such as Gaussian noise
f̂(x, y) = mn / Σ_(s,t)∈Sxy (1 / g(s, t))
100. Other Means (cont…)
• Contraharmonic Mean:
• Q is the order of the filter and adjusting its value changes the filter’s behaviour
• Positive values of Q eliminate pepper noise
• Negative values of Q eliminate salt noise
f̂(x, y) = Σ_(s,t)∈Sxy g(s, t)^(Q+1) / Σ_(s,t)∈Sxy g(s, t)^Q
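The contraharmonic mean generalises the others: Q = 0 gives the arithmetic mean and Q = -1 the harmonic mean. A minimal NumPy sketch (edge replication is our assumption; note that Q < 0 cannot handle zero-valued pixels directly):

```python
import numpy as np

def contraharmonic_filter(g, size=3, Q=1.5):
    # f_hat = sum g^(Q+1) / sum g^Q over the window S_xy
    # Q > 0 removes pepper noise; Q < 0 removes salt noise
    pad = size // 2
    padded = np.pad(g.astype(float), pad, mode='edge')
    out = np.zeros(g.shape, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = padded[x:x + size, y:y + size]
            out[x, y] = np.power(w, Q + 1).sum() / np.power(w, Q).sum()
    return out

g = np.full((5, 5), 8.0)
g[2, 2] = 0.0                              # one pepper pixel
cleaned = contraharmonic_filter(g, Q=1.5)  # positive Q pulls it back up to 8
```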
102. Noise Removal Examples (cont…)
Image corrupted by pepper noise
Result of filtering above with a 3*3 contraharmonic filter, Q = 1.5
Prepared by: Prof. Khushali B Kathiriya
104
103. Noise Removal Examples (cont…)
Image corrupted by salt noise, and the result of filtering with a 3×3 contraharmonic filter (Q = −1.5)
104. Contraharmonic Filter: Here Be Dragons
Choosing the wrong value for Q when using the contraharmonic filter can have
drastic results
106. Order Statistics Filters
• Spatial filters that are based on ordering the pixel values that make up the
neighbourhood operated on by the filter
• Useful spatial filters include
• Median filter
• Max and min filter
• Midpoint filter
• Alpha trimmed mean filter
107. Median Filter
• Median Filter:
• Excellent at noise removal, without the smoothing effects that can occur with
other smoothing filters
• Particularly good when salt and pepper noise is present
f̂(x, y) = median_{(s,t) ∈ S_xy} { g(s, t) }
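A short sketch of the median filter (names and edge-replication padding are this example's own), showing that isolated salt and pepper pixels vanish without blurring the flat background:

```python
import numpy as np

def median_filter(img, size=3):
    # Median of each size x size neighbourhood; borders replicate edge pixels.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.median(padded[x:x + size, y:y + size])
    return out

img = np.full((5, 5), 100)
img[1, 1], img[3, 3] = 255, 0     # one salt and one pepper pixel
cleaned = median_filter(img)      # both impulses are removed exactly
```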
108. Max and Min Filter
Max Filter:
Min Filter:
Max filter is good for pepper noise and min is good for salt noise
Max: f̂(x, y) = max_{(s,t) ∈ S_xy} { g(s, t) }

Min: f̂(x, y) = min_{(s,t) ∈ S_xy} { g(s, t) }
109. Midpoint Filter
Midpoint Filter:
Good for random Gaussian and uniform noise
f̂(x, y) = ½ [ max_{(s,t) ∈ S_xy} { g(s, t) } + min_{(s,t) ∈ S_xy} { g(s, t) } ]
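The max, min, and midpoint filters can be sketched in one function (the `stat` switch and edge-replication padding are this example's conventions, not from the slides):

```python
import numpy as np

def order_filter(img, size=3, stat="midpoint"):
    # Max, min, and midpoint statistics over each size x size neighbourhood.
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            w = padded[x:x + size, y:y + size]
            if stat == "max":
                out[x, y] = w.max()                    # removes pepper (dark) noise
            elif stat == "min":
                out[x, y] = w.min()                    # removes salt (bright) noise
            else:
                out[x, y] = (w.max() + w.min()) / 2    # midpoint
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 0.0                           # a pepper pixel
depeppered = order_filter(img, stat="max")
```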
110. Alpha-Trimmed Mean Filter
• Alpha-Trimmed Mean Filter:
• We can delete the d/2 lowest and d/2 highest grey levels
• So gr(s, t) represents the remaining mn – d pixels
f̂(x, y) = (1/(mn − d)) Σ_{(s,t) ∈ S_xy} g_r(s, t)
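A sketch of the alpha-trimmed mean (function name, `d` default, and edge padding are this example's choices): sort each neighbourhood, drop the d/2 lowest and d/2 highest values, and average the remaining mn − d pixels.

```python
import numpy as np

def alpha_trimmed_mean(img, size=3, d=2):
    # Drop the d/2 smallest and d/2 largest values in each window, then average.
    pad, half = size // 2, d // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            w = np.sort(padded[x:x + size, y:y + size].ravel())
            out[x, y] = w[half:w.size - half].mean()
    return out

# d = 0 gives the arithmetic mean; large d approaches the median.
img = np.full((5, 5), 100.0)
img[2, 2], img[2, 3] = 255.0, 0.0    # one salt and one pepper pixel
cleaned = alpha_trimmed_mean(img, d=2)
```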
111. Noise Removal Examples
Image corrupted by salt-and-pepper noise, and the results of 1, 2, and 3 passes with a 3×3 median filter
112. Noise Removal Examples (cont…)
Image corrupted by pepper noise and image corrupted by salt noise, with the results of filtering with a 3×3 max filter and a 3×3 min filter (max removes pepper, min removes salt)
113. Noise Removal Examples (cont…)
Image corrupted by uniform noise and further corrupted by salt-and-pepper noise, filtered with 5×5 arithmetic mean, median, geometric mean, and alpha-trimmed mean filters
114. Periodic Noise
• Typically arises due to
electrical or electromagnetic
interference
• Gives rise to regular noise
patterns in an image
• Frequency domain techniques
in the Fourier domain are
most effective at removing
periodic noise
115. Band Reject Filters
• Removing periodic noise from an image involves removing a particular range of frequencies from that image
• Band reject filters can be used for this purpose
• An ideal band reject filter is given as follows:
H(u, v) = 1  if D(u, v) < D₀ − W/2
        = 0  if D₀ − W/2 ≤ D(u, v) ≤ D₀ + W/2
        = 1  if D(u, v) > D₀ + W/2

where D(u, v) is the distance from the centre of the frequency rectangle, D₀ is the centre of the rejected band, and W is the band width
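The ideal band reject transfer function is easy to build as a NumPy mask (the function name and the 64×64 test shape are this example's choices):

```python
import numpy as np

def ideal_band_reject(shape, D0, W):
    # H(u, v) = 0 inside the band D0 - W/2 <= D(u, v) <= D0 + W/2, 1 elsewhere.
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from the centre
    H = np.ones(shape)
    H[np.abs(D - D0) <= W / 2] = 0.0
    return H

H = ideal_band_reject((64, 64), D0=16, W=4)
```

Multiplying the centred Fourier spectrum by this mask removes the ring of frequencies at distance D₀.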
116. Band Reject Filters (cont…)
• The ideal band reject filter is shown below, along with Butterworth and Gaussian
versions of the filter
Ideal band reject filter; Butterworth band reject filter (of order 1); Gaussian band reject filter
117. Band Reject Filter Example
Image corrupted by sinusoidal noise; Fourier spectrum of the corrupted image; Butterworth band reject filter; filtered image
118. Adaptive Filters
• The filters discussed so far are applied to an entire image without any regard
for how image characteristics vary from one point to another
• The behaviour of adaptive filters changes depending on the characteristics of
the image inside the filter region
• We will take a look at the adaptive median filter
119. Adaptive Median Filtering
• The median filter performs relatively well on impulse noise as long as the
spatial density of the impulse noise is not large
• The adaptive median filter can handle much more spatially dense impulse
noise, and also performs some smoothing for non-impulse noise
• The key insight in the adaptive median filter is that the filter size changes
depending on the characteristics of the image
120. Adaptive Median Filtering (cont…)
Remember that filtering looks at each pixel in the original image in turn and generates a new filtered
pixel
First examine the following notation:
• zmin = minimum grey level in Sxy
• zmax = maximum grey level in Sxy
• zmed = median of grey levels in Sxy
• zxy = grey level at coordinates (x, y)
• Smax = maximum allowed size of Sxy
121. Adaptive Median Filtering (cont…)
Level A: A1 = zmed – zmin
A2 = zmed – zmax
If A1 > 0 and A2 < 0, Go to level B
Else increase the window size
If window size ≤ Smax repeat level A
Else output zmed
Level B: B1 = zxy – zmin
B2 = zxy – zmax
If B1 > 0 and B2 < 0, output zxy
Else output zmed
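The Level A / Level B logic above can be sketched directly (the function name, edge padding, and the gradient test image are this example's choices, not from the slides):

```python
import numpy as np

def adaptive_median(img, S_max=7):
    # Adaptive median filter: the window grows from 3x3 up to S_max x S_max.
    pad = S_max // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            size = 3
            while True:
                k = size // 2
                cx, cy = x + pad, y + pad
                w = padded[cx - k:cx + k + 1, cy - k:cy + k + 1]
                z_min, z_max, z_med = w.min(), w.max(), np.median(w)
                if z_min < z_med < z_max:                     # Level A passed
                    z_xy = padded[cx, cy]
                    # Level B: keep z_xy unless it is itself an impulse
                    out[x, y] = z_xy if z_min < z_xy < z_max else z_med
                    break
                size += 2                                     # grow the window
                if size > S_max:
                    out[x, y] = z_med
                    break
    return out

img = np.arange(49).reshape(7, 7)   # smooth gradient "image"
img[3, 3] = 255                      # one impulse
cleaned = adaptive_median(img)       # impulse replaced, ordinary detail preserved
```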
122. Adaptive Median Filtering (cont…)
The key to understanding the algorithm is to remember that the adaptive median
filter has three purposes:
• Remove impulse noise
• Provide smoothing of other noise
• Reduce distortion
123. Adaptive Filtering Example
Image corrupted by salt-and-pepper noise with probabilities Pa = Pb = 0.25; result of filtering with a 7×7 median filter; result of adaptive median filtering with Smax = 7
124. Summary
• Restoration is slightly more objective than enhancement
• Spatial domain techniques are particularly useful for removing random
noise
• Frequency domain techniques are particularly useful for removing
periodic noise
126. Contents
In this lecture we will look at spatial filtering techniques:
• Neighbourhood operations
• What is spatial filtering?
• Smoothing operations
• What happens at the edges?
• Correlation and convolution
127. Neighbourhood Operations
• Neighbourhood operations simply operate on a larger neighbourhood of pixels
than point operations
• Neighbourhoods are
mostly a rectangle
around a central pixel
• Any size rectangle
and any shape filter
are possible
[Figure: a rectangular neighbourhood around point (x, y) in image f(x, y)]
128. Simple Neighbourhood Operations
Some simple neighbourhood operations include:
• Min: Set the pixel value to the minimum in the neighbourhood
• Max: Set the pixel value to the maximum in the neighbourhood
• Median: The median value of a set of numbers is the midpoint value in that set (e.g.
from the set [1, 7, 15, 18, 24] 15 is the median). Sometimes the median works better
than the average
129. Simple Neighbourhood Operations Example
123 127 128 119 115 130
140 145 148 153 167 172
133 154 183 192 194 191
194 199 207 210 198 195
164 170 175 162 173 151
Original Image
Enhanced Image
130. The Spatial Filtering Process
3×3 filter:       3×3 neighbourhood of pixels in image f(x, y):

r s t             a b c
u v w             d e f
x y z             g h i

e_processed = v·e + r·a + s·b + t·c + u·d + w·f + x·g + y·h + z·i

The above is repeated for every pixel in the original image to generate the filtered image
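The weighted-sum step above can be sketched as plain correlation in NumPy (zero padding at the borders and the helper name are this example's choices):

```python
import numpy as np

def correlate(img, kernel):
    # Slide the kernel over the image and take the weighted sum at each position.
    a, b = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img.astype(float), ((a, a), (b, b)))   # zero padding
    out = np.zeros(img.shape)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            region = padded[x:x + kernel.shape[0], y:y + kernel.shape[1]]
            out[x, y] = (kernel * region).sum()
    return out

box = np.full((3, 3), 1 / 9)          # the 3x3 averaging filter
img = np.arange(25.0).reshape(5, 5)
filtered = correlate(img, box)
```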
131. Spatial Filtering: Equation Form
g(x, y) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} w(s, t) f(x + s, y + t)

Filtering can be given in equation form as shown above
Notations are based on the image shown to the left
132. Smoothing Spatial Filters
One of the simplest spatial filtering operations we can perform is a smoothing
operation
• Simply average all of the pixels in a neighbourhood around a central value
• Especially useful
in removing noise
from images
• Also useful for
highlighting gross
detail
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Simple averaging filter (box filter)
133. Smoothing Spatial Filtering
3×3 neighbourhood of pixels:       3×3 smoothing filter:

104 100 108                        1/9 1/9 1/9
 99 106  98                        1/9 1/9 1/9
 95  90  85                        1/9 1/9 1/9

e = 1/9·106 + 1/9·104 + 1/9·100 + 1/9·108 + 1/9·99 + 1/9·98 + 1/9·95 + 1/9·90 + 1/9·85 = 98.3333

The above is repeated for every pixel in the original image to generate the smoothed image
134. Image Smoothing Example
• The image at the top left
is an original image of
size 500*500 pixels
• The subsequent images
show the image after
filtering with an averaging
filter of increasing sizes
• 3, 5, 9, 15 and 35
• Notice how detail begins
to disappear
135. Correlation
• The equation of correlation is defined as follows:

g(x, y) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} ω(s, t) f(x + s, y + t)

• It performs a dot product between the filter weights and the underlying image values
143. Weighted Smoothing Filters
• More effective smoothing filters can be generated by allowing different pixels in
the neighbourhood different weights in the averaging function
• Pixels closer to the
central pixel are more
important
• Often referred to as weighted averaging
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16

Weighted averaging filter
144. Another Smoothing Example
By smoothing the original image we get rid of lots of the finer detail
which leaves only the gross features for thresholding
Original Image Smoothed Image Thresholded Image
145. Averaging Filter Vs. Median Filter Example
• Filtering is often used to remove noise from images
• Sometimes a median filter works better than an averaging filter
Original image with noise; image after averaging filter; image after median filter
146. Averaging Filter Vs. Median Filter Example
147. Averaging Filter Vs. Median Filter Example
148. Averaging Filter Vs. Median Filter Example
149. Simple Neighbourhood Operations Example
123 127 128 119 115 130
140 145 148 153 167 172
133 154 183 192 194 191
194 199 207 210 198 195
164 170 175 162 173 151
150. Strange Things Happen At The Edges!
• At the edges of an image we are missing pixels to form a neighbourhood
151. Strange Things Happen At The Edges! (cont…)
There are a few approaches to dealing with missing edge pixels:
• Omit missing pixels
• Only works with some filters
• Can add extra code and slow down processing
• Pad the image
• Typically with either all white or all black pixels
• Replicate border pixels
• Truncate the image
• Allow pixels to wrap around the image
• Can cause some strange image artefacts
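The padding strategies listed above map directly onto NumPy's `np.pad` modes; a quick sketch on a one-row image:

```python
import numpy as np

row = np.array([[1, 2, 3]])

# Pad one pixel on the left and right of the row only.
zero = np.pad(row, ((0, 0), (1, 1)), mode="constant")  # pad with black (zeros)
edge = np.pad(row, ((0, 0), (1, 1)), mode="edge")      # replicate border pixels
wrap = np.pad(row, ((0, 0), (1, 1)), mode="wrap")      # wrap around the image
```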
152. Simple Neighbourhood Operations Example
123 127 128 119 115 130
140 145 148 153 167 172
133 154 183 192 194 191
194 199 207 210 198 195
164 170 175 162 173 151
153. Strange Things Happen At The Edges! (cont…)
Original image; filtered image with zero padding; filtered image with replicated edge pixels; filtered image with wrap-around edge pixels
154. Correlation & Convolution
• The filtering we have been talking about so far is referred to as
correlation with the filter itself referred to as the correlation kernel
• Convolution is a similar operation, with just one subtle difference
• For symmetric filters it makes no difference
3×3 filter:       3×3 neighbourhood of pixels:

r s t             a b c
u v w             d e f
x y z             g h i

e_processed = v·e + z·a + y·b + x·c + w·d + u·f + t·g + s·h + r·i

(for convolution the kernel is rotated by 180° before the weighted sum is taken)
155. Convolution
• The equation of convolution is defined as follows:

g(x, y) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} ω(s, t) f(x − s, y − t)

• It performs the same weighted sum as correlation, but with the kernel rotated by 180°
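The "one subtle difference" can be checked numerically: convolution equals correlation with the kernel rotated by 180°. A sketch (helper names are this example's own) — correlating a unit impulse yields the flipped kernel, while convolving it reproduces the kernel:

```python
import numpy as np

def correlate2d(img, k):
    # Plain correlation with zero padding at the borders.
    a, b = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img.astype(float), ((a, a), (b, b)))
    return np.array([[(k * p[x:x + k.shape[0], y:y + k.shape[1]]).sum()
                      for y in range(img.shape[1])]
                     for x in range(img.shape[0])])

def convolve2d(img, k):
    # Convolution = correlation with the kernel rotated by 180 degrees.
    return correlate2d(img, k[::-1, ::-1])

img = np.zeros((5, 5))
img[2, 2] = 1.0                                   # unit impulse
k = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])

conv = convolve2d(img, k)     # reproduces k around the impulse
corr = correlate2d(img, k)    # reproduces the flipped kernel
```

For a symmetric kernel, `k[::-1, ::-1]` equals `k`, which is why the difference vanishes for symmetric filters.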
165. Filter and Frequency
• Frequency – The number of times that a periodic function repeats the same
sequence of values during a unit variation of the independent variable.
• Filter – A device or material for suppressing or minimizing waves or oscillations
of certain frequencies.
166. The Big Idea
• Any function that periodically repeats itself can be
expressed as a sum of sines and cosines of
different frequencies each multiplied by a different
coefficient – a Fourier series
167. The Big Idea (cont…)
Notice how we get closer and closer to the original function as we add more
and more frequencies
168. The Big Idea (cont…)
Frequency domain signal processing example in Excel
169. Complex Numbers
• A complex number, C, is defined as
C = R + jI,
where R is the real part, I is the imaginary part, and j is the imaginary unit, equal to the square root of −1,
i.e. j = √−1
170. The Discrete Fourier Transform (DFT)
• The Discrete Fourier Transform of f(x, y), for x = 0, 1, 2…M-1 and y = 0,1,2…N-1,
denoted by F(u, v), is given by the equation:
for u = 0, 1, 2…M-1 and v = 0, 1, 2…N-1.
F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)}
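In practice this sum is computed with the FFT; `np.fft.fft2` evaluates exactly this expression (the small random test image is this example's own):

```python
import numpy as np

f = np.random.default_rng(0).random((8, 8))      # a small test "image"
F = np.fft.fft2(f)                               # the DFT sum above

# F(0, 0) is the sum of all pixel values (the DC term), so F(0, 0)/MN
# is the mean grey level of the image.
dc = F[0, 0].real
spectrum = np.fft.fftshift(np.log1p(np.abs(F)))  # centred log spectrum for display
```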
171. DFT & Images
• The DFT of a two dimensional image can be visualised by showing the
spectrum of the image component frequencies
DFT
174. DFT & Images (cont…)
DFT
Scanning electron microscope
image of an integrated circuit
magnified ~2500 times
Fourier spectrum of the image
175. DFT & Images (cont…)
176. DFT & Images (cont…)
177. The Inverse DFT
• It is really important to note that the Fourier transform is completely reversible
• The inverse DFT is given by:
for x = 0, 1, 2…M-1 and y = 0, 1, 2…N-1
f(x, y) = (1/MN) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) e^{j2π(ux/M + vy/N)}
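Reversibility is easy to verify numerically with NumPy's FFT pair (the random test image is this example's own):

```python
import numpy as np

f = np.random.default_rng(2).random((8, 8))
f_back = np.real(np.fft.ifft2(np.fft.fft2(f)))   # forward then inverse DFT
# f_back equals f up to floating-point rounding
```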
178. The DFT and Image Processing
To filter an image in the frequency domain:
1. Compute F(u,v) the DFT of the image
2. Multiply F(u,v) by a filter function H(u,v)
3. Compute the inverse DFT of the result
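The three steps can be sketched with an ideal low-pass H(u, v) (the cutoff D0 and the random test image are this example's choices):

```python
import numpy as np

def lowpass_filter(img, D0=10):
    # 1. DFT of the image (shifted so the DC term sits at the centre).
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    # 2. Multiply by an ideal low-pass transfer function H(u, v).
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = (D <= D0).astype(float)
    G = F * H
    # 3. Inverse DFT of the result.
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))

img = np.random.default_rng(1).random((32, 32))
smooth = lowpass_filter(img, D0=4)   # mean preserved, high-frequency detail removed
```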
179. Some Basic Frequency Domain Filters
Low Pass Filter
High Pass Filter
181. Major filter categories
• Typically, filters are classified by examining their
properties in the frequency domain:
(1) Low-pass
(2) High-pass
(3) Band-pass
(4) Band-stop
183. Low-pass filters
(i.e., smoothing filters)
• Preserve low frequencies - useful for noise suppression
[Figure: example low-pass filter in the frequency and time domains]
184. High-pass filters
(i.e., sharpening filters)
• Preserves high frequencies - useful for edge detection
[Figure: example high-pass filter in the frequency and time domains]
185. Band-pass filters
• Preserves frequencies within a certain band
[Figure: example band-pass filter in the frequency and time domains]
186. Band-stop filters
• What do they look like?
[Figure: band-pass vs. band-stop frequency responses]
187. Frequency Domain Methods
Case 1: H(u,v) is specified in
the frequency domain.
Case 2: h(x,y) is specified in
the spatial domain.