This document is an introduction to digital image processing that covers the definition of digital images and of digital image processing itself. It provides a brief history of the field and examples of applications like medical imaging, satellite imagery analysis, and industrial inspection, and concludes with an overview of the key stages in digital image processing like image acquisition, enhancement, and representation.
Digital Image Processing: ch1 introduction (2003), by Malik Obeisat
The document provides an introduction to digital image processing. It defines a digital image as a finite set of digital values representing a two-dimensional image. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The document outlines the history of digital image processing and provides examples of its use in applications such as image enhancement, medical imaging, satellite imagery, and industrial inspection. It also describes common stages in digital image processing like image acquisition, enhancement, restoration, segmentation, and compression.
This document outlines the syllabus for a digital image processing course. It introduces key concepts like what a digital image is, areas of digital image processing like low-level, mid-level and high-level processes, a brief history of the field, applications in different domains, and fundamental steps involved. The course will cover topics in digital image fundamentals and processing techniques like enhancement, restoration, compression and segmentation. It will be taught using MATLAB and C# in the labs. Assessment will include homework, exams, labs and a final project.
1. Image restoration aims to reconstruct or recover an image that has been distorted by known degradation processes.
2. Degradation can occur during image acquisition, display, or processing due to factors like sensor noise, blurring, motion, or atmospheric effects.
3. Restoration techniques model the degradation process and apply the inverse to estimate the original undistorted image. The accuracy of the estimate depends on how well the degradation is modeled.
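The model-then-invert idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the method of any particular document here: the image, the two-tap blur kernel, and the sizes are all invented for the example, and the kernel is chosen so its frequency response has no zeros (so the inverse exists and recovery is exact in the noise-free case).

```python
import numpy as np

# Degradation model: observed g is the original f convolved with a known
# blur h, so in the frequency domain G = H * F; restoration divides by H.
f = np.zeros((8, 8))
f[3:5, 3:5] = 1.0                     # toy original "image": a small square

h = np.zeros((8, 8))
h[0, 0], h[0, 1] = 0.6, 0.4           # simple 2-tap blur (H has no zeros)

F, H = np.fft.fft2(f), np.fft.fft2(h)
g = np.real(np.fft.ifft2(F * H))      # degraded (circularly blurred) image

f_hat = np.real(np.fft.ifft2(np.fft.fft2(g) / H))   # inverse filtering
print(np.allclose(f_hat, f))          # True: exact recovery when noise-free
```

With sensor noise added to g, the division amplifies frequencies where |H| is small, which is exactly why the estimate's accuracy depends on how well the degradation is modeled.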
Digital image processing involves performing operations on digital images using computer algorithms. It has several functional categories including image restoration to remove noise and distortions, enhancement to modify the visual impact, and information extraction to analyze images. The main steps are acquisition, enhancement, restoration, color processing, compression, segmentation, and filtering using techniques like pixelization, principal components analysis, and neural networks. It has applications in medical imaging, film, transmission, sensing, and robotics. The advantages are noise removal, flexibility in format and manipulation, and easy storage and retrieval. The disadvantages can include high initial costs and potential data loss if storage devices fail.
Introduction to Digital Image Processing, by Nikesh Gadare
The document provides an overview of the key concepts and stages involved in digital image processing. It discusses image acquisition, preprocessing such as enhancement and restoration, and post-processing which includes tasks like segmentation, description and recognition. The goal is to introduce fundamental concepts and classical methods of digital image processing. Various applications are also highlighted including medical imaging, surveillance, and industrial inspection.
This presentation discusses digital image processing. It begins with definitions of digital images and digital image processing. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The history of digital image processing is then reviewed from the 1920s to today. Key examples of applications like medical imaging, satellite imagery, and industrial inspection are provided. The main stages of digital image processing are outlined, including image acquisition, enhancement, restoration, segmentation, and compression. The document concludes with an overview of a system for automatic face recognition using color-based segmentation.
This document provides an overview of digital image processing. It defines what an image is, noting that an image is a spatial representation of a scene represented as an array of pixels. Digital image processing refers to processing digital images on a computer. The key steps in digital image processing are image acquisition, enhancement, restoration, compression, morphological processing, segmentation, representation, and recognition. Digital image processing has many applications including medical imaging, traffic monitoring, biometrics, and computer vision.
This document provides an overview of mathematical morphology and its applications to image processing. Some key points:
- Mathematical morphology uses concepts from set theory and uses structuring elements to probe and extract image properties. It provides tools for tasks like noise removal, thinning, and shape analysis.
- Basic operations include erosion, dilation, opening, and closing. Erosion shrinks objects while dilation expands them. Opening and closing combine these to smooth contours or fill gaps.
- Hit-or-miss transforms allow detecting specific shapes. Skeletonization reduces objects to 1-pixel wide representations.
- Morphological operations can be applied to binary or grayscale images. Structuring elements specify the neighborhood of pixels examined by each operation.
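The erosion and dilation operations summarized above can be sketched directly in NumPy for the binary case. The helper names and the 3x3 square structuring element are illustrative choices, not taken from any of the documents:

```python
import numpy as np

def erode(img, se=3):
    """Binary erosion: a pixel survives only if its whole se x se
    neighborhood is foreground."""
    pad = se // 2
    p = np.pad(img, pad, constant_values=False)
    out = np.ones_like(img)
    for di in range(se):
        for dj in range(se):
            out &= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def dilate(img, se=3):
    """Binary dilation: OR of all shifted copies under the element."""
    pad = se // 2
    p = np.pad(img, pad, constant_values=False)
    out = np.zeros_like(img)
    for di in range(se):
        for dj in range(se):
            out |= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:5] = True                  # 3x3 foreground square

print(erode(img).sum())               # 1: only the center pixel survives
print(dilate(img).sum())              # 25: the square grows to 5x5
opened = dilate(erode(img))           # opening = erosion then dilation
print(opened.sum())                   # 9: the original square is recovered
```

The opening example shows the smoothing behavior mentioned above: erosion removes anything smaller than the structuring element, and the following dilation restores what survived.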
This presentation briefly describes image enhancement in the spatial domain: basic gray-level transformations, histogram processing, enhancement using arithmetic/logical operations, the basics of spatial filtering, and local enhancement.
Image enhancement techniques can be divided into spatial and frequency domain methods. Spatial domain methods operate directly on pixel values using techniques like basic gray level transformations, contrast stretching and thresholding. These manipulations are used to accentuate image features, improve display quality or aid machine analysis by modifying pixel intensities within an image.
Introduction to digital image processing, image processing, digital image, analog image, formation of digital image, level of digital image processing, components of a digital image processing system, advantages of digital image processing, limitations of digital image processing, fields of digital image processing, ultrasound imaging, x-ray imaging, SEM, PET, TEM
Spatial filtering using image processing, by Anuj Arora
(1) Spatial filtering is defined as operations performed on pixels within a neighborhood of an image using a mask or kernel. (2) Filters can be used to blur/smooth an image by reducing noise or sharpen an image by enhancing edges. (3) Common linear filtering methods include averaging, Gaussian, and derivative filters which are implemented using various mask patterns to modify pixels in the filtered image.
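The averaging mask mentioned in point (3) can be sketched as a sum of shifted image copies, which is equivalent to sliding a 3x3 kernel of 1/9 weights over each pixel. The helper name and the replicate-padding choice at the borders are illustrative:

```python
import numpy as np

def mean_filter3(img):
    """Apply a 3x3 averaging mask; borders handled by replicate padding."""
    p = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

img = np.zeros((5, 5))
img[2, 2] = 9.0                       # single bright (noisy) pixel
print(mean_filter3(img)[2, 2])        # 1.0: the spike is spread over 3x3
```

This shows the blur/smooth behavior from point (2): an isolated noise spike is attenuated by being averaged with its neighbors.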
This document provides an overview of digital image fundamentals and operations. It defines what a digital image is, how it is represented as a matrix, and common image types like RGB, grayscale, and binary. Pixels, resolution, neighborhoods, and basic relationships between pixels are discussed. The document also covers different types of image operations including point, local, and global operations as well as examples like arithmetic, logical, and geometric transformations. Finally, it introduces concepts of linear and nonlinear operations and announces the topic of the next lecture on image enhancement in the spatial domain.
Edge detection of video using MATLAB code, by Bhushan Deore
Bhushan M. Deore presented on edge detection techniques at the Department of Electronics & Telecommunication at PLITMS Buldana on October 2, 2013. The presentation covered various edge detection methods including first order derivative methods (Roberts, Sobel, Prewitt), second order derivative methods (Laplacian, LoG, DoG), and optimal edge detection using Canny edge detection. Code examples were provided to demonstrate edge detection on video streams and applications in areas like video surveillance, traffic management, and remote sensing were discussed.
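The presentation's MATLAB code is not reproduced here, but the first-order Sobel method it covers can be sketched in Python. The masks are the standard Sobel kernels; the convolution helper and the toy step-edge image are illustrative (zero padding at the borders is an arbitrary choice that creates some boundary response):

```python
import numpy as np

# Standard Sobel masks for horizontal and vertical gradient estimates.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def correlate3(img, k):
    """Correlate a 3x3 kernel with the image (zero padding at borders)."""
    p = np.pad(img.astype(float), 1)
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

img = np.zeros((5, 5))
img[:, 3:] = 1.0                      # vertical step edge between cols 2 and 3
mag = np.hypot(correlate3(img, KX), correlate3(img, KY))
print(mag[2])                         # strongest response at the step
```

Applying the same per-frame computation to successive frames is what extends this to video streams.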
This document provides an overview of various image enhancement techniques. It begins with an introduction to image enhancement and its objectives. It then outlines and describes several categories of enhancement methods, including spatial-frequency domain methods, point operations, histogram operations, spatial operations, and transform operations. Specific techniques discussed in detail include contrast stretching, clipping, thresholding, median filtering, unsharp masking, and principal component analysis for multispectral images. The document also covers color image enhancement and techniques for pseudocoloring.
The document discusses image sampling and quantization. It defines a digital image as a discrete 2D array containing intensity values of finite bits. A digital image is formed by sampling a continuous image, which involves multiplying it by a comb function of discrete delta pulses, yielding discrete image values. Quantization further discretizes the intensity values into a finite set of values. For accurate image reconstruction, the sampling frequency must be greater than twice the maximum image frequency, as stated by the sampling theorem.
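The quantization step described above, discretizing intensities into a finite set of values, can be sketched with NumPy. The helper name is invented, and the scheme shown (uniform bins with mid-point reconstruction levels) is one simple choice among many:

```python
import numpy as np

def quantize(img, bits):
    """Uniformly quantize intensities in [0, 255] to 2**bits levels,
    mapping each bin to its mid-point value."""
    levels = 2 ** bits
    step = 256 / levels
    return (np.floor(img / step) * step + step / 2).astype(np.uint8)

ramp = np.arange(0, 256, dtype=float)        # continuous-looking gray ramp
q2 = quantize(ramp, 2)                       # only 4 intensity levels remain
print(sorted(set(q2.tolist())))              # [32, 96, 160, 224]
```

With only 2 bits the 256-value ramp collapses to four levels, the banding effect that coarse quantization produces.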
This document discusses image histogram equalization. It begins by defining an image histogram as a graphical representation of the number of pixels at each intensity value. Histogram equalization automatically determines a transformation function to produce a new image with a uniform histogram and increased contrast. This technique works by mapping the intensity values of the input image to a new range of values such that the histogram of the output image is uniform. The document provides an example of performing histogram equalization on an image and assigns related homework on digital image processing applications.
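The transformation the summary describes, mapping input intensities through the scaled cumulative histogram, can be sketched directly. The function name and the tiny low-contrast test image are illustrative:

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: map each gray level through the scaled CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size                 # normalized cumulative histogram
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]

# Low-contrast image whose values crowd into [100, 103].
img = np.array([[100, 100, 101, 101],
                [102, 102, 103, 103]], dtype=np.uint8)
out = equalize(img)
print(out.min(), out.max())        # 64 255: values now span a wide range
```

The crowded input range is spread across the output scale, which is the contrast increase the document attributes to a uniform target histogram.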
This document discusses digital image processing and spatial filtering. It begins by explaining that spatial filtering operates on neighborhoods of pixels rather than individual pixels. It then provides examples of simple neighborhood operations like minimum, maximum, and median filters. It also shows how spatial filtering can be expressed as an equation. The document goes on to explain smoothing spatial filters, which average pixel values in a neighborhood. It provides an example of a 3x3 averaging filter and shows how it is applied to each pixel. Finally, it discusses weighted smoothing filters that give more importance to pixels closer to the center.
This document provides an overview of key concepts in digital image processing, including:
1. It discusses fundamental steps like image acquisition, enhancement, color image processing, and wavelets and multiresolution processing.
2. Image enhancement techniques process images to make them more suitable for specific applications.
3. Color image processing has increased in importance due to more digital images on the internet. Wavelets allow images to be represented at various resolution levels.
This document summarizes digital image processing techniques including algebraic approaches to image restoration and inverse filtering. It discusses:
1) Unconstrained and constrained restoration, with unconstrained having no knowledge of noise and constrained using knowledge of noise.
2) Inverse filtering which is a direct method that minimizes error between degraded and original images using matrix operations, but can be unstable due to noise or near-zero filter values.
3) Pseudo-inverse filtering which adds a threshold to the inverse filter to avoid instability, working better for noisy images by not amplifying high frequency noise.
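The thresholding idea in point (3) can be sketched in the frequency domain: divide by the blur's transfer function H only where its magnitude exceeds a threshold, and output zero elsewhere. The function name, the epsilon value, and the toy blur (chosen so H has exact zeros, where a plain inverse would blow up) are illustrative:

```python
import numpy as np

def pseudo_inverse_filter(g, h, eps=1e-3):
    """Invert the blur only where |H| exceeds eps; zero the rest so
    near-zero filter values cannot amplify noise."""
    H = np.fft.fft2(h, s=g.shape)
    G = np.fft.fft2(g)
    Hinv = np.zeros_like(H)
    mask = np.abs(H) > eps
    Hinv[mask] = 1.0 / H[mask]
    return np.real(np.fft.ifft2(G * Hinv))

f = np.zeros((8, 8)); f[3:5, 3:5] = 1.0          # toy original image
h = np.zeros((8, 8)); h[0, 0] = h[0, 1] = 0.5    # blur whose H has exact zeros
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

f_hat = pseudo_inverse_filter(g, h)
print(np.isfinite(f_hat).all())                  # True: no division blow-up
```

The frequencies where H is (near) zero are simply lost rather than reconstructed, which is the trade-off point (3) describes: stability at the cost of discarding the unrecoverable components.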
This document discusses digital image processing concepts including:
- Image acquisition and representation, including sampling and quantization of images. CCD arrays are commonly used in digital cameras to capture images as arrays of pixels.
- A simple image formation model where the intensity of a pixel is a function of illumination and reflectance at that point. Typical ranges of illumination and reflectance are provided.
- Image interpolation techniques like nearest neighbor, bilinear, and bicubic interpolation which are used to increase or decrease the number of pixels in a digital image. Examples of applying these techniques are shown.
- Basic relationships between pixels including adjacency, paths, regions, boundaries, and distance measures like Euclidean, city block, and chessboard distances.
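The standard pixel distance measures can be computed in one short helper (the function name is illustrative):

```python
import numpy as np

def distances(p, q):
    """Euclidean (De), city-block (D4) and chessboard (D8) distances
    between pixel coordinates p and q."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    return np.hypot(dx, dy), dx + dy, max(dx, dy)

de, d4, d8 = distances((0, 0), (3, 4))
print(de, d4, d8)   # 5.0 7 4
```

City-block distance counts only horizontal and vertical steps, while chessboard distance allows diagonal steps, so D8 <= De <= D4 for any pair of pixels.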
The document discusses digital image representation and processing. It covers:
1) How digital images are represented as 2D arrays of integer pixel values stored in computer memory.
2) The main types of digital images - binary, grayscale, and true color images - based on the number of possible values per pixel.
3) Common image processing techniques like segmentation, thresholding, and histograms that analyze and modify digital images.
4) Thresholding converts pixels to black/white based on a threshold and is often used in segmentation. Histograms show pixel value distributions to aid analysis.
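The thresholding and histogram steps in points (3) and (4) can be sketched together; the threshold value and the toy image are illustrative:

```python
import numpy as np

def threshold(img, t):
    """Global thresholding: pixels above t become white, the rest black."""
    return np.where(img > t, 255, 0).astype(np.uint8)

img = np.array([[ 10,  20, 200],
                [220,  30, 240]], dtype=np.uint8)
binary = threshold(img, 128)
print(binary.tolist())        # [[0, 0, 255], [255, 0, 255]]

# The histogram of the result shows the black/white split directly.
hist = np.bincount(binary.ravel(), minlength=256)
print(hist[0], hist[255])     # 3 3: three background, three foreground pixels
```

In practice the threshold is often chosen by inspecting the histogram of the input image, looking for a valley between the background and foreground peaks.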
This document discusses morphological operations in image processing. It describes how morphological operations like erosion, dilation, opening, and closing can be used to extract shapes and boundaries from binary and grayscale images. Erosion shrinks foreground regions while dilation expands them. Opening performs erosion followed by dilation to remove noise, and closing does the opposite to join broken parts. The hit-and-miss transform is also introduced to detect patterns in binary images using a structuring element containing foreground and background pixels. Examples are provided to illustrate each morphological operation.
1) Digital image processing involves improving, restoring, compressing, segmenting, and recognizing digital images. It has applications in industry, medicine, traffic control, entertainment, and more.
2) The origins of digital image processing date back to the 1920s in newspaper printing, but it developed significantly with the space program in the 1960s and medical CT scans in the 1970s.
3) A digital image processing system typically involves image acquisition, storage, processing, and display. Low-level processes improve image quality while mid- and high-level processes extract attributes and recognize objects.
Digital images can be enhanced in various ways to improve quality. There are three main categories of enhancement techniques: spatial domain, frequency domain, and combination methods. Spatial domain methods operate directly on pixel values using point processing or neighborhood filtering. Key spatial techniques include contrast stretching, thresholding, and histogram equalization. Frequency domain methods modify an image's Fourier transform. Common transformations include logarithmic, power-law, and piecewise linear functions, which can increase contrast or highlight certain grayscale ranges. Proper enhancement improves an image's features for desired applications.
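The power-law transformation mentioned above can be sketched as s = c * r**gamma on normalized intensities. The function name and the sample values are illustrative:

```python
import numpy as np

def power_law(img, gamma, c=1.0):
    """Power-law (gamma) transform s = c * r**gamma on [0, 1] intensities."""
    r = img.astype(float) / 255.0
    return np.clip(255.0 * c * r ** gamma, 0, 255).astype(np.uint8)

dark = np.array([16, 64, 128], dtype=np.uint8)
print(power_law(dark, 0.5).tolist())   # [63, 127, 180]: dark tones brightened
```

A gamma below 1 expands the dark end of the grayscale range, the "highlight certain grayscale ranges" behavior the summary attributes to these transformations; a gamma above 1 does the opposite.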
cvpr2011: human activity recognition - part 5: description based, by zukun
This document discusses description-based approaches for analyzing human activities. It describes representing activities semantically using definitions of their structures, and recognizing activities by matching observations to these definitions. It also discusses hierarchical representations of both simple and complex/recursive activities like interactions between multiple people. Recognition algorithms work by matching video observations to the formal syntactic representations of activities. Experiments demonstrated recognizing a variety of simple interactions between people from continuous video sequences.
Handling displacement effects in on-body sensor-based activity recognition, by Oresti Banos
So far, little attention has been paid to the limitations of activity recognition systems during out-of-lab daily usage. Sensor displacement is one of these major issues, particularly deleterious for inertial on-body sensing. The effect of the displacement normally translates into a drift in the signal space that further propagates to the feature level, thus modifying the expected behavior of the predefined recognition systems. Using several sensors and diverse motion-sensing modalities, in this paper we compare two fusion methods to evaluate the importance of decoupling the combination process at the feature and classification levels under realistic sensor configurations. In particular, a 'feature fusion' and a 'multi-sensor hierarchical classifier' are considered. The results reveal that the aggregation of sensor-based decisions may overcome the difficulties introduced by the displacement and confirm the gyroscope as possibly the most displacement-robust sensor modality.
This presentation illustrates part of the work described in the following articles:
* Banos, O., Damas, M., Pomares, H., Rojas, I.: Handling displacement effects in on-body sensor-based activity recognition. In: Proceedings of the 5th International Work-conference on Ambient Assisted Living an Active Ageing (IWAAL 2013), San José, Costa Rica, December 2-6, (2013)
* Banos, O., Damas, M., Pomares, H., Rojas, I. On the Use of Sensor Fusion to Reduce the Impact of Rotational and Additive Noise in Human Activity Recognition. Sensors, vol. 12, no. 6, pp. 8039-8054 (2012)
Activity recognition based on a multi-sensor meta-classifierOresti Banos
Ensuring ubiquity, robustness and continuity of monitoring
is of key importance in activity recognition. To that end, multiple sensor congurations and fusion techniques are ever more used. In this paper we present a multi-sensor meta-classier that aggregates the knowledge of several sensor-based decision entities to provide a unique and reliable activity classication. This model introduces a new weighting scheme which improves the rating of the impact that each entity has on the decision fusion process. Sensitivity and specicity are particularly considered as insertion and rejection weighting metrics instead of the overall accuracy classication performance proposed in a previous work. For the sake of comparison, both new and previous weighting models together with feature fusion models are tested on an extensive activity recognition
benchmark dataset. The results demonstrate that the new weighting scheme enhances the decision aggregation thus leading to an improved recognition system.
This presentation illustrates part of the work described in the following articles:
* Banos, O., Damas, M., Pomares, H., Rojas, F., Delgado-Marquez, B. & Valenzuela, O.
Human activity recognition based on a sensor weighting hierarchical classifier.
Soft Computing - A Fusion of Foundations, Methodologies and Applications, Springer Berlin / Heidelberg, vol. 17, pp. 333-343 (2013)
* Banos, O., Damas, M., Pomares, H., Rojas, I.: Activity recognition based on a multi-sensor meta-classifier. In: Proceedings of the 2013 International Work Conference on Neural Networks (IWANN 2013), Tenerife, Spain, June 12-14, (2013)
Lisp is a functional programming language where the basic data structure is linked lists and atoms. It was one of the earliest programming languages developed in 1958. Lisp programs are run by interacting with an interpreter like Clisp. Key aspects of Lisp include its use of prefix notation, treating all code as nested lists, defining functions using defun, and its emphasis on recursion and higher-order functions. Common control structures include cond for conditional evaluation and looping constructs like loop. Lisp fell out of widespread use due to performance issues with interpretation and low interoperability with other languages.
This document provides an overview of digital image processing. It discusses what digital images are composed of and how they are processed using computers. The key steps in digital image processing are described as image acquisition, enhancement, restoration, representation and description, and recognition. A variety of techniques can be used at each step like filtering, segmentation, morphological operations, and compression. The document also outlines common sources of digital images, such as from the electromagnetic spectrum, and applications like medical imaging, astronomy, security screening, and human-computer interfaces.
Presentation introducing LISP, looking at the history and concepts behind this powerfull programming language.
Presentation by Tijs van der Storm for the sept 2012 Devnology meetup at the Mirabeau offices in Amsterdam
This document provides a brief introduction to the Lisp programming language. It discusses Lisp's history from its origins in 1958 to modern implementations like Common Lisp and Scheme. It also covers Lisp's support for functional, imperative, and object-oriented paradigms. A key feature of Lisp is its use of s-expressions as both code and data, which enables powerful macros to transform and generate code at compile time.
This document provides an overview of the Lisp programming language. It begins with some notable quotes about Lisp praising its power and importance. It then covers the basic syntax of Lisp including its use of prefix notation, basic data types like integers and booleans, and variables. It demonstrates how to print, use conditional statements like IF and COND, and describes lists as the core data structure in Lisp.
Introduction to Lisp. A survey of lisp's history, current incarnations and advanced features such as list comprehensions, macros and domain-specific-language [DSL] support.
Digital image processing focuses on improving images for human interpretation and machine perception. It involves key stages like acquisition, enhancement, restoration, morphological processing, segmentation, and representation. Applications include medical imaging, industrial inspection, law enforcement, and human-computer interfaces. While digital images allow for faster and more efficient processing than analog images, limitations include reduced quality if enlarged beyond a certain file size.
The document provides background information on programming languages and their history. It discusses early pioneers in computer programming such as Ada Lovelace, Herman Hollerith, and Konrad Zuse. It outlines the development of many popular modern programming languages such as Fortran, COBOL, BASIC, Pascal, C, C++, Java, PHP, JavaScript, Python, Ruby, and others, describing their key features and common uses. Ada Lovelace is noted as creating the first computer program in 1843 for Charles Babbage's analytical engine.
Computer programming is the process of writing source code instructions in a programming language to instruct a computer to perform tasks. Source code is written text using a human-readable programming language like C++, Java, or Python. A program is a sequence of instructions that performs a specific task. Programmers write computer programs by designing source code using programming languages. Programming languages are classified as high-level or low-level. High-level languages provide abstraction from computer details while low-level languages require knowledge of computer design. Language translators like compilers and interpreters convert source code into executable programs.
LISP and PROLOG are early AI programming languages. LISP, created in 1958, uses lists and is functional while PROLOG, created in the 1970s, is logic-based and declarative. Both use recursion and allow programming with lists. They are commonly used for symbolic reasoning, knowledge representation and natural language processing. While different in approach, they both allow developing AI systems through a non-procedural programming style.
Lisp was invented in 1958 by John McCarthy and was one of the earliest high-level programming languages. It has a distinctive prefix notation and uses s-expressions to represent code as nested lists. Lisp features include built-in support for lists, dynamic typing, and an interactive development environment. It was closely tied to early AI research and used in systems like SHRDLU. Lisp allows programs to treat code as data through homoiconicity and features like lambdas, conses, and list processing functions make it good for symbolic and functional programming.
Chapter 14 Cross Cultural Consumer BehaviorAvinash Kumar
The document discusses cross-cultural consumer behavior from an international perspective. It covers several topics including the imperative for companies to be multinational, cross-cultural consumer analysis, and alternative multinational marketing strategies. Some key points are that marketers must understand similarities and differences between cultures, there is a growing global middle class and teenage market, and companies can use standardized or localized marketing approaches depending on the product and culture.
This document provides an overview of digital image processing and is divided into multiple parts. Part I discusses digital image fundamentals, image transforms, image enhancement, image restoration, image compression, and image segmentation. It introduces key concepts such as digital image systems, sampling and quantization, pixel relationships, and image transforms in both the spatial and frequency domains. Image processing techniques like filtering, histogram processing, and frequency domain filtering are also summarized.
The document is an introduction to a course on digital image processing. It begins with definitions of digital images and digital image processing. It then provides a brief history of digital image processing, highlighting early applications in newspapers and space exploration. It also gives examples of current applications in areas like medicine, mapping, industrial inspection, and human-computer interfaces. Finally, it outlines some key stages in digital image processing pipelines like image acquisition, enhancement, restoration, segmentation, and compression.
The document provides an overview of the history and development of computed tomography (CT) scanning. It discusses how CT was pioneered by Godfrey Hounsfield and Allan Cormack in the 1970s, for which they received the 1979 Nobel Prize. It describes the early prototype CT scanners and technological advances that increased scanning speed, such as the introduction of spiral/helical scanning. The document also outlines the basic principles of CT imaging and image reconstruction methods.
This document discusses digital image processing. It defines a digital image and digital image processing. The history of digital image processing is covered from the 1920s to today. Examples of applications are given, including image enhancement, medical imaging, industrial inspection, and more. The key stages of digital image processing are outlined, such as image acquisition, enhancement, restoration, segmentation, and others.
This document provides an introduction to digital image processing. It defines a digital image as a finite set of pixels representing attributes like color or brightness. Digital image processing involves improving images for human interpretation or machine perception. The history of digital image processing is traced from early applications in newspapers to modern uses in medicine, satellites, and law enforcement. Key stages of digital image processing include acquisition, enhancement, restoration, segmentation, and compression.
Digital image processing involves techniques to improve and analyze digital images. It focuses on tasks like enhancing images for human interpretation, processing images for machine applications, and processing image data for storage and transmission. Key stages in digital image processing include image acquisition, enhancement, restoration, segmentation, and representation. Digital image processing has a long history and is now widely used in applications like medical imaging, satellite imagery analysis, industrial inspection, and law enforcement.
This document provides an introduction to digital image processing. It defines a digital image as a finite set of digital values representing a 2D image. Digital image processing focuses on improving images for human interpretation and processing images for machine perception. The document traces the history of digital image processing from the 1920s to its widespread use today. It provides examples of applications in fields like enhancement, medicine, mapping, inspection, law enforcement and human-computer interfaces. Finally, it outlines the key stages of digital image processing systems including acquisition, restoration, processing, analysis and compression.
This document provides an introduction to a course on digital image processing. It discusses what a digital image is, defines digital image processing, and outlines the history and key applications of the field. The lecture will cover the definition of a digital image, the tasks of digital image processing, the history and evolution of the field from the 1920s to today, examples of applications in areas like medicine, satellite imagery, industrial inspection, and human-computer interfaces, and the main stages of digital image processing work including image acquisition, enhancement, restoration, and recognition.
The document discusses digital image processing and provides an overview of key concepts. It defines digital and analog images and explains how digital images are represented by pixels. It outlines fundamental steps in digital image processing like image acquisition, enhancement, restoration, morphological processing, segmentation, representation, compression and object recognition. It also discusses applications in areas like remote sensing, medical imaging, film and video effects.
Digital image processing involves processing digital images using digital computers. There are several key stages in digital image processing including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. Digital image processing has various applications in fields like document handling, biometrics, medical imaging, traffic monitoring, and more. It plays an important role in computer vision tasks ranging from low-level processes like noise removal to high-level tasks like scene understanding.
This document provides an overview of digital image processing, including:
- It defines what a digital image is and how images are digitized through sampling and quantization.
- It discusses the history of digital image processing from the 1920s to today, highlighting early applications and key advances like CAT scans.
- It gives examples of current uses like image enhancement, medical imaging, industrial inspection, and computer vision tasks like face and object recognition.
- It outlines the main stages of digital image processing pipelines including image acquisition, enhancement, restoration, segmentation, and compression.
- It provides context on the related field of computer vision and its goals of interpreting and understanding images.
Digital images are representations of images using discrete pixel values. Vision is a complex natural process, and digital image processing aims to perform tasks like improving images for human interpretation and machine perception. Key stages in processing include acquisition, enhancement, restoration, morphological operations, segmentation, and representation. Digital image processing has a long history and is now widely used in applications such as medicine, geospatial analysis, industrial inspection, and law enforcement. Examples demonstrate how techniques are applied to tasks like medical imaging, satellite imagery analysis, and printed circuit board inspection.
computervision1.pdf it is about computer visionshesnasuneer
This document provides an introduction to digital image processing and computer vision. It discusses how images are represented digitally through sampling and quantization. Low-level image processing techniques like preprocessing, segmentation, and object description are used to simplify computer vision tasks. Fundamental concepts in digital image processing are also introduced, such as how images can be represented as functions and processed using mathematical tools like the Fourier transform and convolution.
This document provides a syllabus for a course on digital image processing, outlining 4 units that will be covered over 12 weeks. The units include topics like image enhancement, spatial and frequency domain filtering, image restoration, compression, and morphological operations. References for further reading on digital image processing are also provided.
Image processing3 imageenhancement(histogramprocessing)John Williams
This document discusses image enhancement techniques in digital image processing. It introduces image enhancement and its goals of highlighting details, removing noise, and improving visual appeal. Histogram equalization is described as a method to improve dark or washed out images by spreading out the frequencies in an image histogram to increase contrast. Examples are provided to demonstrate histogram equalization transformations and their effects on images. The key steps of histogram equalization calculations are also outlined.
Digital image processing involves representing images as arrays of pixels and then processing those pixels to improve or analyze the image. It has applications in fields like medicine, mapping, law enforcement, and human-computer interfaces. The key stages of digital image processing include image acquisition, enhancement, restoration, morphological processing, segmentation, object recognition, representation and description, compression, and color image processing.
This document provides an overview of digital image processing. It discusses what digital image processing is, provides a brief history, and outlines some of the key stages involved, including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also discusses some example applications like medical imaging, autonomous vehicles, traffic monitoring, and biometrics. The document uses images to illustrate different concepts and stages in digital image processing.
Digital images are formed through a process of image acquisition, sampling and quantization. An image is represented as a 2D array of pixels, with each pixel having an intensity or color value. The spatial resolution and pixel depth determine how much detail can be preserved from the original scene. A higher resolution captures more details but results in larger file sizes. The human eye can distinguish between intensities that differ by around 5% and can detect spatial frequencies up to about 60 cycles/degree. Digital image processing techniques are needed to enhance, analyze and compress digital images for different applications.
This document provides an overview of digital image processing and image compression techniques. It defines what a digital image is, discusses the advantages and disadvantages of digital images over analog images. It describes the fundamental steps in digital image processing as well as types of data redundancy that can be exploited for image compression, including coding, interpixel, and psychovisual redundancy. Common image compression models and lossless compression techniques like Lempel-Ziv-Welch coding are also summarized.
The document discusses the fundamentals of digital image processing. It defines a digital image as a 2D function where amplitude at each point represents intensity or gray level. A digital image is composed of pixels which are discrete image elements. Image processing includes low-level tasks like noise reduction, mid-level tasks like segmentation, and high-level tasks like object recognition. Mathematical representation of a digital image involves illumination and reflectance components. Intensity at each point in a monochrome image represents its gray level value within the gray scale range from minimum to maximum.
This document provides an introduction to image processing. It discusses key concepts such as signals, signal processing, and how images can be represented as signals and matrices. The document covers how images are converted to digital form and stored on computers. It also describes different levels of image processing from low-level operations like enhancement to higher-level tasks like recognition and interpretation. Overall, the document gives an overview of the fundamentals of digital image processing.
The document outlines the fundamental steps for digital image processing projects, including image acquisition, preprocessing, segmentation, representation and description, recognition and interpretation, and postprocessing. It discusses improving images for human or machine use, and describes common image processing techniques like enhancement, thresholding, representation, description, recognition, and interpretation. The overall methodology presented is meant to increase the likelihood of success for image processing projects.
This document outlines the syllabus for the course IT6005 - Digital Image Processing. The syllabus is divided into 5 units that cover digital image fundamentals, image enhancement, image restoration and segmentation, wavelets and image compression, and image representation and recognition. Unit 1 introduces key concepts in digital image processing such as pixels, gray levels, sampling and quantization. It also provides a brief history of the origin and development of digital image processing.
1. Digital Image Processing: Introduction
Brian Mac Namee
Brian.MacNamee@comp.dit.ie
Course Website: http://www.comp.dit.ie/bmacnamee
2. Introduction
“One picture is worth more than ten thousand words”
Anonymous
3. Miscellanea
Lectures:
– Thursdays 12:00 – 13:00
– Fridays 15:00 – 16:00
Labs:
– Wednesdays 09:00 – 11:00
Web Site: www.comp.dit.ie/bmacnamee/
– Previous year’s slides are available here
– Slides etc. will also be available on WebCT
E-mail: Brian.MacNamee@dit.ie
4. References
“Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods, Addison-Wesley, 2002
– Much of the material that follows is taken from this book
“Machine Vision: Automated Visual Inspection and Robot Vision”, David Vernon, Prentice Hall, 1991
– Available online at: homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/
5. Contents
This lecture will cover:
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State of the art examples of digital image processing
– Key stages in digital image processing
6. What is a Digital Image?
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels
(Images taken from Gonzalez & Woods, Digital Image Processing, 2002)
7. What is a Digital Image? (cont…)
Pixel values typically represent gray levels, colours, heights, opacities etc.
Remember digitization implies that a digital image is an approximation of a real scene
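The idea above can be sketched in a few lines of plain Python (an illustrative example, not from the slides): a digital image is simply a finite 2-D grid of quantized values.

```python
# A minimal sketch: a 4x4 grayscale "image" as a nested list of
# 8-bit pixel values (0 = black, 255 = white).
image = [
    [  0,  64, 128, 255],
    [ 64, 128, 255, 128],
    [128, 255, 128,  64],
    [255, 128,  64,   0],
]

height = len(image)          # number of rows
width = len(image[0])        # number of columns
pixel = image[1][2]          # pixel at row 1, column 2 -> 255

print(height, width, pixel)  # 4 4 255
```

Because the set of values is finite, any such grid is only an approximation of the continuous real scene, as the slide notes.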
8. What is a Digital Image? (cont…)
Common image formats include:
– 1 sample per point (B&W or Grayscale)
– 3 samples per point (Red, Green, and Blue)
– 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. Opacity)
For most of this course we will focus on grey-scale images
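The three formats listed above can be sketched as follows (the values and helper function are illustrative, not part of the lecture):

```python
# 1 sample per point -> grayscale, 3 -> RGB, 4 -> RGBA (alpha = opacity).
gray_pixel = 180               # single intensity sample
rgb_pixel = (255, 0, 0)        # pure red
rgba_pixel = (255, 0, 0, 128)  # red at roughly 50% opacity

def samples_per_point(pixel):
    """Number of samples stored for one picture element."""
    return 1 if isinstance(pixel, int) else len(pixel)

print(samples_per_point(gray_pixel))  # 1
print(samples_per_point(rgb_pixel))   # 3
print(samples_per_point(rgba_pixel))  # 4
```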
9. What is Digital Image Processing?
Digital image processing focuses on two major tasks
– Improvement of pictorial information for human interpretation
– Processing of image data for storage, transmission and representation for autonomous machine perception
Some argument about where image processing ends and fields such as image analysis and computer vision start
10. What is DIP? (cont…)
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes
– Low Level Process: Input: Image. Output: Image. Examples: noise removal, image sharpening
– Mid Level Process: Input: Image. Output: Attributes. Examples: object recognition, segmentation
– High Level Process: Input: Attributes. Output: Understanding. Examples: scene understanding, autonomous navigation
In this course we will stop here (after the mid-level processes)
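The low- vs mid-level distinction can be illustrated with two toy functions (a sketch assuming simple nested-list images; not from the slides): a low-level process maps an image to an image, while a mid-level process maps an image to attributes.

```python
def threshold(image, t):
    """Low-level: image in, image out (binary image)."""
    return [[255 if p >= t else 0 for p in row] for row in image]

def foreground_area(binary):
    """Mid-level: image in, attribute out (count of white pixels)."""
    return sum(p == 255 for row in binary for p in row)

img = [[10, 200], [250, 30]]
binary = threshold(img, 128)    # -> [[0, 255], [255, 0]]
print(foreground_area(binary))  # 2
```

A high-level process would then take such attributes (areas, shapes, labels) and produce an interpretation of the scene.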
11. History of Digital Image Processing
Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
(Figure: an early digital image)
12. History of DIP (cont…)
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images
(Figures: improved digital image; early 15 tone digital image)
13. History of DIP (cont…)
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
– 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space missions including the Apollo landings
(Figure: a picture of the moon taken by the Ranger 7 probe minutes before landing)
14. History of DIP (cont…)
1970s: Digital image processing begins to be used in medical applications
– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
(Figure: a typical head slice CAT image)
15. History of DIP (cont…)
1980s - Today: The use of digital image processing techniques has exploded and they are now used for all kinds of tasks in all kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human computer interfaces
16. Examples: Image Enhancement
One of the most common uses of DIP techniques: improve quality, remove noise etc.
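As one concrete instance of enhancement, here is a sketch of linear contrast stretching for an 8-bit grayscale image (an illustrative stand-in for the enhancement techniques covered later in the course, not the method on this slide):

```python
def stretch_contrast(image):
    """Linearly map the image's intensity range onto the full 0..255 scale."""
    lo = min(p for row in image for p in row)
    hi = max(p for row in image for p in row)
    if hi == lo:                   # flat image: nothing to stretch
        return [row[:] for row in image]
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]

dim = [[100, 110], [120, 130]]     # low-contrast image
print(stretch_contrast(dim))       # [[0, 85], [170, 255]]
```

The narrow input range 100..130 is spread across the full display range, which is exactly the "improve quality" goal mentioned above.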
17. Examples: The Hubble Telescope
Launched in 1990, the Hubble telescope can take images of very distant objects
However, a flaw in its mirror made many of Hubble’s images useless
Image processing techniques were used to fix this
18. Examples: Artistic Effects
Artistic effects are used to make images more visually appealing, to add special effects and to make composite images
19. Examples: Medicine
Take slice from MRI scan of canine heart, and find boundaries between types of tissue
– Image with gray levels representing tissue density
– Use a suitable filter to highlight edges
(Figures: original MRI image of a dog heart; edge detection image)
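The edge-highlighting step can be sketched with a toy horizontal-difference filter (a hypothetical stand-in for the "suitable filter" mentioned on the slide; real systems would typically use something like a Sobel operator):

```python
def horizontal_edges(image):
    """Absolute difference between horizontally adjacent pixels:
    large values mark boundaries between regions."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in image]

# Two flat "tissue" regions with a sharp boundary between them:
scan = [[50, 50, 200, 200],
        [50, 50, 200, 200]]
print(horizontal_edges(scan))  # [[0, 150, 0], [0, 150, 0]]
```

The response is zero inside each uniform region and large at the tissue boundary, which is the effect the edge-detection image on the slide illustrates.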
20. Examples: GIS
Geographic Information Systems
– Digital image processing techniques are used extensively to manipulate satellite imagery
– Terrain classification
– Meteorology
21. Examples: GIS (cont…)
Night-Time Lights of the World data set
– Global inventory of human settlement
– Not hard to imagine the kind of analysis that might be done using this data
22. Examples: Industrial Inspection
Human operators are expensive, slow and unreliable
Make machines do the job instead
Industrial vision systems are used in all kinds of industries
Can we trust them?
23. Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
– Machine inspection is used to determine that all components are present and that all solder joints are acceptable
– Both conventional imaging and x-ray imaging are used
24. Examples: Law Enforcement
Image processing techniques are used extensively by law enforcers:
– Number-plate recognition for speed cameras/automated toll systems
– Fingerprint recognition
– Enhancement of CCTV images
25. Examples: HCI
Try to make human-computer interfaces more natural:
– Face recognition
– Gesture recognition
Does anyone remember the user interface from "Minority Report"?
These tasks can be extremely difficult.
26. Key Stages in Digital Image Processing
[Block diagram of the key stages, starting from the problem domain: Image Acquisition, Image Enhancement, Image Restoration, Morphological Processing, Segmentation, Representation & Description, Object Recognition, Colour Image Processing, and Image Compression]
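The key stages above can be read as a pipeline: each stage takes image data and passes its result to the next. A toy sketch with illustrative placeholder functions (not a real API; the course labs use MATLAB and C#):

```python
from functools import reduce

# Illustrative stage functions on a 1-D "image" of gray levels (0-255);
# each name mirrors a box in the key-stages diagram.
def enhance(img):
    return [min(p + 20, 255) for p in img]      # e.g. brighten the image

def restore(img):
    return img                                   # placeholder: undo known degradation

def segment(img):
    return [1 if p > 128 else 0 for p in img]    # e.g. threshold into object/background

# Apply the stages in order to an acquired "image".
pipeline = [enhance, restore, segment]
result = reduce(lambda img, stage: stage(img), pipeline, [100, 120, 200])
# The bright pixels end up labelled 1, the dark ones 0.
```

Real systems rarely use every stage; the problem domain determines which boxes of the diagram are needed and in what order.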
27. Key Stages in Digital Image Processing: Image Acquisition
[Key-stages diagram repeated, with Image Acquisition highlighted]
28. Key Stages in Digital Image Processing: Image Enhancement
[Key-stages diagram repeated, with Image Enhancement highlighted]
29. Key Stages in Digital Image Processing: Image Restoration
[Key-stages diagram repeated, with Image Restoration highlighted]
30. Key Stages in Digital Image Processing: Morphological Processing
[Key-stages diagram repeated, with Morphological Processing highlighted]
31. Key Stages in Digital Image Processing: Segmentation
[Key-stages diagram repeated, with Segmentation highlighted]
32. Key Stages in Digital Image Processing: Object Recognition
[Key-stages diagram repeated, with Object Recognition highlighted]
33. Key Stages in Digital Image Processing: Representation & Description
[Key-stages diagram repeated, with Representation & Description highlighted]
34. Key Stages in Digital Image Processing: Image Compression
[Key-stages diagram repeated, with Image Compression highlighted]
35. Key Stages in Digital Image Processing: Colour Image Processing
[Key-stages diagram repeated, with Colour Image Processing highlighted]
36. Summary
We have looked at:
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State-of-the-art examples of digital image processing
– Key stages in digital image processing
Next time we will start to see how it all works…
Editor's Notes
The real world is continuous; an image is simply a digital approximation of it.
Give the analogy of a character-recognition system. Low level: cleaning up the image of some text. Mid level: segmenting the text from the background and recognising individual characters. High level: understanding what the text says.