The document provides information about digital image processing. It begins with definitions of key terms like image, digital image, and digital image processing. It then discusses different types of images like monochrome, grayscale, and color images. It also covers image file formats, sources of noise in digital images, common types of noise like Gaussian noise and salt and pepper noise, and basic filtering techniques. The document contains detailed explanations with examples to illustrate different concepts in digital image processing.
This document discusses digital image compression. It notes that compression is needed due to the huge amounts of digital data. The goals of compression are to reduce data size by removing redundant data and transforming the data prior to storage and transmission. Compression can be lossy or lossless. There are three main types of redundancy in digital images - coding, interpixel, and psychovisual - that compression aims to reduce. Channel encoding can also be used to add controlled redundancy to protect the source encoded data when transmitted over noisy channels. Common compression methods exploit these different types of redundancies.
Digital image processing involves techniques to restore degraded images. Image restoration aims to recover the original undistorted image from a degraded observation. The degradation is typically modeled as the original image being operated on by a degradation function and additive noise. Common restoration techniques include spatial domain filters like mean, median and order-statistic filters to remove noise, and frequency domain filtering to reduce periodic noise. The choice of restoration method depends on the type and characteristics of degradation in the image.
This document discusses image segmentation techniques, specifically linking edge points through local and global processing. Local processing involves linking edge-detected pixels that are similar in gradient strength and direction within a neighborhood. Global processing uses the Hough transform to link edge points into lines by mapping points in the image space to the parameter space of slope-intercept or polar coordinates. Thresholding in parameter space identifies coherent lines composed of edge points. The Hough transform allows finding lines even if there are gaps or other defects in detected edge points.
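The parameter-space voting described above can be sketched in a few lines of pure Python. This is a hedged illustration of the polar-form Hough transform, where each edge point (x, y) votes for every cell (theta, rho) satisfying rho = x*cos(theta) + y*sin(theta); all names here are illustrative, not taken from the source presentation.

```python
import math
from collections import defaultdict

def hough_lines(points, theta_steps=180):
    """Accumulate votes in (theta index, rounded rho) cells."""
    acc = defaultdict(int)
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho))] += 1
    return acc

# Points on the vertical line x = 3, with a gap (nothing near y = 40);
# the transform still finds the line despite the missing point.
pts = [(3, y) for y in (0, 10, 20, 30, 50, 60)]
acc = hough_lines(pts)
peak_cell, votes = max(acc.items(), key=lambda kv: kv[1])
```

Thresholding `acc` at a vote count near `len(pts)` recovers coherent lines even when the detected edge points have gaps, which is exactly the robustness property the summary describes.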
This document summarizes techniques for least mean square filtering and geometric transformations. It discusses minimum mean square error (Wiener) filtering, constrained least squares filtering, and geometric mean filtering for noise removal. It also covers spatial transformations, nearest neighbor gray level interpolation, and bilinear interpolation for geometric correction of distorted images. Examples are provided to demonstrate geometric distortion, nearest neighbor interpolation, and bilinear transformation.
Image compression involves reducing the size of image files to reduce storage space and transmission time. There are three main types of redundancy in images: coding redundancy, spatial redundancy between neighboring pixels, and irrelevant information. Common compression methods remove these redundancies, such as Huffman coding, arithmetic coding, LZW coding, and run length coding. Popular image file formats include JPEG for photos, PNG for web images, and TIFF, GIF, and DICOM for other uses.
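Of the lossless methods named above, run-length coding is the simplest to sketch: runs of repeated values are replaced by (value, count) pairs. A minimal pure-Python illustration (function names are mine, not from the source):

```python
def rle_encode(seq):
    """Collapse runs of equal values into [value, count] pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    """Expand [value, count] pairs back into the original sequence."""
    return [v for v, n in pairs for _ in range(n)]

# A binary-image scanline with long runs compresses well:
row = [255] * 6 + [0] * 3 + [255]
```

The scheme pays off on images with large uniform regions (binary documents, cartoons) and can expand the data when no runs exist, which is why practical formats combine it with other coders.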
The document discusses basic relationships between pixels in digital images. It defines that a pixel has 4 horizontal and vertical neighbors, called 4-neighbors. It also has 4 diagonal neighbors, and together with the 4-neighbors they form the 8-neighbors of a pixel. Adjacency between pixels is defined based on 4, 8 or m-connectivity depending on pixel intensity values. Connectivity and paths between pixels are also described. Regions in an image are defined as connected subsets of pixels, and region boundaries are pixels adjacent to the complement of the region.
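The neighborhood definitions above are easy to make concrete. A small pure-Python sketch, assuming (x, y) is an interior pixel so all neighbors exist:

```python
def n4(x, y):
    """4-neighbors: the horizontal and vertical neighbors of (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """The four diagonal neighbors of (x, y)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbors: the union of the 4-neighbors and the diagonal neighbors."""
    return n4(x, y) + nd(x, y)
```

Adjacency tests (4-, 8-, or m-adjacency) then reduce to checking whether one pixel appears in the appropriate neighbor set of the other and whether their intensities belong to the chosen value set.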
It is very useful for students.
Sharpening process in the spatial domain
Sharpening works by direct manipulation of image pixels. Its objective is to highlight transitions in intensity. Image blurring is accomplished by pixel averaging in a neighborhood; since averaging is analogous to integration, sharpening can be accomplished by spatial differentiation.
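The differentiation idea above is usually realized with the discrete Laplacian, a second-derivative operator. A minimal pure-Python sketch on interior pixels of a small grayscale grid (names and the unclipped output are illustrative simplifications):

```python
def laplacian(img, x, y):
    # 4-neighbor discrete Laplacian: sum of neighbors minus 4x the center.
    return (img[x + 1][y] + img[x - 1][y] + img[x][y + 1] + img[x][y - 1]
            - 4 * img[x][y])

def sharpen(img, c=1):
    """g(x, y) = f(x, y) - c * Laplacian(f)(x, y); output is not clipped."""
    out = [row[:] for row in img]
    for x in range(1, len(img) - 1):
        for y in range(1, len(img[0]) - 1):
            out[x][y] = img[x][y] - c * laplacian(img, x, y)
    return out
```

Flat regions are left untouched (the Laplacian is zero there), while intensity transitions produce overshoot on either side of the edge, which is what makes the edge look crisper after clipping to the valid gray range.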
Prepared by
M. Sahaya Pretha
Department of Computer Science and Engineering,
MS University, Tirunelveli Dist, Tamilnadu.
Morphological image processing uses mathematical morphology tools to extract image components and describe shapes. Some key tools include binary erosion and dilation, which thin and thicken objects. Erosion shrinks objects while dilation grows them. Opening and closing are combinations of erosion and dilation that smooth contours or fill gaps. The hit-or-miss transform detects shapes by requiring matches of foreground and background pixels. Other algorithms include boundary extraction, hole filling, and thinning to find skeletons, which are medial axes of object shapes.
This document discusses color image processing and provides information on various color models and color fundamentals. It describes full-color and pseudo-color processing, color fundamentals including the visible light spectrum, color perception by the human eye, and color properties. It also summarizes RGB, CMY/CMYK, and HSI color models, conversions between models, and methods for pseudo-color image processing including intensity slicing and intensity to color transformations.
This document provides an overview of digital image fundamentals and operations. It defines what a digital image is, how it is represented as a matrix, and common image types like RGB, grayscale, and binary. Pixels, resolution, neighborhoods, and basic relationships between pixels are discussed. The document also covers different types of image operations including point, local, and global operations as well as examples like arithmetic, logical, and geometric transformations. Finally, it introduces concepts of linear and nonlinear operations and announces the topic of the next lecture on image enhancement in the spatial domain.
The document discusses various image enhancement techniques in the spatial domain. It covers basic gray level transformations like negatives, log transformations, and power law transformations. It also discusses histogram processing and enhancement using arithmetic operations. Furthermore, it explains smoothing and sharpening spatial filters, and how to combine different spatial enhancement methods. The document provides examples and background on these fundamental image enhancement concepts.
At the end of this lesson, you should be able to:
describe spatial resolution
describe intensity resolution
identify the effect of aliasing
describe image interpolation
describe relationships among the pixels
1. Image restoration aims to reconstruct or recover an image that has been distorted by known degradation processes.
2. Degradation can occur during image acquisition, display, or processing due to factors like sensor noise, blurring, motion, or atmospheric effects.
3. Restoration techniques model the degradation process and apply the inverse to estimate the original undistorted image. The accuracy of the estimate depends on how well the degradation is modeled.
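The degradation model implied by points 1-3 is conventionally written g = H(f) + n: the original image f passes through a degradation operator H and picks up additive noise n. A hedged 1-D sketch in pure Python, with H taken to be a moving-average blur (all names illustrative):

```python
import random

def degrade(f, noise_sigma=0.0, seed=0):
    """Simulate g = H(f) + n on a 1-D signal (interior samples only)."""
    rng = random.Random(seed)
    g = []
    for i in range(1, len(f) - 1):
        blurred = (f[i - 1] + f[i] + f[i + 1]) / 3   # H: neighborhood average
        g.append(blurred + rng.gauss(0, noise_sigma))  # + n: additive noise
    return g
```

Restoration then amounts to modeling H and n well enough to invert them; as point 3 notes, the quality of the estimate is bounded by the quality of that model.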
Transform coding is a lossy compression technique that converts data like images and videos into an alternate form that is more convenient for compression purposes. It does this through a transformation process followed by coding. The transformation removes redundancy from the data by converting pixels into coefficients, lowering the number of bits needed to store them. For example, an array of 4 pixels requiring 32 bits to store originally might only need 20 bits after transformation. Transform coding is generally used for natural data like audio and images, removes redundancy, lowers bandwidth, and can form images with fewer colors. JPEG is an example of transform coding.
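The 4-pixel example above can be made concrete with a one-level Haar transform (averages and differences), a hedged stand-in for the block transforms real codecs use. Neighboring pixels are similar, so the difference coefficients come out small and need fewer bits than the raw 8-bit samples:

```python
def haar_forward(p):
    """One-level Haar transform of 4 samples: pairwise averages, then
    pairwise differences."""
    a = [(p[0] + p[1]) / 2, (p[2] + p[3]) / 2]
    d = [(p[0] - p[1]) / 2, (p[2] - p[3]) / 2]
    return a + d

def haar_inverse(c):
    """Exact inverse: recover the original 4 samples."""
    a0, a1, d0, d1 = c
    return [a0 + d0, a0 - d0, a1 + d1, a1 - d1]

pixels = [100, 102, 104, 106]   # 4 similar 8-bit samples: 32 bits raw
coeffs = haar_forward(pixels)    # averages near 100, differences near 1
```

The averages still need full precision, but the tiny differences can be coded in a few bits each, which is the mechanism behind the "32 bits down to roughly 20 bits" figure quoted above.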
Image enhancement techniques can be divided into spatial and frequency domain methods. Spatial domain methods operate directly on pixel values using techniques like basic gray level transformations, contrast stretching and thresholding. These manipulations are used to accentuate image features, improve display quality or aid machine analysis by modifying pixel intensities within an image.
Intensity Transformation and Spatial Filtering (Shajun Nisha)
Dr. S. Shajun Nisha discusses intensity transformation and spatial filtering techniques in image processing. Intensity transformation functions modify pixel intensities based on a transformation function. Spatial filtering involves applying an operator over a neighborhood of pixels. Common intensity transformations include contrast stretching and logarithmic transforms. Histogram equalization is also described to improve contrast. Spatial filters include linear filters implemented using imfilter and non-linear filters like median filtering with ordfilt2 and medfilt2. Examples demonstrate applying these techniques to enhance images.
The document discusses the relationship between pixels in an image, including pixel neighborhoods and connectivity. It defines different types of pixel neighborhoods - the 4 nearest neighbors, 8 nearest neighbors including diagonals, and boundary pixels that have fewer than 8 neighbors. Connectivity refers to whether two pixels are adjacent or connected based on their intensity values and neighborhood relationships. Specifically, it describes 4-connectivity, 8-connectivity, and m-connectivity. Regions in an image are sets of connected pixels, while boundaries separate adjacent regions.
Color Fundamentals and Color Models - Digital Image Processing (Amna)
This presentation is based on Color fundamentals and Color models.
~ Introduction to Colors
~ Color in Image Processing
~ Color Fundamentals
~ Color Models
~ RGB Model
~ CMY Model
~ CMYK Model
~ HSI Model
~ HSI and RGB
~ RGB To HSI
~ HSI To RGB
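Of the model conversions listed above, RGB to CMY is the simplest: with channels normalized to [0, 1], CMY is the complement of RGB (the HSI conversions are considerably more involved and are not sketched here). A minimal illustration:

```python
def rgb_to_cmy(r, g, b):
    """Subtractive primaries are the complements of the additive ones."""
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    """The conversion is its own inverse."""
    return (1 - c, 1 - m, 1 - y)
```

So pure red (1, 0, 0) in RGB becomes (0, 1, 1) in CMY: printing magenta plus yellow ink yields red. CMYK adds a separate black channel because mixing all three inks gives a muddy dark brown rather than true black.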
This document provides an overview of digital image processing. It discusses what digital images are composed of and how they are processed using computers. The key steps in digital image processing are described as image acquisition, enhancement, restoration, representation and description, and recognition. A variety of techniques can be used at each step like filtering, segmentation, morphological operations, and compression. The document also outlines common sources of digital images, such as from the electromagnetic spectrum, and applications like medical imaging, astronomy, security screening, and human-computer interfaces.
This document discusses various intensity transformation and spatial filtering techniques for digital image enhancement. It covers single pixel operations like negative image and contrast stretching. It also discusses neighborhood operations such as averaging and median filters. Finally, it discusses geometric spatial transformations like scaling, rotation and translation. The document provides details on basic intensity transformation functions including log, power law, and piecewise linear transformations. It also covers histogram processing techniques like histogram equalization, matching and local histogram processing. Spatial filtering and its mechanics are explained.
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing.
This document discusses image compression techniques. It begins by defining image compression as reducing the data required to represent a digital image. It then discusses why image compression is needed for storage, transmission and other applications. The document outlines different types of redundancies that can be exploited in compression, including spatial, temporal and psychovisual redundancies. It categorizes compression techniques as lossless or lossy and describes several algorithms for each type, including Huffman coding, LZW coding, DPCM, DCT and others. Key aspects like prediction, quantization, fidelity criteria and compression models are also summarized.
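Huffman coding, named above, attacks coding redundancy by giving frequent symbols shorter codewords. A hedged pure-Python sketch for hashable, non-tuple symbols such as characters (structure and names are mine):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Return a prefix-code table {symbol: bitstring} for the input."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries are (frequency, tie-breaker, subtree); a subtree is a
    # bare symbol or a (left, right) pair of subtrees.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:                     # merge the two rarest subtrees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def walk(node, prefix):                  # read codewords off the tree
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes
```

For the string "aaaabbc", the frequent symbol a gets a 1-bit code while b and c get 2 bits each, so the 7 symbols encode in 10 bits instead of 56 at 8 bits per character.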
The document discusses various image enhancement techniques in digital image processing. It describes point operations like image negative, contrast stretching, thresholding, brightness enhancement, log transformation, and power law transformation. Contrast stretching expands the range of intensity levels and can be done by multiplying pixels with a constant, using a transfer function, or histogram equalization. Thresholding converts an image to binary by assigning pixel values above a threshold to one level and below to another. Log and power law transformations compress high intensity values and expand low values to enhance an image. Matlab code examples are provided for each technique.
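The point operations above translate directly into short functions. A Python sketch for 8-bit intensities (the document's own examples use Matlab; these names and the rescaling constants are my choices):

```python
import math

L = 256  # number of gray levels for an 8-bit image

def negative(r):
    """Image negative: s = (L - 1) - r."""
    return (L - 1) - r

def log_transform(r, c=(L - 1) / math.log(L)):
    """s = c * log(1 + r): compresses high intensities, expands low ones."""
    return round(c * math.log(1 + r))

def gamma_transform(r, gamma, c=1.0):
    """Power law s = c * (L - 1) * (r / (L - 1)) ** gamma."""
    return round(c * (L - 1) * (r / (L - 1)) ** gamma)
```

With gamma < 1 the power law brightens dark regions (and with gamma > 1 darkens them), which is the behavior the summary attributes to log and power-law enhancement.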
This document discusses color image processing and provides details on color fundamentals, color models, and pseudocolor image processing techniques. It introduces color image processing, full-color versus pseudocolor processing, and several color models including RGB, CMY, and HSI. Pseudocolor processing techniques of intensity slicing and gray level to color transformation are explained, where grayscale values in an image are assigned colors based on intensity ranges or grayscale levels.
The document discusses various techniques for image compression. It describes how image compression aims to reduce redundant data in images to decrease file size for storage and transmission. It discusses different types of redundancy like coding, inter-pixel, and psychovisual redundancy that compression algorithms target. Common compression techniques described include transform coding, predictive coding, Huffman coding, and Lempel-Ziv-Welch (LZW) coding. Key aspects like compression ratio, mean bit rate, objective and subjective quality metrics are also covered.
Image processing involves manipulating digital images through algorithms implemented on computers. A digital image is composed of picture elements called pixels arranged in a grid. Each pixel represents a color or intensity value. Common image processing tasks include computer vision, optical character recognition, medical imaging, and more. Key concepts in image processing include pixels, resolution, color depth, and filtering/manipulating pixel values.
This document discusses various types of noise that can affect digital images, especially those from remote sensing. It describes photon noise, thermal noise, impulse noise, structured noise and other categories. Methods are presented for modeling and reducing noise, including frame averaging, low-pass filtering, median filtering, and filtering specifically for periodic structured noise. Real examples of noise and noise reduction are shown from satellite images.
This document discusses image noise reduction systems. It defines two main types of images - vector images defined by control points and digital images defined as 2D arrays of pixels. It describes different types of digital images like binary, grayscale, and color images. It then discusses image noise sources, types of noise like salt and pepper, Gaussian, speckle and periodic noise. Various noise filtering techniques are presented like minimum, maximum, mean, median and rank order filtering to remove salt and pepper noise.
1) Noise exists in all communication systems and degrades signal quality. It is caused by random movement of electrons and can be internal or external.
2) Thermal noise, also known as Johnson noise, is generated by thermal agitation of electrons in conductors. It is proportional to temperature and bandwidth.
3) Noise figure and noise temperature are used to measure the degradation of the signal-to-noise ratio caused by components in a communication system. A lower noise figure and noise temperature indicate less degradation.
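The thermal-noise relationship in point 2 is N = kTB, with k Boltzmann's constant, T the temperature in kelvin, and B the bandwidth in hertz; noise power is directly proportional to both temperature and bandwidth. A one-line sketch:

```python
K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K

def thermal_noise_power(temp_k, bandwidth_hz):
    """Johnson noise power N = k * T * B, in watts."""
    return K_BOLTZMANN * temp_k * bandwidth_hz

# At room temperature (290 K) over a 1 MHz bandwidth, N is on the
# order of 4e-15 W.
n = thermal_noise_power(290, 1e6)
```

Doubling either the temperature or the bandwidth doubles the noise power, which is the proportionality point 2 states.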
This document discusses noise addition and filtering in images. It begins by introducing different types of digital images like binary, grayscale, and color images. It then discusses various sources of image noise like sensor heat, ISO settings, and memory failures. The main types of noise covered are salt and pepper noise, Gaussian noise, speckle noise, and uniform noise. Linear and non-linear filtering techniques are described for removing each noise type, including median filtering, Wiener filtering, and mean/Gaussian filtering. Performance of filters is evaluated using measures like mean squared error and peak signal-to-noise ratio. Matlab is mentioned for implementing noise addition and filtering.
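Median filtering, the non-linear technique named above, is especially effective against salt-and-pepper impulses because an isolated extreme value never survives as the median of its window. A 1-D pure-Python sketch with a window of 3 (the 2-D case slides a small neighborhood instead):

```python
from statistics import median

def median_filter(signal, w=3):
    """Replace each interior sample with the median of its w-wide window."""
    half = w // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = median(signal[i - half:i + half + 1])
    return out

# Impulses: 255 ("salt") and 0 ("pepper") on a flat background of 10.
noisy = [10, 10, 255, 10, 10, 0, 10, 10]
```

Both impulses are removed without blurring the rest of the signal, which is why the median filter outperforms the mean filter on this noise type.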
This document describes a student project implementing speech recognition for desktop applications. It was completed by three students - Sarang Afle, Sneh Joshi, and Surbhi Sharma - for their computer science degree under the supervision of Professor Nitesh Rastogi. The project involved developing a speech recognition software that allows users to operate a computer through voice commands.
Noise recognition in digital image
1. Gurunanak Institute of Technology (GNIT)
M.Tech (CSE)
Presentation on Noise Recognition in Digital Image
Submitted by: Md. Reyad Hossain
Submitted to: Mr. Moloy Dhar
9/16/2015, Md. Reyad Hossain (GNIT)
2. Contents
1. What is an Image (pages 3-4)
2. What is a Digital Image (page 5)
3. What is Digital Image Processing (pages 6-7)
4. Types of Images (pages 8-10)
5. Formats of Images (page 11)
6. Image Noise (pages 12-13)
7. Types of Noises in Image (pages 14-30)
8. Filtering (pages 31-35)
9. Conclusion (page 36)
10. References (page 37)
3. An image (from Latin: imago) is an artefact that depicts or records visual
perception, for example a two-dimensional picture, that has a similar appearance
to some subject—usually a physical object or a person, thus providing
a depiction of it.
Images may be two-dimensional, such as a photograph or screen display, or
three-dimensional, such as a statue or hologram. They may be captured by
optical devices, such as cameras, mirrors, lenses, telescopes, and microscopes,
or by natural objects and phenomena, such as the human eye or water.
The word image is also used in the broader sense of any two-dimensional figure
such as a map, a graph, a pie chart, or a painting. In this wider sense, images can
also be rendered manually, such as by drawing, the art of painting, carving,
rendered automatically by printing or computer graphics technology,
or developed by a combination of methods, especially in a pseudo-photograph.
4. The term 'spatial domain' means that we work in the given space, in this
case, the image. In other words, the spatial domain implies working with the
pixel values, i.e. directly with the available raw data.
[Figure: image coordinate system, with the origin (0,0) at the top-left corner,
the x and y axes along the edges, and (255,255) at the bottom-right corner.]
Let g(x, y) be the original image, where g is the gray-level value and (x, y) are the
image coordinates. For an 8-bit image, g can take values from 0 to 255, where 0
represents black, 255 represents white, and all intermediate values represent
shades of gray. In an image of size 256×256, x and y can take values from (0, 0) to
(255, 255), as shown in the figure.
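Working directly with pixel values can be sketched in a few lines of NumPy (an illustrative example; the 256×256 ramp image below is hypothetical, not from the slides):

```python
import numpy as np

# A hypothetical 256x256 8-bit grayscale image: a horizontal ramp
# from black (0) on the left to white (255) on the right.
g = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# Working in the spatial domain means indexing pixel values directly.
print(g[0, 0])      # top-left pixel: 0 (black)
print(g[0, 255])    # top-right pixel: 255 (white)
print(g.shape)      # (256, 256)
```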
5. DIGITAL IMAGES are electronic snapshots taken of a scene or scanned from
documents, such as photographs, manuscripts, printed texts, and artwork. The
digital image is sampled and mapped as a grid of dots or picture elements (pixels).
Each pixel is assigned a tonal value (black, white, a shade of gray, or colour), which is
represented in binary code (zeros and ones). The binary digits ("bits") for each pixel
are stored in sequence by a computer and often reduced to a mathematical
representation (compressed). The bits are then interpreted and read by the
computer to produce an analog version for display or printing.
Pixel Values: As shown in a bitonal image, each pixel is assigned a tonal value,
in this example 0 for black and 1 for white.
6. Digital Image Processing (DIP) refers to processing digital images by means
of a digital computer. Digital image processing encompasses a wide and
varied field of applications: it covers processes whose inputs and outputs are
images, and in addition processes that extract attributes from images,
including the recognition of individual objects.
Digital image processing is the use of computer algorithms to perform image
processing on digital images. As a subcategory or field of digital signal
processing, digital image processing has many advantages over analog image
processing. It allows a much wider range of algorithms to be applied to the input
data and can avoid problems such as the build-up of noise and signal distortion
during processing. Since images are defined over two dimensions (perhaps more),
digital image processing may be modelled in the form of multidimensional
systems.
7. Many of the techniques of digital image processing, or digital picture
processing as it often was called, were developed in the 1960s at the Jet
Propulsion Laboratory, Massachusetts Institute of Technology, Bell
Laboratories, University of Maryland, and a few other research facilities, with
application to satellite imagery, wire-photo standards conversion, medical
imaging, videophone, character recognition, and photograph enhancement.
The cost of processing was fairly high, however, with the computing equipment
of that era. That changed in the 1970s, when digital image processing
proliferated as cheaper computers and dedicated hardware became available.
Images then could be processed in real time, for some dedicated problems such
as television standards conversion. As general-purpose computers became
faster, they started to take over the role of dedicated hardware for all but the
most specialized and computer-intensive operations.
8. It was stated earlier that images are 2-dimensional functions. Images are
classified as follows.
1. Monochrome Images: Also called binary images. Here, each pixel is stored
as a single bit (0 or 1), where 0 represents black and 1 represents white. It is a
black-and-white image in the strictest sense. These images are also called
bit-mapped images. In such images, we have only black and white pixels and
no other shades of gray.
2. Grayscale Images: Here, each pixel is usually stored as a byte (8 bits). Due to
this, each pixel can have values ranging from 0 (black) to 255 (white). Grayscale
images, as the name suggests, have black, white, and various shades of gray
present in the image.
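The distinction between the two types can be illustrated by thresholding a grayscale array into a binary one (an illustrative NumPy sketch; the sample values and the midpoint threshold are hypothetical choices):

```python
import numpy as np

# Hypothetical 4x4 grayscale image (8 bits per pixel, values 0-255).
gray = np.array([[  0,  64, 128, 255],
                 [ 30, 100, 200,  90],
                 [255, 255,   0,   0],
                 [ 10, 240, 130, 127]], dtype=np.uint8)

# A monochrome (binary) image stores one bit per pixel: 0 = black, 1 = white.
# One simple way to obtain it is thresholding at the midpoint of the range.
binary = (gray >= 128).astype(np.uint8)

print(binary)
```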
9. 3. Colour Images (24-bit): Colour images are based on the fact that a variety of
colours can be generated by mixing the three primary colours, viz. red, green, and
blue, in proper proportions. In colour images, each pixel is composed of RGB values,
and each of these colours requires 8 bits (one byte) for its representation. Hence,
each pixel is represented by 24 bits [R (8 bits), G (8 bits), B (8 bits)].
A 24-bit colour image supports 16,777,216 different combinations of colours.
Colour images can easily be converted to grayscale images using the equation
X = 0.30 R + 0.59 G + 0.11 B
An easier formula that achieves similar results is
X = (R + G + B) / 3
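Both conversion formulas can be tried on a single hypothetical RGB pixel; a short Python sketch (the pure-green pixel is an arbitrary example):

```python
# A single hypothetical RGB pixel (pure green).
r, g, b = 0, 255, 0

# Weighted (luminance) conversion from the slide: X = 0.30R + 0.59G + 0.11B
x_weighted = 0.30 * r + 0.59 * g + 0.11 * b

# Simple average: X = (R + G + B) / 3
x_average = (r + g + b) / 3

print(x_weighted)  # ~150.45: green appears bright to the eye
print(x_average)   # 85.0: the plain average underweights green
```

The weighted formula matches how the eye perceives brightness (green contributes most), which is why it is preferred over the plain average.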
10. 4. Half-toning: We have all read newspapers at some point of time (hopefully).
The images in them look like gray-level images. But if you look closely, all the
images are actually generated using only black colour.
Even the images in most books are generated using black colour on a white
background. In spite of this, we get an illusion of seeing gray levels. The
technique used to achieve an illusion of gray levels from only black and white
levels is called half-toning.
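Half-toning can be sketched with ordered (Bayer) dithering, one common way to trade gray levels for black/white dot density (an illustrative Python sketch; the slides do not name a specific half-toning method, and the 2x2 Bayer matrix and mid-gray patch are hypothetical choices):

```python
import numpy as np

# 2x2 Bayer matrix, scaled to per-cell gray-level thresholds in 0..255.
bayer = np.array([[0, 2],
                  [3, 1]])
thresholds = (bayer + 0.5) / 4.0 * 255.0

# Hypothetical 4x4 patch of uniform mid-gray (128).
gray = np.full((4, 4), 128.0)

# Tile the threshold matrix over the image and compare: the local density
# of white dots approximates the local gray level.
tiled = np.tile(thresholds, (2, 2))
halftoned = (gray > tiled).astype(np.uint8)   # 1 = white dot, 0 = black dot

print(halftoned)
```

For this mid-gray patch, half of the output pixels come out white, so the dot density (about 50%) matches the input gray level (128/255).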
11. Image file formats are standardized means of organizing and storing digital
images. Image files are composed of digital data in one of these formats and can
be rasterized for use on a computer display or printer. An image file format may
store data in uncompressed, compressed, or vector form. Once rasterized, an
image becomes a grid of pixels, each of which has a number of bits to designate its
colour equal to the colour depth of the device displaying it.
1. JPEG/JFIF: Joint Photographic Experts Group / JPEG File Interchange Format.
2. JPEG 2000: The successor of JPEG.
3. EXIF: Exchangeable Image File Format.
4. TIFF: Tagged Image File Format.
5. RAW: Raw Image Format.
6. GIF: Graphics Interchange Format.
7. BMP: Bitmap File Format.
8. PNG: Portable Network Graphics Format.
9. PPM: Portable Pixmap Format.
10. PGM: Portable Graymap Format.
11. PBM: Portable Bitmap Format.
12. The principal sources of noise in a digital image arise during acquisition and
during transmission. No matter how much care one takes, some amount of noise
always creeps in. Noise types are distinguished based on the shapes of their
probability density functions (PDFs).
Image noise is random (not present in the object imaged) variation of brightness
or colour information in images, and is usually an aspect of electronic noise. It can
be produced by the sensor and circuitry of a scanner or digital camera. Image noise
can also originate in film grain and in the unavoidable shot noise of an ideal photon
detector. Image noise is an undesirable by-product of image capture that adds
spurious and extraneous information.
The original meaning of "noise" was and remains "unwanted signal"; unwanted
electrical fluctuations in signals received by AM radios caused audible acoustic
noise ("static"). By analogy unwanted electrical fluctuations themselves came to be
known as "noise". Image noise is, of course, inaudible.
The magnitude of image noise can range from almost imperceptible specks on a
digital photograph taken in good light, to optical and radio astronomical images
that are almost entirely noise, from which a small amount of information can be
derived by sophisticated processing (a noise level that would be totally unacceptable
in a photograph since it would be impossible to determine even what the subject
was).
13. [Figure: block diagram of the degradation/restoration model. The source
image f(x, y) passes through a degradation function h(x, y), additive noise
n(x, y) is summed in to give the degraded image g(x, y), and a restoration
filter then processes g(x, y) to recover an estimate of f(x, y).]
Image degradation is said to occur when a certain image undergoes loss of stored
information either due to digitization or conversion (i.e. algorithmic operations),
decreasing visual quality.
The initial image (source, f(x, y)) undergoes degradation due to various
operations, conversions, and losses. This introduces noise. The noisy image is
then restored via restoration filters to make it visually acceptable to the user.
Degraded Image = Degradation Function * Source + Noise
g(x, y) = h(x, y) * f(x, y) + n(x, y)
where * denotes convolution.
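The degradation model g(x, y) = h(x, y) * f(x, y) + n(x, y) can be simulated directly; a minimal NumPy sketch, assuming a 3x3 averaging blur for h and zero-mean Gaussian noise for n (both hypothetical choices, as the slides do not fix them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source image f(x, y): an 8x8 gray ramp.
f = np.tile(np.linspace(0, 255, 8), (8, 1))

# Degradation function h(x, y): a 3x3 averaging (blur) kernel.
h = np.ones((3, 3)) / 9.0

# Convolve f with h ('same' size, zero padding): a naive direct loop.
pad = np.pad(f, 1)
blurred = np.empty_like(f)
for i in range(f.shape[0]):
    for j in range(f.shape[1]):
        blurred[i, j] = np.sum(pad[i:i + 3, j:j + 3] * h)

# Additive noise term n(x, y).
n = rng.normal(0.0, 5.0, f.shape)

# Degraded image: g(x, y) = h(x, y) * f(x, y) + n(x, y)
g = blurred + n
print(g.shape)
```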
14. Image noise is random (not present in the object imaged) variation of brightness
or colour information in images, and is usually an aspect of electronic noise. It can
be produced by the sensor and circuitry of a scanner or digital camera. Image
noise can also originate in film grain and in the unavoidable shot noise of an ideal
photon detector. The common types of noise are:
1. Gaussian noise
2. Salt-and-pepper (impulse) noise
3. Poisson noise
4. Erlang (gamma) noise
5. Exponential noise
6. Uniform noise
15. The principal sources of Gaussian noise in digital images arise during acquisition,
e.g. sensor noise caused by poor illumination and/or high temperature, and/or
transmission, e.g. electronic circuit noise.
A typical model of image noise is Gaussian, additive, independent at each pixel,
and independent of the signal intensity, caused primarily by Johnson-Nyquist
noise (thermal noise), including that which comes from the reset noise of
capacitors ("kTC noise"). Amplifier noise is a major part of the "read noise" of an
image sensor, that is, of the constant noise level in dark areas of the image. In
colour cameras, where more amplification is used in the blue colour channel than
in the green or red channel, there can be more noise in the blue channel. At higher
exposures, however, image sensor noise is dominated by shot noise, which is not
Gaussian and not independent of signal intensity.
16. The probability density function (PDF) of Gaussian noise is given by the expression

p(z) = (1 / (√(2π) σ)) e^(−(z − μ)² / (2σ²))

where z is the gray level, μ is the mean (average) value of z, σ is the standard
deviation, and σ² is the variance. At z = μ ± σ, the PDF falls to about 0.607 of its
maximum value 1 / (√(2π) σ).
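The Gaussian PDF on this slide can be checked numerically; an illustrative Python sketch (the values μ = 128 and σ = 20 are arbitrary choices):

```python
import numpy as np

def gaussian_pdf(z, mu, sigma):
    """PDF of Gaussian noise: p(z) = 1/(sigma*sqrt(2*pi)) * exp(-(z-mu)^2 / (2*sigma^2))."""
    return np.exp(-((z - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

mu, sigma = 128.0, 20.0
peak = gaussian_pdf(mu, mu, sigma)

# At z = mu +/- sigma the PDF drops to exp(-1/2) ~ 0.607 of its peak.
ratio = gaussian_pdf(mu + sigma, mu, sigma) / peak
print(round(ratio, 3))  # 0.607
```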
17. If we plot this function, we notice that about
70% of its values lie in the range [(μ − σ), (μ + σ)] and
95% of its values lie in the range [(μ − 2σ), (μ + 2σ)].
Gaussian noise has a maximum value at μ and then starts falling off.
Let us consider the image shown in the figure.
[Figure: (a) an image containing two gray levels a and b; (b) the histogram of
the image, with spikes at a and b; (c) the histogram after Gaussian noise
occurs, with the spikes spread out.]
Note: Gaussian noise occurs due to circuit noise, sensor noise, poor
illumination, and high temperature.
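The one-sigma and two-sigma fractions quoted above can be verified by drawing Gaussian noise samples and adding them to an image (an illustrative NumPy sketch; the image size, mean, and σ are arbitrary, and for an exact Gaussian the one-sigma fraction is closer to 68%):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical clean mid-gray image plus additive Gaussian noise.
image = np.full((256, 256), 128.0)
mu, sigma = 0.0, 10.0
noise = rng.normal(mu, sigma, image.shape)
noisy = np.clip(image + noise, 0, 255)   # keep values in the 8-bit range

# Roughly 70% of noise samples fall within one sigma of the mean,
# and about 95% within two sigmas.
within_1s = np.mean(np.abs(noise - mu) <= sigma)
within_2s = np.mean(np.abs(noise - mu) <= 2 * sigma)
print(round(within_1s, 2), round(within_2s, 2))
```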
18. Fat-tail distributed or "impulsive" noise is sometimes called salt-and-pepper
noise or spike noise. An image containing salt-and-pepper noise will have dark
pixels in bright regions and bright pixels in dark regions. This type of noise can be
caused by analog-to-digital converter errors, bit errors in transmission, etc. It can
be mostly eliminated by using dark-frame subtraction, median filtering, and
interpolation around dark/bright pixels.
Dead pixels in an LCD monitor produce a similar, but non-random, display.
Salt-and-pepper noise is also called shot noise, impulse noise, or spike noise. It
is usually caused by faulty memory locations, malfunctioning pixel elements in
the camera sensor, or timing errors in the process of digitization.
In salt-and-pepper noise there are only two possible values, a and b, and the
probability of each is less than 0.2. If either probability is greater than this, the
noise will swamp out the image. For an 8-bit image, the typical value is 255 for
salt noise and 0 for pepper noise.
Reasons for salt-and-pepper noise:
a. Memory cell failure.
b. Malfunctioning of the camera's sensor cells.
c. Synchronization errors in image digitizing or transmission.
19. Fig.: Salt-and-pepper noise
The PDF of salt-and-pepper noise (bipolar noise) is:

        p(z) = Pa   for z = a
        p(z) = Pb   for z = b
        p(z) = 0    elsewhere

If either Pa or Pb is zero, the noise is called unipolar noise. The PDF of salt-and-pepper noise is shown in the figure on the next page.
20. [Figure: PDF of salt-and-pepper noise, with impulses of height Pa at gray level a and Pb at gray level b]
Generally, a and b are the black and white gray levels respectively. Hence, for an 8-bit image, a = 0 and b = 255, which is why the noise is called salt (white) and pepper (black). Sometimes it is also called speckle noise.
Now let us take some images to understand it properly.
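A minimal sketch of corrupting an image with this bipolar model (NumPy assumed; the function name and the probabilities Pa = Pb = 0.05 are illustrative, chosen well below the 0.2 bound mentioned earlier): a uniform random draw per pixel decides whether it becomes pepper (0), salt (255), or stays unchanged.

```python
import numpy as np

def add_salt_pepper(image, p_salt=0.05, p_pepper=0.05, seed=0):
    """Corrupt random pixels: salt -> 255 (white), pepper -> 0 (black)."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    u = rng.random(image.shape)
    noisy[u < p_pepper] = 0            # pepper: z = a = 0
    noisy[u > 1.0 - p_salt] = 255      # salt:   z = b = 255
    return noisy

img = np.full((100, 100), 128, dtype=np.uint8)
noisy = add_salt_pepper(img)
```

With these probabilities roughly 90% of the pixels keep their original value, which is why the corrupted image still looks like the original scattered with isolated black and white dots.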
21. Let us take the same image as the one used for the Gaussian example. When salt-and-pepper noise creeps in, the image looks like the figure shown.
[Figure: original image and its salt-and-pepper corrupted version, with gray levels a and b marked on the histograms]
22. Photon noise, also known as Poisson noise, is a basic form of uncertainty
associated with the measurement of light, inherent to the quantized nature of light
and the independence of photon detections. Its expected magnitude is signal
dependent and constitutes the dominant source of image noise except in low-light
conditions.
Image sensors measure scene irradiance by counting the number of discrete
photons incident on the sensor over a given time interval. In digital sensors, the
photoelectric effect is used to convert photons into electrons, whereas film-based sensors rely on photo-sensitive chemical reactions. In both cases, the independence of random individual photon arrivals leads to photon noise, a signal-dependent form of uncertainty that is a property of the underlying signal itself.
23. The dominant noise in the darker parts of an image from an image sensor is
typically that caused by statistical quantum fluctuations, that is, variation in the
number of photons sensed at a given exposure level. This noise is known as
photon shot noise. Shot noise has a root-mean-square value proportional to the
square root of the image intensity, and the noises at different pixels are
independent of one another. Shot noise follows a Poisson distribution, which
except at very low intensity levels approximates a Gaussian distribution.
In addition to photon shot noise, there can be additional shot noise from the dark leakage current in the image sensor; this noise is sometimes known as "dark shot noise" or "dark-current shot noise". Dark current is greatest at "hot pixels" within the image sensor. The variable dark charge of normal and hot pixels can be subtracted off (using "dark frame subtraction"), leaving only the shot noise, or random component, of the leakage. If dark-frame subtraction is not done, or if the exposure time is long enough that the hot pixel charge exceeds the linear charge capacity, the noise will be more than just shot noise, and hot pixels appear as salt-and-pepper noise.
Individual photon detections can be treated as independent events that follow a random temporal distribution. As a result, photon counting is a classic Poisson process, and the number of photons N measured by a given sensor element over a time interval t is described by the discrete probability distribution

        p(N = k) = ( (λt)^k · e^(−λt) ) / k!
24. where λ is the expected number of photons per unit time interval, which is
proportional to the incident scene irradiance. This is a standard Poisson distribution
with a rate parameter λt that corresponds to the expected incident photon count.
The uncertainty described by this distribution is known as photon noise.
[Figure: center image shown alongside the same image with Poisson noise occurring]
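The signal dependence described above can be sketched directly: each pixel value is treated as the expected photon count λt and replaced by a Poisson draw. This is a minimal illustration (NumPy assumed; the function name and the `scale` mapping from gray levels to photon counts are illustrative assumptions). Note how, unlike Gaussian noise, the fluctuation grows with the pixel's own intensity.

```python
import numpy as np

def add_poisson_noise(image, scale=1.0, seed=0):
    """Replace each pixel by a Poisson draw whose rate is the pixel value.

    `scale` maps gray levels to expected photon counts: a higher scale
    means more photons and relatively less noise (SNR grows as sqrt(N)).
    """
    rng = np.random.default_rng(seed)
    counts = rng.poisson(image.astype(np.float64) * scale)
    return np.clip(counts / scale, 0, 255).astype(np.uint8)

img = np.full((64, 64), 100, dtype=np.uint8)
noisy = add_poisson_noise(img)
```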
25. Gamma noise often is associated with processes related to waiting times between
random (Poisson-distributed) events. Gamma noise typically is generated as a
pseudorandom pattern of waiting times between events of a unit mean Poisson
process.
The shape of the Gamma noise PDF is very similar to the Rayleigh distribution. The Gamma noise distribution starts from zero. It is given by the following expression:

        p(z) = ( a^b · z^(b−1) / (b − 1)! ) · e^(−az)   for z ≥ 0
        p(z) = 0                                        for z < 0
26. [Figure: Gamma PDF p(z) with peak value K at z = (b − 1)/a]
Here, a > 0 and b is a positive integer. The mean and the variance of this distribution are given by

        μ = b / a    and    σ² = b / a²
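These moments are easy to check numerically. A minimal sketch (NumPy assumed; the values of a and b are illustrative): NumPy's gamma sampler is parameterized by shape k and scale θ, so the PDF above with parameters a and integer b corresponds to shape = b and scale = 1/a.

```python
import numpy as np

# Sample Erlang/Gamma noise with a > 0 and integer b, matching the PDF
# above: mean = b/a, variance = b/a**2.
rng = np.random.default_rng(0)
a, b = 0.5, 4
# NumPy parameterizes gamma by shape k = b and scale theta = 1/a.
samples = rng.gamma(shape=b, scale=1.0 / a, size=100_000)
# Sample mean should approach b/a = 8, sample variance b/a**2 = 16.
```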
27. Exponential noise has an exponential-shaped PDF. It is given by the following expression:

        p(z) = a · e^(−az)   for z ≥ 0
        p(z) = 0             for z < 0

Here, a > 0. The mean and the variance of exponential noise are given by

        μ = 1 / a    and    σ² = 1 / a²

[Figure: exponential PDF p(z), starting at height a for z = 0 and decaying with z]
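As with the Gamma case, the stated mean and variance can be verified by sampling. A minimal sketch (NumPy assumed; the value of a is illustrative); NumPy's exponential sampler takes the scale 1/a rather than the rate a.

```python
import numpy as np

# Exponential noise with rate a > 0: mean = 1/a, variance = 1/a**2.
rng = np.random.default_rng(0)
a = 0.25
samples = rng.exponential(scale=1.0 / a, size=100_000)
# Sample mean should approach 1/a = 4, sample variance 1/a**2 = 16.
```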
28. The noise caused by quantizing the pixels of a sensed image to a number of
discrete levels is known as quantization noise. It has an approximately uniform
distribution. Though it can be signal dependent, it will be signal independent if
other noise sources are big enough to cause dithering, or if dithering is explicitly
applied.
Quantization, in mathematics and digital signal processing, is the process of
mapping a large set of input values to a (countable) smaller
set. Rounding and truncation are typical examples of quantization processes.
Quantization is involved to some degree in nearly all digital signal processing, as
the process of representing a signal in digital form ordinarily involves rounding.
Quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer.
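A rough sketch of a uniform quantizer and the resulting quantization error (NumPy assumed; the function name, number of levels, and input range are illustrative assumptions): the error per sample is bounded by half a quantization step, which is what gives quantization noise its approximately uniform distribution.

```python
import numpy as np

def quantize(signal, levels=16, lo=0.0, hi=255.0):
    """Uniformly quantize values in [lo, hi] to `levels` discrete steps."""
    step = (hi - lo) / (levels - 1)
    # Round each value to the nearest reconstruction level.
    return np.round((np.asarray(signal, dtype=np.float64) - lo) / step) * step + lo

x = np.linspace(0, 255, 1000)
xq = quantize(x, levels=16)
# Quantization noise: roughly uniform in [-step/2, +step/2].
error = x - xq
```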
29. The uniform noise caused by quantizing the pixels of an image to a number of distinct levels is known as quantization noise. It has an approximately uniform distribution. In uniform noise, the gray values of the noise are uniformly distributed across a specified range. Uniform noise can be used to generate many other types of noise distributions. This noise is often used to degrade images for the evaluation of image restoration algorithms, as it provides the most neutral or unbiased noise.
30. As the name suggests, this noise is uniform over a certain band of gray levels. The PDF of uniform noise is given by

        p(z) = 1 / (b − a)   if a ≤ z ≤ b
        p(z) = 0             otherwise

The mean of the function is

        μ = (a + b) / 2

The variance of this function is given by

        σ² = (b − a)² / 12

[Figure: uniform PDF p(z) with constant height 1/(b − a) between gray levels a and b]
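The stated mean and variance follow directly from sampling on [a, b]. A minimal sketch (NumPy assumed; the interval endpoints are illustrative):

```python
import numpy as np

# Uniform noise on [a, b]: mean = (a + b)/2, variance = (b - a)**2 / 12.
rng = np.random.default_rng(0)
a, b = -10.0, 10.0
noise = rng.uniform(a, b, size=100_000)
# Sample mean should approach 0, sample variance (20**2)/12 = 33.33.
```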
31. Filtering in image processing is a basic operation used to achieve many tasks such as noise reduction, interpolation, and re-sampling. Filtering image data is a standard process used in almost all image processing systems. The choice of filter is determined by the nature of the task and by the behavior and type of the data. Using filters to remove noise from a digital image while preserving its details is a necessary part of image processing. Filters can be described by different categories:
Filtering without Detection: In this filtering there is a window mask which is moved across the observed image. This mask is usually of size (2N+1) × (2N+1), where N is any positive integer, and the center element is the pixel of concern. As the mask moves from the top-left corner to the bottom-right corner of the image, it performs some arithmetic operation without discriminating between pixels of the image.
32. Detection followed by Filtering: This filtering involves two steps. In the first step it identifies the noisy pixels of the image, and in the second step it filters only those pixels which contain noise. Here too a mask is moved across the image, performing arithmetic operations to detect the noisy pixels. The filtering operation is then performed only on the pixels found to be noisy in the first step, keeping the non-noisy pixels of the image intact.
Hybrid Filtering: In a hybrid filtering scheme, two or more filters are used to filter a corrupted location of a noisy image. The decision to apply a particular filter is based on the noise level of the noisy image at the test pixel location and on the performance of the filter used on a filtering mask.
33. Linear Filters: Linear filters are used to remove certain types of noise. Gaussian or averaging filters are suitable for this purpose. However, these filters also tend to blur sharp edges, destroy lines and other fine details of the image, and perform badly in the presence of signal-dependent noise.
Non-Linear Filters: In recent years, a variety of non-linear median-type filters such as rank-conditioned, weighted median, relaxed median, and rank-selection filters have been developed to overcome the shortcomings of linear filters.
Different Types of Linear and Non-Linear Filters:
Mean Filter: The mean filter is a simple spatial filter. It is a sliding-window filter that replaces the center value in the window with the average (mean) of all the pixel values in the kernel or window. The window is usually square but can be of any shape.
34. Advantages:
a. Easy to implement.
b. Used to remove impulse noise.
Disadvantage:
It does not preserve the details of the image; some details are removed when using the mean filter.
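The sliding-window averaging described above can be sketched directly. This is a minimal, unoptimized illustration (NumPy assumed; the function name, edge-replication border handling, and test image are illustrative assumptions): note how a single bright impulse is not removed but smeared across the window, which is the mean filter's weakness with impulse noise.

```python
import numpy as np

def mean_filter(image, k=3):
    """Slide a k x k window and replace each pixel by the window average.

    Edges are handled by replicating the border pixels (a common choice).
    """
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode='edge')
    out = np.zeros_like(image, dtype=np.float64)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

img = np.array([[10, 10, 10],
                [10, 100, 10],
                [10, 10, 10]], dtype=np.float64)
smoothed = mean_filter(img)
# The central impulse (100) is averaged with its 8 neighbors: (8*10+100)/9 = 20.
```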
35. Median Filter: The median filter is a simple and powerful non-linear filter based on order statistics. It is an easy-to-implement method of smoothing images. The median filter is used for reducing the amount of intensity variation between one pixel and the next. In this filter, we do not replace the pixel value with the mean of all neighbouring pixel values; we replace it with the median value. The median is calculated by first sorting all the pixel values in the window into ascending order and then replacing the pixel under consideration with the middle pixel value. If the neighbourhood under consideration contains an even number of pixels, the average of the two middle pixel values is used instead. The median filter gives the best result when the impulse noise percentage is less than 0.1%; as the amount of impulse noise increases, the median filter no longer gives the best result.
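The sort-and-take-the-middle procedure above can be sketched as follows (a minimal illustration, NumPy assumed; the function name, edge-replication border handling, and test image are illustrative). In contrast to the mean filter, a single impulse is eliminated entirely because it can never be the middle value of a mostly uncorrupted window.

```python
import numpy as np

def median_filter(image, k=3):
    """Replace each pixel by the median of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.zeros_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            # np.median sorts the window values and picks the middle one.
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single impulse ("salt") pixel is removed completely, unlike with the mean filter.
img = np.full((5, 5), 50, dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter(img)
```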
36. Enhancement of a noisy image is a necessary task in digital image processing, and filters are the best tool for removing noise from images. In this paper we describe various types of noise models and filtering techniques. Filtering techniques are divided into two parts: linear and non-linear techniques. After studying linear and non-linear filters, we find that each has its limitations and advantages. In hybrid filtering schemes, two or more filters are recommended to filter a corrupted location; the decision to apply a particular filter is based on the noise level at the test pixel location or on the performance of the filter scheme on a filtering mask.