The document provides an overview of a course on digital image processing. It is divided into 5 units that cover topics such as digital image fundamentals, image transforms, image enhancement, image filtering and restoration, image compression, image segmentation, and image representation and description. The course will examine concepts like sampling and quantization, image transforms including Fourier transforms, image enhancement techniques, image compression standards, and image segmentation methods. Students will learn about various image processing schemes and how to reconstruct images from projections using transforms like the Radon transform. The document provides context and outlines the scope of content to be covered in the digital image processing course.
This document provides an overview of digital image processing. It defines what an image is, noting that an image is a spatial representation of a scene represented as an array of pixels. Digital image processing refers to processing digital images on a computer. The key steps in digital image processing are image acquisition, enhancement, restoration, compression, morphological processing, segmentation, representation, and recognition. Digital image processing has many applications including medical imaging, traffic monitoring, biometrics, and computer vision.
Introduction to Digital Image Processing by nikesh gadare
The document provides an overview of the key concepts and stages involved in digital image processing. It discusses image acquisition, preprocessing such as enhancement and restoration, and post-processing which includes tasks like segmentation, description and recognition. The goal is to introduce fundamental concepts and classical methods of digital image processing. Various applications are also highlighted including medical imaging, surveillance, and industrial inspection.
This document discusses color image processing and provides details on color fundamentals, color models, and pseudocolor image processing techniques. It introduces color image processing, full-color versus pseudocolor processing, and several color models including RGB, CMY, and HSI. Pseudocolor processing techniques of intensity slicing and gray level to color transformation are explained, where grayscale values in an image are assigned colors based on intensity ranges or grayscale levels.
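To make intensity slicing concrete, the following minimal Python/NumPy sketch (with arbitrarily chosen band boundaries and colors, not code from the presentation) assigns one RGB color per band of gray levels:

```python
import numpy as np

def intensity_slice(gray, bounds, colors):
    """Assign one RGB color to each band [bounds[i], bounds[i+1]) of gray levels."""
    out = np.zeros(gray.shape + (3,), dtype=np.uint8)
    for lo, hi, color in zip(bounds[:-1], bounds[1:], colors):
        out[(gray >= lo) & (gray < hi)] = color
    return out

# Example: four bands over an 8-bit ramp image (boundaries chosen arbitrarily).
gray = np.arange(256, dtype=np.uint16).reshape(16, 16)
rgb = intensity_slice(gray, bounds=[0, 64, 128, 192, 256],
                      colors=[(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)])
```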
This document discusses various techniques for image enhancement in the spatial domain. It defines image enhancement as improving visual quality or converting images for better analysis. Key techniques covered include noise removal, contrast adjustment, intensity adjustment, histogram equalization, thresholding, gray-level slicing, and image rotation. Conversion methods, such as grayscale conversion and different file formats, are also summarized. Experimental results and applications in fields like medicine, astronomy, and security are mentioned.
Lecture 13 (Usage of Fourier transform in image processing) by VARUN KUMAR
This document discusses the Fourier transform and its applications in image processing. It begins by explaining the Fourier transform for 1D and 2D continuous and discrete signals. The Fourier transform converts a signal from the time or space domain to the frequency domain. It then covers properties of the Fourier transform such as separability and translation. The document concludes by mentioning references for further reading on image processing and computer vision topics.
This document discusses various frequency domain image filtering techniques. It outlines the basic steps for filtering in the frequency domain which includes centering the Fourier transform, computing the discrete Fourier transform, multiplying by a filter function, computing the inverse transform and canceling centering operations. Specific filters are then described including low pass, high pass, ideal filters and Butterworth filters. Examples of applying these filters to images are provided to demonstrate the effects. Homomorphic filtering is also introduced as a technique for illumination correction.
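Those steps map almost directly onto NumPy's FFT routines. The sketch below is a minimal illustration assuming an ideal low-pass filter with an arbitrary cutoff; fftshift and ifftshift play the role of the centering and cancel-centering steps:

```python
import numpy as np

def ideal_lowpass(image, cutoff):
    """Filter in the frequency domain: DFT, center, multiply by H, invert."""
    F = np.fft.fftshift(np.fft.fft2(image))            # DFT with centered spectrum
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from center
    H = (dist <= cutoff).astype(float)                 # ideal low-pass transfer function
    g = np.fft.ifft2(np.fft.ifftshift(F * H))          # cancel centering, inverse DFT
    return np.real(g)
```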
This document discusses digital image processing and various image enhancement techniques. It begins with introductions to digital image processing and fundamental image processing systems. It then covers topics like image sampling and quantization, color models, image transforms like the discrete Fourier transform, and noise removal techniques like median filtering. Histogram equalization and homomorphic filtering are also summarized as methods for image enhancement.
This document discusses color image processing and different color models. It begins with an introduction and then covers color fundamentals such as brightness, hue, and saturation. It describes common color models like RGB, CMY, HSI, and YIQ. Pseudo color processing and full color image processing are explained. Color transformations between color models are also discussed. Implementation tips for interpolation methods in color processing are provided. The document concludes with thanks to the head of the computer science department.
This document provides an overview of various image enhancement techniques. It begins with an introduction to image enhancement and its objectives. It then outlines and describes several categories of enhancement methods, including spatial-frequency domain methods, point operations, histogram operations, spatial operations, and transform operations. Specific techniques discussed in detail include contrast stretching, clipping, thresholding, median filtering, unsharp masking, and principal component analysis for multispectral images. The document also covers color image enhancement and techniques for pseudocoloring.
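Of the histogram operations listed above, histogram equalization is easy to show end to end; below is a minimal NumPy sketch for 8-bit grayscale images (an illustration, not code from the document):

```python
import numpy as np

def histogram_equalize(gray):
    """Equalize an 8-bit grayscale image via its cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum() / gray.size             # normalized cumulative histogram
    lut = np.round(255 * cdf).astype(np.uint8)  # gray-level mapping
    return lut[gray]                            # apply the lookup table per pixel
```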
This document provides an overview of image compression techniques. It discusses how image compression works to reduce the number of bits needed to represent image data. The main goals of image compression are to reduce irrelevant and redundant image information to produce smaller and more efficient file sizes for storage and transmission. The document outlines different compression methods including lossless compression, which compresses data without any loss, and lossy compression, which allows for some loss of information in exchange for higher compression ratios. Specific techniques like run length encoding are also explained.
Image registration is a process that aligns pixels in two images to correspond to the same point in a scene. It allows images to be combined or focused in a way that improves information extraction. Some applications of image registration include stereo imaging, remote sensing, comparing images over time, and finding where a template matches an image. Template matching is used to find the best match between a template and image by measuring similarity or mismatch between them. Cross-correlation is commonly used as a similarity measure for template matching.
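As a rough illustration of cross-correlation-based template matching, the brute-force sketch below scores every offset with normalized cross-correlation (the function and loop structure are illustrative assumptions, not the document's implementation):

```python
import numpy as np

def best_match(image, template):
    """Return the top-left offset where normalized cross-correlation peaks."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw] - image[r:r + th, c:c + tw].mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```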
This slide gives you a basic understanding of digital image compression.
Please note: this is a class teaching PPT; more detailed topics were covered in the classroom.
This document provides an overview of digital image processing and image compression techniques. It defines what a digital image is, discusses the advantages and disadvantages of digital images over analog images. It describes the fundamental steps in digital image processing as well as types of data redundancy that can be exploited for image compression, including coding, interpixel, and psychovisual redundancy. Common image compression models and lossless compression techniques like Lempel-Ziv-Welch coding are also summarized.
The document discusses various techniques for image compression including:
- Run-length coding which encodes repeating pixel values and their run lengths (a short sketch appears after this list).
- Difference coding which encodes the differences between pixel values.
- Block truncation coding which divides images into blocks and assigns codewords.
- Predictive coding which predicts pixel values from neighbors and encodes differences.
Reversible compression allows exact reconstruction, while lossy compression sacrifices some information for higher compression, though the images remain visually similar. Combining techniques can achieve even higher compression ratios.
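As a concrete instance of the first technique in the list above, here is a minimal run-length encoder and decoder for a row of pixel values (an illustrative sketch; the presentation's exact encoding format is not specified):

```python
def rle_encode(pixels):
    """Encode a pixel sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel sequence."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

row = [255, 255, 255, 0, 0, 255]
assert rle_decode(rle_encode(row)) == row  # lossless, as noted above
```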
This document discusses various spatial filters used for image processing, including smoothing and sharpening filters. Smoothing filters are used to reduce noise and blur images, with linear filters performing averaging and nonlinear filters using order statistics like the median. Sharpening filters aim to enhance edges and details by using derivatives, with first derivatives calculated via gradient magnitude and second derivatives using the Laplacian operator. Specific filters covered include averaging, median, Sobel, and unsharp masking.
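For example, the first-derivative (gradient-magnitude) sharpening mentioned above is commonly computed with the Sobel operators; the following minimal NumPy sketch (with zero padding as an arbitrary border choice) illustrates the idea:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # same kernel rotated for vertical gradients

def correlate3x3(image, kernel):
    """Correlate a 3x3 kernel with a zero-padded grayscale image."""
    h, w = image.shape
    padded = np.pad(image.astype(float), 1)
    out = np.zeros((h, w))
    for dr in range(3):
        for dc in range(3):
            out += kernel[dr, dc] * padded[dr:dr + h, dc:dc + w]
    return out

def sobel_magnitude(image):
    """First-derivative edge strength: magnitude of the Sobel gradient."""
    return np.hypot(correlate3x3(image, SOBEL_X), correlate3x3(image, SOBEL_Y))
```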
This is a basic introductory presentation for beginners. It gives you an idea of what image processing means. The presentation consists of an introduction to digital image processing, image enhancement, image filtering, finding image edges, image analysis, tools for image processing, and finally some applications of digital image processing.
Lecture 15 (DCT, Walsh and Hadamard Transform) by VARUN KUMAR
This document discusses discrete cosine, Walsh, and Hadamard transforms for 2D signals. It provides the mathematical formulas for the forward and inverse transforms of each. The discrete cosine transform uses cosine functions in its kernel. The Walsh transform uses the binary representation of values, with the kernel containing terms with (−1) factors. The Hadamard transform has a similar kernel to the Walsh transform. Each transform decomposes 2D signals into component frequencies or patterns in a way that is separable and symmetric.
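To make the Hadamard case concrete, a kernel of size N = 2^n can be built recursively from the 2x2 core (the Sylvester construction) and applied separably to an N x N block. The sketch below is illustrative; normalization conventions vary:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: H_(2N) = [[H, H], [H, -H]], entries +-1."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_2d(block):
    """Separable 2D Hadamard transform of an N x N block (N a power of two)."""
    N = block.shape[0]
    H = hadamard(N)           # symmetric, so H.T == H
    return H @ block @ H / N  # with 1/N scaling the transform is self-inverse

block = np.arange(16, dtype=float).reshape(4, 4)
coeffs = hadamard_2d(block)
assert np.allclose(hadamard_2d(coeffs), block)
```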
This document discusses color image processing and provides information on various color models and color fundamentals. It describes full-color and pseudo-color processing, color fundamentals including the visible light spectrum, color perception by the human eye, and color properties. It also summarizes RGB, CMY/CMYK, and HSI color models, conversions between models, and methods for pseudo-color image processing including intensity slicing and intensity to color transformations.
Wondering about using PNG, JPG, BMP, or GIF? This presentation answers queries related to designing digital images and which formats are best when saving them.
Terms like raster images, vector images, vectors, alpha channels, transparency, palettes, and compression are explained here.
The document provides a history of digital image processing from the early 1920s to the present day. It discusses some of the earliest applications including transmitting newspaper images via submarine cable. Major developments occurred in the 1960s with improved computing enabling enhanced images from space missions. Digital image processing began being used for medical applications in the 1970s. The field has since expanded significantly with uses in areas like astronomy, art, medicine, law enforcement, and more. The document also defines digital images and digital image processing, and outlines some key stages in processing including acquisition, restoration, segmentation, and representation.
The document discusses pseudo color images and techniques for converting grayscale images to color. It defines pseudo color images as grayscale images mapped to color according to a lookup table or function. It describes various color schemes for this mapping, including grayscale schemes that use shades of gray and oscillating schemes that emphasize certain grayscale ranges in color. The document also discusses using piecewise linear functions and smooth non-linear functions to transform grayscale levels to color for purposes such as enhancing contrast or reducing noise in images.
This document discusses frequency domain processing and various image transforms, with a focus on the discrete Fourier transform (DFT). It provides definitions and properties of the DFT, including its relationship to the Fourier transform and examples of applying the DFT to images. Other transforms discussed include the Walsh transform, with examples provided of computing and displaying the Walsh transform of an image. MATLAB code is presented for calculating the DFT and Walsh transform of grayscale images.
Setting the lowest-order bit plane to zero reduces the number of distinct gray levels by half, since only even gray values remain. This makes the histogram more peaked, with pixels concentrated in fewer bins.
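A quick NumPy check of this effect on a random 8-bit image (an illustrative snippet, not from the document):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

lsb_zeroed = img & 0xFE            # clear bit plane 0: only even levels remain

print(np.unique(img).size)         # up to 256 distinct gray levels
print(np.unique(lsb_zeroed).size)  # at most 128 distinct gray levels
```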
This PPT gives a detailed description of image enhancement techniques, covering topics such as basic gray-level transformations, histogram processing, enhancement using arithmetic/logic operations, image averaging methods, and piecewise-linear transformation functions.
Noise in images can take various forms and have different sources. Gaussian noise follows a normal distribution and looks like subtle color variations, while salt and pepper noise completely replaces some pixel values with maximum or minimum values. Mean, median, and trimmed filters are commonly used to reduce noise. Mean filters average pixel values within a window, but can blur details. Median filters replace the center pixel with the median value in the window, which is effective for salt and pepper noise while retaining details better than mean filters. Adaptive filters vary the window size to better target noise without excessive blurring.
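To see why the median filter preserves detail better, here is a minimal 3x3 median filter in NumPy (illustrative; edge-replication padding is an arbitrary border choice):

```python
import numpy as np

def median3x3(image):
    """Replace each pixel with the median of its 3x3 neighborhood."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    windows = np.stack([padded[dr:dr + h, dc:dc + w]
                        for dr in range(3) for dc in range(3)])
    return np.median(windows, axis=0).astype(image.dtype)
```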
It is very useful for students.
Sharpening in the spatial domain works by direct manipulation of image pixels. The objective of sharpening is to highlight transitions in intensity. Image blurring is accomplished by pixel averaging in a neighborhood; since averaging is analogous to integration, sharpening can conversely be accomplished by spatial differentiation.
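A minimal second-derivative sharpening sketch, assuming the common 4-neighbor Laplacian kernel and an 8-bit intensity range (illustrative choices, not the presentation's code):

```python
import numpy as np

def laplacian_sharpen(image, strength=1.0):
    """Sharpen by subtracting the 4-neighbor Laplacian (second derivative)."""
    f = image.astype(float)
    padded = np.pad(f, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * f)
    return np.clip(f - strength * lap, 0, 255)  # assumes an 8-bit range
```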
Prepared by M. Sahaya Pretha, Department of Computer Science and Engineering, MS University, Tirunelveli Dist, Tamilnadu.
This document provides an overview of the syllabus for the course ECS-702 Digital Image Processing. It covers 5 units: Introduction and Fundamentals, Image Enhancement in Spatial and Frequency Domains, Image Restoration, Morphological Image Processing, and Image Segmentation. The introduction discusses key concepts like the components of an image processing system, elements of visual perception, and the fundamental steps of image acquisition, enhancement, and restoration. The syllabus then delves into specific techniques in each unit such as spatial filters, Fourier transforms, noise models, morphological operations, and segmentation approaches.
This document provides an overview of digital image processing including key concepts, applications, fundamental steps, and components of an image processing system.
1) It defines a digital image and discusses image sources such as electromagnetic spectrum, acoustic, and synthetic images. Common applications are in medical imaging, machine vision, astronomy, and remote sensing.
2) The fundamental steps of image processing are outlined as image acquisition, enhancement, restoration, processing in different domains, compression, morphological operations, segmentation, representation and recognition.
3) The main components of an image processing system are sensors, hardware, computer, software, storage, displays and networking components.
Digital image processing is the use of algorithms and mathematical models to process digital images. The goal of digital image processing is to enhance the quality of images, extract meaningful information from images, and automate image-based tasks.
Digital images can be represented as matrices of pixels where each pixel has a numeric value corresponding to its brightness level. To create a digital image, an analog image is sampled spatially to capture pixel locations and quantized to assign brightness levels. More pixels and levels increase image quality but also storage needs. Key steps in digital image processing include acquisition, enhancement, restoration, compression and analysis operations like segmentation and recognition. Together these techniques allow powerful manipulation of visual information on computers.
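That pipeline of spatial sampling plus brightness quantization can be sketched in a few lines of NumPy; the continuous scene below is simulated with a smooth function, and the grid size and level count are arbitrary illustrative choices:

```python
import numpy as np

def acquire(scene, rows, cols, levels):
    """Spatially sample a continuous scene, then quantize its brightness."""
    y, x = np.mgrid[0:1:rows * 1j, 0:1:cols * 1j]       # sampling grid in [0, 1]^2
    samples = scene(x, y)                               # brightness values in [0, 1]
    q = np.floor(samples * levels).clip(0, levels - 1)  # quantize to discrete levels
    return q.astype(np.uint8)

# A synthetic "analog" scene whose brightness varies smoothly across the frame.
img = acquire(lambda x, y: (np.sin(6 * x) * np.cos(4 * y) + 1) / 2,
              rows=64, cols=64, levels=16)
```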
Discover the fundamentals, characteristics, and types of digital image analysis. Learn about pixels, bit depth, challenges, and AI impacts on image processing.
This document provides an overview of digital image processing. It discusses image sensors, sampling and quantization, image resolution, and elements of visual perception. Regarding image sensors, it notes there are three types: single imaging sensor, line sensor, and array sensor. It explains how images are digitized through sampling and quantization. Image resolution depends on sample number and gray level number, with higher numbers providing better approximation of the original image. Elements of visual perception discussed include the structure of the human eye, image formation in the eye, rods and cones, and brightness adaptation.
Computer vision and image processing are closely related fields that use AI techniques to extract information from visual inputs.
Image processing involves transforming images into digital form and performing operations to extract useful information. It includes steps like image acquisition, enhancement, restoration, representation, and recognition. Common applications of image processing include improving medical and satellite images.
Computer vision enables computers to interpret and understand visual inputs like images and videos. It seeks to develop techniques that help computers "see" and derive meaningful information from visual content. Key computer vision tasks include image classification, object detection, and image segmentation. Computer vision has many applications in industries like automotive, healthcare, and agriculture.
Introduction to digital image processing, image processing, digital image, analog image, formation of digital image, level of digital image processing, components of a digital image processing system, advantages of digital image processing, limitations of digital image processing, fields of digital image processing, ultrasound imaging, x-ray imaging, SEM, PET, TEM
Digital images are represented as a finite set of digital values called pixels arranged in a grid. There are several types of digital images including grayscale, RGB color, and binary. Digital image processing involves tasks like image enhancement, restoration, compression, and analysis. The key steps in digital image processing are image acquisition, representation and description, segmentation, recognition and display. The human visual system perceives brightness in a logarithmic fashion and can adapt to a wide range of light intensities. Proper sampling and quantization are required to convert a natural image into a digital image without loss of information.
General Review Of Algorithms Presented For Image Segmentation by Melissa Moore
This paper proposes a system for recognizing human facial actions from images using image processing and machine learning techniques. The system first detects faces in images using a pretrained detector. Facial landmarks are then extracted to locate features like eyes, nose, mouth etc. Features extracted from the landmarks are used to recognize six basic facial expressions (happy, sad, angry, surprised, disgusted and neutral). The system is trained on a facial expression dataset to learn the patterns associated with each expression. The trained model can then be used to automatically recognize the expression in new input images. The proposed system has applications in areas like human-computer interaction, lie detection, sentiment analysis etc.
This document outlines the course syllabus for Digital Image Processing (DIP). It includes 5 units covering key topics in DIP like digital image fundamentals, image enhancement, restoration and segmentation, wavelets and compression, and image representation and recognition. The syllabus allocates 45 class periods to cover these units in depth. Recommended textbooks and references for the course are also provided.
Digital image processing involves manipulating digital images using a computer. It has two main applications: improving images for human interpretation and processing images for machine perception tasks. A digital image is composed of pixels arranged in a grid, each with an intensity value. Key steps in digital image processing include image acquisition through sensors, enhancement, restoration, compression and segmentation. The human visual system has adapted to a wide range of light intensities through mechanisms like brightness adaptation and color vision. Digital images are formed by sampling and quantizing a continuous image function.
Blending of Images Using Discrete Wavelet Transform by rahulmonikasharma
The project presents multi-focus image fusion using the discrete wavelet transform with the local directional pattern (LDP) and spatial frequency analysis. Multi-focus image fusion in wireless visual sensor networks is the process of blending two or more images into a new one that describes the scene more accurately than any individual source image. The proposed model uses the multi-scale decomposition performed by the discrete wavelet transform to fuse the images in the frequency domain, decomposing an image into structural and textural components. Because the transform does not downsample the image, edge and texture details are preserved when the image is reconstructed from the frequency domain, reducing the blocking and ringing artifacts that occur with DCT- and DWT-based methods. The low-frequency sub-band coefficients are fused by selecting the coefficient with maximum spatial frequency, which indicates the overall activity level of an image. The high-frequency sub-band coefficients are fused by selecting the coefficients with the maximum LDP code value; LDP computes the edge response values in all eight directions at each pixel position and generates a code from the relative strength magnitudes. Finally, the two fused frequency sub-bands are inverse transformed to reconstruct the fused image. System performance is evaluated using parameters such as peak signal-to-noise ratio, correlation, and entropy.
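A compressed sketch of such a fusion pipeline using PyWavelets is shown below. It simplifies the paper's method in two labeled ways: it uses the standard decimated DWT (the paper's transform avoids downsampling), and it fuses high-frequency bands by maximum absolute coefficient rather than the LDP rule described above; treat it as an outline of the structure only:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_multifocus(img_a, img_b, wavelet="db1"):
    """Single-level DWT fusion of two registered grayscale images."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b.astype(float), wavelet)

    def pick(a, b):  # high-frequency rule: keep the larger-magnitude coefficient
        return np.where(np.abs(a) >= np.abs(b), a, b)

    cA = (cA1 + cA2) / 2.0  # low-frequency rule: plain averaging in this sketch
    return pywt.idwt2((cA, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2))),
                      wavelet)
```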
Evaluation Of Proposed Design And Necessary Corrective Action by Sandra Arveseth
1. The document discusses the evaluation of a proposed satellite image design project and necessary corrective actions.
2. The objectives of the project are to construct a land cover classification taxonomy, classify satellite images by type (e.g. vegetation, buildings, water), and use MapReduce to process large amounts of satellite image data.
3. Satellite images play a major role in event detection like changing landscapes, monitoring glaciers, and detecting disasters. The project aims to detect land changes over time, store and classify the data, and retrieve it using defined mechanisms.
This document provides an overview of digital image processing and human vision. It discusses the key stages of digital image processing including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also covers the anatomy of the human eye, photoreceptors, color perception, image formation in the eye, brightness adaptation, and the Weber ratio relating the just noticeable difference in light intensity to background intensity. The document uses images and diagrams from the textbook "Digital Image Processing" to illustrate concepts in digital images and the human visual system.
Image Processing Compression and Reconstruction by Using New Approach Artific... by CSCJournals
In this paper a neural network based image compression method is presented. Neural networks offer the potential for providing a novel solution to the problem of data compression by its ability to generate an internal data representation. This network, which is an application of back propagation network, accepts a large amount of image data, compresses it for storage or transmission, and subsequently restores it when desired. A new approach for reducing training time by reconstructing representative vectors has also been proposed. Performance of the network has been evaluated using some standard real world images. It is shown that the development architecture and training algorithm provide high compression ratio and low distortion while maintaining the ability to generalize and is very robust as well.
The document discusses key concepts in image processing including image sensing, acquisition, formation, sampling, quantization, and digital representation. It describes how the human eye forms images and contains photoreceptor cells. There are three main types of image sensors: single, line, and array. Sampling converts a continuous image to digital by selecting pixel values at regular intervals while quantization assigns discrete brightness levels. Together they allow images to be represented digitally as matrices of pixel values.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Similar to M.sc.iii sem digital image processing unit i
The document discusses the relationship between economics, environment, and ethics. It summarizes that we are facing issues today because of ignoring the fundamental relationship between the three. The economy relies on ecosystem services provided by the environment, but the environment is being degraded by waste and emissions. Ethical practices also constitute an unseen force guiding economic behavior.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms for those who already suffer from conditions like anxiety and depression.
Scientific temper and attitude refer to traits like critical thinking, objectivity, open-mindedness, and respect for evidence. Developing a scientific attitude in students is the aim of science teaching. Some key aspects of scientific attitude are questioning beliefs, reasoning logically, honestly reporting observations, and accepting ideas that are supported by evidence. Fostering skills like curiosity, perseverance, and skepticism in students can help cultivate their scientific temper.
This document discusses the aims and objectives of teaching biological science. It begins by defining biological science as the study of life and living organisms. It then lists several objectives of teaching biological science, including developing students' scientific outlook, curiosity about their surroundings, and respect for nature. The document also discusses the values of teaching biological science, which include encouraging curiosity and knowledge, and keeping an open mind. It emphasizes that teaching biological science should help students become responsible democratic citizens and appreciate diverse perspectives. Overall, the document provides an overview of the goals and importance of teaching biological science.
This presentation discusses using information and communication technologies (ICT) applications in biology learning. It introduces the topic, noting the presenter and institution. The document provides references on the advantages and limitations of ICT in education, using ICT to integrate science teaching and learning, and the impact of ICT in education.
The term isolation refers to the separation of a strain from a natural, mixed population of living microbes, as present in the environment. It becomes necessary to maintain the viability and purity of the microorganism by keeping the pure culture free from contamination.
1) The document discusses oxidation-reduction (redox) reactions and concepts related to solution concentrations. It defines oxidizing and reducing agents and gives examples of each.
2) A redox reaction involves the simultaneous oxidation and reduction of reactants. In redox reactions, the total increase in oxidation number equals the total decrease.
3) Disproportionation reactions involve the same element in a compound being both oxidized and reduced. The reverse is called a comproportionation reaction.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise boosts blood flow, releases endorphins, and promotes changes in the brain which help enhance one's emotional well-being and mental clarity.
The document discusses the concept of equilibrium in economics. It defines equilibrium as a state of balance where opposing forces neutralize each other. In microeconomics, market equilibrium occurs when supply equals demand. In macroeconomics, equilibrium is reached when aggregate demand equals aggregate supply. The document provides examples of economic disequilibrium and equilibrium, and examines how prices adjust via demand and supply mechanisms to reach equilibrium. Key terms in Hindi are also defined.
This document summarizes Crystal Field Theory, which considers the electrostatic interactions between metal ions and ligands. It describes ligands and metal ions as point charges that can have attractive or repulsive forces. This causes the d orbitals of the metal ion to split into two sets depending on if the field created by the ligands is weak or strong. The theory explains color in coordination compounds as being caused by d-d electron transitions under the influence of ligands. However, it has limitations like not accounting for other metal orbitals or the partial covalent nature of metal-ligand bonds.
Dr. Laxmi Verma teaches Microeconomics at the BA-1 level and her topic is on utility in Unit 1 of the course. She teaches at Shri Shankracharya Mahavidyalya in Junwani.
Dr. Laxmi Verma is teaching a class of B.A-1 students. The subject is Indian Economy and the topic being covered is New Economic Reform. The document provides basic context about an economics lecture being given to undergraduate students on recent reforms in the Indian economy.
An iso-product curve shows the different combinations of two factors of production, such as labor and capital, that result in the same level of output. It is represented graphically, with the two factors on the x and y axes and points of equal output connected to form an iso-product curve. Key properties are that iso-product curves slope downward to the right, are convex to the origin, and do not intersect, as each curve represents a different output level. Higher iso-product curves correspond to higher output levels. Iso-product curves allow producers to identify input combinations that achieve maximum output efficiently.
This document discusses demand theory and the relationship between supply and demand. It covers the following key points:
1) Demand theory explains how consumer demand for goods and services relates to their prices in the market. It forms the basis for the demand curve, which shows that as price increases, demand decreases.
2) Demand depends on the utility of goods in satisfying wants and needs as well as a consumer's ability to pay. Supply and demand determine market prices and reach equilibrium when supply equals demand.
3) The demand curve has a negative slope, showing an inverse relationship between price and quantity demanded. A change in non-price factors like income can shift the demand curve. The law of supply and
Land reform in India has involved abolishing intermediaries like rent collectors and establishing ceilings on land ownership to redistribute surplus land to the landless. The goals were to remove impediments to agricultural production from the previous feudal system and eliminate exploitation. Key reforms included abolishing rent collectors, regulating tenancy, imposing landholding ceilings, consolidating fragmented holdings, and promoting cooperative farming. Impacts included reducing disparities, giving ex-landlords other work, increasing revenue, and empowering small farmers and laborers. Land reform aimed to promote social justice and economic growth through a more equitable distribution of agricultural land.
This document discusses different types of structural isomerism that can occur in coordination compounds. It defines structural isomerism as compounds having the same molecular formula but different physical and chemical properties due to different structures or orientations. The types of structural isomerism discussed include ionization isomerism, solvate/hydrate isomerism, linkage isomerism, coordination isomerism, ligand isomerism, polymerization isomerism, geometrical isomerism (cis/trans), and optical isomerism. Examples are provided to illustrate each type of isomerism.
More from Shri Shankaracharya College, Bhilai, Junwani
What are greenhouse gases and how many gases affect the Earth? by moosaasad1975
What greenhouse gases are, how they affect the Earth and its environment, what the future of the environment and the Earth may be, and how they influence weather and climate.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... by Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
The binding of cosmological structures by massless topological defects by Sérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... by University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste... by Sérgio Sacani
Context. With a mass exceeding several 10⁴ M☉ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
ESPP presentation to the EU Waste Water Network, 4th June 2024: "EU policies driving nutrient removal and recycling and the revised UWWTD (Urban Waste Water Treatment Directive)"
Nucleophilic Addition of carbonyl compounds.pptx by SSR02
Nucleophilic addition is the most important reaction of carbonyls, not just of aldehydes and ketones but of carboxylic acid derivatives in general.
Carbonyls undergo addition reactions with a large range of nucleophiles.
Comparing the relative basicity of the nucleophile and the product is extremely helpful in determining how reversible the addition reaction is. Reactions with Grignards and hydrides are irreversible. Reactions with weak bases like halides and carboxylates generally don’t happen.
Electronic effects (inductive effects, electron donation) have a large impact on reactivity.
Large groups adjacent to the carbonyl will slow the rate of reaction.
Neutral nucleophiles can also add to carbonyls, although their additions are generally slower and more reversible. Acid catalysis is sometimes employed to increase the rate of addition.
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
The debris of the 'last major merger' is dynamically young by Sérgio Sacani
The Milky Way's (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the 'last major merger.' Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the 'last major merger' did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
The debris of the ‘last major merger’ is dynamically young
M.Sc. III Semester Digital Image Processing - Unit I
DIGITAL IMAGE PROCESSING
M.Sc. Computer Science
III Semester
Ms. Megha Deoray
Assistant Professor
Department of Computer Science
Shri Shankaracharya Mahavidyalaya, Junwani, Bhilai
THIRD SEMESTER: M.Sc. (CS)
Paper IV: Digital Image Processing
Max Marks: 100 Min Marks: 40
NOTE: The question paper setter is advised to prepare unit-wise questions with the
provision of internal choice.
UNIT - I
Digital Image fundamentals: Introduction, an image model, sampling and quantization, basic
relationships between pixels, imaging geometry.
UNIT - II
Image Transforms: Properties of the 2-D Fourier transform, FFT algorithm and other
separable image transforms: Walsh, Hadamard, Cosine, Haar, and Slant transforms;
KL transform and their properties.
UNIT - III
Image Enhancement: Background, enhancement by point processing, histogram
processing, spatial filtering and enhancement in frequency domain, color image
processing.
Image filtering and restoration: degradation model, diagonalization of circulant and
block-circulant matrices, algebraic approach to restoration, inverse filtering, least mean
squares and interactive restoration, geometric transformations.
UNIT - IV
Image compression: Fundamentals, image compression models, error-free
compression, lossy compression, image compression standards.
Image segmentation: Detection of discontinuities, edge linking and boundary detection,
thresholding, region-oriented segmentation, use of motion in segmentation.
UNIT - V
Representation and description: Various schemes for representation, boundary descriptors, and
regional descriptors; image reconstruction from projections, Radon transform;
convolution/filtered back-projection algorithms.
References:
1. Fundamentals of Digital Image Processing - A. K. Jain, Prentice Hall.
2. Digital Image Processing - Rafael C. Gonzalez, Richard E. Woods.
UNIT-I
INTRODUCTION TO IMAGE PROCESSING
Introduction: Before understanding the concept of a digital image, we first have to answer the
question "What is an image?". An image is a picture that has been created or copied and stored
in electronic form.
There are two types of images:
1. Bitmap image (raster graphics)
2. Vector image (vector graphics)
1. Bitmap image (raster graphics): An image stored in raster form is sometimes called a
bitmap; in other words, a picture that is made of pixels is called a bitmap image.
2. Vector image (vector graphics): An image that is generated from a mathematical formula is
called a vector image.
1.1 Introduction:
Digital image processing deals with developing a digital system that performs operations
on a digital image. An image is nothing more than a two-dimensional signal. It is defined by
a mathematical function f(x,y), where x and y are the two coordinates (horizontal and
vertical), and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray
level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete
quantities, we call the image a digital image. The field of digital image processing refers
to the processing of digital images by means of a digital computer. A digital image is composed
of a finite number of elements, each of which has a particular location and value; these
elements are referred to as picture elements, image elements, or pixels.
1.2 Types of an image
Binary image: The binary image, as its name suggests, contains only two pixel values, 0 and 1,
where 0 refers to black and 1 refers to white. This image is also known as a monochrome image.
Black and white image: An image that consists of only black and white colors is called a
black and white image.
8-bit color format: This is the most common image format. It has 256 different shades and is
commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white,
and 127 stands for gray.
16-bit color format: This is a color image format with 65,536 different colors; it is also known
as the high color format. In this format the distribution of values is not the same as in a
grayscale image: a 16-bit pixel is divided into three components, red, green, and blue, the
familiar RGB format.
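These formats can be illustrated concretely; the following is a minimal NumPy sketch (the array sizes and values are assumptions made purely for illustration):

import numpy as np

# Binary (monochrome) image: only the two pixel values 0 (black) and 1 (white)
binary = np.array([[0, 1], [1, 0]], dtype=np.uint8)

# 8-bit grayscale image: 256 shades, 0 = black, 127 = gray, 255 = white
gray = np.zeros((4, 4), dtype=np.uint8)
gray[1:3, 1:3] = 127
gray[0, 0] = 255

# RGB color image stored with 8 bits per channel; a packed 16-bit
# "high color" pixel instead squeezes R, G, B into 5+6+5 bits
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 255  # every pixel pure red

print(binary.max(), int(gray[0, 0]), rgb.shape)  # 1 255 (4, 4, 3)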
Applications:
Some of the major fields in which digital image processing is widely used are mentioned below.
(1) Gamma-ray imaging: nuclear medicine and astronomical observations.
(2) X-ray imaging: X-rays of the body.
(3) Ultraviolet band: lithography, industrial inspection, microscopy, lasers.
(4) Visible and infrared bands: remote sensing.
(5) Microwave band: radar imaging.
1.3 Components of Image Processing System:
i) Image sensors: With reference to sensing, two elements are required to acquire a digital
image. The first is a physical device that is sensitive to the energy radiated by the object we
wish to image, and the second is specialized image processing hardware.
ii) Specialized image processing hardware: It consists of the digitizer just mentioned, plus
hardware that performs other primitive operations, such as an arithmetic logic unit, which
performs arithmetic (such as addition and subtraction) and logical operations in parallel on
images.
iii) Computer: This is a general-purpose computer and can range from a PC to a supercomputer
depending on the application. In dedicated applications, specially designed computers are
sometimes used to achieve a required level of performance.
iv) Software: It consists of specialized modules that perform specific tasks. A well-designed
package also includes the capability for the user to write code that, at a minimum, utilizes the
specialized modules. More sophisticated software packages allow the integration of those
modules.
v) Mass storage: This capability is a must in image processing applications. An image of
size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one
megabyte of storage space if the image is not compressed. Image processing applications
fall into three principal categories of storage:
i) Short-term storage for use during processing
ii) On-line storage for relatively fast retrieval
iii) Archival storage, such as magnetic tapes and disks
vi) Image displays: Image displays in use today are mainly color TV monitors. These monitors
are driven by the outputs of image and graphics display cards that are an integral part of the
computer system.
vii) Hardcopy devices: The devices for recording images include laser printers, film cameras,
heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film
provides the highest possible resolution, but paper is the obvious medium of choice for written
applications.
viii) Networking: It is almost a default function in any computer system in use today because
of the large amount of data inherent in image processing applications. The key consideration
in image transmission is bandwidth.
1.4 Elements of Visual Perception:
(i) Structure of the human eye:
The eye is nearly a sphere with an average diameter of approximately 20 mm. The eye is
enclosed by three membranes:
a) The cornea and sclera: The cornea is a tough, transparent tissue that covers the anterior
surface of the eye. The rest of the optic globe is covered by the sclera.
b) The choroid: It contains a network of blood vessels that serve as the major source of
nutrition to the eye. It also helps to reduce the extraneous light entering the eye. It has two parts:
(1) Iris diaphragm: it contracts or expands to control the amount of light that enters
the eye.
(2) Ciliary body
(c) Retina: It is the innermost membrane of the eye. When the eye is properly focused, light from
an object outside the eye is imaged on the retina. There are various light receptors over the
surface of the retina. The two major classes of receptors are:
1) Cones: These number about 6 to 7 million and are located in the central portion of the
retina, called the fovea. They are highly sensitive to color. Humans can resolve fine details with
the cones because each one is connected to its own nerve end. Cone vision is called photopic
or bright-light vision.
2) Rods: These are much greater in number, from 75 to 150 million, and are distributed over the
entire retinal surface. The large area of distribution, and the fact that several rods are connected
to a single nerve end, give a general overall picture of the field of view. They are not
involved in color vision and are sensitive to low levels of illumination. Rod vision is called
scotopic or dim-light vision. The area of the retina where receptors are absent is called the
blind spot.
(ii) Image formation in the eye:
The major difference between the lens of the eye and an ordinary optical lens is that the former
is flexible. The shape of the lens of the eye is controlled by tension in the fibers of the ciliary
body. To focus on a distant object, the controlling muscles flatten the lens; to focus on an
object near the eye, they allow the lens to become thicker. The distance between
the center of the lens and the retina, called the focal length, varies from about 17 mm to about
14 mm as the refractive power of the lens increases from its minimum to its maximum. When the
eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power;
when the eye focuses on a nearby object, the lens is most strongly refractive. The retinal image
is focused primarily on the region of the fovea. Perception then takes place by the relative
excitation of light receptors, which transform radiant energy into electrical impulses that are
ultimately decoded by the brain.
(iii) Brightness adaptation and discrimination:
Digital images are displayed as a discrete set of intensities. The range of light intensity levels to
which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic
threshold to the glare limit. Experimental evidence indicates that subjective brightness is a
logarithmic function of the light intensity incident on the eye.
The curve represents the range of intensities to which the visual system can adapt, but the
visual system cannot operate over such a dynamic range simultaneously. Rather, this large
variation is accomplished by changes in its overall sensitivity, a phenomenon called brightness
adaptation. For any given set of conditions, the current sensitivity level of the visual system is
called the brightness adaptation level, Ba in the curve. The small intersecting curve represents
the range of subjective brightness that the eye can perceive when adapted to this level. It is
restricted at level Bb, at and below which all stimuli are perceived as indistinguishable blacks.
The upper portion of the curve is not actually restricted; higher intensities would simply raise
the adaptation level above Ba. The ability of the eye to discriminate between changes in light
intensity at any specific adaptation level is also of considerable interest. Take a flat, uniformly
illuminated area large enough to occupy the entire field of view of the subject. It may be a
diffuser, such as opaque glass, illuminated from behind by a light source whose intensity I can
be varied. To this field is added an increment of illumination ΔI in the form of a short-duration
flash that appears as a circle in the center of the uniformly illuminated field. If ΔI is not bright
enough, the subject cannot see any perceivable change.
As ΔI gets stronger, the subject may indicate a perceived change. ΔIc is the increment of
illumination discernible 50% of the time with background illumination I. The quantity ΔIc/I is
called the Weber ratio. A small value means that a small percentage change in intensity is
discernible, representing "good" brightness discrimination; a large value of the Weber ratio
means that a large percentage change in intensity is required, representing "poor" brightness
discrimination.
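As a worked illustration (the numbers here are assumed for demonstration, not experimental data): if, against a background of intensity I = 100 units, the smallest flash increment the subject notices half the time is ΔIc = 2 units, then the Weber ratio is ΔIc/I = 2/100 = 0.02, indicating good brightness discrimination; if the subject instead needs ΔIc = 30 units before noticing a change, the ratio is 0.3, indicating poor discrimination.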
(iv) Optical Illusion:
In an optical illusion, the eye fills in non-existing information or wrongly perceives the
geometrical properties of objects.
1.5 Fundamental steps involved in image processing:
The steps involved in image processing fall into two categories:
(1) Methods whose inputs and outputs are images.
(2) Methods whose outputs are attributes extracted from those images.
i) Image acquisition: It could be as simple as being given an image that is already in digital
form. Generally, the image acquisition stage involves preprocessing such as scaling.
ii) Image enhancement: It is among the simplest and most appealing areas of digital image
processing. The idea behind enhancement is to bring out details that are obscured, or simply to
highlight certain features of interest in an image. Image enhancement is a very subjective area
of image processing.
iii) Image restoration: It also deals with improving the appearance of an image, but it is an
objective approach, in the sense that restoration techniques tend to be based on mathematical
or probabilistic models of image degradation. Enhancement, on the other hand, is based on
human subjective preferences regarding what constitutes a "good" enhancement result.
iv) Color image processing: This is an area that has been gaining importance because of the
use of digital images over the internet. Color image processing deals basically with color
models and their implementation in image processing applications.
v) Wavelets and multiresolution processing: These are the foundation for representing
images in various degrees of resolution.
vi) Compression: It deals with techniques for reducing the storage required to save an image,
or the bandwidth required to transmit it over a network. It has two major approaches:
a) lossless compression, and b) lossy compression.
vii) Morphological processing: It deals with tools for extracting image components that are
useful in the representation and description of the shape and boundary of objects. It is widely
used in automated inspection applications.
viii) Representation and description: This always follows the output of the segmentation step,
that is, raw pixel data constituting either the boundary of a region or all the points in the region
itself. In either case, converting the data to a form suitable for computer processing is necessary.
ix) Recognition: It is the process that assigns a label to an object based on its descriptors. It is
the last step of image processing, and it uses artificial intelligence software.
Knowledge base:
Knowledge about a problem domain is coded into an image processing system in the form of a
knowledge base. This knowledge may be as simple as detailing the regions of an image where
the information of interest is known to be located, thus limiting the search that has to be
conducted in seeking that information. The knowledge base can also be quite complex, such as
an interrelated list of all major possible defects in a materials inspection problem, or an image
database containing high-resolution satellite images of a region in connection with a
change-detection application.
1.6 A Simple Image Model:
An image is denoted by a two-dimensional function of the form f(x, y). The value or amplitude
of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is
determined by the source of the image. When an image is generated by a physical process, its
values are proportional to the energy radiated by a physical source. As a consequence, f(x, y)
must be nonzero and finite; that is,
0 < f(x, y) < ∞
The function f(x, y) may be characterized by two components:
(a) The amount of source illumination incident on the scene being viewed.
(b) The amount of source illumination reflected back by the objects in the scene.
These are called the illumination and reflectance components and are denoted by i(x, y) and
r(x, y) respectively.
The two functions combine as a product to form f(x, y):
f(x, y) = i(x, y) r(x, y)
We call the intensity of a monochrome image at any coordinates (x, y) the gray level l of the
image at that point, l = f(x, y), with
Lmin ≤ l ≤ Lmax
Lmin is required to be positive and Lmax must be finite, where
Lmin = imin rmin
Lmax = imax rmax
The interval [Lmin, Lmax] is called the gray scale. Common practice is to shift this interval
numerically to the interval [0, L-1], where l = 0 is considered black and l = L-1 is considered
white on the gray scale. All intermediate values are shades of gray varying from black to
white.
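A minimal Python/NumPy sketch of this model, assuming made-up illumination and reflectance values, shows the product f(x, y) = i(x, y) r(x, y) and the shift of the gray scale to [0, L-1]:

import numpy as np

# Assumed illumination i(x,y) and reflectance r(x,y) components
i = np.full((2, 2), 90.0)                 # uniform illumination
r = np.array([[0.10, 0.50],
              [0.80, 0.95]])              # reflectance in (0, 1)
f = i * r                                 # f(x,y) = i(x,y) * r(x,y)

# Shift the gray scale [Lmin, Lmax] to the standard interval [0, L-1]
L = 256
l = (f - f.min()) / (f.max() - f.min()) * (L - 1)
print(l.astype(np.uint8))                 # 0 = black ... 255 = white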
1.7 Image Sampling and Quantization:
To create a digital image, we need to convert the continuous sensed data into digital form. This
involves two processes: sampling and quantization. An image may be continuous with respect
to the x and y coordinates and also in amplitude. To convert it into digital form, we have to
sample the function in both coordinates and in amplitude.
Digitizing the coordinate values is called sampling.
Digitizing the amplitude values is called quantization.
Consider the continuous image values along a line segment AB. To sample this function, we
take equally spaced samples along line AB. The location of each sample is given by a vertical
tick mark in the bottom part of the figure. The samples are shown as small squares
superimposed on the function, and the set of these discrete locations gives the sampled
function.
In order to form a digital image, the gray-level values must also be converted (quantized) into
discrete quantities. So we divide the gray-level scale into, say, eight discrete levels, ranging from
black to white. The continuous gray levels are quantized simply by assigning one of the
eight discrete gray levels to each sample. The assignment is made depending on the vertical
proximity of a sample to a vertical tick mark.
Starting at the top of the image and carrying out this procedure line by line produces a two-
dimensional digital image.
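The two steps can be sketched in a few lines of Python; the continuous profile below is an assumed stand-in for the image values along AB:

import numpy as np

# Continuous intensity profile along line AB, modeled as a function of t
def profile(t):
    return 0.5 + 0.5 * np.sin(2 * np.pi * t)   # values in [0, 1]

# Sampling: equally spaced sample locations along AB
t = np.linspace(0.0, 1.0, 16)
samples = profile(t)

# Quantization: assign each sample to the nearest of 8 discrete gray levels
levels = 8
quantized = np.round(samples * (levels - 1)).astype(int)
print(quantized)   # one integer gray level (0..7) per sample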
1.8 Digital Image Definition:
A digital image f(m,n) described in a 2D discrete space is derived from an analog image f(x,y)
in a 2D continuous space through a sampling process that is frequently referred to as
digitization. The mathematics of that sampling process will be described in subsequent
chapters. For now, we will look at some basic definitions associated with the digital image. The
effect of digitization is shown in the figure.
The 2D continuous image f(x,y) is divided into N rows and M columns. The intersection of a
row and a column is termed a pixel. The value assigned to the integer coordinates (m,n), with
m = 0, 1, 2, ..., N-1 and n = 0, 1, 2, ..., M-1, is f(m,n). In fact, in most cases, f(m,n) is actually a
function of many variables, including depth, color, and time (t).
There are three types of computerized processes in the processing of images:
1) Low-level processes: these involve primitive operations such as image preprocessing to
reduce noise, contrast enhancement, and image sharpening. These kinds of processes are
characterized by the fact that both inputs and outputs are images.
2) Mid-level processes: these involve tasks like segmentation, description of objects to reduce
them to a form suitable for computer processing, and classification of individual objects. The
inputs to these processes are generally images, but the outputs are attributes extracted from
the images.
3) High-level processes: these involve "making sense" of an ensemble of recognized objects,
as in image analysis, and performing the cognitive functions normally associated with vision.
1.9 Representing Digital Images:
The result of sampling and quantization is a matrix of real numbers. Assume that an image
f(x,y) is sampled so that the resulting digital image has M rows and N columns. The values of
the coordinates (x,y) now become discrete quantities; thus, the value of the coordinates at the
origin becomes (x,y) = (0,0). The next coordinate values along the first row of the image are
(x,y) = (0,1), and so on; this notation does not mean that these are the actual values of the
physical coordinates when the image was sampled.
Each element of the matrix is called an image element, picture element, pixel, or pel. The
sampling process may be viewed as partitioning the xy plane into a grid, with the coordinates
of the center of each grid cell being a pair of elements from the Cartesian product Z^2, which is
the set of all ordered pairs of elements (zi, zj) with zi and zj being integers from Z. Hence,
f(x,y) is a digital image if it assigns a gray level (that is, a real number from the set of real
numbers R) to each distinct pair of coordinates (x,y). This functional assignment is the
quantization process. If the gray levels are also integers, Z replaces R, and a digital image
becomes a 2D function whose coordinates and amplitude values are integers. Due to processing,
storage, and hardware considerations, the number of gray levels L is typically an integer
power of 2:
L = 2^k
Then the number, b, of bits required to store a digital image is
b = M × N × k
When M = N, this equation becomes
b = N² × k
When an image can have 2^k gray levels, it is referred to as a "k-bit image". An image with 256
possible gray levels is called an "8-bit image" (256 = 2^8).
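A small worked check of this formula (the 1024 × 1024, 8-bit case agrees with the one-megabyte figure quoted in the mass-storage discussion above):

def storage_bits(M, N, k):
    """Number of bits b = M * N * k needed for an M x N k-bit image."""
    return M * N * k

b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")   # 8388608 bits = 1048576 bytes (1 MB)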
1.10 Spatial and Gray-Level Resolution:
Spatial resolution is the smallest discernible detail in an image. Suppose a chart is
constructed with vertical lines of width W, with the spaces between them also having width W,
so that a line pair consists of one such line and its adjacent space. Thus, the width of a line pair
is 2W, and there are 1/2W line pairs per unit distance. Spatial resolution is simply the smallest
number of discernible line pairs per unit distance.
Gray-level resolution refers to the smallest discernible change in gray level. Measuring
discernible changes in gray level is a highly subjective process. Reducing the number of bits k
while keeping the spatial resolution constant creates the problem of false contouring.
It is caused by the use of an insufficient number of gray levels in the smooth areas of a digital
image. It is called so because the ridges resemble topographic contours in a map. It is generally
quite visible in images displayed using 16 or fewer uniformly spaced gray levels.
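False contouring can be reproduced by requantizing a smooth gray ramp to fewer levels; a minimal NumPy sketch (with assumed image content):

import numpy as np

def requantize(image, k):
    """Reduce an 8-bit image to 2**k uniformly spaced gray levels."""
    step = 256 // (2 ** k)
    return (image // step) * step

# A smooth horizontal gray ramp: exactly the kind of area where
# false contours (visible ridges between levels) appear
ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
coarse = requantize(ramp, 4)          # only 16 gray levels
print(np.unique(coarse).size)         # 16 distinct values -> banding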
1.11 Relationships Between Pixels:
(i) Neighbors of a pixel:
A pixel p at coordinates (x,y) has four horizontal and vertical neighbors, whose coordinates are
given by
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance
from (x,y), and some of the neighbors of p lie outside the digital image if (x,y) is on the border
of the image. The four diagonal neighbors of p have coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p). These points, together with the 4-neighbors, are called the
8-neighbors of p, denoted by N8(p).
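These neighbor sets translate directly into code. A minimal sketch follows; border handling is deliberately ignored, since, as noted above, border pixels may have neighbors outside the image:

def n4(p):
    """4-neighbors N4(p) of pixel p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors ND(p) of pixel p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors N8(p): the 4-neighbors together with the diagonals."""
    return n4(p) | nd(p)

print(sorted(n8((2, 2))))   # the eight pixels surrounding (2, 2)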
(ii) Adjacency:
Let V be the set of gray-level values used to define adjacency; in a binary image, V = {1} if we
are referring to adjacency of pixels with value 1. There are three types of adjacency:
4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
m-adjacency: two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
(iii) Distance measures:
For pixels p, q, and z with coordinates (x,y), (s,t), and (v,w) respectively, D is a distance
function or metric if
(a) D(p,q) ≥ 0 (D(p,q) = 0 iff p = q),
(b) D(p,q) = D(q,p), and
(c) D(p,z) ≤ D(p,q) + D(q,z).
The Euclidean distance between p and q is defined as
De(p,q) = [(x - s)² + (y - t)²]^(1/2)
The D4 distance (also called the city-block distance) between p and q is defined as
D4(p,q) = |x - s| + |y - t|
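A short Python sketch of these metrics; the D8 (chessboard) distance is not spelled out in the notes above and is included here only for completeness:

def de(p, q):
    """Euclidean distance De(p,q) = ((x-s)**2 + (y-t)**2) ** 0.5."""
    (x, y), (s, t) = p, q
    return ((x - s) ** 2 + (y - t) ** 2) ** 0.5

def d4(p, q):
    """City-block distance D4(p,q) = |x-s| + |y-t|."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard distance D8(p,q) = max(|x-s|, |y-t|)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
print(de(p, q), d4(p, q), d8(p, q))   # 5.0 7 4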
1.12 Image Sensing and Acquisition:
(Figures: single image sensor; line sensor; array sensor.)
The types of images in which we are interested are generated by the combination of an
"illumination" source and the reflection or absorption of energy from that source by the
elements of the "scene" being imaged. We enclose illumination and scene in quotes to
emphasize the fact that they are considerably more general than the familiar situation in which
a visible light source illuminates a common everyday 3-D (three-dimensional) scene. For
example, the illumination may originate from a source of electromagnetic energy such as radar,
infrared, or X-ray energy. But, as noted earlier, it could originate from less traditional sources,
such as ultrasound or even a computer-generated illumination pattern. Similarly, the scene
elements could be familiar objects, but they can just as easily be molecules, buried rock
formations, or a human brain. We could even image a source, such as acquiring images of the
sun. Depending on the nature of the source, illumination energy is reflected from, or transmitted
through, objects. An example in the first category is light reflected from a planar surface. An
example in the second category is when X-rays pass through a patient's body for the purpose of
generating a diagnostic X-ray film. In some applications, the reflected or transmitted energy is
focused onto a photoconverter (e.g., a phosphor screen), which converts the energy into visible
light. Electron microscopy and some applications of gamma imaging use this approach. The
idea is simple: incoming energy is transformed into a voltage by the combination of input
electrical power and sensor material that is responsive to the particular type of energy being
detected. The output voltage waveform is the response of the sensor(s), and a digital quantity
is obtained from each sensor by digitizing its response. In this section, we look at the principal
modalities for image sensing and generation.
(i) Image acquisition using a single sensor:
Perhaps the most familiar sensor of this type is the photodiode, which is constructed of silicon
materials and whose output voltage waveform is proportional to light. The use of a filter in
front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor
favors light in the green band of the color spectrum. As a consequence, the sensor output will
be stronger for green light than for the other components of the visible spectrum.
In order to generate a 2-D image using a single sensor, there have to be relative displacements
in both the x- and y-directions between the sensor and the area to be imaged. The figure shows
an arrangement used in high-precision scanning, where a film negative is mounted onto a drum
whose mechanical rotation provides displacement in one dimension. The single sensor is
mounted on a lead screw that provides motion in the perpendicular direction. Since mechanical
motion can be controlled with high precision, this method is an inexpensive (but slow) way to
obtain high-resolution images. Other similar mechanical arrangements use a flat bed, with the
sensor moving in two linear directions. These types of mechanical digitizers are sometimes
referred to as microdensitometers.
(ii) Image acquisition using sensor strips:
A geometry that is used much more frequently than the single sensor consists of an in-line
arrangement of sensors in the form of a sensor strip, as the figure shows. The strip provides
imaging elements in one direction; motion perpendicular to the strip provides imaging in the
other direction. This is the type of arrangement used in most flatbed scanners. Sensing devices
with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne
imaging applications, in which the imaging system is mounted on an aircraft that flies at a
constant altitude and speed over the geographical area to be imaged. One-dimensional imaging
sensor strips that respond to various bands of the electromagnetic spectrum are mounted
perpendicular to the direction of flight. The imaging strip gives one line of the image at a time,
and the motion of the strip relative to the scene completes the other dimension of a
two-dimensional image. Lenses or other focusing schemes are used to project the area to be
scanned onto the sensors. Sensor strips mounted in a ring configuration are used in medical and
industrial imaging to obtain cross-sectional ("slice") images of 3-D objects.
(iii) Image acquisition using sensor arrays:
Here the individual sensors are arranged in the form of a 2-D array. Numerous electromagnetic
and some ultrasonic sensing devices are frequently arranged in an array format. This is also the
predominant arrangement found in digital cameras. A typical sensor for these cameras is a
CCD array, which can be manufactured with a broad range of sensing properties and can be
packaged in rugged arrays of elements. CCD sensors are used widely in digital cameras and
other light-sensing instruments. The response of each sensor is proportional to the integral of
the light energy projected onto the surface of the sensor, a property that is used in astronomical
and other applications requiring low-noise images. Noise reduction is achieved by letting the
sensor integrate the input light signal over minutes or even hours. Since the sensor array is
two-dimensional, its key advantage is that a complete image can be obtained by focusing the
energy pattern onto the surface of the array; motion is not necessary, as it is with the sensor
arrangements discussed above. The figure shows the energy from an illumination source being
reflected from a scene element but, as mentioned at the beginning of this section, the energy
also could be transmitted through the scene elements. The first function performed by the
imaging system is to collect the incoming energy and focus it onto an image plane. If the
illumination is light, the front end of the imaging system is a lens, which projects the viewed
scene onto the lens focal plane. The sensor array, which is coincident with the focal plane,
produces outputs proportional to the integral of the light received at each sensor. Digital and
analog circuitry sweep these outputs and convert them to a video signal, which is then digitized
by another section of the imaging system.
1.13 Image Sampling and Quantization:
To create a digital image, we need to convert the continuous sensed data into digital form. This
involves two processes: sampling and quantization. Consider a continuous image, f(x, y), that
we want to convert to digital form. An image may be continuous with respect to the x- and
y-coordinates, and also in amplitude. To convert it to digital form, we have to sample the
function in both coordinates and in amplitude. Digitizing the coordinate values is called
sampling; digitizing the amplitude values is called quantization.
1.14 Digital Image Representation:
A digital image is a finite collection of discrete samples (pixels) of any observable object. The
pixels represent a two- or higher-dimensional "view" of the object, each pixel having its own
discrete value in a finite range. The pixel values may represent the amount of visible light,
infrared light, absorption of X-rays, electrons, or any other measurable value such as
ultrasound wave impulses. The image does not need to have any visual sense; it is sufficient
that the samples form a two-dimensional spatial structure that may be illustrated as an image.
The images may be obtained by a digital camera, scanner, electron microscope, ultrasound
stethoscope, or any other optical or non-optical sensor. Examples of digital images are:
• digital photographs
• satellite images
• radiological images (X-rays, mammograms)
• binary images, fax images, engineering drawings
Computer graphics, CAD drawings, and vector graphics in general are not considered in this
course, even though their reproduction is a possible source of an image. In fact, one goal of
intermediate-level image processing may be to reconstruct a model (e.g. a vector
representation) for a given digital image.
1.15 Digitization:
A digital image consists of N × M pixels, each represented by k bits. A pixel can thus have
2^k different values, typically illustrated using different shades of gray (see the figure). In
practical applications, the pixel values are considered as integers varying from 0 (black pixel)
to 2^k - 1 (white pixel).
Fig: Example of a digital image
The images are obtained through a digitization process, in which the object is covered by a
two-dimensional sampling grid. The main parameters of the digitization are:
• image resolution: the number of samples in the grid;
• pixel accuracy: how many bits are used per sample.
These two parameters have a direct effect on the image quality, but also on the storage size of
the image (Table 1.1). In general, the quality of an image increases as the resolution and the
bits per pixel increase. There are a few exceptions, when reducing the number of bits increases
the image quality because of the increased contrast. Moreover, in an image with a very high
resolution, only very few gray levels are needed. In some applications it is more important to
have a high resolution for detecting details in the image, whereas in other applications the
number of different levels (or colors) is more important for a better look of the image. To sum
up, if we have a certain number of bits to allocate to an image, it makes a difference how we
choose the digitization parameters.
Fig: Effect of resolution and pixel accuracy on image quality
The properties of the human eye imply some upper limits. For example, it is known that the
human eye can observe at most about one thousand different gray levels in ideal conditions,
but in practical situations 8 bits per pixel (256 gray levels) is usually enough. The required
number of levels decreases even further as the resolution of the image increases. In
laser-quality printing, as in these lecture notes, even 6 bits (64 levels) gives a quite satisfactory
result. On the other hand, if the application is, e.g., medical imaging or cartography, visual
quality is not the primary concern. For example, if the pixels represent some physical measure
and/or the image will be analyzed by a computer, the additional accuracy may be useful. Even
if the human eye cannot detect any differences, computer analysis may recognize them. The
required spatial resolution depends both on the intended use of the image and on the image
content. If the default printing (or display) size of the image is known, the scanning resolution
can be chosen accordingly, so that the pixels are not seen and the image does not appear jagged
(blocky). However, the final reproduction size of the image is not always known, and images
are often archived just for "later use". Thus, once the image is digitized, it will most likely
(according to Murphy's law) later be edited and enlarged beyond what the original resolution
allows. The image content also sets some requirements on the resolution. If the image has very
fine structure exceeding the sampling resolution, it may cause the so-called aliasing effect,
where the digitized image has patterns that do not exist in the original.
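Aliasing is easy to demonstrate in one dimension. In this assumed example, a 60-cycle pattern sampled on a 64-point grid is indistinguishable from a slow 4-cycle pattern that does not exist in the original:

import numpy as np

f_signal = 60          # cycles per unit length in the "original" fine detail
n_samples = 64         # sampling rate below the Nyquist rate of 120

t = np.arange(n_samples) / n_samples
samples = np.sin(2 * np.pi * f_signal * t)

# Because 60 = 64 - 4, the samples coincide (up to sign) with those of a
# 4-cycle wave: the fine structure aliases into a coarse false pattern
alias = np.sin(2 * np.pi * (n_samples - f_signal) * t)
print(np.allclose(samples, -alias))   # True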
Fig: Sensitivity of the eye to intensity changes
1.16 Image Processing Techniques:
(i) Point operations: map each input pixel intensity to an output pixel intensity according to an
intensity transformation. A simple linear point operation, which maps the input gray level
f(m,n) to an output gray level g(m,n), is given by:
g(m, n) = a f(m, n) + b
where a and b are chosen to achieve a desired intensity variation in the image. Note that the
output g(m,n) here depends only on the input f(m,n) at the same location.
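A minimal sketch of this linear point operation; the clipping to the 8-bit range is an added practical detail, not part of the formula itself:

import numpy as np

def point_op(f, a, b):
    """Linear point operation g(m,n) = a * f(m,n) + b, clipped to 8 bits."""
    g = a * f.astype(np.float64) + b
    return np.clip(g, 0, 255).astype(np.uint8)

img = np.array([[10, 50], [100, 200]], dtype=np.uint8)
print(point_op(img, a=1.5, b=20))   # stretched and brightened gray levels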
(ii) Local operations: determine the output pixel intensity as some function of a relatively
small neighborhood of input pixels in the vicinity of the output location. A general linear
operator can be expressed as a weighted sum of the picture elements within a local
neighborhood N. Simple local smoothing (for noise reduction) and sharpening (for deblurring
or edge enhancement) operators can be both linear and non-linear.
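A minimal example of a linear local operation: 3 x 3 averaging, where each output pixel is a uniformly weighted sum (weight 1/9) of its input neighborhood. Border pixels are left unchanged for simplicity:

import numpy as np

def mean_filter_3x3(f):
    """Smooth image f by replacing each interior pixel with the mean
    of its 3 x 3 neighborhood (a simple linear local operation)."""
    g = f.astype(np.float64).copy()
    M, N = f.shape
    for m in range(1, M - 1):
        for n in range(1, N - 1):
            g[m, n] = f[m - 1:m + 2, n - 1:n + 2].mean()
    return g

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(6, 6)).astype(np.float64)
print(mean_filter_3x3(noisy).round(1))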
(iii) Global operations: the output pixel values depend on all input pixel values. If linear,
global operators can be expressed using two-dimensional convolution.
(iv) Adaptive filters: filters whose coefficients depend on the input image.
(v) Non-linear filters:
• Median/order statistics (see the sketch after this list)
• Non-linear local operations
• Homomorphic filters
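As referenced in the list above, here is a minimal sketch of a 3 x 3 median (order-statistics) filter, the classic non-linear local operation for removing impulse noise:

import numpy as np

def median_filter_3x3(f):
    """Replace each interior pixel with the median of its 3 x 3
    neighborhood; unlike averaging, this is a non-linear operation."""
    g = f.copy()
    M, N = f.shape
    for m in range(1, M - 1):
        for n in range(1, N - 1):
            g[m, n] = np.median(f[m - 1:m + 2, n - 1:n + 2])
    return g

img = np.zeros((5, 5))
img[2, 2] = 255                      # a single "salt" impulse
print(median_filter_3x3(img)[2, 2])  # 0.0: the impulse is removed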
In addition to enhancement and restoration, image processing generally includes issues of
representation, spatial sampling and intensity quantization, compression or coding, and
segmentation. As part of computer vision, image processing leads to feature extraction and
pattern recognition or scene analysis.