The document is a presentation on digital image processing. It begins with definitions of key terms like image, digital image, and digital image processing. It then covers different types of images like monochrome, grayscale, and color images. It discusses common image file formats and different types of noise that can affect images, such as Gaussian noise, salt and pepper noise, and Poisson noise. Filtering techniques for reducing noise are also mentioned. The presentation concludes with a list of references.
This document provides an introduction and overview of image processing in MATLAB. It discusses applications of image processing, preprocessing techniques like noise reduction and brightness adjustment, image segmentation, color models like RGB and HSL, and common image processing toolbox functions in MATLAB for tasks like reading/writing images, color conversion, filtering, edge detection, and more. It also advertises an upcoming demo that will showcase various image processing techniques in MATLAB like blurring, edge detection, segmentation, and solving problems like detecting road lines and sudoku puzzles.
It is a basic introduction to how images are captured and converted from analog to digital form using the sampling and quantization processes, after which further algorithms are applied to the digitized image.
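As an illustrative sketch (not code from the presentation itself), sampling and quantization can be shown in a few lines: a hypothetical continuous signal is sampled at discrete positions, and each sample is rounded to one of the 2^bits representable gray levels.

```python
import math

def quantize(samples, bits):
    """Map continuous samples in [0.0, 1.0] to integer gray levels.

    With a bit depth of `bits` there are 2**bits representable levels
    (e.g. 256 levels for a standard 8-bit grayscale image).
    """
    levels = 2 ** bits
    return [min(int(s * levels), levels - 1) for s in samples]

# Sample a made-up continuous "signal" at discrete positions (sampling),
# then map each sample to the nearest representable level (quantization).
analog = [0.5 + 0.5 * math.sin(x / 4) for x in range(8)]
digital = quantize(analog, 8)
```

The two steps are independent: sampling fixes the spatial resolution (how many pixels), while quantization fixes the intensity resolution (how many gray levels per pixel).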
Introduction to Digital Image Processing Using MATLAB, by Ray Phan
This was a 3 hour presentation given to undergraduate and graduate students at Ryerson University in Toronto, Ontario, Canada on an introduction to Digital Image Processing using the MATLAB programming environment. This should provide the basics of performing the most common image processing tasks, as well as providing an introduction to how digital images work and how they're formed.
You can access the images and code that I created and used here: https://www.dropbox.com/sh/s7trtj4xngy3cpq/AAAoAK7Lf-aDRCDFOzYQW64ka?dl=0
The document discusses the key components of an image processing system, including image sensing, digitization, storage, and display. It covers common image sensing devices like cameras, scanners, and MRI systems. It also describes digitizers, different types of digital storage, and principal display devices. Finally, it discusses concepts like spatial and gray-level resolution, sampling and quantization, and interpolation methods used for zooming and shrinking digital images.
Computer graphics is responsible for displaying art and image data effectively and beautifully to the user, and for processing image data received from the physical world. It has made interacting with computers and interpreting data easier. It has had a profound impact on many types of media and has revolutionized animation, movies, and the video game industry.
Computer-generated imagery (CGI) is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, commercials, videos, and simulators. The visual scenes may be dynamic or static, and may be two-dimensional (2D), though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television.
Video games most often use real-time computer graphics (rarely referred to as CGI), but may also include pre-rendered "cut scenes" and intro movies that would be typical CGI applications.
The document discusses a workshop on image processing using MATLAB. It provides an overview of MATLAB and its image processing toolbox. It describes how to read, display, and convert between different image formats in MATLAB. It also demonstrates various image processing operations that can be performed, such as arithmetic operations, conversion between color and grayscale, image rotation, blurring and deblurring, and filling regions of interest. The document aims to introduce the basics of working with images in the MATLAB environment.
The document provides an overview of an introduction to computer graphics course. It discusses topics that will be covered like the history and applications of computer graphics, hardware concepts, 2D and 3D algorithms, modeling curves and 3D objects, animation, and textbooks. It also defines computer graphics and compares image processing versus computer graphics.
Digital Image Processing is an introduction to the topic that covers the definition of digital images and digital image processing. It provides a brief history of the field and examples of applications like medical imaging, satellite imagery analysis, and industrial inspection. The document concludes with an overview of the key stages in digital image processing like image acquisition, enhancement, and representation.
This document provides information about a digital image processing lecture given by Dr. Moe Moe Myint from Technological University in Kyaukse, Myanmar. It includes the lecture schedule and contact information for Dr. Myint. The document also provides an overview of Chapter 2 which discusses elements of visual perception, light and the electromagnetic spectrum, image sensing and acquisition, image sampling and quantization, and basic relationships between pixels. It provides examples of different types of digital images including intensity, RGB, binary, and index images. It also discusses the effects of spatial and intensity level resolution on images.
The document discusses digital image processing and provides an overview of key concepts. It defines digital and analog images and explains how digital images are represented by pixels. It outlines fundamental steps in digital image processing like image acquisition, enhancement, restoration, morphological processing, segmentation, representation, compression and object recognition. It also discusses applications in areas like remote sensing, medical imaging, film and video effects.
Introduction to digital image processing, image processing, digital image, analog image, formation of digital image, level of digital image processing, components of a digital image processing system, advantages of digital image processing, limitations of digital image processing, fields of digital image processing, ultrasound imaging, x-ray imaging, SEM, PET, TEM
Fundamental concepts and basic techniques of digital image processing. Algorithms and recent research in image transformation, enhancement, restoration, encoding and description. Fundamentals and basic techniques of pattern recognition.
This chapter discusses various graphics and image file formats, including bitmap, JPEG, and GIF formats. It also covers basic image types such as 1-bit black and white images and 8-bit grayscale images. Color images can be stored as 24-bit RGB images or 8-bit color images using a color lookup table. The chapter also introduces digital audio concepts and the Musical Instrument Digital Interface (MIDI) standard for controlling electronic musical instruments.
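The 8-bit indexed scheme described above can be sketched as follows; the palette entries and the tiny image here are made-up values for illustration, not taken from the chapter.

```python
# A hypothetical 4-entry palette (color lookup table): index -> (R, G, B).
# Real 8-bit indexed formats such as GIF allow up to 256 entries.
palette = [
    (0, 0, 0),        # 0: black
    (255, 255, 255),  # 1: white
    (255, 0, 0),      # 2: red
    (0, 0, 255),      # 3: blue
]

# A tiny 2x2 indexed image stores one palette index per pixel,
# one byte each, instead of three bytes of RGB per pixel.
indexed_image = [
    [0, 2],
    [3, 1],
]

def expand_to_rgb(indexed, clut):
    """Resolve each stored index through the lookup table to full RGB."""
    return [[clut[i] for i in row] for row in indexed]

rgb_image = expand_to_rgb(indexed_image, palette)
```

This is why an 8-bit indexed image is roughly a third the size of the equivalent 24-bit RGB image: the per-pixel data shrinks to one byte, at the cost of limiting the image to the palette's colors.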
This document provides an overview of computer graphics and its applications. It discusses interactive graphics, where the user can control the image, versus passive graphics which produce images automatically. Interactive graphics allow for advantages like motion dynamics and update dynamics. The document then covers how interactive graphics displays work, using a frame buffer, monitor, and display controller. It concludes with a discussion of various applications of computer graphics, such as cartography, user interfaces, scientific visualization, CAD/CAM, simulation, art, process control and more.
This document provides lecture notes for a computer graphics course. It includes:
- An overview of the course description, prerequisites, objectives and outcomes.
- A taxonomy of different types of computer graphics such as static vs dynamic, color vs black and white, etc.
- Details of lecture topics such as drawing techniques, output picture types, and algorithms for drawing basic shapes.
- Programming assignments for students such as drawing lines and trees, and developing a game engine.
Students can learn the basics of image processing using MATLAB.
It explains image operations with the help of examples and MATLAB code.
Students can find sample images and .m code at the link given in the slides.
Introduction to Digital Image Processing, by Nagashree Bn
The document defines digital image processing and describes the key components of a digital image processing system. It discusses the major components which include image sensing, specialized hardware, computers, software modules to perform tasks like preprocessing, enhancement and compression. It also covers mass storage, image displays, hardcopy devices and networking capabilities required for a digital image processing system. Applications of digital image processing discussed include medical imaging, remote sensing, astronomy and more.
This is the basic introductory presentation for beginners. It gives you the idea about what is image processing means. The presentation consists of introduction to digital image processing, image enhancement, image filtering, finding an image edge, image analysis, tools for image processing and finally some application of digital image processing.
The document outlines the fundamental steps for digital image processing projects, including image acquisition, preprocessing, segmentation, representation and description, recognition and interpretation, and postprocessing. It discusses improving images for human or machine use, and describes common image processing techniques like enhancement, thresholding, representation, description, recognition, and interpretation. The overall methodology presented is meant to increase the likelihood of success for image processing projects.
This document discusses digital image processing using MATLAB. It begins by defining digital images and how they are represented by arrays of pixels in computer memory. It then discusses how images can be read into MATLAB and converted between color, grayscale, and binary representations. Various image processing operations are described such as edge detection, dilation, filling, and calculating region properties. Finally, examples are given of processing color images using intensity transformations and gamma correction.
This document outlines an introductory course on basic image processing taught by Dr. Arne Seitz at the Swiss Institute of Technology (EPFL). It discusses key topics like file formats, image viewers, representation and processing programs. Specific techniques covered include lookup tables, brightness/contrast adjustment, filtering, thresholding, and measurements. ImageJ is demonstrated as a tool for visualizing and manipulating digital images. The goal is to provide foundational concepts for working with and analyzing digital microscope images.
This slide deck explains how image processing is done, its applications and advantages in various sectors, and some research topics related to image processing.
The document discusses image processing and provides information on several key topics:
1. Image processing can be grouped into compression, preprocessing, and analysis. Preprocessing improves image quality by reducing noise and enhancing edges. Analysis extracts numeric or graphical information for tasks like classification.
2. Images are 2D matrices of intensity values represented by pixels. Common digital formats include grayscale, RGB, and RGBA. Higher bit depths allow more intensity levels to be represented.
3. Basic measurements of images include spatial resolution in pixels per unit, bit depth determining representable intensity levels, and factors like saturation and noise.
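The points above, an image as a 2-D matrix of intensity values with bit depth fixing the representable levels, can be sketched with a simple intensity histogram (illustrative code, not from the document):

```python
def histogram(image, bits=8):
    """Count how many pixels take each representable intensity level.

    The image is a plain 2-D list of integers; with `bits` of depth
    there are 2**bits possible levels (256 for 8-bit grayscale).
    """
    counts = [0] * (2 ** bits)
    for row in image:
        for pixel in row:
            counts[pixel] += 1
    return counts

# A made-up 2x3 grayscale image as a 2-D matrix of intensities.
img = [
    [0, 128, 255],
    [128, 128, 0],
]
hist = histogram(img)
```

The histogram is one of the simplest "analysis" outputs in the sense used above: it reduces the pixel matrix to numeric information that later stages (e.g. thresholding or classification) can work with.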
This document discusses multimedia elements and digital images. It defines multimedia as a combination of text, images, sound and video, especially on computers or for entertainment. The document then focuses on digital images, discussing their importance, types (binary, grayscale, color), representation using pixels, file size calculation, and image processing techniques like matching. Key points covered include how images are represented as 2D arrays, with pixels having luminance or RGB values, and how file size depends on image depth, size and resolution. Image processing is used in many fields like medicine, military and education.
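The file size calculation mentioned above follows the usual uncompressed-size rule of thumb, width × height × bits per pixel; the function and figures below are illustrative, and real files add headers and compression.

```python
def uncompressed_size_bytes(width, height, bits_per_pixel):
    """Estimate raw (uncompressed) image size in bytes.

    Size grows with resolution (width x height) and with depth
    (bits per pixel); actual files on disk add format headers
    and usually apply compression.
    """
    return width * height * bits_per_pixel // 8

# An 800x600 image at 8-bit grayscale versus 24-bit RGB:
gray = uncompressed_size_bytes(800, 600, 8)
rgb = uncompressed_size_bytes(800, 600, 24)
```

Tripling the bit depth from 8 to 24 triples the raw size, which is exactly the dependence on "image depth, size and resolution" the summary describes.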
Digital image processing refers to manipulating, enhancing, and analyzing digital images using computer algorithms and techniques. It involves applying mathematical operations to digital images, which are treated as two-dimensional arrays of pixels where each pixel represents a point of color and brightness. The basic steps in digital image processing are image acquisition, enhancement, restoration, segmentation, representation/description, analysis, synthesis/compression. Digital image processing is widely used in applications like medical imaging, computer vision, and multimedia.
Digital images can be defined as a 2-D function where x and y are spatial coordinates and the amplitude at each point represents intensity or gray level. Digital images can be raster images, represented as grids of pixels, or vector images, stored as mathematical descriptions of shapes. The file size of a digital image depends on its resolution in pixels, bit depth, and file format. Common file formats include JPEG, PNG, and TIFF, each suited for different types of images.
This document summarizes various topics related to image processing including image data types, file formats, acquisition, storage, processing, communication, display, and enhancement techniques. It discusses key concepts such as image fundamentals, color models, resolution, bit depth, file formats like JPEG, GIF, TIFF, compression techniques including lossless, lossy, intraframe, interframe, and algorithms like run length encoding and Shannon-Fano coding. Image enhancement topics covered are point processing, spatial filtering, and color image processing.
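Run-length encoding, listed among the lossless algorithms above, can be sketched in a few lines (an illustrative version, not the document's own code):

```python
def rle_encode(pixels):
    """Run-length encode a 1-D pixel sequence as (value, count) pairs.

    Lossless: each run of repeated values collapses to one pair, so
    flat image regions compress well, while noisy data may even expand.
    """
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Invert the encoding exactly (lossless round trip)."""
    return [value for value, count in runs for _ in range(count)]

row = [255, 255, 255, 0, 0, 255]
encoded = rle_encode(row)
```

The round trip `rle_decode(rle_encode(row)) == row` holds for any input, which is what distinguishes this lossless scheme from the lossy techniques the document also covers.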
This document provides an overview of digital image processing. It discusses key concepts like image types (intensity, binary, indexed, RGB), image file formats (TIFF, JPEG), image resolutions, and the steps involved in digital image processing. The MATLAB Image Processing Toolbox is also mentioned as a tool for performing operations on images like visualization, analysis, and processing. Edge detection is highlighted as an important but difficult task in digital image processing.
Voice recognition and voice response systems allow for hands-free data entry using speech as the interface. Voice recognition systems analyze speech patterns to convert them to digital codes for computer input. Most require training a system to recognize a user's voice. Voice recognition is used in applications like manufacturing quality control and airline baggage sorting. Voice response systems provide verbal guidance for tasks using voice messaging and synthesis. Examples include automated phone systems and online services.
The document discusses image steganography and various related concepts. It introduces image steganography as hiding secret information in a cover image. Key points covered include:
- Huffman coding is used to encode the secret image before embedding. It assigns binary codes to image intensity values.
- Discrete wavelet transform (DWT) is applied to the cover image. The secret message is embedded in the high frequency DWT coefficients while preserving the low frequency coefficients to maintain image quality.
- Inverse DWT is applied to produce a stego-image containing the hidden secret image. Haar DWT is used in the described approach.
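The Huffman step described above, assigning binary codes to intensity values so that frequent values get shorter codes, can be sketched as follows. This is an illustrative stand-alone version, not the paper's implementation, and it codes a flat list of gray levels rather than a full image.

```python
import heapq
from collections import Counter

def huffman_codes(values):
    """Build Huffman codes for a sequence of intensity values.

    More frequent values receive shorter binary codes, which is what
    makes the subsequent embedding payload smaller.
    """
    freq = Counter(values)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker id, {value: code-so-far}).
    heap = [(f, i, {v: ""}) for i, (v, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing their codes with 0/1.
        merged = {v: "0" + code for v, code in c1.items()}
        merged.update({v: "1" + code for v, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

# A made-up run of gray levels: value 0 dominates, so it gets the shortest code.
pixels = [0, 0, 0, 0, 128, 128, 255]
codes = huffman_codes(pixels)
```

The resulting code table must travel with the stego-image (or be reconstructible by the receiver), since decoding the extracted bitstream requires it.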
Lesson 6 discusses images in multimedia. It covers creating still images using bitmaps or vector graphics. Bitmaps use pixels to represent images while vector graphics use mathematical formulas. The document also discusses color models like RGB and HSB. Color palettes define the available colors and dithering is used to match colors. Common file formats for images on different platforms are also presented.
Images are an important element in multimedia. There are two main types of images: bitmaps, which use pixels to represent color information, and vector images, which use mathematical coordinates. Various tools can be used to create and edit images, including bitmap software, 3D modeling programs, and image capture and editing features. Color is a key aspect, with different color models and palettes used depending on the intended display and use of the images.
Evaluation of graphic effects embedded image compression, IJECE (IAES)
A fundamental factor in digital image compression is the conversion process, whose intention is to understand the shape of an image and convert the digital image to a grayscale configuration on which the compression encoding operates. This article investigates compression algorithms for images with artistic effects. A key component of image compression is effectively preserving the original quality of the images; compression condenses images by reducing redundant data so that they can be transferred cost-effectively. Common techniques include the discrete cosine transform (DCT), the fast Fourier transform (FFT), and the shifted FFT (SFFT). Experimental results report the compression ratio between original RGB images and grayscale images, along with a comparison. The SFFT technique proved the superior algorithm for improving shape comprehension of images with graphic effects.
This document is a mini project report on digital image processing using MATLAB. It discusses various image processing techniques and applications implemented in MATLAB, including image formats, operations, and tools. Applications demonstrated include text recognition, color tracking, solving an engineering problem using image processing, creating a virtual slate using laser tracking, face detection, and distance estimation. The report provides examples of MATLAB functions used for tasks like importing, displaying, converting and cropping images, as well as analyzing and manipulating them.
The Indian Dental Academy is the leader in continuing dental education, training dentists in all aspects of dentistry and offering a wide range of certified dental courses in different formats. For more details please visit www.indiandentalacademy.com
This document discusses the history and advances in digital imaging technology used in orthodontics. It describes how digital imaging has evolved from early cephalometric films to current digital systems. Key points include:
- Early cephalometric films from the 1930s allowed for analysis of malocclusions. Digital imaging now offers 3D analysis capabilities.
- Digital images are composed of pixels arranged in a grid, whereas analog films have continuous shades of gray. Digital offers advantages like enhanced images and lower radiation exposure.
- Factors like resolution, file format, and compression influence image quality for applications like orthodontic photos. Higher resolution TIFF files preserve quality better than JPEG.
The Indian Dental Academy is the Leader in continuing dental education , training dentists in all aspects of dentistry and
offering a wide range of dental certified courses in different formats.for more details please visit
www.indiandentalacademy.com
The Indian Dental Academy is the Leader in continuing dental education , training dentists in all aspects of dentistry and
offering a wide range of dental certified courses in different formats.for more details please visit
www.indiandentalacademy.com
This document discusses digital graphics technology, describing features of vector and bitmap images. Vector images are resolution-independent and scalable without quality loss, using mathematical expressions to represent lines and shapes. Bitmap images are made up of pixels that lose quality when resized. The document provides examples of how vector and bitmap images differ when resized, and discusses image capturing, output methods for print and screen, storage considerations like file size and organization, and naming conventions.
The Indian Dental Academy is the Leader in continuing dental education , training dentists in all aspects of dentistry and
offering a wide range of dental certified courses in different formats.for more details please visit
www.indiandentalacademy.com
Recent advances digital imaging /certified fixed orthodontic courses by India...Indian dental academy
The Indian Dental Academy is the Leader in continuing dental education , training dentists in all aspects of dentistry and offering a wide range of dental certified courses in different formats.
Indian dental academy provides dental crown & Bridge,rotary endodontics,fixed orthodontics,
Dental implants courses.for details pls visit www.indiandentalacademy.com ,or call
0091-9248678078
Similar to Noise recognition in digital image (20)
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
IEEE Aerospace and Electronic Systems Society as a Graduate Student Member
Noise recognition in digital image
1. Gurunanak Institute of Technology (GNIT)
M.Tech (CSE)
Presentation on Noise Recognition in Digital Image
Submitted by: MD. Reyad Hossain
Submitted to: Mr. Moloy Dhar
9/16/2015 MD. Reyad Hossain (GNIT)
2. Contents
1. What is an Image (pages 3-4)
2. What is a Digital Image (page 5)
3. What is Digital Image Processing (pages 6-7)
4. Types of Images (pages 8-10)
5. Formats of Images (page 11)
6. Image Noise (pages 12-13)
7. Types of Noises in an Image (pages 14-30)
8. Filtering (pages 31-35)
9. Conclusion (page 36)
10. References (page 37)
3. An image (from Latin: imago) is an artefact that depicts or records visual
perception, for example a two-dimensional picture, that has a similar appearance
to some subject—usually a physical object or a person, thus providing
a depiction of it.
Images may be two-dimensional, such as a photograph or screen display, or
three-dimensional, such as a statue or hologram. They may be captured by
optical devices such as cameras, mirrors, lenses, telescopes, and microscopes,
or by natural objects and phenomena, such as the human eye or the surface of
water.
The word image is also used in the broader sense of any two-dimensional figure
such as a map, a graph, a pie chart, or a painting. In this wider sense, images can
also be rendered manually, such as by drawing, the art of painting, carving,
rendered automatically by printing or computer graphics technology,
or developed by a combination of methods, especially in a pseudo-photograph.
4. The word ‘Spatial Domain’ means that we have to work in the given space, in this
case, the image. In other words, the term spatial domain implies working with the
pixel values or working directly with the available raw data.
[Figure: the image as a grid in the spatial domain, with origin (0,0) at the
top-left, the x axis running across and the y axis running down, pixel value
g(x, y), and bottom-right corner (255,255).]
Let g(x, y) be the original image, where g is the gray-level value and (x, y)
are the image coordinates. For an 8-bit image, g can take values from 0-255,
where 0 represents black, 255 represents white, and all the intermediate
values represent shades of gray. In an image of size 256×256, x and y can take
values from (0, 0) to (255, 255), as shown in the figure.
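As an aside, the spatial-domain view above can be sketched in code. The snippet below is a minimal illustration in Python (the helper names make_image, get_pixel and set_pixel are invented for this sketch, not from the slides), treating an 8-bit grayscale image as a grid of values g(x, y):

```python
def make_image(width, height, fill=0):
    """Create a width x height grayscale image filled with one gray level."""
    return [[fill for _ in range(width)] for _ in range(height)]

def get_pixel(img, x, y):
    """Return the gray level g(x, y); row y, column x."""
    return img[y][x]

def set_pixel(img, x, y, value):
    """Write a gray level, clamped to the valid 8-bit range 0-255."""
    img[y][x] = max(0, min(255, value))

# A 256x256 mid-gray image: coordinates run from (0, 0) to (255, 255),
# exactly as on the slide.
img = make_image(256, 256, fill=128)
set_pixel(img, 10, 20, 300)   # out-of-range write is clamped to 255 (white)
```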
5. DIGITAL IMAGES are electronic snapshots taken of a scene or scanned from
documents, such as photographs, manuscripts, printed texts, and artwork. The
digital image is sampled and mapped as a grid of dots or picture elements
(pixels). Each pixel is assigned a tonal value (black, white, a shade of gray,
or colour), which is represented in binary code (zeros and ones). The binary
digits ("bits") for each pixel are stored in a sequence by a computer and
often reduced to a mathematical representation (compressed). The bits are then
interpreted and read by the computer to produce an analog version for display
or printing.
Pixel Values: As shown in this bitonal image, each pixel is assigned a tonal
value, in this example 0 for black and 1 for white.
6. Digital Image Processing (DIP) refers to processing digital images by
means of a digital computer. Digital image processing covers a wide variety of
applications: it encompasses processes whose inputs and outputs are images, as
well as processes that extract attributes from images, including the
recognition of individual objects.
Digital image processing is the use of computer algorithms to perform image
processing on digital images. As a subcategory or field of digital signal
processing, digital image processing has many advantages over analog image
processing. It allows a much wider range of algorithms to be applied to the
input data and can avoid problems such as the build-up of noise and signal
distortion during processing. Since images are defined over two dimensions
(perhaps more), digital image processing may be modelled in the form of
multidimensional systems.
7. Many of the techniques of digital image processing, or digital picture
processing as it often was called, were developed in the 1960s at the Jet
Propulsion Laboratory, Massachusetts Institute of Technology, Bell
Laboratories, University of Maryland, and a few other research facilities, with
application to satellite imagery, wire-photo standards conversion, medical
imaging, videophone, character recognition, and photograph enhancement.
The cost of processing was fairly high, however, with the computing equipment
of that era. That changed in the 1970s, when digital image processing
proliferated as cheaper computers and dedicated hardware became available.
Images then could be processed in real time, for some dedicated problems such
as television standards conversion. As general-purpose computers became
faster, they started to take over the role of dedicated hardware for all but the
most specialized and computer-intensive operations.
8. It was stated earlier that images are 2-dimensional functions. Images are
classified as follows.
1. Monochrome Images: Monochrome images are also called binary images. Here,
each pixel is stored as a single bit (0 or 1), where 0 represents black and 1
represents white. It is a black-and-white image in the strictest sense. These
images are also called bit-mapped images. In such images, we have only black
and white pixels and no other shades of gray.
2. Gray scale Image: Here, each pixel is usually stored as a byte (8 bits).
Due to this, each pixel can have values ranging from 0 (black) to 255 (white).
Gray scale images, as the name suggests, have black, white and various shades
of gray present in the image.
9. 3.Colour Image (24-bit): Colour images are based on the fact that a variety of
colours can be generated by mixing the three primary colours viz. Red, Green, and
Blue in proper proportions. In colour images, each pixel is composed of RGB values
and each of these colours requires 8 bits (one byte) for its representation.
Hence, each pixel is represented by 24 bits [R (8 bits), G (8 bits), B (8 bits)].
A 24-bit colour image supports 16,777,216 different combinations of colours.
Colour images can be easily converted to gray scale images using the equation.
X=0.30 R + 0.59 G + 0.11 B
An easier formula that could achieve similar results is.
X = (R + G + B) / 3
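Both conversion formulas can be tried directly. The Python sketch below (the function names are invented for illustration) applies the weighted conversion and the simple average to one RGB pixel:

```python
def rgb_to_gray_weighted(r, g, b):
    """Luminosity method from the slide: X = 0.30R + 0.59G + 0.11B."""
    return round(0.30 * r + 0.59 * g + 0.11 * b)

def rgb_to_gray_average(r, g, b):
    """Simpler average method: X = (R + G + B) / 3."""
    return round((r + g + b) / 3)
```

For pure green (0, 255, 0) the weighted formula gives 150 while the average gives 85, which is why the weighted form tracks perceived brightness more closely: the eye is most sensitive to green.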
10. 4. Half-Toning: We have all read newspapers at some point of time
(hopefully). The images look like gray-level images, but if you look closely,
they are generated using only black colour.
Even the images that you see in most books (including this one) are generated
using black colour on a white background. In spite of this, we get an illusion
of seeing gray levels. The technique to achieve an illusion of gray levels
from only black and white levels is called half-toning.
11. Image file formats are standardized means of organizing and storing
digital images. Image files are composed of digital data in one of these
formats that can be rasterized for use on a computer display or printer. An
image file format may store data in uncompressed, compressed, or vector
formats. Once rasterized, an image becomes a grid of pixels, each of which has
a number of bits to designate its colour equal to the colour depth of the
device displaying it.
1. JPEG/JFIF: Joint Photographic Experts Group / JPEG File Interchange Format.
2. JPEG 2000: the successor to JPEG.
3. EXIF: Exchangeable Image File Format.
4. TIFF: Tagged Image File Format.
5. RIF: Raw Image Format.
6. GIF: Graphics Interchange Format.
7. BMP: Bitmap File Format.
8. PNG: Portable Network Graphics Format.
9. PPM: Portable Pixmap Format.
10. PGM: Portable Graymap Format.
11. PBM: Portable Bitmap Format.
12. The principal sources of noise in a digital image arise during acquisition
and during transmission. No matter how much care one takes, some amount of
noise always creeps in. Noise types are classified based on the shapes of
their probability density functions (PDFs).
Image noise is random (not present in the object imaged) variation of brightness
or colour information in images, and is usually an aspect of electronic noise. It can
be produced by the sensor and circuitry of a scanner or digital camera. Image noise
can also originate in film grain and in the unavoidable shot noise of an ideal photon
detector. Image noise is an undesirable by-product of image capture that adds
spurious and extraneous information.
The original meaning of "noise" was and remains "unwanted signal"; unwanted
electrical fluctuations in signals received by AM radios caused audible acoustic
noise ("static"). By analogy unwanted electrical fluctuations themselves came to be
known as "noise". Image noise is, of course, inaudible.
The magnitude of image noise can range from almost imperceptible specks on a
digital photograph taken in good light, to optical and radio astronomical images
that are almost entirely noise, from which a small amount of information can be
derived by sophisticated processing (a noise level that would be totally unacceptable
in a photograph since it would be impossible to determine even what the subject
was).
13. [Block diagram: the source image f(x, y) passes through a degradation
function h(x, y); noise n(x, y) is added at a summing junction to give the
degraded image g(x, y), which is then passed through a restoration function to
recover an estimate of f(x, y).]
Image degradation is said to occur when a certain image undergoes loss of
stored information, either due to digitization or conversion (i.e. algorithmic
operations), decreasing visual quality.
The initial image (source, f(x, y)) undergoes degradation due to various
operations, conversions and losses. This introduces noise. This noisy image is
further restored via restoration filters to make it visually acceptable to the
user.
Degraded Image = Degradation Function * Source + Noise
g(x , y) = h(x , y) * f(x , y) + n(x , y)
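The model g = h * f + n can be illustrated in code, where * is convolution. In the hedged sketch below, the degradation function h is taken to be a simple 3×3 averaging (blur) kernel; this is just one possible choice for illustration, not a specific h from the slides:

```python
def degrade(f, noise):
    """Sketch of g = h * f + n with h a uniform 3x3 mean-blur kernel.

    f and noise are 2D lists of the same size; at the borders the
    average is taken over the neighbours that fall inside the image.
    """
    rows, cols = len(f), len(f[0])
    g = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # convolution with a uniform 3x3 kernel = neighbourhood average
            vals = [f[j][i]
                    for j in range(max(0, y - 1), min(rows, y + 2))
                    for i in range(max(0, x - 1), min(cols, x + 2))]
            g[y][x] = sum(vals) / len(vals) + noise[y][x]
    return g
```

On a constant image the blur changes nothing, so the output is exactly source + noise; on a real image the blur smears edges and the noise term adds the random fluctuations discussed on the following slides.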
14. Image noise is random (not present in the object imaged) variation of brightness
or colour information in images, and is usually an aspect of electronic noise. It can
be produced by the sensor and circuitry of a scanner or digital camera. Image
noise can also originate in film grain and in the unavoidable shot noise of an ideal
photon detector.
1.Gaussian Noise.
2.Salt and pepper (Impulse) Noise.
3.Poisson Noise.
4.Erlang (Gamma) Noise.
5.Exponential Noise.
6.Uniform Noise.
15. Principal sources of Gaussian noise in digital images arise during
acquisition, e.g. sensor noise caused by poor illumination and/or high
temperature, and/or transmission, e.g. electronic circuit noise.
A typical model of image noise is Gaussian, additive, independent at each
pixel, and independent of the signal intensity, caused primarily by
Johnson–Nyquist noise (thermal noise), including that which comes from the
reset noise of capacitors ("kTC noise"). Amplifier noise is a major part of
the "read noise" of an image sensor, that is, of the constant noise level in
dark areas of the image. In colour cameras, where more amplification is used
in the blue colour channel than in the green or red channel, there can be more
noise in the blue channel. At higher exposures, however, image sensor noise is
dominated by shot noise, which is not Gaussian and not independent of signal
intensity.
16. The probability density function (PDF) of Gaussian noise is given by the
expression

p(z) = (1 / (√(2π) · σ)) · e^(−(z − μ)² / (2σ²))

where z is the gray level, μ is the mean (average) value of z, σ is the
standard deviation, and σ² is the variance. The curve peaks at z = μ, where
p(μ) = 1/(√(2π)σ), and falls to about 0.607 of that peak at z = μ ± σ.
17. If we plot this function, we notice that
70% of its value lies in the range [(μ − σ), (μ + σ)] and
95% of its value lies in the range [(μ − 2σ), (μ + 2σ)].
Gaussian noise has a maximum value at μ and then it starts falling off.
Let us consider the image shown in the figure.
[Figure: (a) an image with gray levels between a and b; (b) histogram of the
image (number of pixels vs. gray level); (c) the histogram modified after
Gaussian noise occurs.]
Note: Gaussian noise occurs due to circuit noise, sensor noise, poor
illumination, and high temperature.
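A Python sketch of the Gaussian model (standard library only; the function names are invented for this illustration): gaussian_pdf evaluates the expression from slide 16, and add_gaussian_noise adds clamped additive noise to a list of 8-bit pixel values:

```python
import math
import random

def gaussian_pdf(z, mu, sigma):
    """p(z) = 1/(sqrt(2*pi)*sigma) * exp(-(z - mu)^2 / (2*sigma^2))"""
    coeff = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return coeff * math.exp(-((z - mu) ** 2) / (2.0 * sigma ** 2))

def add_gaussian_noise(pixels, mu=0.0, sigma=10.0, seed=0):
    """Add N(mu, sigma) noise to each 8-bit pixel, clamping to 0..255."""
    rng = random.Random(seed)
    return [max(0, min(255, round(p + rng.gauss(mu, sigma))))
            for p in pixels]
```

At z = μ ± σ the PDF equals e^(−1/2) ≈ 0.607 of its peak value, which is the 0.607 factor that appears on slide 16.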
18. Fat-tail distributed or "impulsive" noise is sometimes called
salt-and-pepper noise or spike noise. An image containing salt-and-pepper
noise will have dark pixels in bright regions and bright pixels in dark
regions. This type of noise can be caused by analog-to-digital converter
errors, bit errors in transmission, etc. It can be mostly eliminated by using
dark frame subtraction, median filtering and interpolating around dark/bright
pixels.
Dead pixels in an LCD monitor produce a similar, but non-random, display.
Salt-and-pepper noise is also called shot noise, impulse noise or spike noise.
It is usually caused by faulty memory locations, malfunctioning pixel elements
in the camera sensors, or timing errors in the process of digitization. In
salt-and-pepper noise there are only two possible values, a and b, and the
probability of each is less than 0.2. If the probabilities are greater than
this, the noise will swamp the image. For an 8-bit image, the typical value is
255 for salt noise and 0 for pepper noise. Reasons for salt-and-pepper noise:
a. memory cell failure; b. malfunctioning of the camera's sensor cells;
c. synchronization errors in image digitizing or transmission.
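The two-value model above can be simulated directly. In the illustrative sketch below (names invented for this sketch), each pixel independently becomes salt (255) with probability pa or pepper (0) with probability pb, and is left unchanged otherwise:

```python
import random

def add_salt_pepper(pixels, pa=0.05, pb=0.05, seed=1):
    """Corrupt a list of 8-bit pixels with salt-and-pepper noise."""
    rng = random.Random(seed)
    out = []
    for p in pixels:
        r = rng.random()
        if r < pa:
            out.append(255)      # salt: white impulse
        elif r < pa + pb:
            out.append(0)        # pepper: black impulse
        else:
            out.append(p)
    return out
```

With pa = pb = 0.05, about 10% of the pixels are corrupted; keeping each probability below 0.2, as the slide notes, prevents the noise from swamping the image.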
19. [Figure: an image corrupted by salt and pepper noise.]
The PDF of the salt-and-pepper noise (bipolar noise) is:

p(z) = Pa   for z = a
p(z) = Pb   for z = b
p(z) = 0    elsewhere

If Pa or Pb is zero, this noise is called unipolar noise. The PDF of salt and
pepper is shown in the figure on the next page.
20. [Figure: the salt-and-pepper PDF, with an impulse of height Pa at gray
level a and an impulse of height Pb at gray level b.]
Generally, a and b are black and white gray levels respectively. Hence, for an
8-bit image, a = 0 and b = 255, because of which the noise is called salt
(white) and pepper (black). Sometimes, it is called speckle noise.
Now take some images to understand it properly.
21. Let us take the same image as the one taken for the Gaussian example.
[Figure: the original image with gray levels between a and b.]
When salt-and-pepper noise creeps in, the image looks like:
[Figure: the same image corrupted by salt-and-pepper noise.]
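The median filtering mentioned on slide 18 is the usual remedy for such impulses. A one-dimensional sketch (3-sample window, endpoints left unchanged; invented here for illustration) shows why isolated salt or pepper samples vanish:

```python
def median3(signal):
    """Replace each interior sample with the median of its 3-sample window.

    An isolated impulse (0 or 255) between ordinary values is never the
    median of its window, so it is replaced by a neighbouring true value.
    """
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1:i + 2])[1]
    return out
```

A 2D median filter works the same way over a 3×3 (or larger) neighbourhood; unlike averaging, it removes impulses without blurring them into the surrounding pixels.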
22. Photon noise, also known as Poisson noise, is a basic form of uncertainty
associated with the measurement of light, inherent to the quantized nature of light
and the independence of photon detections. Its expected magnitude is signal
dependent and constitutes the dominant source of image noise except in low-light
conditions.
Image sensors measure scene irradiance by counting the number of discrete
photons incident on the sensor over a given time interval. In digital sensors, the
photoelectric effect is used to convert photons into electrons, whereas film based
sensors rely on photo-sensitive chemical reactions. In both cases, the
independence of random individual photon arrivals leads to photon noise, a
signal-dependent form of uncertainty that is a property of the underlying
signal itself.
23. The dominant noise in the darker parts of an image from an image sensor is
typically that caused by statistical quantum fluctuations, that is, variation in the
number of photons sensed at a given exposure level. This noise is known as
photon shot noise. Shot noise has a root-mean-square value proportional to the
square root of the image intensity, and the noises at different pixels are
independent of one another. Shot noise follows a Poisson distribution, which
except at very low intensity levels approximates a Gaussian distribution.
In addition to photon shot noise, there can be additional shot noise from the
dark leakage current in the image sensor; this noise is sometimes known as
"dark shot noise" or "dark-current shot noise". Dark current is greatest at
"hot pixels" within the image sensor. The variable dark charge of normal and
hot pixels can be subtracted off (using "dark frame subtraction"), leaving
only the shot noise, or random component, of the leakage. If dark-frame
subtraction is not done, or if the exposure time is long enough that the hot
pixel charge exceeds the linear charge capacity, the noise will be more than
just shot noise, and hot pixels appear as salt-and-pepper noise.
Individual photon detections can be treated as independent events that follow a
random temporal distribution. As a result, photon counting is a classic Poisson
process, and the number of photons N measured by a given sensor element over a
time interval t is described by the discrete probability distribution

p(N = k) = ((λt)^k · e^(−λt)) / k!
24. where λ is the expected number of photons per unit time interval, which is
proportional to the incident scene irradiance. This is a standard Poisson distribution
with a rate parameter λt that corresponds to the expected incident photon count.
The uncertainty described by this distribution is known as photon noise
[Figure: the centre image and the result after Poisson noise occurs.]
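The photon-counting distribution above can be evaluated directly. A small Python sketch (the function name is invented for this illustration) of the Poisson probability with expected count λt:

```python
import math

def poisson_pmf(k, lam_t):
    """p(N = k) = (lam_t**k * exp(-lam_t)) / k!

    lam_t is the rate parameter λt: the expected number of photons
    counted by the sensor element over the interval t.
    """
    return (lam_t ** k) * math.exp(-lam_t) / math.factorial(k)
```

The probabilities over all k sum to 1, and the mean count equals λt, which is why the noise magnitude grows with the signal itself.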
25. Gamma noise often is associated with processes related to waiting times between
random (Poisson-distributed) events. Gamma noise typically is generated as a
pseudorandom pattern of waiting times between events of a unit mean Poisson
process.
The shape of the Gamma noise is very similar to the Rayleigh distribution. The
Gamma noise distribution starts from zero. It is given by the following expression.
p(z) = (a^b · z^(b−1) / (b − 1)!) · e^(−az)   for z ≥ 0
p(z) = 0                                      for z < 0
26. [Figure: the Gamma PDF p(z), rising to a peak K at z = (b − 1)/a and then
decaying.]
Here, a > 0 and b is a positive integer. The mean and the variance of this
distribution are given by

μ = b / a   and   σ² = b / a²
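The waiting-time view on slide 25 suggests a direct way to simulate Erlang (Gamma) noise with the standard library: sum b exponential inter-arrival times of a rate-a Poisson process. A hedged sketch (function name invented for illustration):

```python
import random

def erlang_sample(a, b, rng):
    """One Erlang(a, b) draw: the waiting time until the b-th event of a
    rate-a Poisson process, i.e. the sum of b exponential gaps."""
    return sum(rng.expovariate(a) for _ in range(b))
```

Over many draws the sample mean approaches b/a and the sample variance approaches b/a², matching the moments on this slide.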
27. Exponential distribution has an exponential shape. It is given by the following
expression.
Here, a > 0:

p(z) = a · e^(−az)   for z ≥ 0
p(z) = 0             for z < 0

The mean and the variance of the exponential noise are given by

μ = 1 / a   and   σ² = 1 / a²

[Figure: the exponential PDF p(z), with height a at z = 0, decaying as z
increases.]
28. The noise caused by quantizing the pixels of a sensed image to a number of
discrete levels is known as quantization noise. It has an approximately
uniform distribution. Though it can be signal dependent, it will be signal
independent if other noise sources are big enough to cause dithering, or if
dithering is explicitly applied.
Quantization, in mathematics and digital signal processing, is the process of
mapping a large set of input values to a (countable) smaller set.
Rounding and truncation are typical examples of quantization processes.
Quantization is involved to some degree in nearly all digital signal
processing, as the process of representing a signal in digital form ordinarily
involves rounding. Quantization also forms the core of essentially all lossy
compression algorithms.
The difference between an input value and its quantized value (such as
round-off error) is referred to as quantization error. A device or algorithmic
function that performs quantization is called a quantizer. An analog-to-digital
converter is an example of a quantizer.
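The approximately uniform character of quantization error can be checked numerically: rounding to levels spaced Δ apart gives an error confined to [−Δ/2, Δ/2] with variance Δ²/12 (illustrative sketch assuming NumPy; the `quantize` helper is defined here, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)

def quantize(signal, levels=256, lo=0.0, hi=1.0):
    """Uniformly quantize a signal in [lo, hi] to `levels` values by rounding."""
    step = (hi - lo) / (levels - 1)
    return np.round((signal - lo) / step) * step + lo

x = rng.uniform(0.0, 1.0, size=100_000)
err = quantize(x, levels=256) - x

# Rounding error is approximately uniform on [-step/2, step/2],
# so its variance should be close to step^2 / 12.
step = 1.0 / 255
print(err.var(), step ** 2 / 12)
```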
29. The uniform noise caused by quantizing the pixels of an image to a number of
distinct levels is known as quantization noise. It has an approximately uniform
distribution. In uniform noise, the grey values of the noise are uniformly
distributed across a specified range. Uniform noise can be used to generate many
other types of noise distribution. This noise is often used to degrade images for
the evaluation of image restoration algorithms, since it provides the most neutral
or unbiased noise.
30. As the name suggests, this noise is uniform over a certain band of grey levels.
The PDF of uniform noise is given by

p(z) = 1/(b − a)   if a ≤ z ≤ b
p(z) = 0           otherwise

The mean of the function is (a + b)/2, and the variance is (b − a)²/12.

[Figure: plot of the uniform PDF p(z), constant at height 1/(b − a) between z = a and z = b]
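The mean and variance formulas above can be confirmed by sampling uniform noise over a band [a, b] (illustrative NumPy sketch, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(4)

a, b = -0.1, 0.1  # band of grey-level offsets
u = rng.uniform(a, b, size=200_000)

# Expected: mean = (a + b)/2 = 0, variance = (b - a)^2 / 12
print(u.mean(), u.var(), (b - a) ** 2 / 12)
```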
31. Filtering in image processing is a basic function that is used to achieve many
tasks such as noise reduction, interpolation, and re-sampling. Filtering image data
is a standard process used in almost all image processing systems. The choice of
filter is determined by the nature of the task and by the behaviour and type of the
data. Filters that remove noise from a digital image while preserving its details are
a necessary part of image processing. Filters can be divided into different
categories:
Filtering without Detection: In this filtering there is a window mask which is
moved across the observed image. This mask is usually of size (2N+1) × (2N+1),
where N is any positive integer, and its centre element is the pixel of concern.
As the mask moves from the top-left corner to the bottom-right corner of the
image, it performs some arithmetic operation without discriminating between any
pixels of the image.
32. Detection followed by Filtering: This filtering involves two steps. In the first
step it identifies the noisy pixels of the image, and in the second step it filters
only those pixels. Here too a mask is moved across the image and performs some
arithmetic operations to detect the noisy pixels. The filtering operation is then
performed only on the pixels found to be noisy in the first step, keeping the
non-noisy pixels intact.
Hybrid Filtering: In a hybrid filtering scheme, two or more filters are used to filter
a corrupted location of a noisy image. The decision to apply a particular filter is
based on the noise level of the noisy image at the test pixel location and on the
performance of the filter on the filtering mask.
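The two-step detection-then-filtering idea can be sketched as follows (illustrative Python/NumPy code, not from the slides; the extreme-value detector and the 3×3 median replacement are simplifying assumptions chosen here):

```python
import numpy as np

def detect_then_filter(img, threshold=0.9):
    """Two-step impulse filtering sketch for an image with values in [0, 1]:
    step 1 flags pixels at the extremes of the intensity range (likely
    salt-and-pepper impulses); step 2 replaces only the flagged pixels
    with the median of their 3x3 neighbourhood, leaving clean pixels intact."""
    padded = np.pad(img, 1, mode='edge')
    out = img.copy()
    # Detection step: values <= 1 - threshold or >= threshold are suspects.
    noisy = (img <= 1 - threshold) | (img >= threshold)
    # Filtering step: applied only at the detected locations.
    for i, j in zip(*np.nonzero(noisy)):
        out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

The key contrast with "filtering without detection" is that unflagged pixels pass through unchanged, so fine detail away from the impulses is preserved.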
33. Linear Filters: Linear filters are used to remove certain types of noise;
Gaussian or averaging filters are suitable for this purpose. However, these filters
also tend to blur sharp edges, destroy lines and other fine details of the image,
and perform badly in the presence of signal-dependent noise.
Non-Linear Filters: In recent years, a variety of non-linear median-type filters,
such as rank-conditioned, weighted median, relaxed median, and rank selection,
have been developed to overcome the shortcomings of linear filters.
Different Types of Linear and Non-Linear Filters:
Mean Filter: The mean filter is a simple spatial sliding-window filter that replaces
the centre value in the window with the average (mean) of all the pixel values in
the kernel or window. The window is usually square, but it can be of any shape.
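The sliding-window mean filter just described can be sketched directly (illustrative Python/NumPy code, not from the slides; edge handling by replicate-padding is an assumption made here):

```python
import numpy as np

def mean_filter(img, k=3):
    """Slide a k x k window over the image and replace each centre pixel
    with the average of the window; borders use edge-replicated padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A single bright spike gets smeared over the 3x3 window.
img = np.zeros((5, 5))
img[2, 2] = 9.0
smoothed = mean_filter(img)
```

The example shows the blurring behaviour noted above: the spike's energy is averaged across its neighbourhood rather than removed.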
34. Advantages:
a. Easy to implement.
b. Used to remove impulse noise.
Disadvantage:
It does not preserve the details of the image; some details are removed when the
mean filter is used.
35. Median Filter: The median filter is a simple and powerful non-linear filter
based on order statistics. It is an easy-to-implement method of smoothing images
and is used to reduce the amount of intensity variation between one pixel and the
next. In this filter we do not replace the pixel value with the mean of all
neighbouring pixel values; we replace it with the median value. The median is
calculated by first sorting all the pixel values in the window into ascending order
and then replacing the pixel being considered with the middle value. If the
neighbourhood under consideration contains an even number of pixels, the
average of the two middle pixel values is used instead. The median filter gives the
best results when the impulse-noise percentage is less than 0.1%. As the amount
of impulse noise increases, the median filter no longer gives the best results.
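The median filter's robustness to impulses can be seen in a minimal sketch (illustrative Python/NumPy code, not from the slides; 3×3 windows and edge-replicated padding are assumptions chosen here):

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood;
    a single impulse never reaches the middle of the sorted window,
    so it is rejected rather than averaged in."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat grey image corrupted by one "salt" impulse.
img = np.full((5, 5), 0.5)
img[2, 2] = 1.0
clean = median_filter(img)
```

Unlike the mean filter, the impulse is removed completely here: in every 3×3 window the single outlier is at the end of the sorted values, never at the median position.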
36. Enhancement of a noisy image is a necessary task in digital image processing,
and filters are the best tools for removing noise from images. In this presentation
we described various types of noise models and filtering techniques. Filtering
techniques divide into two parts, linear and non-linear, and each has its own
limitations and advantages. In hybrid filtering schemes, two or more filters are
recommended to filter a corrupted location; the decision to apply a particular
filter is based on the noise level at each test pixel location and on the
performance of the filter scheme on the filtering mask.
37. References:
[1] A. K. Jain, "Fundamentals of Digital Image Processing", Prentice Hall of India,
First Edition, 1989.
[2] Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", Pearson
Education, Second Edition, 2005.
[3] K. S. Srinivasan and D. Ebenezer, "A New Fast and Efficient Decision-Based
Algorithm for Removal of High-Density Impulse Noises", IEEE Signal Processing
Letters, Vol. 14, No. 3, March 2007.
[4] H. Hwang and R. A. Haddad, "Adaptive Median Filters: New Algorithms and
Results", IEEE Transactions on Image Processing, Vol. 4, pp. 499-502, April 1995.
[5] M. Nachtegael, S. Schulte, D. Van der Weken, V. De Witte, and E. E. Kerre,
"Fuzzy Filters for Noise Reduction: The Case of Gaussian Noise", IEEE, 2005,
pp. 201-206.
[6] Suresh Kumar, Papendra Kumar, Manoj Gupta, and Ashok Kumar Nagawat,
"Performance Comparison of Median and Wiener Filter in Image De-noising",
International Journal of Computer Applications (0975-8887), Volume 12, No. 4,
November 2010.