The document discusses key concepts in image processing, including image sensing, acquisition, formation, sampling, quantization, and digital representation. It describes how the human eye forms images using its photoreceptor cells. There are three main sensor arrangements: single sensor, line (strip) sensor, and array sensor. Sampling converts a continuous image to digital form by selecting pixel values at regular intervals, while quantization assigns discrete brightness levels. Together they allow images to be represented digitally as matrices of pixel values.
2. Topics we discuss:
• Elements of visual perception
• Image sensing and acquisition
• A simple image formation model
• Basic concepts of sampling and quantization
First of all, let us discuss the basics of image processing.
WHAT IS IMAGE PROCESSING?
3. Image processing is defined as the conversion of an image into digital form, followed by operations (such as mathematical operations) performed on it. The result is an enhanced image, or useful information extracted from the image.
Image processing basically includes the following three steps:
1. Import the image.
2. Analyze and manipulate the image.
3. Output an altered image, or a report based on the image analysis.
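The three steps above can be sketched in a few lines of Python. This is a minimal illustration only: the slides name no particular library, so a synthetic NumPy array stands in for an image imported from disk, and the "manipulation" chosen here (contrast stretching) is just one example operation.

```python
import numpy as np

# Step 1: "import" an image -- a synthetic 4x4 grayscale image
# (values 0..255) stands in for one read from a file.
image = np.array([[ 10,  50,  90, 130],
                  [ 20,  60, 100, 140],
                  [ 30,  70, 110, 150],
                  [ 40,  80, 120, 160]], dtype=np.uint8)

# Step 2: analyze and manipulate -- stretch the contrast to the full 0..255 range.
lo, hi = image.min(), image.max()
stretched = ((image.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

# Step 3: output an altered image, or a report based on the analysis.
print("original range:", (int(lo), int(hi)))
print("stretched range:", (int(stretched.min()), int(stretched.max())))
```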
5. 1. Elements of Visual Perception
We are interested in the physical limitations of human vision, in terms of factors that are also used in our work with digital images. Factors such as how human and electronic imaging compare in resolution, and in the ability to adapt to changes in illumination, are not only interesting, they are also important from a practical point of view.
1.1 Structure of the human eye:
The eye is nearly a sphere, with an average diameter of approximately 20 mm. Three membranes enclose the eye:
1. the cornea
2. the sclera (outer cover)
3. the choroid
6. Continued…
The cornea is a tough, transparent tissue covering the front part of the eye. The sclera is an opaque membrane that encloses the remainder of the optic globe. The choroid lies directly below the sclera; it contains a network of blood vessels that supply nutrition to the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris diaphragm.
The central opening of the iris (the pupil) varies in diameter from approximately 2 to 8 mm. The front of the iris contains the visible pigment of the eye, whereas the back contains a black pigment.
7. 1.2 Image formation in the human eye:
As illustrated in the previous figure, the radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power. When the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object.
The figure below gives an example of viewing a tree at a distance of 100 m.
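The calculation can be made concrete with the usual textbook figures (a 15 m tall tree and roughly 17 mm between the lens center and the retina; these numbers are assumptions, since the slide states only the 100 m viewing distance). By similar triangles, the retinal image height h satisfies:

```latex
\frac{15\,\text{m}}{100\,\text{m}} = \frac{h}{17\,\text{mm}}
\quad\Rightarrow\quad
h = \frac{15 \times 17}{100}\,\text{mm} \approx 2.55\,\text{mm}
```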
8. 2. Image Sensing and Acquisition
There are three principal sensor arrangements (each produces an electrical output proportional to light intensity):
(i) single imaging sensor, (ii) line sensor, (iii) array sensor.
9. Continued…
(i) Image acquisition using a single sensor
The most common sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to light. The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favours light in the green band of the color spectrum; as a consequence, the sensor output will be stronger for green light than for the other components of the visible spectrum.
10. Continued…
(ii) Image acquisition using sensor strips
The strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction. This is the type of arrangement used in most flatbed scanners. Sensing devices with 4000 or more in-line sensors are possible.
Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects. A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object. This is the basis for medical and industrial computerized axial tomography (CAT) imaging.
11. Continued…
(iii) Image acquisition using sensor arrays
This type of arrangement is found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000 elements or more. CCD sensors are widely used in digital cameras and other light-sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto its surface, a property that is used in astronomical and other applications requiring low-noise images.
13. 3. A Simple Image Formation Model
An image is defined by a two-dimensional function f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity. When an image is generated from a physical process, its values are proportional to the energy radiated by the physical source. As a consequence, f(x, y) must be nonzero and finite; that is,
0 < f(x, y) < ∞
The function f(x, y) may be characterized by two components:
(1) the amount of source illumination incident on the scene being viewed;
(2) the amount of illumination reflected by the objects in the scene.
These are called the illumination and reflectance components, denoted by i(x, y) and r(x, y) respectively.
14. Continued…
The two functions combine as a product to form f(x, y):
f(x, y) = i(x, y) r(x, y)
where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1,
with r(x, y) = 0 meaning total absorption and r(x, y) = 1 meaning total reflectance.
We call the intensity of a monochrome image at any coordinates (x, y) the gray level l of the image at that point; that is, l = f(x, y). The values of l range over the interval [0, L−1], where l = 0 indicates black and l = L−1 indicates white. All the intermediate values are shades of gray varying from black to white.
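The product f(x, y) = i(x, y) r(x, y) can be illustrated numerically. The illumination and reflectance values below are made up purely for the example (a tiny 2×2 "scene"); they are not taken from the slides.

```python
import numpy as np

# Illumination component i(x, y): source illumination incident on the scene.
i = np.array([[500.0, 500.0],
              [400.0, 400.0]])

# Reflectance component r(x, y): 0 = total absorption, 1 = total reflectance.
r = np.array([[0.01, 0.80],
              [0.25, 0.93]])

# Image formation model: f(x, y) = i(x, y) * r(x, y), elementwise.
f = i * r

print(f)   # gray levels before any scaling to a digital range
```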
15. 4. Image Sampling and Quantization
Sampling and quantization are the two important processes used to convert a continuous analog image into a digital image. Image sampling refers to the discretization of the spatial coordinates (along the x axis), whereas quantization refers to the discretization of the gray-level values (the amplitude, along the y axis). (Given a continuous image f(x, y), digitizing the coordinate values is called sampling, and digitizing the amplitude (intensity) values is called quantization.)
Figure (a) shows a continuous image f(x, y) that we want to convert to digital form. The one-dimensional function shown in Fig. (b) is a plot of the amplitude (gray-level) values of the continuous image along the line segment AB in Fig. (a); the random variations are due to image noise. To sample this function, we take equally spaced samples along line AB. Fig. (c) shows the gray-level scale divided into eight discrete levels, ranging from black to white. The result of both sampling and quantization is shown in Fig. (d).
16. Generating a digital image: (a) continuous image; (b) a scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization; (c) sampling and quantization; (d) digital scan line.
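The scan-line procedure in the figure can be sketched in code. A hypothetical sine-shaped intensity profile stands in for the analog signal along line AB (the actual profile in the figure is not reproducible here), sampled at 8 points and quantized to 8 gray levels:

```python
import numpy as np

# A hypothetical continuous intensity profile along scan line AB,
# with values roughly in [0.1, 0.9].
def f(x):
    return 0.5 + 0.4 * np.sin(2 * np.pi * x)

# Sampling: take equally spaced samples along the line (discretize x).
num_samples = 8
x = np.linspace(0.0, 1.0, num_samples, endpoint=False)
samples = f(x)

# Quantization: map each sampled amplitude to one of 8 discrete gray
# levels (discretize the amplitude), giving integers 0..7.
levels = 8
quantized = np.floor(samples * levels).astype(int)

print(quantized)
```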
17. 4.1 Representing Digital Images
• An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
This notation allows us to write the complete M × N digital image in the compact matrix form shown above.
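The compact matrix form referred to (shown as a figure in the original slide) is, in standard notation:

```latex
f(x,y) =
\begin{bmatrix}
f(0,0)   & f(0,1)   & \cdots & f(0,N-1)   \\
f(1,0)   & f(1,1)   & \cdots & f(1,N-1)   \\
\vdots   & \vdots   & \ddots & \vdots     \\
f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)
\end{bmatrix}
```

Each element of this matrix is a pixel, so the image has M rows and N columns of pixel values.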
18. Continued…
If k is the number of bits per pixel, then the number of gray levels, L, is an integer power of 2:
L = 2^k
When an image can have 2^k gray levels, it is common practice to refer to it as a "k-bit image". For example, an image with 256 possible gray-level values is called an 8-bit image.
Therefore the number of bits required to store a digitized image of size M × N is
b = M × N × k
When M = N, this becomes b = N² × k.
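The two formulas above are easy to check numerically; for instance, an 8-bit 1024 × 1024 image has 256 gray levels and needs 8,388,608 bits (1 MB) of storage:

```python
# Gray levels and storage requirement for a k-bit, M x N image.
def gray_levels(k):
    return 2 ** k            # L = 2^k

def storage_bits(M, N, k):
    return M * N * k         # b = M * N * k

print(gray_levels(8))              # 256
print(storage_bits(1024, 1024, 8)) # 8388608 bits = 1 MB
```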
4.2 Spatial and Gray-Level Resolution
Spatial resolution is the smallest discernible change in an image. Gray-level resolution is the smallest discernible change in gray level.
19. Continued…
Image resolution quantifies how close two lines (say, one dark and one light) can be to each other and still be visibly resolved. Resolution can be specified as the number of lines per unit distance, say 10 lines per mm or 5 line pairs per mm. Another measure of image resolution is dots per inch, i.e. the number of discernible dots per inch.
Fig.: A 1024 × 1024, 8-bit image subsampled down to size 32 × 32 pixels. The number of allowable gray levels was kept at 256.
20. Continued…
The figure above shows an image of size 1024 × 1024 pixels whose gray levels are represented by 8 bits, together with the results of subsampling it. The subsampling was accomplished by deleting the appropriate number of rows and columns from the original image. For example, the 512 × 512 image was obtained by deleting every other row and column from the 1024 × 1024 image; the 256 × 256 image was generated by deleting every other row and column of the 512 × 512 image, and so on.
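Deleting every other row and column is a one-line slicing operation in NumPy. A small synthetic 8 × 8 array stands in for the 1024 × 1024 image of the figure:

```python
import numpy as np

# Synthetic 8x8 "image": pixel (i, j) holds the value 8*i + j.
image = np.arange(64, dtype=np.uint8).reshape(8, 8)

# Subsample by keeping every other row and column, then repeat.
half = image[::2, ::2]       # 8x8 -> 4x4
quarter = half[::2, ::2]     # 4x4 -> 2x2

print(half.shape, quarter.shape)
```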
4.3 Aliasing and Moiré Patterns
The Shannon sampling theorem tells us that if a function is sampled at a rate equal to or greater than twice its highest frequency (fs ≥ 2fm), it is possible to recover the original function completely from its samples. If the function is undersampled, a phenomenon called aliasing (distortion) corrupts the sampled image. (If two patterns or spectra overlap, the overlapped portion is called aliased.)
21. Continued…
The corruption takes the form of additional frequency components being introduced into the sampled function; these are called aliased frequencies.
The principal approach for reducing the aliasing effects on an image is to reduce its high-frequency components by blurring the image prior to sampling. However, some aliasing is always present in a sampled image. The effect of aliased frequencies can be seen, under the right conditions, in the form of so-called Moiré patterns. A moiré pattern is a secondary, visually evident superimposed pattern created, for example, when two identical patterns on a flat or curved surface (such as closely spaced straight lines drawn radiating from a point, or taking the form of a grid) are overlaid while displaced or rotated a small amount from one another.
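Aliasing is easiest to see in one dimension. In this sketch a 9 Hz sine is sampled at 10 samples per second, well below its Nyquist rate of 18 samples per second; the samples obtained are identical (up to sign) to those of a 1 Hz sine, so the two signals cannot be distinguished after sampling:

```python
import numpy as np

fs = 10.0                          # sampling rate, samples per second
t = np.arange(0, 1, 1 / fs)        # sample instants over one second

high = np.sin(2 * np.pi * 9 * t)   # 9 Hz signal, undersampled
alias = np.sin(2 * np.pi * 1 * t)  # 1 Hz alias: 9 Hz folds to |9 - 10| = 1 Hz

print(np.allclose(high, -alias))   # the sampled values coincide (up to sign)
```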
22. 4.4 Zooming and Shrinking Digital Images
• Zooming may be viewed as oversampling, and shrinking may be viewed as undersampling. Zooming is a method of increasing the size of a given image. Zooming requires two steps:
1. creation of new pixel locations;
2. assigning new gray-level values to those new locations.
The key difference between these two operations and sampling and quantizing an original continuous image is that zooming and shrinking are applied to a digital image.
Fig.: Top row: images zoomed from 128 × 128, 64 × 64, and 32 × 32 pixels to 1024 × 1024 pixels, using nearest-neighbor gray-level interpolation. Bottom row: the same sequence, but using bilinear interpolation.
23. • Nearest-neighbor interpolation: Nearest-neighbor interpolation is the simplest method and basically makes the pixels bigger. The intensity of a pixel in the new image is the intensity of the nearest pixel in the original image. If you enlarge by 200%, each pixel is enlarged to a 2 × 2 area of 4 pixels with the same color as the original pixel.
• Bilinear interpolation: Bilinear interpolation considers the closest 2 × 2 neighborhood of known pixel values surrounding the unknown pixel, then takes a weighted average of these 4 pixels to arrive at the final interpolated value. This results in much smoother-looking images than nearest neighbor.
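The 200% nearest-neighbor case described above amounts to replicating each pixel into a 2 × 2 block, which NumPy expresses directly (a minimal sketch on a made-up 2 × 2 image):

```python
import numpy as np

image = np.array([[10, 20],
                  [30, 40]], dtype=np.uint8)

# Nearest-neighbor 2x zoom: repeat every row, then every column,
# so each original pixel becomes a 2x2 block of the same value.
zoomed = np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

print(zoomed)
```

Bilinear zoom would instead take a distance-weighted average of the 2 × 2 neighborhood around each new pixel location (available, for example, as `scipy.ndimage.zoom(image, 2, order=1)`).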
Thanks