Image Restitution Using Non-Locally Centralized Sparse Representation Model (IJERA Editor)
Sparse representation models use a linear combination of a few atoms selected from an over-complete dictionary to code an image patch, and have given good results in many image restitution applications. Traditional sparse representation models, however, do not reconstruct the original image accurately when solving degradation problems such as blurring, noise, and down-sampling. The goal of image restitution here is to suppress the sparse coding noise and thereby improve image quality using sparse representation. To obtain good sparse coding coefficients of the original image, we exploit the image's non-local self-similarity to estimate the coefficients, and then centralize the sparse coding coefficients of the observed image towards those estimates. This non-locally centralized sparse representation model outperforms standard sparse representation models on image restitution problems including de-noising, de-blurring, and super-resolution.
Image Restoration Using Nonlocally Centralized Sparse Representation and histo... (IJERA Editor)
Degradation of the observed image produces noisy, blurred, or distorted results, and restoring the image information with conventional models may not be accurate enough for a faithful reconstruction of the original image. I propose sparse representations to improve the performance of sparse-representation-based image restoration. In this method the sparse coding noise is modelled explicitly, so that the sparse coefficients of the original image can be estimated. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model. For denoising, a histogram clipping method based on a histogram-driven sparse representation is used to effectively reduce the noise, and a TMR filter is applied for image quality. Various image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.
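The sparse coding step that such models build on can be sketched with a minimal orthogonal matching pursuit (OMP) routine. This is an illustrative stand-in for generic sparse coding over a dictionary, not the NCSR algorithm itself; the function name `omp` and the dictionary setup are assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: code y as a combination of k atoms of D."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        idx.append(j)
        # least-squares fit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

With an orthonormal dictionary and an exactly 2-sparse target, two OMP iterations recover the code exactly; NCSR then additionally pulls such codes towards non-local estimates.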
ALEXANDER FRACTIONAL INTEGRAL FILTERING OF WAVELET COEFFICIENTS FOR IMAGE DEN... (sipij)
This paper proposes an efficient denoising algorithm that works well for images corrupted with Gaussian and speckle noise. The algorithm uses the Alexander fractional integral filter, which constructs fractional mask windows computed from the Alexander polynomial. Before the designed filter is applied, the corrupted image is decomposed using the symlet wavelet, and only the horizontal, vertical and diagonal components are denoised with the Alexander integral filter. A significant increase in reconstruction quality was observed when the approach was applied to the wavelet-decomposed image rather than directly to the noisy image. Quantitatively, the results are evaluated using the peak signal-to-noise ratio (PSNR), which averaged 30.8059 for images corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, clearly outperforming existing methods.
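The overall scheme, decompose the image, filter only the detail subbands, then invert the transform, can be sketched generically. The sketch below is an assumption: it uses a one-level 2-D Haar transform and soft-thresholding as a stand-in for the paper's symlet decomposition and Alexander fractional integral filter.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation plus three detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def soft_threshold(band, t):
    """Shrink detail coefficients towards zero (generic denoising step)."""
    return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)
```

Denoising then thresholds only `lh`, `hl`, `hh` and leaves the approximation `ll` untouched, mirroring the paper's choice to filter only the detail components.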
GENERIC APPROACH FOR VISIBLE WATERMARKING (Editor IJCATR)
In this paper a generic image watermarking technique is used for the copyright protection of color images. Watermarking with monochrome and translucent images is based on a one-to-one compound mapping of image pixel values, which allows the original image to be recovered without any loss. Both translucent full-color and opaque monochrome watermarks are used. A two-fold monotonically increasing compound mapping is used to obtain more typical visible watermarks in the image, and measures have been taken to protect the scheme from attackers.
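The key property, lossless recovery via a bijective pixel mapping, can be illustrated with a toy example. This is a hypothetical sketch, not the paper's actual compound mapping: the modular brightening map, the `delta` parameter, and the function names are all assumptions.

```python
import numpy as np

def embed(img, mask, delta=96):
    """Apply a bijective brightening map to pixels under the watermark mask."""
    out = img.copy()
    out[mask] = (out[mask].astype(int) + delta) % 256
    return out

def recover(marked, mask, delta=96):
    """Invert the one-to-one mapping: recovery is exact, with no loss."""
    out = marked.copy()
    out[mask] = (out[mask].astype(int) - delta) % 256
    return out
```

Because the mapping is one-to-one on the value range 0..255, inverting it on the masked region restores every pixel exactly, which is the essence of the lossless-recovery claim.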
A Compressed Sensing Approach to Image Reconstruction (ijsrd.com)
Compressed sensing is a new technique that sidesteps the Shannon-Nyquist sampling theorem for reconstructing a signal. It uses far fewer random measurements than were traditionally needed to recover a signal or image. The motivation is that most of the information is carried by a few of the signal coefficients, so why acquire all the data if most of it is discarded without being used? A number of review articles and research papers have been published in this area, but with the increasing interest of practitioners in this emerging field it is worth taking a fresh look at the method and its implementations. The main aim of this paper is to review compressive sensing theory and its applications.
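The idea, few random linear measurements plus a sparsity-promoting reconstruction, can be sketched in a few lines. The sizes, the random seed, and the choice of ISTA (iterative soft-thresholding for the l1-regularized problem) are assumptions for illustration, not a method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                       # ambient size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                                  # m << n linear measurements

# ISTA for min_z 0.5*||A z - y||^2 + lam*||z||_1
lam = 0.005
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(2000):
    g = z - A.T @ (A @ z - y) / L          # gradient step on the data term
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
```

Despite using well under half as many measurements as unknowns, the l1 penalty drives the iterate to a sparse vector that explains the measurements, which is exactly the point the abstract makes about not acquiring data that will be thrown away.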
Image Restoration and Denoising By Using Nonlocally Centralized Sparse Repres... (IJERA Editor)
Degradation of the observed image produces noisy, blurred, or distorted results, and conventional sparse representation models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse-representation-based image restoration, the sparse coding noise is modelled explicitly so that the sparse coefficients of the original image can be estimated. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model. For denoising, a histogram clipping method based on a histogram-driven sparse representation effectively reduces the noise, and a TMR filter is implemented for image quality. Various image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.
Perimetric Complexity of Binary Digital Images (RSARANYADEVI)
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of the inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this article we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
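The definition can be made concrete for pixel images. The sketch below is a simplified assumption: it estimates the perimeter by counting 4-neighbour foreground/background pixel edges (one of several possible digital perimeter estimates, and part of what the paper discusses as problematic), then applies P²/(4πA). For a filled square this gives 4/π.

```python
import numpy as np

def perimetric_complexity(img):
    """P^2 / (4*pi*A) for a binary image, using an edge-count perimeter."""
    img = img.astype(bool)
    pad = np.pad(img, 1, constant_values=False)
    # perimeter: number of 4-neighbour foreground/background pixel edges
    p = (np.sum(pad[1:, :] != pad[:-1, :]) +
         np.sum(pad[:, 1:] != pad[:, :-1]))
    a = img.sum()
    return p * p / (4 * np.pi * a)
```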
Random Valued Impulse Noise Removal in Colour Images using Adaptive Threshold... (IDES Editor)
To remove random valued impulse noise from
colour images, an efficient impulse detection and filtering
scheme is presented. The locally adaptive threshold for
impulse detection is derived from the pixels of the filtering
window. The restoration of the noisy pixel is done on the basis
of brightness and chromaticity information obtained from the
neighbouring pixels in the filtering window. Experimental
results demonstrate that the proposed scheme yields much
superior performance in comparison with other colour image
filtering methods.
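The detect-then-replace pattern with a locally adaptive threshold can be sketched as follows. This is a simplified grayscale toy, an assumption rather than the paper's colour scheme (which uses brightness and chromaticity information); the MAD-based threshold and function name are also assumptions.

```python
import numpy as np

def remove_impulses(img, k=1.0):
    """Replace pixels that deviate from the local median by more than a
    threshold derived from the 3x3 filtering window (simplified sketch)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i-1:i+2, j-1:j+2].astype(float)
            med = np.median(win)
            # locally adaptive threshold: median absolute deviation of window
            thr = k * np.median(np.abs(win - med))
            if abs(img[i, j] - med) > max(thr, 1e-9):
                out[i, j] = med   # restore from neighbourhood information
    return out
```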
Image Interpolation Techniques in Digital Image Processing: An Overview (IJERA Editor)
In the current digital era, image interpolation techniques based on multi-resolution methods are being discovered and developed. These techniques are gaining importance due to their application in a variety of fields (medical, geographical, space information) where fine and minor details are important. This paper presents an overview of different interpolation techniques: nearest neighbor, bilinear, bicubic, B-spline, Lanczos, discrete wavelet transform (DWT) and Kriging. Our results show that bicubic interpolation gives better results than nearest neighbor and bilinear, whereas DWT and Kriging preserve finer details.
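As one concrete instance of the surveyed techniques, bilinear interpolation can be written directly with numpy; the function name and vectorized formulation are an assumed minimal sketch, not code from the paper.

```python
import numpy as np

def bilinear(img, new_h, new_w):
    """Resize a 2-D image by bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)   # sample positions in source coords
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]             # fractional offsets
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Upsampling a 2x2 patch to 3x3 keeps the corners and places the average of the four neighbours at the centre, which is the defining behaviour of the bilinear kernel.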
A Unified PDE Model for Image Multi-Phase Segmentation and Grey-Scale Inpainting (vijayakrishna rowthu)
The Cahn-Hilliard equation and the histogram are the key elements of this research work. A convexity-splitting scheme with a Fourier-spectral method solves the model numerically.
PERFORMANCE EVALUATION OF DIFFERENT TECHNIQUES FOR TEXTURE CLASSIFICATION (cscpconf)
Texture is the term used to characterize the surface of a given object or phenomenon and is an important feature in image processing and pattern recognition. Our aim is to compare various texture analysis methods based on time complexity and classification accuracy. The project describes texture classification using the wavelet transform and the co-occurrence matrix; features of a sample texture are compared against a database of different textures. For the wavelet transform we use the Haar, Symlet and Daubechies wavelets. We find that the Haar wavelet proves to be the most efficient method in terms of the performance assessment parameters mentioned above. A comparison of the Haar wavelet and the co-occurrence matrix method of classification also goes in favor of Haar: although the time requirement of the latter method is high, it gives excellent classification accuracy except when the image is rotated.
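The co-occurrence side of the comparison can be sketched with a minimal gray-level co-occurrence matrix (GLCM) and two classic Haralick-style features. The horizontal-neighbour offset, the tiny level count, and the function name are assumptions for illustration.

```python
import numpy as np

def glcm(img, levels=4):
    """GLCM for horizontally adjacent pixels, plus contrast and energy features."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    m /= m.sum()                     # normalize to a joint probability
    i, j = np.indices(m.shape)
    contrast = np.sum(m * (i - j) ** 2)   # penalizes dissimilar neighbours
    energy = np.sum(m ** 2)               # high for uniform textures
    return m, contrast, energy
```

A vertical-stripe pattern of alternating levels 0 and 1 concentrates all horizontal pairs in one cell, giving contrast 1 and energy 1.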
In this paper person identification is done based on sets of facial images. Each facial image is treated as a scattered point of a logistic regression, and the vertical distance between the scattered point and the regression line is used as the parameter to determine whether two images show the same person. The ratio of Euclidean distances (in pixels of the grayscale image, measured with the 'imtool' of Matlab 13.0) between nasal and eye points is determined, and the variance of this ratio is used as another parameter to identify a facial image. The concept is combined with the ghost image of Principal Component Analysis, where the mean square error and signal-to-noise ratio (SNR) in dB are used as detection parameters. The combination of the three methods improves the degree of accuracy compared to any individual one.
Performance Improvement of Vector Quantization with Bit-parallelism Hardware (CSCJournals)
Vector quantization is an elementary technique for image compression; however, searching for the nearest codeword in a codebook is time-consuming. In this work, we propose a hardware-based scheme by adopting bit-parallelism to prune unnecessary codewords. The new scheme uses a “Bit-mapped Look-up Table” to represent the positional information of the codewords. The lookup procedure can simply refer to the bitmaps to find the candidate codewords. Our simulation results further confirm the effectiveness of the proposed scheme.
Fast Object Recognition from 3D Depth Data with Extreme Learning Machine (Soma Boubou)
Object recognition from RGB-D sensors has recently emerged as a prominent and challenging research topic. Current systems often require large amounts of time to train the models and to classify new data. We propose an effective and fast object recognition approach for 3D data acquired from depth sensors such as the Structure or Kinect sensors.
Our contribution in this work is a novel fast and effective approach for real-time object recognition from 3D depth data:
- First, we extract simple but effective frame-level features, which we name differential frames, from the raw depth data.
- Second, we build a recognition system based on an Extreme Learning Machine classifier with a Local Receptive Field (ELM-LRF).
Face Recognition using PCA (Principal Component Analysis) using MATLAB (Sindhi Madhuri)
This describes a biometric technique to recognize people in a particular environment using MATLAB. It forms eigenfaces and compares principal components instead of comparing every pixel of an image.
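The eigenface construction just described can be sketched outside MATLAB with numpy; the SVD-based formulation and the function names are an assumed minimal variant of PCA on vectorized face images.

```python
import numpy as np

def eigenfaces(faces, k):
    """PCA on row-vectorized faces: returns the mean face and top-k eigenfaces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # rows of vt are the principal components (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Coordinates of a face in the eigenface subspace."""
    return basis @ (face - mean)
```

Recognition then compares these low-dimensional coordinates instead of raw pixels; with enough components, a training face is reconstructed exactly from its projection.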
Performance Analysis of Image Enhancement Using Dual-Tree Complex Wavelet Tra... (IJERD Editor)
Resolution enhancement (RE) schemes that are not based on wavelets have the major drawback of losing high-frequency content, which results in blurring. The discrete wavelet transform (DWT) based resolution enhancement scheme generates artifacts due to the shift-variant property of the DWT. A wavelet-domain approach based on the dual-tree complex wavelet transform (DT-CWT) and nonlocal means (NLM) is proposed for RE of satellite images. A satellite input image is decomposed by the DT-CWT (which is nearly shift invariant) to obtain high-frequency subbands. The Lanczos interpolator is used to interpolate the high-frequency subbands and the low-resolution (LR) input image. The high-frequency subbands are passed through an NLM filter to suppress the artifacts generated by the DT-CWT (despite its near shift invariance). The filtered high-frequency subbands and the LR input image are combined using the inverse DT-CWT to obtain a resolution-enhanced image. Objective and subjective analyses show the superiority of the proposed technique over conventional and state-of-the-art RE techniques.
Linear regression [Theory and Application (In physics point of view) using py... (ANIRBANMAJUMDAR18)
Machine-learning models are behind many recent technological advances, including high-accuracy text translation and self-driving cars. They are also increasingly used by researchers to help solve physics problems, such as finding new phases of matter, detecting interesting outliers in data from high-energy-physics experiments, and finding astronomical objects known as gravitational lenses in maps of the night sky. The rudimentary algorithm that every machine-learning enthusiast starts with is linear regression. In statistics, linear regression is a linear approach to modelling the relationship between a scalar response (dependent variable) and one or more explanatory (independent) variables. Linear regression analysis (least squares) is used in a physics lab to prepare computer-aided reports and to fit data. In this article, the method is applied to the experiment 'DETERMINATION OF DIELECTRIC CONSTANT OF NON-CONDUCTING LIQUIDS'. The entire computation is done in the Python 3.6 programming language.
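The least-squares fit at the heart of such a lab report can be sketched in a few lines via the normal equations; the function name and the design-matrix construction are assumptions, not code from the article.

```python
import numpy as np

def fit_line(x, y):
    """Least-squares fit of y = a*x + b."""
    A = np.column_stack([x, np.ones_like(x)])   # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b
```

In the dielectric-constant experiment the slope of such a fit (e.g. capacitance versus a geometric factor) is what carries the physical quantity of interest.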
On image intensities, eigenfaces and LDA
DRAFT
IMAGE PROCESSING, RETRIEVAL, AND ANALYSIS II: REPORT ON RESULTS
Raghunandan Palakodety
Universität Bonn
Institut für Informatik
Bonn
ABSTRACT
This report presents the problem statements for three different projects and illustrates the results that followed from practical implementations in C++ using the OpenCV framework. In image processing and information theory, data compression is useful for transmitting or representing data with relatively few bits. In the case of images, the probability distribution is not uniform, so assigning an equal number of bits to each pixel is redundant. Image quantization reduces the number of bits used to represent the image pixels at the expense of some data loss; this loss is not very noticeable. For this task, we used the iterative Lloyd-Max quantizer design, which is a non-uniform quantizer. Staying with image intensities, many face recognition pipelines include an image pre-processing step. One such step is illumination compensation, employed to cope with varying illumination; to address this problem, we used Retinex theory. The next project computes eigenfaces, an approach to a high-level visual problem, face recognition. In this approach, we transform face images into a small set of characteristic feature images known as eigenfaces, which are the principal components of the initial training set of images. Recognition is then performed by projecting a new image onto the subspace spanned by the eigenfaces. The final project includes two tasks. The first task is binary classification based on Fisher's linear discriminant (LDA). The second task accentuates the merits of tensorial discriminant classification, which uses concepts from tensor algebra for visual object recognition. This approach outperforms conventional LDA in terms of training time and also addresses singular scatter matrices, that is, the small sample size problem.
Index Terms— image intensities, quantization, illumination correction, principal component analysis, linear discriminant analysis, tensor contractions.
1. INTRODUCTION
This paper summarizes and highlights the results of three given projects. Problem specifications on image intensities, eigenfaces and linear discriminant analysis were given, and the implementations were done using the OpenCV framework, C++ and QCustomplot. The outcomes of the given projects are discussed, to the best of our abilities, based on the relevant questions that were posed.
The first project contains two tasks. The first is the implementation of the Lloyd-Max algorithm for grey value quantization. The second is estimating the illumination plane parameters of an image, which correspond to the best-fit plane from the image intensities.
The second project consists of computing eigenfaces using a collection of 2429 tiny face images of size 19×19. In this project, we wish to find the principal components of the distribution of faces, treating each image as a point in a very high dimensional space.
The third project focuses on object recognition and consists of two tasks. The first is the implementation of a binary classifier based on traditional Linear Discriminant Analysis or LDA. The second task is tensor-based Linear Discriminant Analysis, which involves treating images as higher-order tensors instead of vectorizing them. The theory behind this task is taken from the paper [1], in which tensor contractions are repeatedly applied to the given set of training examples and alternating least squares is used to obtain a ρ-term projection tensor.
This paper is organized into sections in which the theoretical background of each project, the task specifications and the outcomes are discussed. The document ends with a conclusion section, in which recent advances and improvements pertaining to the projects are discussed.
2. THEORETICAL BACKGROUND FOR IMAGE
QUANTIZATION
This section describes image quantization and summarizes the need for an algorithm to achieve the end results. Quantization reduces each range of values in a signal to a single value. A quantizer maps the continuous variable x into a discrete variable x_q which takes values from a finite set {r_1, r_2, r_3, ..., r_L} of numbers. The quantizer minimizes the mean squared error for a given number of quantization levels L. Let x, with 0 ≤ x ≤ A, be a real scalar random variable with a continuous probability density function (PDF) p_X(x). It is desired to find the optimum (decision) boundaries a_v and the quantization (representation or reconstruction) points b_v for an L-level quantizer such that the mean square error (MSE), or quantization error E, drops below a threshold or no longer improves significantly.
2.1. Lloyd-Max quantization algorithm
For visualizing the quantization curves, an intensity histogram h(x) of a grey value image is converted into a density function p(x) using the transformation

p(x) = h(x) / Σ_y h(y)   (1)
The following steps 1 and 2 describe the initialization of the boundaries and quantization points. Steps 3 and 4 are computed iteratively [2].

1. Initialize the boundaries a_v of the quantization intervals as

a_0 = 0   (2)

a_v = v · 256/L   (3)

a_{L+1} = 256   (4)

2. Initialize the quantization or representation points b_v as follows

b_v = v · 256/L + 256/(2L)   (5)
3. Update the boundaries a_v as

a_v = (b_v + b_{v−1}) / 2   (6)

4. Update the representation points b_v as

b_v = (∫_{a_v}^{a_{v+1}} x p(x) dx) / (∫_{a_v}^{a_{v+1}} p(x) dx)   (7)

Steps 3 and 4 are repeated until the quantization error drops below a threshold.
E =
L
v=1
av+1
av
(x − bv)2
.p(x).dx (8)
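The iteration in steps 1-4 can be sketched directly on a discrete grey-value histogram, with bin indices standing in for x and the normalized histogram standing in for p(x). The following is a minimal, illustrative C++ implementation (plain standard library rather than OpenCV; the function and variable names are our own, and 0-based indexing is used instead of the indexing of Eqs. (2)-(4)):

```cpp
#include <vector>
#include <cmath>

// Lloyd-Max quantizer on a discrete grey-value histogram: a sketch of the
// iteration in Eqs. (2)-(8). Bin indices play the role of x, and the
// (unnormalized) histogram plays the role of p(x), since the normalization
// constant cancels in Eq. (7).
std::vector<double> lloydMax(const std::vector<double>& hist, int L, int iters) {
    int N = (int)hist.size();                  // e.g. 256 grey levels
    std::vector<double> a(L + 1), b(L);
    for (int v = 0; v <= L; ++v) a[v] = v * (double)N / L;   // Eqs. (2)-(4)
    for (int v = 0; v < L; ++v) b[v] = a[v] + N / (2.0 * L); // Eq. (5)
    for (int it = 0; it < iters; ++it) {
        // Step 3 (Eq. (6)): boundaries move to midpoints of adjacent points
        for (int v = 1; v < L; ++v) a[v] = 0.5 * (b[v] + b[v - 1]);
        // Step 4 (Eq. (7)): each point moves to the centroid of its cell
        for (int v = 0; v < L; ++v) {
            double num = 0.0, den = 0.0;
            for (int x = 0; x < N; ++x)
                if (x >= a[v] && x < a[v + 1]) { num += x * hist[x]; den += hist[x]; }
            if (den > 0) b[v] = num / den;
        }
    }
    return b;  // representation points
}
```

On a uniform histogram the representation points settle at the centroids of equally sized cells, which is a quick sanity check on the iteration.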
3. THEORETICAL BACKGROUND FOR
ILLUMINATION COMPENSATION
In image pre-processing algorithms it is necessary to compensate for non-uniform lighting conditions. Illumination conditions have an impact on the facial features that contribute towards robust face recognition. A study [3] performed by NIST on the progress made in face recognition under controlled and uncontrolled illumination constraints shows that illumination has a substantial effect on the recognition process.
These findings explain why illumination compensation proves conducive to face recognition systems. Due to the 3D shape of human faces, a direct lighting source can produce strong shadows that accentuate or diminish certain facial features. In such a case, face recognition becomes arduous [4]. The classic solution to this problem is histogram equalization, which produces optimal global contrast for a given image; however, it is considered a crude approach. Another approach, proposed in [5], performs logarithmic transformations to enhance low grey levels and compress the higher ones. To recover an image under an assumed lighting condition, the Quotient Image proposed in [6] outperformed PCA.
Assuming the reader is cognizant of Lambertian surfaces, for which the object surface's irradiance is modeled by a simple analytic equation, the Quotient Image extracts the object's surface reflectance as an illumination invariant. More on modeling the reflectance of opaque surfaces can be found in BRDF (bi-directional reflectance distribution function) theory.
The following approach is based on Retinex theory and the plane-subtraction or illumination gradient compensation algorithm, which fits a best-brightness plane to the image under analysis and later subtracts this plane from the image [4].
3.1. Illumination compensation
The reflectance model used in many cases can be expressed as

I(x, y) = R(x, y) · L(x, y)   (9)

where I(x, y) is the image pixel value, R(x, y) is the reflectance and L(x, y) is the illumination at each point (x, y). The nature of L(x, y) is determined by the lighting source, while R(x, y) is determined by the characteristics of the object's surface. Therefore, R(x, y) can be regarded as an illumination-insensitive measure. Separating the reflectance R and the illuminance L from real images is an ill-posed problem.
It is known from image pre-processing techniques that the illumination plane IP(x, y) of an image I(x, y) corresponds to the best-fit plane from the image intensities. IP(x, y) is a linear approximation of I(x, y), given by the ansatz

IP(x, y) = ax + by + c   (10)

Here, IP(x, y) is the intensity value of the pixel at location (x, y). The above equation poses a 3-D regression plane fitting problem [7]. The plane parameters a, b and c are estimated by the linear regression formula

p = (X^T X)^{−1} X^T x   (11)

where p ∈ R^3 is a vector that comprises the plane parameters (a, b and c) and x ∈ R^n is I(x, y) in vector form, where n is the number of pixels. X ∈ R^{n×3} is a matrix which holds the pixel coordinates of the image under analysis. The first column contains the horizontal coordinates, the second column the vertical coordinates, and the entries in the third column are set to the value 1.

Fig. 1. The image in (a) has uneven illumination, while (b) is the illumination-compensated image.

Fig. 2. The plot in (a) shows the image function f(x, y) over x and y, while (b) shows the image function f(x, y) along with the illumination plane IP(x, y) and its contours.
After estimating IP(x, y), this plane is subtracted from I(x, y). This reduces shadows caused by extreme lighting angles [4]. The results of our task on a set of two input images are shown in figure 1. Another result of our experiment is shown in figure 3; the changes are not conspicuous, but on closer perusal the results show compensation. An additional step of histogram equalization can improve the results.
A 3-dimensional plot of the image function f(x, y) (shown in figure 2a) along with the estimated illumination plane model is shown in figure 2b.
4. THEORETICAL BACKGROUND FOR
EIGENFACES AND PRINCIPAL COMPONENT
ANALYSIS
The eigenface approach to this classical pattern recognition problem is to find the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images, treating each image as a point in a very high dimensional space.

Fig. 3. The image in (a) has uneven illumination, while (b) is the illumination-corrected image.

In this approach, we project training image patches onto a lower-dimensional space (subspace) where recognition is carried out. Since we vectorize all the training image patches before such a projection, each face image patch I ∈ R^{m×n} generates a high-dimensional input face space R^d, where d = m·n. Due to memory storage constraints and limited computational capacity, obtaining a parameterized model in this high dimensional space is very difficult.
Dimensionality reduction of the input face space is the solution, and principal component analysis or PCA is one such projection algorithm used to obtain a reduced representation of face images. Later, in [8], these PCA projections are used as feature vectors, and similarity functions or distance metrics such as the Mahalanobis distance and the Euclidean distance are employed to solve the problem of face recognition.
PCA was invented by Karl Pearson in 1901; a related continuous transformation for de-correlating signals, the Karhunen-Loève transformation or KLT, was first published in German [9]. In this task, PCA serves as a powerful unsupervised method for dimensionality reduction of data. It can be illustrated using a two-dimensional dataset. Consider the plot shown in figure 5 for an illustration of PCA. PCA finds the principal axes in the data and quantifies how well each axis describes the data distribution. Consider another plot, shown in figure 6, in which one of the vectors is longer than the other. This implies that the data in the direction of the longer vector has greater significance than the data along the shorter vector. After removing 5% of the variance of this dataset and re-projecting the data points onto the vector, the resulting plot is shown in figure 4. The light shaded points are the original data points and the dark blue points are their projected versions. This can be understood as dimensionality reduction.
Another approach to the task of face recognition is using Fisher's linear discriminant analysis as the projection algorithm, which will be dealt with along with a novel and fast approach proposed by [1].
Fig. 4. Approximating the dataset in a lower dimension, i.e. dimensionality reduction

Fig. 5. 2-dimensional scatter plot
4.1. Computing eigenfaces
In this task we are given a collection of 2429 tiny face images or image patches, each of size 19×19. From the given collection, we randomly chose 2186 as training images and the rest 243 as test images. The images are read into matrices X_train ∈ R^{361×2186} and X_test ∈ R^{361×243}. The data matrix X_train is centered at zero mean as

X_train = X_train − X_mean.   (12)

The mean image computed is shown in figure 9. Later, the covariance matrix C = X_train X_train^T is computed in a way that is conducive to eigenvalue decomposition. Note that here the covariance matrix is C ∈ R^{361×361}.

Fig. 6. Principal axes

To compute the eigenvectors of the covariance matrix C = X_train X_train^T, we multiply both sides of the eigenvalue equation by the data matrix X_train^T. Upon doing so, the equation reads

X_train^T X_train (X_train^T v_i) = λ_i (X_train^T v_i)   (13)
We compute the eigenvalues λ_i and eigenvectors v_i of the covariance matrix C; the resulting eigenvectors are orthogonal to each other. To this end, an eigendecomposition of C is carried out to obtain the eigenvalues and eigenvectors. The spectrum of the covariance matrix is shown in figure 10. The set of eigenvalues is arranged in descending order. Here, the eigenvalues represent the variance of the data along the eigenvector directions. From the plot in figure 10, we considered the first 20 eigenvectors v_i ∈ R^361, where i = 0, 1, ..., 19.
Upon visualizing the first 20 eigenvectors (corresponding to the 20 largest eigenvalues), the results are shown in figure 8. From these results, we can understand that each image patch (with mean subtracted) in the training set can be represented as a linear combination of the best 20 eigenvectors. In general the equation is as follows,

Î_i − I_mean = Σ_{j=1}^{K} w_j u_j,  with w_j = u_j^T I_i   (14)

in which we call the u_j eigenfaces.
As mentioned above, X_test holds test image patches vectorized in the same manner as the training image patches, and the test data is centered with respect to the training data mean, as shown below:

X_test = X_test − X_mean.   (15)

Fig. 7. The plot in (a) shows distances of test image 0 to all the training data, while (b) displays the same except in a lower-dimensional space.

We selected 10 random test image patches, computed the Euclidean distance to all training image patches and plotted the distances in descending order, as shown in figure 7a. Furthermore, we projected all training and test data onto a subspace spanned by k = 20 eigenvectors v_i. Later, we computed and plotted the Euclidean distances (in descending order) of the same test vectors to all the training vectors in this lower-dimensional space or subspace, as shown in figure 7b.
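The trick behind equation (13), obtaining an eigenvector of C = X X^T from an eigenvector of the Gram matrix X^T X, can be sketched as follows. This is a minimal illustration with tiny dimensions and plain power iteration; the names are our own and do not reflect the report's actual implementation:

```cpp
#include <vector>
#include <cmath>

using Mat = std::vector<std::vector<double>>;

// Sketch of the "small matrix" trick around Eq. (13): instead of
// eigendecomposing C = X X^T directly, find the dominant eigenvector w of
// the Gram matrix X^T X by power iteration, then map it back to an
// eigenvector v = X w of C.
std::vector<double> topEigenvectorViaGram(const Mat& X, int iters) {
    size_t d = X.size(), n = X[0].size();
    // Gram matrix G = X^T X (n x n)
    Mat G(n, std::vector<double>(n, 0.0));
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j)
            for (size_t k = 0; k < d; ++k)
                G[i][j] += X[k][i] * X[k][j];
    // Power iteration on G for its dominant eigenvector w
    std::vector<double> w(n, 1.0);
    for (int it = 0; it < iters; ++it) {
        std::vector<double> Gw(n, 0.0);
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < n; ++j)
                Gw[i] += G[i][j] * w[j];
        double norm = 0.0;
        for (double c : Gw) norm += c * c;
        norm = std::sqrt(norm);
        for (size_t i = 0; i < n; ++i) w[i] = Gw[i] / norm;
    }
    // Map back (Eq. (13)): v = X w is an eigenvector of C = X X^T
    std::vector<double> v(d, 0.0);
    for (size_t k = 0; k < d; ++k)
        for (size_t i = 0; i < n; ++i)
            v[k] += X[k][i] * w[i];
    double norm = 0.0;
    for (double c : v) norm += c * c;
    norm = std::sqrt(norm);
    for (double& c : v) c /= norm;
    return v;
}
```

In the eigenface setting the same mapping turns eigenvectors of the N×N Gram matrix into eigenfaces of the d×d covariance matrix, whichever of the two is smaller.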
5. THEORETICAL BACKGROUND FOR LINEAR
DISCRIMINANT ANALYSIS
In the previous section 4, a projection method for dimensionality reduction, PCA, was discussed. PCA is a general method for identifying the linear directions in which a set of vectors is best represented, allowing a reduced dimension by choosing the directions of largest variance.

Fig. 8. Visualizing 20 eigenvectors

Fig. 9. Mean image computed from training samples.

Fig. 10. Spectrum of the covariance matrix
As we have seen in the previous section, dimensionality reduction depends on linear methods such as PCA, which finds the directions of maximal variance in high-dimensional data. By selecting only the axes with the largest variance, PCA aims to capture the directions that contain the most information about the training image vectors, so that we can express as much as possible with a minimal number of dimensions. PCA
yields components that describe this pattern well; however, the question remains whether those components are necessarily good for distinguishing between classes. This question arises during the recognition stage of the system. To address this problem, we need discriminative features instead of descriptive features. This claim can be supported by moving to
a supervised learning setting, that is, using labeled training
image patches. Furthermore, an additional question arises on
the definition of discriminant and separability of classes.
Fisher's linear discriminant analysis or LDA is used to find an optimal linear projection W that captures the major differences between classes; in other words, one that maximizes the separability between the two classes in a two-class problem setting. In the projected discriminative subspace, data are then clustered [10]. LDA searches for the projection axes on which the input vectors of two different classes are far away from each other while, at the same time, input vectors of the same class are close to each other [10]. Among the infinitely many such projection axes or lines, the line is chosen that maximally separates the projected data [11]. The solution to this problem is obtained by solving the generalized eigensystem of the within-class and between-class scatter matrices.
LDA for binary classification requires a supervised setting. A collection of n labeled training data

{(x_i, y_i)}_{i=1}^{n}   (16)

is given, where the data vectors x_i ∈ R^m are from two classes C_1 and C_2 and the labels y_i ∈ {+1, −1} indicate class membership in such a way that

y_i = +1 if x_i ∈ C_1,  y_i = −1 if x_i ∈ C_2.

The task requires us to determine a classifier y(x) that assigns or predicts a class label for an unknown, unseen or new data point [11].
One way to view a linear classification model is in terms of dimensionality reduction. Consider first the case of two classes, and suppose we take an m-dimensional input vector x and project it down to one dimension using

y = w^T x   (17)

We then place a threshold on y and classify y ≥ −w_0 as class C_1, and otherwise as class C_2. In general, the projection onto one dimension leads to a considerable loss of information, and classes that are well separated in the original m-dimensional space may become strongly overlapping in one dimension.
The simplest measure of separation of the classes, when projected onto w, is the separation of the projected class means. The problem boils down to choosing w so as to maximize

m_2 − m_1 = w^T (m_2 − m_1)   (18)

where

m_k = w^T m_k   (19)

is the mean of the projected images from class C_k. The projection formula shown in (17) transforms the set of labeled data points in x into a labeled set in the one-dimensional space y. The within-class variance of the transformed data from class C_k is given by

s_k² = Σ_{n ∈ C_k} (y_n − m_k)²   (20)

where y_n = w^T x_n. From [11] we can derive the total within-class variance for the whole dataset to be simply s_1² + s_2², as shown below:

s_k² = Σ_{n ∈ C_k} (y_n − m_k)²
     = Σ_{n ∈ C_k} (w^T x_n − w^T m_k)²
     = Σ_{n ∈ C_k} w^T (x_n − m_k)(x_n − m_k)^T w
     = w^T S_k w   (21)
Now, using equation (21), and in the process of yielding the Rayleigh coefficient, we rewrite the within-class scatter matrix as

S_W = S_1 + S_2   (22)

s_1² + s_2² = w^T S_1 w + w^T S_2 w = w^T S_W w   (23)

Following [11], we want the distance between the projected means m_1 and m_2 to be as large as possible:

|m_1 − m_2|² = |w^T m_1 − w^T m_2|²   (24)
where the projected means m_1 and m_2 are as shown in equation (25):

m_1 = (1/N_1) Σ_{x ∈ C_1} w^T x,   m_2 = (1/N_2) Σ_{x ∈ C_2} w^T x   (25)

The equation in (24) can be written as

|m_1 − m_2|² = |w^T m_1 − w^T m_2|²
             = w^T (m_1 − m_2)(m_1 − m_2)^T w
             = w^T S_B w   (26)
Following [11], Fisher's linear discriminant is defined as the linear function w^T x that maximizes the following objective/distortion function J(w):

J(w) = (m_1 − m_2)² / (s_1² + s_2²)   (27)

Substituting (23) and (24) into (27), we need to find an optimal w* that maximizes (27); it must satisfy

S_B w = λ S_W w   (28)

From [11], the optimal projection is

w* = S_W^{−1} (m_1 − m_2)   (29)

The intuition behind equation (29) is that we project the data onto the one dimension that maximizes the ratio of between-class scatter to total within-class scatter.
The first task of this project measures the performance of linear discriminant analysis or LDA for the case of binary classification. The second task of this project uses tensors [12] of rank 2 as training data (instead of vectorizing the training images) for the same task of binary classification.
5.1. Applying Fisher's linear discriminant analysis: Experimental setting
A collection of 2556 training image patches is given, of which 2442 are patches of background, tagged with class label C_2, whereas the remaining 124 are patches containing cars, tagged with class label C_1. Each of these ground-truth image patches is of size 81 × 31. The 2D visualization of the computed projection vector w is shown in figure 11. In this figure, which is obtained from least-squares regression training, no car-like structural traits are visible upon visualizing w.

Fig. 11. 2D visualization of the projection vector w = (X^T X)^{−1} X^T y
5.2. Applying the classifier on test data
We used k = 1, 2, ..., 10 different classifiers of the form

y(x) = +1 if w^T x ≥ θ_k, and −1 otherwise,

where θ_k ∈ [µ_1, µ_2], with µ_1 and µ_2 the projected means. Before applying the best performing classifier on the test set of 170 images, we plotted the precision-recall curve on the training set. Precision and recall often show an inverse relationship; that is, increasing one comes at the cost of reducing the other. Applying the best performing classifier (among the 10 classifiers), figure 12 shows the results on an image with a single target (car).
6. THEORETICAL BACKGROUND FOR TENSOR
LINEAR DISCRIMINANT ANALYSIS
In the approach discussed in section 5, training image patches x ∈ R^{m×n} of size m × n are vectorized into vectors of length mn; instead, treating images for what they are, we now use tensors [1]. In the procedure proposed in [1], we compute the projection tensor by applying tensor contractions to the given set of training image patches and use alternating least squares.

Fig. 12. The figures in (a) and (b) show a car bounded by a rectangle upon applying the classifier with a threshold.

Fig. 13. W = Σ_{r=1}^{ρ} u_r v_r^T
A tensor, also known as an n-way array, multidimensional matrix or n-mode matrix, is a higher-order generalization of a vector (a first-order tensor) and a matrix (a second-order tensor). In this short description of second-order tensors X ∈ R^{m×n}, we use calligraphic upper-case letters X to represent grey-value images of size m × n. A training set {(X^α, y^α)}_{α=1,2,...,N} of N image patches is given, where X^α ∈ R^{m×n}. Tensor discriminant analysis requires a projection tensor W which solves the regression problem [1]

W = arg min_{W*} Σ_α (y^α − W* X^α)²   (30)
6.1. Applying tensor discriminant analysis: Experimental setting
As described in section 5.1, we use the same image collection for training and test data. We determine a projector W where

W = Σ_{r=1}^{ρ} u_r v_r^T   (31)

After a random initialization of u, we compute a set of vectors x^α from the tensor contractions X^α_{kl} u_k, insert them into a design matrix X and use the equation w = (X^T X)^{−1} X^T y to compute v. Now, having v, we compute u in the same manner, and iterate until the error converges, ‖u_r(t) − u_r(t − 1)‖ ≤ ε. Following the algorithm in [1] for computing a second-order tensor discriminant classifier W, we compute the ρ-term solution of the second-order projection tensor as W = Σ_r u_r ⊗ v_r.
Visualizing the ρ-term solutions of the second-order projection tensors, we observe (as shown in figure 13) car-like structural traits, which was not the case for conventional linear discriminant analysis [1]. The figures for (a) ρ = 1, (b) ρ = 3 and (c) ρ = 9 show the respective projection tensors.
The multilinear classifier maps the training samples onto the best discriminant direction; the results of the implementation proposed in [1] are shown in figures 14a, 14b and 14c. In figure 14c, an overlap is observed.

Fig. 14. Projections produced by the tensor predictor

In our implementation, the training time of this approach noticeably outperforms that of conventional LDA (exact running times are not reported). A further advantage is that this approach addresses the problem of singular matrices (which often arises when the dimensionality of the input space is greater than the number of samples).
7. CONCLUSION
In discriminant analysis, linear discriminant analysis computes a transformation that maximizes the between-class scatter while minimizing the within-class scatter. Such a transformation must retain the class separability while reducing the variation due to sources other than illumination. While conventional LDA takes a huge running time to train the projector, the tensor-based approach outperforms the former in this aspect. Also, to alleviate the small sample size problem, we can perform two projections: PCA can be applied to the data set to reduce its dimensionality, and LDA is then applied to further reduce it. However, the major advantage of tensor discriminant classifiers is that the rank-deficiency constraint considerably reduces the number of free parameters, which makes the multi-linear classifiers faster and preferable.
In the case of linear methods for dimensionality reduction and unsupervised techniques such as PCA, there are limitations on the kinds of feature dimensions that can be extracted. For many generalized object detection problems, the features that matter are not easy to express. It becomes really difficult to select such features when the algorithm needs to tell apart cats from faces from cars. We need to extract information-rich dimensions from our input images.
Autoencoders overcome these limitations by exploiting the inherent non-linearity of neural networks. An autoencoder [13] falls under the category of unsupervised learning and utilizes a neural network to produce a low-dimensional representation of a high-dimensional input. It consists of two major parts, the encoder and the decoder networks; the former is used during both training and testing, the latter only during training.
8. REFERENCES
[1] C. Bauckhage and T. Kaster, "Benefits of separable, multilinear discriminant classification," in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, Aug 2006, vol. 4, pp. 959–959.
[2] Prof. Christian Bauckhage, "Image processing, retrieval, and analysis (II)," [online], 2015, https://sites.google.com/site/bitimageprocessing/home/lecture-notes-ii.
[3] P. Jonathon Phillips, W. Todd Scruggs, Alice J. O'Toole, Patrick J. Flynn, Kevin W. Bowyer, Cathy L. Schott, and Matthew Sharpe, "FRVT 2006 and ICE 2006 large-scale results," 2007.
[4] Javier Ruiz-del-Solar and Julio Quinteros, "Illumination compensation and normalization in eigenspace-based face recognition: A comparative study of different pre-processing approaches," Pattern Recognition Letters, vol. 29, no. 14, pp. 1966–1979, 2008.
[5] Hong Liu, Wen Gao, Jun Miao, Debin Zhao, Gang Deng, and Jintao Li, "Illumination compensation and feedback of illumination feature in face detection," in Info-tech and Info-net, 2001. Proceedings. ICII 2001 - Beijing. 2001 International Conferences on, 2001, vol. 3, pp. 444–449.
[6] Amnon Shashua and Tammy Riklin-Raviv, "The quotient image: Class-based re-rendering and recognition with varying illuminations," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, pp. 129–139, Feb. 2001.
[7] Prof. Christian Bauckhage, "Image processing, retrieval, and analysis (II)," [online], 2015, https://sites.google.com/site/bitimageprocessing/home/lecture-notes-ii.
[8] Matthew Turk, Alex P. Pentland, et al., "Face recognition using eigenfaces," in Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on. IEEE, 1991, pp. 586–591.
[9] K. Karhunen, Ueber lineare Methoden in der Wahrscheinlichkeitsrechnung, Annales Academiae Scientiarum Fennicae, Series A.1, Mathematica-Physica, 1947.
[10] Ying Wu, "Principal component analysis and linear discriminant analysis," lecture notes, Electrical Engineering and Computer Science, Northwestern University, Evanston, 2014.
[11] Prof. Christian Bauckhage, "Image processing, retrieval, and analysis (II)," [online], 2015, https://sites.google.com/site/bitimageprocessing/home/lecture-notes-ii.
[12] Prof. Christian Bauckhage, "Image processing, retrieval, and analysis (II)," [online], 2015, https://sites.google.com/site/bitimageprocessing/home/lecture-notes-ii.
[13] Yoshua Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.