This document provides an overview of bag-of-words models for image classification. It discusses how bag-of-words models originated from texture recognition and document classification. Images are represented as histograms of visual word frequencies. A visual vocabulary is learned by clustering local image features, and each cluster center becomes a visual word. Both discriminative methods like support vector machines and generative methods like Naive Bayes are used to classify images based on their bag-of-words representations.
4. Origin 1: Texture recognition
• Texture is characterized by the repetition of basic elements, or textons
• For stochastic textures, it is the identity of the textons, not their spatial arrangement, that matters
Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003
6. Origin 2: Bag-of-words models
• Orderless document representation: frequencies of words from a dictionary (Salton & McGill, 1983)
7–9. Origin 2: Bag-of-words models
• Orderless document representation: frequencies of words from a dictionary (Salton & McGill, 1983)
US Presidential Speeches Tag Cloud: http://chir.ag/phernalia/preztags/
11–13. Bags of features for image classification
1. Extract features
2. Learn “visual vocabulary”
3. Quantize features using visual vocabulary
4. Represent images by frequencies of “visual words”
(A minimal sketch of steps 2–4 follows below.)
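As a rough illustration of steps 2–4, here is a minimal Python sketch. The names and the choice of scikit-learn's KMeans are illustrative, and it assumes local descriptors (e.g., 128-D SIFT-like vectors) have already been extracted per image in step 1:

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_vocabulary(train_descriptors, k=200, seed=0):
    """Step 2: cluster all training descriptors; cluster centers = visual words."""
    stacked = np.vstack(train_descriptors)          # (total_features, dim)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(stacked)

def bow_histogram(descriptors, vocabulary):
    """Steps 3-4: quantize each descriptor to its nearest visual word,
    then count visual-word frequencies."""
    words = vocabulary.predict(descriptors)         # index of nearest center
    hist = np.bincount(words, minlength=vocabulary.n_clusters)
    return hist / hist.sum()                        # normalized frequencies
```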
20–21. Learning the visual vocabulary
• Clustering: group local features; the cluster centers form the visual vocabulary
[figure: features clustered in descriptor space; centers labeled as the visual vocabulary]
Slide credit: Josef Sivic
22. K-means clustering
• Want to minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k:
$$D(X, M) = \sum_{k=1}^{K} \sum_{x_i \in \text{cluster } k} \|x_i - m_k\|^2$$
• Algorithm (see the NumPy sketch below):
  • Randomly initialize K cluster centers
  • Iterate until convergence:
    – Assign each data point to the nearest center
    – Recompute each cluster center as the mean of all points assigned to it
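The algorithm above translates almost line for line into NumPy; this is an illustrative sketch, not an optimized implementation:

```python
import numpy as np

def kmeans(X, K, iters=100, seed=0):
    """Minimize D(X, M) = sum_k sum_{x_i in cluster k} ||x_i - m_k||^2."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]      # random init
    for _ in range(iters):
        # Assign each data point to the nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned points
        # (keeping the old center if a cluster went empty).
        new_centers = np.array(
            [X[labels == k].mean(axis=0) if (labels == k).any() else centers[k]
             for k in range(K)])
        if np.allclose(new_centers, centers):               # converged
            break
        centers = new_centers
    return centers, labels
```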
23. From clustering to vector quantization
• Clustering is a common method for learning a visual vocabulary or codebook
  – Unsupervised learning process
  – Each cluster center produced by k-means becomes a codevector
  – The codebook can be learned on a separate training set
  – Provided the training set is sufficiently representative, the codebook will be “universal”
• The codebook is used for quantizing features
  – A vector quantizer takes a feature vector and maps it to the index of the nearest codevector in the codebook
  – Codebook = visual vocabulary
  – Codevector = visual word
26. Visual vocabularies: Issues
• How to choose vocabulary size?
  – Too small: visual words not representative of all patches
  – Too large: quantization artifacts, overfitting
• Generative or discriminative learning?
• Computational efficiency
  – Vocabulary trees (Nister & Stewenius, 2006)
28, 30. Image classification
• Given the bag-of-features representations of images from different classes, how do we learn a model for distinguishing them?
31. Discriminative methods
• Learn a decision rule (classifier) assigning bag-of-features representations of images to different classes
[figure: decision boundary separating zebra from non-zebra images]
32. Classification
• Assign input vector to one of two or more classes
• Any decision rule divides input space into decision regions separated by decision boundaries
33. Nearest Neighbor Classifier
• Assign the label of the nearest training data point to each test data point
[figure: Voronoi partitioning of feature space for two-category 2D and 3D data, from Duda et al.]
Source: D. Lowe
34. K-Nearest Neighbors
• For a new point, find the k closest points from the training data
• Labels of the k points “vote” to classify (example shown: k = 5; sketch below)
• Works well provided there is lots of data and the distance function is good
Source: D. Lowe
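A minimal NumPy sketch of the voting rule described above, assuming Euclidean distance and NumPy-array inputs (names illustrative):

```python
import numpy as np
from collections import Counter

def knn_predict(x, train_X, train_y, k=5):
    """Labels of the k nearest training points vote on the test point x."""
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest points
    return Counter(train_y[nearest]).most_common(1)[0][0]
```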
35. Functions for comparing histograms
• L1 distance:
$$D(h_1, h_2) = \sum_{i=1}^{N} |h_1(i) - h_2(i)|$$
• χ² distance:
$$D(h_1, h_2) = \sum_{i=1}^{N} \frac{(h_1(i) - h_2(i))^2}{h_1(i) + h_2(i)}$$
• Quadratic distance (cross-bin):
$$D(h_1, h_2) = \sum_{i,j} A_{ij}\,(h_1(i) - h_2(j))^2$$
(NumPy versions of all three appear below.)
Jan Puzicha, Yossi Rubner, Carlo Tomasi, Joachim M. Buhmann: Empirical Evaluation of Dissimilarity Measures for Color and Texture. ICCV 1999
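These three dissimilarities are one-liners in NumPy; the epsilon guard in the χ² distance is an implementation detail added here to avoid division by zero on empty bins:

```python
import numpy as np

def l1_distance(h1, h2):
    """Sum of absolute bin-wise differences."""
    return np.abs(h1 - h2).sum()

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance; eps guards bins where h1(i) + h2(i) = 0."""
    return (((h1 - h2) ** 2) / (h1 + h2 + eps)).sum()

def quadratic_distance(h1, h2, A):
    """Cross-bin distance: A[i, j] relates bin i of h1 to bin j of h2."""
    diff = h1[:, None] - h2[None, :]              # (N, N) matrix of h1(i) - h2(j)
    return (A * diff ** 2).sum()
```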
36. Earth Mover’s Distance
• Each image is represented by a signature S consisting of a set of centers {m_i} and weights {w_i}
• Centers can be codewords from a universal vocabulary, clusters of features in the image, or individual features (in which case quantization is not required)
• Earth Mover’s Distance has the form
$$EMD(S_1, S_2) = \frac{\sum_{i,j} f_{ij}\, d(m_i, m_j)}{\sum_{i,j} f_{ij}}$$
where the flows f_ij are given by the solution of a transportation problem (set up explicitly in the sketch below)
Y. Rubner, C. Tomasi, and L. Guibas: A Metric for Distributions with Applications to Image Databases. ICCV 1998
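For illustration only, the transportation problem behind EMD can be posed as a linear program and handed to scipy.optimize.linprog. This brute-force sketch works for small signatures but is not how dedicated EMD solvers are implemented:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def emd(centers1, weights1, centers2, weights2):
    """EMD between signatures (centers, weights), both weights as NumPy arrays.
    Variables are the flows f_ij, flattened row-major into a vector."""
    d = cdist(centers1, centers2)                 # ground distances d(m_i, m_j)
    n1, n2 = d.shape
    # Row constraints: sum_j f_ij <= w_i; column constraints: sum_i f_ij <= w'_j.
    A_ub = np.zeros((n1 + n2, n1 * n2))
    for i in range(n1):
        A_ub[i, i * n2:(i + 1) * n2] = 1
    for j in range(n2):
        A_ub[n1 + j, j::n2] = 1
    b_ub = np.concatenate([weights1, weights2])
    # Total flow must equal the smaller of the two total weights.
    A_eq = np.ones((1, n1 * n2))
    b_eq = [min(weights1.sum(), weights2.sum())]
    res = linprog(d.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None))
    flows = res.x
    return (flows * d.ravel()).sum() / flows.sum()
```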
37. Linear classifiers
• Find a linear function (hyperplane) to separate positive and negative examples:
  positive: x_i with w · x_i + b ≥ 0
  negative: x_i with w · x_i + b < 0
• Which hyperplane is best?
Slide: S. Lazebnik
38. Support vector machines
• Find the hyperplane that maximizes the margin between the positive and negative examples
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
39. Support vector machines
• Find the hyperplane that maximizes the margin between the positive and negative examples:
  positive (y_i = 1): w · x_i + b ≥ 1
  negative (y_i = −1): w · x_i + b ≤ −1
• Distance between a point and the hyperplane: |w · x_i + b| / ||w||
• For support vectors, |w · x_i + b| = 1
• Therefore, the margin is 2 / ||w||
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
40. Finding the maximum margin hyperplane
1. Maximize margin 2/||w||
2. Correctly classify all training data:
   positive (y_i = 1): w · x_i + b ≥ 1
   negative (y_i = −1): w · x_i + b ≤ −1
• Quadratic optimization problem: minimize $\frac{1}{2} w^T w$ subject to $y_i (w \cdot x_i + b) \geq 1$
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
41. Finding the maximum margin hyperplane
• Solution: $w = \sum_i \alpha_i y_i x_i$, a weighted sum of the support vectors with learned weights α_i
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
42. Finding the maximum margin hyperplane
• Solution: $w = \sum_i \alpha_i y_i x_i$, with b = y_i − w · x_i for any support vector
• Classification function (decision boundary): $w \cdot x + b = \sum_i \alpha_i y_i \, x_i \cdot x + b$
• Notice that it relies on an inner product between the test point x and the support vectors x_i
• Solving the optimization problem also involves computing the inner products x_i · x_j between all pairs of training points
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
43. Nonlinear SVMs
• Datasets that are linearly separable work out great
• But what if the dataset is just too hard?
• We can map it to a higher-dimensional space
[figure: 1-D data that is not linearly separable becomes separable after mapping x → (x, x²)]
Slide credit: Andrew Moore
44. Nonlinear SVMs
• General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x)
Slide credit: Andrew Moore
45. Nonlinear SVMs
• The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that K(x_i, x_j) = φ(x_i) · φ(x_j)
• (to be valid, the kernel function must satisfy Mercer’s condition)
• This gives a nonlinear decision boundary in the original feature space:
$$\sum_i \alpha_i y_i K(x_i, x) + b$$
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
46. Kernels for bags of features
• Histogram intersection kernel:
$$I(h_1, h_2) = \sum_{i=1}^{N} \min(h_1(i), h_2(i))$$
• Generalized Gaussian kernel:
$$K(h_1, h_2) = \exp\!\left(-\frac{1}{A} D(h_1, h_2)^2\right)$$
• D can be Euclidean distance, χ² distance, Earth Mover’s Distance, etc. (both kernels are written out below)
J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study, IJCV 2007
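Both kernels are straightforward to write down directly; in this sketch the distance function D is passed in as a parameter:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """I(h1, h2) = sum_i min(h1(i), h2(i))."""
    return np.minimum(h1, h2).sum()

def generalized_gaussian_kernel(h1, h2, dist, A):
    """K(h1, h2) = exp(-(1/A) * D(h1, h2)^2);
    dist can be Euclidean, chi-squared, EMD, etc."""
    return np.exp(-dist(h1, h2) ** 2 / A)
```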
47. Summary: SVMs for image classification
1. Pick an image representation (in our case, bag of features)
2. Pick a kernel function for that representation
3. Compute the matrix of kernel values between every pair of training examples
4. Feed the kernel matrix into your favorite SVM solver to obtain support vectors and weights
5. At test time: compute kernel values for your test example and each support vector, and combine them with the learned weights to get the value of the decision function
(A code sketch of this recipe follows below.)
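A hedged sketch of steps 3–5 using scikit-learn's SVC with a precomputed kernel matrix; function and variable names are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

def train_bow_svm(train_hists, labels, kernel_fn):
    """Steps 3-4: build the kernel matrix over all training pairs, fit an SVM."""
    n = len(train_hists)
    K = np.array([[kernel_fn(train_hists[i], train_hists[j])
                   for j in range(n)] for i in range(n)])
    return SVC(kernel="precomputed").fit(K, labels)

def predict_bow_svm(clf, test_hist, train_hists, kernel_fn):
    """Step 5: kernel values between the test example and every training example."""
    k_row = np.array([[kernel_fn(test_hist, h) for h in train_hists]])
    return clf.predict(k_row)[0]
```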
48. What about multi-class SVMs?
• Unfortunately, there is no “definitive” multi-class SVM formulation
• In practice, we have to obtain a multi-class SVM by combining multiple two-class SVMs
• One vs. others
  – Training: learn an SVM for each class vs. the others
  – Testing: apply each SVM to the test example and assign it the class of the SVM that returns the highest decision value
• One vs. one
  – Training: learn an SVM for each pair of classes
  – Testing: each learned SVM “votes” for a class to assign to the test example
(Both strategies are available off the shelf; see the snippet below.)
Slide: S. Lazebnik
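For reference, scikit-learn wraps both combination strategies as meta-classifiers; a minimal usage sketch (the linear kernel here is an arbitrary choice):

```python
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import SVC

# One vs. others: one SVM per class; the highest decision value wins.
ovr = OneVsRestClassifier(SVC(kernel="linear"))

# One vs. one: one SVM per pair of classes; each SVM votes for a class.
ovo = OneVsOneClassifier(SVC(kernel="linear"))

# Both follow the usual fit/predict interface:
#   ovr.fit(X_train, y_train); ovr.predict(X_test)
```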
49. SVMs: Pros and cons
• Pros
  – Many publicly available SVM packages: http://www.kernel-machines.org/software
  – Kernel-based framework is very powerful, flexible
  – SVMs work very well in practice, even with very small training sample sizes
• Cons
  – No “direct” multi-class SVM; must combine two-class SVMs
  – Computation, memory
    • During training, must compute the matrix of kernel values for every pair of examples
    • Learning can take a very long time for large-scale problems
Slide: S. Lazebnik
50. Summary: Discriminative methods
• Nearest-neighbor and k-nearest-neighbor classifiers
  – L1 distance, χ² distance, quadratic distance, Earth Mover’s Distance
• Support vector machines
  – Linear classifiers
  – Margin maximization
  – The kernel trick
  – Kernel functions: histogram intersection, generalized Gaussian, pyramid match
  – Multi-class
• Of course, there are many other classifiers out there
  – Neural networks, boosting, decision trees, …
52. Generative methods
• We will cover two models, both inspired by text document analysis:
  – Naïve Bayes
  – Probabilistic Latent Semantic Analysis
53. The Naïve Bayes model (Csurka et al. 2004)
• Assume that each feature is conditionally independent given the class:
$$p(f_1, \ldots, f_N \mid c) = \prod_{i=1}^{N} p(f_i \mid c)$$
f_i: ith feature in the image; N: number of features in the image
54. The Naïve Bayes model (Csurka et al. 2004)
• Assume that each feature is conditionally independent given the class:
$$p(f_1, \ldots, f_N \mid c) = \prod_{i=1}^{N} p(f_i \mid c) = \prod_{j=1}^{M} p(w_j \mid c)^{n(w_j)}$$
f_i: ith feature in the image; N: number of features in the image
w_j: jth visual word in the vocabulary; M: size of visual vocabulary
n(w_j): number of features of type w_j in the image
55. The Naïve Bayes model (Csurka et al. 2004)
• Assume that each feature is conditionally independent given the class:
$$p(f_1, \ldots, f_N \mid c) = \prod_{i=1}^{N} p(f_i \mid c) = \prod_{j=1}^{M} p(w_j \mid c)^{n(w_j)}$$
$$p(w_j \mid c) = \frac{\text{no. of features of type } w_j \text{ in training images of class } c}{\text{total no. of features in training images of class } c}$$
56. The Naïve Bayes model (Csurka et al. 2004)
• Assume that each feature is conditionally independent given the class:
$$p(f_1, \ldots, f_N \mid c) = \prod_{i=1}^{N} p(f_i \mid c) = \prod_{j=1}^{M} p(w_j \mid c)^{n(w_j)}$$
$$p(w_j \mid c) = \frac{\text{no. of features of type } w_j \text{ in training images of class } c \;+\; 1}{\text{total no. of features in training images of class } c \;+\; M}$$
(Laplace smoothing to avoid zero counts; a code sketch follows below)
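A minimal NumPy sketch of training and classifying with this model. Working in log probabilities is an implementation choice for numerical stability, not something stated on the slide:

```python
import numpy as np

def train_nb(bow_counts, labels, num_classes):
    """Estimate log p(w_j | c) with Laplace smoothing.
    bow_counts: (n_images, M) matrix of visual-word counts n(w_j)."""
    M = bow_counts.shape[1]
    log_pw = np.zeros((num_classes, M))
    for c in range(num_classes):
        counts = bow_counts[labels == c].sum(axis=0)   # per-word totals in class c
        log_pw[c] = np.log((counts + 1) / (counts.sum() + M))  # Laplace smoothing
    return log_pw

def classify_nb(bow_count, log_pw, log_prior):
    """argmax_c [ log p(c) + sum_j n(w_j) * log p(w_j | c) ]."""
    return np.argmax(log_prior + bow_count @ log_pw.T)
```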
60. Probabilistic Latent Semantic Analysis
• Unsupervised technique
• Two-level generative model: a document is a mixture of topics, and each topic has its own characteristic word distribution
[graphical model: document d → topic z → word w, with P(z|d) and P(w|z)]
T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999
61. Probabilistic Latent Semantic Analysis
• Unsupervised technique
• Two-level generative model: a document is a mixture of topics, and each topic has its own characteristic word distribution:
$$p(w_i \mid d_j) = \sum_{k=1}^{K} p(w_i \mid z_k)\, p(z_k \mid d_j)$$
T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999
62. The pLSA model
$$p(w_i \mid d_j) = \sum_{k=1}^{K} p(w_i \mid z_k)\, p(z_k \mid d_j)$$
p(w_i | d_j): probability of word i in document j (known)
p(w_i | z_k): probability of word i given topic k (unknown)
p(z_k | d_j): probability of topic k given document j (unknown)
63. The pLSA model
$$p(w_i \mid d_j) = \sum_{k=1}^{K} p(w_i \mid z_k)\, p(z_k \mid d_j)$$
• In matrix form: the observed codeword distributions p(w_i|d_j) (words × documents, M×N) factor into the codeword distributions per topic/class p(w_i|z_k) (words × topics, M×K) times the class distributions per image p(z_k|d_j) (topics × documents, K×N)
64. Learning pLSA parameters
• Maximize the likelihood of the data:
$$L = \prod_{i=1}^{M} \prod_{j=1}^{N} p(w_i \mid d_j)^{n(w_i, d_j)}$$
where n(w_i, d_j) is the observed count of word i in document j, M is the number of codewords, and N is the number of images. (An EM sketch follows below.)
Slide credit: Josef Sivic
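A compact EM sketch for pLSA, assuming the co-occurrence counts n(w_i, d_j) are available as a dense M×N matrix; dense arrays keep the sketch short, whereas real corpora would need sparse updates:

```python
import numpy as np

def plsa(n_wd, K, iters=100, seed=0):
    """EM for pLSA. n_wd: (M, N) counts of word i in document j.
    Returns p(w|z) as an (M, K) array and p(z|d) as a (K, N) array."""
    rng = np.random.default_rng(seed)
    M, N = n_wd.shape
    p_wz = rng.random((M, K)); p_wz /= p_wz.sum(axis=0, keepdims=True)
    p_zd = rng.random((K, N)); p_zd /= p_zd.sum(axis=0, keepdims=True)
    for _ in range(iters):
        # E-step: posterior p(z | w, d) for every (word, document) pair.
        joint = p_wz[:, None, :] * p_zd.T[None, :, :]        # (M, N, K)
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate p(w | z) and p(z | d) from expected counts.
        expected = n_wd[:, :, None] * joint                  # (M, N, K)
        p_wz = expected.sum(axis=1)                          # (M, K)
        p_wz /= p_wz.sum(axis=0, keepdims=True)
        p_zd = expected.sum(axis=0).T                        # (K, N)
        p_zd /= p_zd.sum(axis=0, keepdims=True)
    return p_wz, p_zd
```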
66. Inference
• Finding the most likely topic (class) for an image:
$$z^* = \arg\max_z p(z \mid d)$$
• Finding the most likely topic (class) for a visual word in a given image:
$$z^* = \arg\max_z p(z \mid w, d) = \arg\max_z \frac{p(w \mid z)\, p(z \mid d)}{\sum_{z'} p(w \mid z')\, p(z' \mid d)}$$
67. Topic discovery in images
J. Sivic, B. Russell, A. Efros, A. Zisserman, B. Freeman, Discovering Objects and their Location in Images, ICCV 2005
68–69. Application of pLSA: Action recognition
• Space-time interest points
Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, IJCV 2008
70. pLSA model
$$p(w_i \mid d_j) = \sum_{k=1}^{K} p(w_i \mid z_k)\, p(z_k \mid d_j)$$
  – w_i = spatial-temporal word
  – d_j = video
  – n(w_i, d_j) = co-occurrence table (# of occurrences of word w_i in video d_j)
  – z = topic, corresponding to an action
p(w_i | d_j): probability of word i in video j (known)
p(w_i | z_k): probability of word i given topic k (unknown)
p(z_k | d_j): probability of topic k given video j (unknown)
74. Summary: Generative models
• Naïve Bayes
  – Unigram models in document analysis
  – Assumes conditional independence of words given the class
  – Parameter estimation: frequency counting
• Probabilistic Latent Semantic Analysis
  – Unsupervised technique
  – Each document is a mixture of topics (image is a mixture of classes)
  – Can be thought of as matrix decomposition
  – Parameter estimation: Expectation-Maximization
75. Adding spatial information
• Computing bags of features on sub-windows of the whole image
• Using codebooks to vote for object position
• Generative part-based models
76–78. Spatial pyramid representation
• Extension of a bag of features
• Locally orderless representation at several levels of resolution (level 0, level 1, level 2; sketch below)
Lazebnik, Schmid & Ponce (CVPR 2006); Slide: S. Lazebnik
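A simple sketch of the pyramid construction. It omits the per-level weighting used by Lazebnik et al. and assumes keypoint coordinates normalized to [0, 1); names are illustrative:

```python
import numpy as np

def spatial_pyramid(keypoints, words, K, levels=2):
    """Concatenate visual-word histograms over a 1x1, 2x2, 4x4, ... grid.
    keypoints: (n, 2) feature positions in [0, 1); words: (n,) word indices."""
    hists = []
    for level in range(levels + 1):
        cells = 2 ** level                                  # grid is cells x cells
        cell_idx = (keypoints * cells).astype(int).clip(0, cells - 1)
        for cx in range(cells):
            for cy in range(cells):
                mask = (cell_idx[:, 0] == cx) & (cell_idx[:, 1] == cy)
                hists.append(np.bincount(words[mask], minlength=K))
    return np.concatenate(hists)
```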