Pattern learning and recognition on
statistical manifolds:
An information-geometric review
Frank Nielsen
Frank.Nielsen@acm.org
www.informationgeometry.org
Sony Computer Science Laboratories, Inc.

July 2013, SIMBAD, York, UK.

© 2013 Frank Nielsen, Sony Computer Science Laboratories, Inc.
Praise for Computational Information Geometry
◮ What is Information? = Essence of data (datum = "thing")
  (make it tangible → e.g., parameters of generative models)
◮ Can we do intrinsic computing?
  (unbiased by any particular "data representation" → same results after recoding data)
◮ Geometry → science of invariance
  (the mother of Science: compass & ruler, Descartes' analytic = coordinate/Cartesian geometry, imaginaries, ...)

The open-ended poetic mathematics...

Rationale for Computational Information Geometry
◮ Information is... never void! → lower bounds:
  ◮ Cramér-Rao lower bound and Fisher information (estimation)
  ◮ Bayes error and Chernoff information (classification)
  ◮ coding and Shannon entropy (communication)
  ◮ program and Kolmogorov complexity (compression; unfortunately not computable!)
◮ Geometry:
  ◮ language (point, line, ball, dimension, orthogonal, projection, geodesic, immersion, etc.)
  ◮ power of characterization (e.g., the intersection of two pseudo-segments not admitting a closed-form expression)
◮ Computing: information computing. Seeking mathematical convenience and mathematical tricks (e.g., kernels).
  Do you know the "space of functions"?!? (Infinity and 1-to-1 mappings, language vs. continuum)
This talk: Information-geometric Pattern Recognition
◮ Focus on statistical pattern recognition ↔ geometric computing.
◮ Consider probability distribution families (parameter spaces) and statistical manifolds; go beyond the traditional Riemannian and Hilbert sphere representations (p → √p).
◮ Describe dually flat spaces induced by convex functions:
  ◮ Legendre transformation → dual and mixed coordinate systems,
  ◮ dual similarities/divergences,
  ◮ computing-friendly dual affine geodesics.

SIMBAD 2013
Information-geometric Pattern Recognition

By departing from vector-space representations one is confronted with the challenging problem of dealing with (dis)similarities that do not necessarily possess the Euclidean behavior or do not even obey the requirements of a metric. The lack of the Euclidean and/or metric properties undermines the very foundations of traditional pattern recognition theories and algorithms, and poses totally new theoretical/computational questions and challenges.

Thank you to Prof. Edwin Hancock (U. York, UK) and Prof.
Marcello Pelillo (U. Venice, IT)
Statistical Pattern Recognition
Models data with distributions (generative) or stochastic processes:
◮ parametric (Gaussians, histograms) [model size ∼ D],
◮ semi-parametric (mixtures) [model size ∼ kD],
◮ non-parametric (kernel density estimators [model size ∼ n], Dirichlet/Gaussian processes [model size ∼ D log n]).

Data = pattern (→ information) + noise (independent)

Statistical machine learning hot topic: deep learning (restricted Boltzmann machines)

Example I: Information-geometric PR
Pattern = Gaussian mixture models (universal class).
Statistical (dis)similarity/distance: total Bregman divergence (tBD, tKL).
Invariance: ..., p_i(x) ∼ N(µ_i, Σ_i), y = A(x) = Lx + t ⇒ y_i ∼ N(Lµ_i + t, LΣ_i L⊤) and D(X_1 : X_2) = D(Y_1 : Y_2)
(L: any invertible linear transformation, t: a translation).

Shape Retrieval using Hierarchical Total Bregman Soft Clustering [11],
IEEE PAMI, 2012.

Example II: Information-geometric PR
DTI: diffusion ellipsoids, tensor interpolation.
Pattern = zero-centered "Gaussians".
Statistical (dis)similarity/distance: total Bregman divergence (tBD, tKL).
Invariance: ..., D(A⊤PA : A⊤QA) = D(P : Q) for A ∈ SL(d), a volume/orientation-preserving matrix.

[Figure: 3D rat corpus callosum.]
Total Bregman Divergence and its Applications to DTI Analysis [34], IEEE TMI, 2011.
Statistical mixtures: Generative models of data sets
GMM = feature descriptor for information retrieval (IR) → classification [21], matching, etc.
Increase the dimension using color image patches.
Low-frequency information is encoded into a compact statistical model.
Generative model → statistical image by GMM sampling.

→ A mixture Σ_{i=1}^k w_i N(µ_i, Σ_i) is interpreted as a weighted point set in a parameter space: {(w_i, θ_i = (µ_i, Σ_i))}_{i=1}^k.
Statistical invariance
Riemannian structure (M, g) on {p(x; θ) | θ ∈ Θ ⊂ R^D}:
◮ θ-invariance under non-singular reparameterization:
  ρ(p(x; θ), p(x; θ′)) = ρ(p(x; λ(θ)), p(x; λ(θ′))).
  The normal parameterizations (µ, σ) and (µ, σ²) yield the same distance.
◮ x-invariance under different x-representations:
  Sufficient statistics (Fisher, 1922): Pr(X = x | T(X) = t, θ) = Pr(X = x | T(X) = t).
  All information for θ is contained in T → lossless information data reduction (exponential families).
  Markov kernel = statistical morphism (Chentsov 1972, [6, 7]). A particular Markov kernel is a deterministic mapping T : X → Y with y = T(x), p_y = p_x T⁻¹.

Invariance if and only if g ∝ Fisher information matrix.

f-divergences (1960's)
A statistical non-metric distance between two probability measures:

I_f(p : q) = ∫ f(p(x)/q(x)) q(x) dx

f: a continuous convex function with f(1) = 0, f′(1) = 0, f″(1) = 1.
→ asymmetric (not a metric, except TV), modulo an affine term.
→ can always be symmetrized using s = f + f*, with f*(x) = x f(1/x).
f-divergences include many well-known statistical measures: Kullback-Leibler, α-divergences, Hellinger, chi-squared, total variation (TV), etc.
f-divergences are the only statistical divergences that preserve equivalence w.r.t. sufficient statistic mappings:

I_f(p : q) ≥ I_f(p_M : q_M)

with equality if and only if M = T (monotonicity property).
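To make the definition concrete, here is a minimal numerical sketch (illustrative, not from the talk): it estimates I_f(p : q) by Riemann summation on a grid; the function names and the N(0,1) vs. N(1,4) example are ad hoc.

```python
import numpy as np

# Estimate I_f(p : q) = \int f(p(x)/q(x)) q(x) dx by Riemann summation.
def f_divergence(p, q, f, xs):
    qx = q(xs)
    return np.sum(f(p(xs) / qx) * qx) * (xs[1] - xs[0])

kl_f = lambda u: u * np.log(u)                                   # KL generator f(u) = u log u
p = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)           # N(0, 1)
q = lambda x: np.exp(-((x - 1) ** 2) / 8) / np.sqrt(8 * np.pi)   # N(1, 4)
xs = np.linspace(-15.0, 15.0, 100001)
print(f_divergence(p, q, kl_f, xs))  # ≈ 0.4431 = log 2 + 2/8 - 1/2, the closed-form KL
```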
Information geometry: Dually flat spaces, α-geometry
Statistical invariance is also obtained using (M, g, ∇, ∇*), where ∇ and ∇* are dual affine connections.

The Riemannian structure (M, g) is the particular case ∇ = ∇* = ∇⁽⁰⁾, the Levi-Civita connection: (M, g) = (M, g, ∇⁽⁰⁾, ∇⁽⁰⁾).

Dually flat spaces (α = ±1) are algorithmically friendly:
◮ statistical mixtures of exponential families,
◮ learning & simplifying mixtures (k-MLE),
◮ Bregman Voronoi diagrams & dually ⊥ triangulations.

Goal: algorithmics of Gaussians/histograms w.r.t. the Kullback-Leibler divergence.

Exponential Family Mixture Models (EFMMs)
Generalize Gaussian & Rayleigh MMs to many usual distributions.

m(x) = Σ_{i=1}^k w_i p_F(x; λ_i), with ∀i, w_i > 0 and Σ_{i=1}^k w_i = 1,

p_F(x; θ) = e^{⟨t(x), θ⟩ − F(θ) + k(x)}

F: log-Laplace transform (log-partition, cumulant function):

F(θ) = log ∫_{x∈X} e^{⟨t(x), θ⟩ + k(x)} dx,

Θ = {θ | ∫_{x∈X} e^{⟨t(x), θ⟩ + k(x)} dx < ∞}: the natural parameter space.

◮ d: dimension of the support X.
◮ D: order of the family (= dim Θ). Sufficient statistic: t(x) : R^d → R^D.

Statistical mixtures: Rayleigh MMs [33, 22]
IntraVascular UltraSound (IVUS) imaging.
Rayleigh distribution:

p(x; λ) = (x/λ²) e^{−x²/(2λ²)}, x ∈ R⁺

d = 1 (univariate), D = 1 (order 1)
θ = −1/(2λ²), Θ = (−∞, 0)
F(θ) = −log(−2θ)
t(x) = x², k(x) = log x
(Weibull distribution with shape parameter k = 2)

Coronary plaques: fibrotic tissues, calcified tissues, lipidic tissues.
Rayleigh Mixture Models (RMMs): for segmentation and classification tasks.
Statistical mixtures: Gaussian MMs [8, 22, 9]
Gaussian mixture models (GMMs): model low-frequency content.
A color image is interpreted as a 5D xyRGB point set.
Gaussian distribution p(x; µ, Σ):

p(x; µ, Σ) = 1/((2π)^{d/2} √|Σ|) · e^{−(1/2) D_{Σ⁻¹}(x − µ, x − µ)}

with the squared Mahalanobis distance D_Q(x, y) = (x − y)⊤ Q (x − y).

x ∈ R^d (multivariate), D = d(d+3)/2 (order)
θ = (Σ⁻¹µ, (1/2)Σ⁻¹) = (θ_v, θ_M)
Θ = R^d × S₊₊^d
F(θ) = (1/4) θ_v⊤ θ_M⁻¹ θ_v − (1/2) log |θ_M| + (d/2) log π
t(x) = (x, −x x⊤), k(x) = 0
Sampling from a Gaussian Mixture Model
To sample a variate x from a GMM:
◮ choose a component l according to the weight distribution w_1, ..., w_k,
◮ draw a variate x according to N(µ_l, Σ_l).

→ Sampling is a doubly stochastic process:
◮ throw a biased die with k faces to choose the component: l ∼ Multinomial(w_1, ..., w_k) (the multinomial is also an EF: a normalized histogram),
◮ then draw at random a variate x from the l-th component: x ∼ Normal(µ_l, Σ_l), i.e., x = µ_l + Cz with the Cholesky factorization Σ_l = CC⊤ and z = [z_1 ... z_d]⊤ a standard normal random vector: z_i = √(−2 log U_1) cos(2πU_2).
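A minimal NumPy sketch of this doubly stochastic sampler (illustrative, not from the talk; the library's standard normal generator stands in for the explicit Box-Muller formula above, and names such as sample_gmm are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(weights, mus, Sigmas, n):
    """Doubly stochastic GMM sampling: pick a component, then draw from it."""
    k, d = len(weights), len(mus[0])
    Cs = [np.linalg.cholesky(S) for S in Sigmas]  # Sigma_l = C C^T, once per component
    labels = rng.choice(k, size=n, p=weights)     # biased k-faced die
    z = rng.standard_normal((n, d))               # standard normal vectors
    return np.array([mus[l] + Cs[l] @ z[i] for i, l in enumerate(labels)])

# toy 2D example with two components
w = np.array([0.3, 0.7])
mus = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
Sigmas = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]
x = sample_gmm(w, mus, Sigmas, 1000)
```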

Relative entropy for exponential families
◮ Distance between features (e.g., GMMs):
◮ Kullback-Leibler divergence (cross-entropy minus entropy):

KL(P : Q) = ∫ p(x) log(p(x)/q(x)) dx ≥ 0
          = ∫ p(x) log(1/q(x)) dx − ∫ p(x) log(1/p(x)) dx   [= H×(P : Q) − H(P), with H(P) = H×(P : P)]
          = F(θ_Q) − F(θ_P) − ⟨θ_Q − θ_P, ∇F(θ_P)⟩
          = B_F(θ_Q : θ_P)

◮ The Bregman divergence B_F is defined for a strictly convex and differentiable function, up to affine terms.
The proof that KL(P : Q) = B_F(θ_Q : θ_P) follows from X ∼ EF(θ) ⟹ E[t(X)] = ∇F(θ).
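As a sanity check, a small illustrative sketch verifying KL = B_F on the Rayleigh family defined a few slides back (θ = −1/(2λ²), F(θ) = −log(−2θ); all names are ad hoc):

```python
import numpy as np

# KL between two Rayleigh densities equals B_F(theta_q : theta_p)
# for the log-normalizer F(theta) = -log(-2 theta).
F       = lambda th: -np.log(-2.0 * th)
gradF   = lambda th: -1.0 / th
bregman = lambda tq, tp: F(tq) - F(tp) - (tq - tp) * gradF(tp)

lam_p, lam_q = 1.0, 2.0
th_p, th_q = -1 / (2 * lam_p**2), -1 / (2 * lam_q**2)   # natural parameters

ray = lambda x, lam: (x / lam**2) * np.exp(-x**2 / (2 * lam**2))
xs = np.linspace(1e-9, 20.0, 400001)
pp, qq = ray(xs, lam_p), ray(xs, lam_q)
kl_num = np.sum(pp * np.log(pp / qq)) * (xs[1] - xs[0])  # \int p log(p/q)

print(kl_num, bregman(th_q, th_p))   # both ≈ log 4 + 1/4 - 1 ≈ 0.6363
```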

Convex duality: Legendre transformation
◮ For a strictly convex and differentiable function F : X → R:

F*(y) = sup_{x∈X} {⟨y, x⟩ − F(x)},   l_F(y; x) := ⟨y, x⟩ − F(x)

◮ Maximum obtained for y = ∇F(x):
  ∇_x l_F(y; x) = y − ∇F(x) = 0 ⇒ y = ∇F(x)
◮ Maximum unique from the convexity of F (∇²F ≻ 0):
  ∇²_x l_F(y; x) = −∇²F(x) ≺ 0
◮ Convex conjugates:
  (F, X) ⇔ (F*, Y),   Y = {∇F(x) | x ∈ X}
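A worked example (added for concreteness, consistent with the Bregman divergence/mean table a few slides below): for the extended negative Shannon entropy F(x) = x log x − x on X = (0, ∞), ∇F(x) = log x, so y = ∇F(x) ⇔ x = e^y, and

F*(y) = ⟨y, x⟩ − F(x) |_{x = e^y} = y e^y − (y e^y − e^y) = e^y,

with Y = R: the conjugate of the extended negative Shannon entropy is the exponential function.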

Legendre duality: Geometric interpretation
Consider the epigraph of F as a convex object:
◮ convex hull (V-representation), versus
◮ half-space (H-representation).

[Figure: the curve F : (x, F(x)) with lifted points P = (x_P, z_P = F(x_P)) and Q, and tangent lines H_P : z = (x − x_P)F′(x_P) + F(x_P) and H_Q : z = (x − x_Q)F′(x_Q) + F(x_Q). The tangent at P has slope y_P = F′(x_P) and z-intercept (0, F(x_P) − x_P F′(x_P)) = (0, −F*(y_P)), illustrating the dual coordinate systems.]

The Legendre transform is also called the "slope" transform.
Legendre duality & Canonical divergence
◮ Convex conjugates have functionally inverse gradients: ∇F⁻¹ = ∇F*.
  ∇F* may require numerical approximation (not always available in analytical closed form).
◮ Involution: (F*)* = F with ∇F* = (∇F)⁻¹.
◮ The convex conjugate F* expressed using (∇F)⁻¹:

F*(y) = ⟨x, y⟩ − F(x), with x = ∇_y F*(y)
      = ⟨(∇F)⁻¹(y), y⟩ − F((∇F)⁻¹(y))

◮ The Fenchel-Young inequality is at the heart of the canonical divergence:

F(x) + F*(y) ≥ ⟨x, y⟩
A_F(x : y) = A_{F*}(y : x) = F(x) + F*(y) − ⟨x, y⟩ ≥ 0

Dual Bregman divergences & canonical divergence [26]
KL(P : Q) = E_P[log(p(x)/q(x))] ≥ 0
          = B_F(θ_Q : θ_P) = B_{F*}(η_P : η_Q)
          = F(θ_Q) + F*(η_P) − ⟨θ_Q, η_P⟩
          = A_F(θ_Q : η_P) = A_{F*}(η_P : θ_Q)

with θ_Q the natural parameterization and η_P = E_P[t(X)] = ∇F(θ_P) the moment parameterization.

KL(P : Q) = ∫ p(x) log(1/q(x)) dx − ∫ p(x) log(1/p(x)) dx = H×(P : Q) − H(P), with H(P) = H×(P : P).

Shannon cross-entropy and entropy of an EF [26]:

H×(P : Q) = F(θ_Q) − ⟨θ_Q, ∇F(θ_P)⟩ − E_P[k(x)]
H(P) = F(θ_P) − ⟨θ_P, ∇F(θ_P)⟩ − E_P[k(x)]
H(P) = −F*(η_P) − E_P[k(x)]
Bregman divergence: Geometric interpretation (I)
Potential function F, graph plot F̂ : (x, F(x)).

D_F(p : q) = F(p) − F(q) − ⟨p − q, ∇F(q)⟩

Bregman divergence: Geometric interpretation (III)
Bregman divergence and path integrals:

B_F(θ_1 : θ_2) = F(θ_1) − F(θ_2) − ⟨θ_1 − θ_2, ∇F(θ_2)⟩   (1)
              = ∫_{θ_2}^{θ_1} ⟨∇F(t) − ∇F(θ_2), dt⟩        (2)
              = ∫_{η_1}^{η_2} ⟨∇F*(t) − ∇F*(η_1), dt⟩      (3)
              = B_{F*}(η_2 : η_1),                          (4)

with η = ∇F(θ).
Bregman divergence: Geometric interpretation (II)
Potential function f, graph plot F : (x, f(x)).

B_f(p‖q) = f(p) − f(q) − (p − q) f′(q)

[Figure: the curve F with lifted points p̂ and q̂, the tangent hyperplane H_q at q̂, and the vertical gap B_f(p‖q) above p.]

B_f(·‖q): vertical distance between the hyperplane H_q tangent to F at the lifted point q̂ and the translated hyperplane at p̂.
ˆ
total Bregman divergence (tBD)
By analogy to least squares vs. total least squares:
total Bregman divergence (tBD) [11, 34, 12]:

δ_f(x, y) = b_f(x, y) / √(1 + ‖∇f(y)‖²)

The statistical robustness of tBD has been proved.
Bregman sided centroids [25, 21]
Bregman centroids = unique minimizers of average Bregman divergences (B_F is convex in its right argument):

θ̄  = argmin_θ (1/n) Σ_{i=1}^n B_F(θ_i : θ)
θ̄′ = argmin_θ (1/n) Σ_{i=1}^n B_F(θ : θ_i)

θ̄ = (1/n) Σ_{i=1}^n θ_i : the center of mass, independent of F

θ̄′ = (∇F)⁻¹( (1/n) Σ_{i=1}^n ∇F(θ_i) )

→ generalized Kolmogorov-Nagumo f-means.
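A tiny illustrative sketch of the two sided centroids for the separable generator F(x) = Σ x log x, so that ∇F and (∇F)⁻¹ act elementwise (all names are ad hoc):

```python
import numpy as np

# F(x) = sum x log x: gradient and inverse gradient act coordinatewise.
gradF     = lambda x: np.log(x) + 1.0
gradF_inv = lambda y: np.exp(y - 1.0)

thetas = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])

right_centroid = thetas.mean(axis=0)                    # center of mass, independent of F
left_centroid  = gradF_inv(gradF(thetas).mean(axis=0))  # here: the geometric mean
print(right_centroid, left_centroid)
```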

Bregman divergences BF and ∇F -means

Bijection: quasi-arithmetic means (∇F) ⇔ Bregman divergences B_F.

Bregman divergence B_F           F             f = F′    f⁻¹ = (F′)⁻¹   f-mean (generalized mean)
(entropy/loss function F)
-------------------------------------------------------------------------------------------------
Squared Euclidean distance       x²/2          x         x              arithmetic mean: (1/n) Σ_j x_j
(half squared loss)
Kullback-Leibler divergence      x log x − x   log x     exp x          geometric mean: (Π_j x_j)^(1/n)
(ext. neg. Shannon entropy)
Itakura-Saito divergence         −log x        −1/x      −1/x           harmonic mean: n / Σ_j (1/x_j)
(Burg entropy)

∇F strictly increasing (like a cumulative distribution function).

Bregman sided centroids [25]
The two sided centroids C̄ and C̄′ expressed in the two θ/η coordinate systems: 4 equations.

C̄ : (θ̄, η̄′),   C̄′ : (θ̄′, η̄)

C̄ : θ̄ = (1/n) Σ_{i=1}^n θ_i,   η̄′ = ∇F(θ̄)

C̄′ : η̄ = (1/n) Σ_{i=1}^n η_i,   θ̄′ = ∇F*(η̄)
Bregman centroids and Jacobian fields
The centroid θ̄ zeroes the left-sided Jacobian vector field:

∇_θ Σ_t w_t A(θ : η_t) = 0

(the sum of the tangent vectors of the geodesics from the centroid to the points vanishes), since ∇_θ A(θ : η′) = η − η′ gives

∇_θ (Σ_t w_t A(θ : η_t)) = η − η̄,   with Σ_t w_t = 1 and η̄ = Σ_t w_t η_t.

Bregman information [25]
Bregman information = minimum of the loss function:

I_F(P) = (1/n) Σ_{i=1}^n B_F(θ_i : θ̄)
       = (1/n) Σ_{i=1}^n F(θ_i) − F(θ̄) − ⟨(1/n) Σ_{i=1}^n θ_i − θ̄, ∇F(θ̄)⟩   [inner-product term = 0]
       = (1/n) Σ_{i=1}^n F(θ_i) − F(θ̄)
       = J_F(θ_1, ..., θ_n)

Jensen diversity index (e.g., Jensen-Shannon for F(x) = x log x).
◮ For the squared Euclidean distance, Bregman information = cluster variance.
◮ For the Kullback-Leibler divergence, Bregman information is related to mutual information.
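A two-line illustrative check of the first bullet, taking F(x) = ‖x‖² so that B_F is the squared Euclidean distance:

```python
import numpy as np

# For F(x) = ||x||^2, B_F(x : y) = ||x - y||^2, and the Bregman information
# (average divergence to the center of mass) is exactly the cluster variance.
pts = np.random.default_rng(1).normal(size=(100, 3))
info = np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1))
print(np.isclose(info, np.sum(pts.var(axis=0))))  # True
```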

Bregman k-means clustering [4]
Bregman k-means: find the k centers C = {C_1, ..., C_k} that minimize the loss function:

L_F(P : C) = Σ_{P∈P} B_F(P : C),   with B_F(P : C) = min_{i∈{1,...,k}} B_F(P : C_i)

→ generalizes Lloyd's quadratic error in vector quantization (VQ):

L_F(P : C) = I_F(P) − I_F(C)

I_F(P)     → total Bregman information
I_F(C)     → between-cluster Bregman information
L_F(P : C) → within-cluster Bregman information

total Bregman information = within-cluster Bregman information + between-cluster Bregman information

Bregman k-means clustering [4]
I_F(P) = L_F(P : C) + I_F(C)

Bregman clustering amounts to finding the partition C* that minimizes the information loss:

L_F* = L_F(P : C*) = min_C (I_F(P) − I_F(C))

Bregman k-means:
◮ Initialize distinct seeds: C_1 = P_1, ..., C_k = P_k.
◮ Repeat until convergence:
  ◮ assign each point P to its closest centroid: C_i = {P ∈ P | B_F(P : C_i) ≤ B_F(P : C_j) ∀j ≠ i},
  ◮ update the cluster centroids by taking their centers of mass: C_i = (1/|C_i|) Σ_{P∈C_i} P.

The loss function monotonically decreases and converges to a local optimum. (Extends to weighted point sets using barycenters.)
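A minimal sketch of this Lloyd-style loop (illustrative; it assumes no cluster goes empty, and the generator/gradient are passed in explicitly):

```python
import numpy as np

def bregman_kmeans(points, k, F, gradF, iters=50, seed=0):
    """Bregman k-means: right-sided assignment, center-of-mass update."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]

    def B(x, c):  # B_F(x : c) = F(x) - F(c) - <x - c, grad F(c)>
        return F(x) - F(c) - np.sum((x - c) * gradF(c), axis=-1)

    for _ in range(iters):
        # assign each point to its Bregman-closest center
        labels = np.argmin([B(points, c) for c in centers], axis=0)
        # the right-sided Bregman centroid is the plain center of mass
        centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return centers, labels

# generalized KL clustering on the positive orthant: F(x) = sum (x log x - x)
F_kl     = lambda x: np.sum(x * np.log(x) - x, axis=-1)
gradF_kl = lambda x: np.log(x)
pts = np.random.default_rng(1).gamma(2.0, size=(200, 2))
centers, labels = bregman_kmeans(pts, k=3, F=F_kl, gradF=gradF_kl)
```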
Bregman k-means++ [1]: Careful seeding (only?!)
(Also called Bregman k-medians, since the centers minimize min_x (1/n) Σ_i B_F(p_i : x).)
Extends the D²-initialization of k-means++.

The seeding stage alone yields a probabilistically guaranteed global approximation factor.

Bregman k-means++:
◮ Choose the first seed C = {C_l}, for l uniformly random in {1, ..., n}.
◮ While |C| < k:
  ◮ choose P ∈ P with probability B_F(P : C) / Σ_{i=1}^n B_F(P_i : C) = B_F(P : C) / L_F(P : C).

→ Yields an O(log k) approximation factor (with high probability).
The constant in O(·) depends on the ratio of the min/max of ∇²F.
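A seeding sketch in the same spirit (illustrative; the squared Euclidean distance of k-means++ is replaced by B_F, and all names are ad hoc):

```python
import numpy as np

def bregman_kmeanspp_seed(points, k, F, gradF, seed=0):
    """Pick k seeds with probability proportional to B_F(point : nearest seed)."""
    rng = np.random.default_rng(seed)
    n = len(points)

    def B(x, c):  # B_F(x : c) = F(x) - F(c) - <x - c, grad F(c)>
        return F(x) - F(c) - np.sum((x - c) * gradF(c), axis=-1)

    centers = [points[rng.integers(n)]]               # first seed: uniform
    while len(centers) < k:
        d = np.min([B(points, c) for c in centers], axis=0)
        centers.append(points[rng.choice(n, p=d / d.sum())])
    return np.array(centers)
```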
Anisotropic Voronoi diagram (for MVN MMs) [10, 14]
Learning mixtures: see the talk of O. Schwander on EM and k-MLE.
From the source color image (a), we build a 5D GMM with k = 32 components, and color each pixel with the mean color of the anisotropic Voronoi cell it belongs to (∼ a weighted squared Mahalanobis distance per center):

log p_F(x; θ_i) ∝ −B_{F*}(t(x) : η_i) + k(x)

Most likely component segmentation ≡ Bregman Voronoi diagram (squared Mahalanobis distance for Gaussians).

Voronoi diagrams in dually flat spaces...

Voronoi diagram, dual ⊥ Delaunay triangulation (general position)

Bregman dual bisectors: Hyperplanes &
hypersurfaces [5, 24, 27]
Right-sided bisector → hyperplane (θ-hyperplane):

H_F(p, q) = {x ∈ X | B_F(x : p) = B_F(x : q)}
H_F : ⟨∇F(p) − ∇F(q), x⟩ + (F(p) − F(q) + ⟨q, ∇F(q)⟩ − ⟨p, ∇F(p)⟩) = 0

Left-sided bisector → hypersurface (η-hyperplane):

H′_F(p, q) = {x ∈ X | B_F(p : x) = B_F(q : x)}
H′_F : ⟨∇F(x), q − p⟩ + F(p) − F(q) = 0

Visualizing Bregman bisectors
[Figure: Bregman bisectors plotted in the source/primal coordinate system (natural parameters θ) and in the gradient/dual coordinate system (expectation parameters η), for the logistic loss (Bernoulli) and the Itakura-Saito divergence. The sided divergence values swap under duality: D(p, q) = D*(q′, p′) and D(q, p) = D*(p′, q′).]

Bregman Voronoi diagrams as minimization diagrams [5]
A subclass of affine diagrams which have all non-empty cells.
Minimization diagram of the n functions

D_i(x) = B_F(x : p_i) = F(x) − F(p_i) − ⟨x − p_i, ∇F(p_i)⟩

≡ minimization diagram of the n linear functions H_i(x) = ⟨p_i − x, ∇F(p_i)⟩ − F(p_i) (since F(x) is common to all D_i).

Bregman dual Delaunay triangulations

[Figure panels: ordinary Delaunay, exponential, and Hellinger-like Bregman Delaunay triangulations.]

◮ empty Bregman sphere property,
◮ geodesic triangles.

BVDs extend Euclidean Voronoi diagrams with similar complexity/algorithms.

Non-commutative Bregman Orthogonality
3-point property (generalized law of cosines):

B_F(p : r) = B_F(p : q) + B_F(q : r) − ⟨p − q, ∇F(r) − ∇F(q)⟩

[Figure: points P, Q, R joined by the dual geodesics Γ*_{PQ} and Γ_{QR}, with divergences D_F(P‖Q), D_F(Q‖R), D_F(P‖R).]

(pq)_θ is Bregman orthogonal to (qr)_η, written (pq)_θ ⊥_F (qr)_η, iff

B_F(p : r) = B_F(p : q) + B_F(q : r)

(equivalent to ⟨θ_p − θ_q, η_r − η_q⟩ = 0). This extends the Pythagorean theorem.

→ ⊥_F is not commutative...
... except in the squared Euclidean/Mahalanobis case.
Dually orthogonal Bregman Voronoi & Triangulations
The ordinary Voronoi diagram is perpendicular to the Delaunay triangulation.
Dual line segment geodesics:

(pq)_θ = {θ = λθ_p + (1 − λ)θ_q | λ ∈ [0, 1]},
(pq)_η = {η = λη_p + (1 − λ)η_q | λ ∈ [0, 1]}.

Bisectors:

B_θ(p, q) : ⟨x, θ_q − θ_p⟩ + F(θ_p) − F(θ_q) = 0,
B_η(p, q) : ⟨x, η_q − η_p⟩ + F*(η_p) − F*(η_q) = 0.

Dual orthogonality:

B_η(p, q) ⊥ (pq)_η,   (pq)_θ ⊥ B_θ(p, q).

Dually orthogonal Bregman Voronoi & Triangulations
[Figure: B_η(p, q) ⊥ (pq)_η and (pq)_θ ⊥ B_θ(p, q).]

Simplifying mixture: Kullback-Leibler projection theorem
An exponential family mixture model p̃ = Σ_{i=1}^k w_i p_F(x; θ_i).
The right-sided KL barycenter p̄* of the components is interpreted as the projection of the mixture model p̃ ∈ P onto the exponential family manifold E_F [32]:

p̄* = arg min_{p ∈ E_F} KL(p̃ : p)

[Figure: the manifold P of probability distributions containing the mixture p̃; its m-geodesic projection p̄* onto the exponential family (mixture sub-)manifold E_F, with the e-geodesic shown.]

Right-sided KL centroid = left-sided Bregman centroid.
A simple proof
KL(m(x) : p_θ(x)) = E_m[log m] − E_m[log p_θ]
                  = E_m[log m] − E_m[k(x)] − E_m[⟨t(x), θ⟩ − F(θ)]

argmin_θ KL(m : p_θ) ≡ argmax_θ E_m[⟨t(x), θ⟩] − F(θ) = argmax_θ ⟨E_m[t(x)], θ⟩ − F(θ)

E_m[t(x)] = Σ_t w_t E_{p_{θ_t}}[t(x)] = Σ_t w_t η_t = η̄

∇F(θ̂) = η̄ ⇒ θ̂ = ∇F*(η̄)
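For univariate Gaussians, matching η̄ is plain moment matching; a tiny illustrative sketch (using the t(x) = (x, x²) sign convention rather than the (x, −xx⊤) one above):

```python
import numpy as np

# KL-project a univariate GMM onto the Gaussian family by matching
# the expected sufficient statistics E[x] and E[x^2].
w  = np.array([0.3, 0.7])
mu = np.array([0.0, 4.0])
s2 = np.array([1.0, 2.0])                 # component variances

eta1 = np.sum(w * mu)                     # E[x]
eta2 = np.sum(w * (s2 + mu**2))           # E[x^2]
mu_star, s2_star = eta1, eta2 - eta1**2   # closest Gaussian in KL(mixture : gaussian)
print(mu_star, s2_star)
```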
Left-sided or right-sided Kullback-Leibler centroids?
Left/right Bregman centroids = right/left entropic centroids (KL of exponential families).
The left-sided and right-sided centroids have different (statistical) properties:
◮ Right-sided entropic centroid: zero-avoiding (covers the support of the pdfs).
◮ Left-sided entropic centroid: zero-forcing (captures the highest mode).

[Figure: for N_1 = N(−4, 4) and N_2 = N(5, 0.64), the left-sided KL centroid (zero-forcing), the symmetrized centroid, and the right-sided KL centroid (zero-avoiding).]

Hierarchical clustering of GMMs (Burbea-Rao)
Hierarchical clustering of GMMs wrt. Bhattacharyya distance.
Simplify the number of components of an initial GMM.

[Figure: (a) source image, (b) simplified GMM with k = 48 components, (c) k = 16 components.]

Two symmetrizations of Bregman divergences
◮ Jeffreys-Bregman divergences:

S_F(p; q) = (B_F(p, q) + B_F(q, p))/2 = (1/2) ⟨p − q, ∇F(p) − ∇F(q)⟩

◮ Jensen-Bregman divergences (diversity index):

J_F(p; q) = (B_F(p, (p+q)/2) + B_F(q, (p+q)/2))/2 = (F(p) + F(q))/2 − F((p+q)/2) = BR_F(p, q)

Skew Jensen divergences [21, 29]:

J_F^{(α)}(p; q) = αF(p) + (1 − α)F(q) − F(αp + (1 − α)q) = BR_F^{(α)}(p; q)

(Jeffreys and Jensen-Shannon symmetrizations of the Kullback-Leibler divergence.)
Burbea-Rao centroids (α-skewed Jensen centroids)
Minimum average divergence:

OPT : c = argmin_x Σ_{i=1}^n w_i J_F^{(α)}(x, p_i) = argmin_x L(x)

Equivalent to minimizing:

E(c) = (Σ_{i=1}^n w_i α) F(c) − Σ_{i=1}^n w_i F(αc + (1 − α)p_i)

The sum E = F + G of a convex function F and a concave function G ⇒ Convex-ConCave Procedure (CCCP).
Start from an arbitrary c_0 and iteratively update as ∇F(c_{t+1}) = −∇G(c_t)
⇒ guaranteed convergence to a local minimum.
ConCave Convex Procedure (CCCP)
min_x E(x) = F(x) + G(x)
∇F(c_{t+1}) = −∇G(c_t)

Iterative algorithm for Burbea-Rao centroids
Apply the CCCP scheme:

∇F(c_{t+1}) = Σ_{i=1}^n w_i ∇F(αc_t + (1 − α)p_i)

c_{t+1} = (∇F)⁻¹( Σ_{i=1}^n w_i ∇F(αc_t + (1 − α)p_i) )

Get arbitrarily fine approximations of the (skew) Burbea-Rao centroids and barycenters.
Unique GLOBAL minimum when the divergence is separable [21].
Unique GLOBAL minimum for matrix means [23] for the log-det divergence.
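A minimal sketch of this fixed-point iteration for a separable generator (illustrative; α = 1/2 gives the Jensen-Shannon-like Burbea-Rao centroid, and all names are ad hoc):

```python
import numpy as np

def burbea_rao_centroid(points, weights, gradF, gradF_inv, alpha=0.5, iters=100):
    """CCCP update: grad F(c_{t+1}) = sum_i w_i grad F(alpha c_t + (1 - alpha) p_i)."""
    c = np.average(points, axis=0, weights=weights)  # arithmetic mean initializer
    for _ in range(iters):
        g = np.einsum('i,ij->j', weights, gradF(alpha * c + (1 - alpha) * points))
        c = gradF_inv(g)
    return c

# F(x) = sum x log x on the positive orthant
gradF     = lambda x: np.log(x) + 1.0
gradF_inv = lambda y: np.exp(y - 1.0)
pts = np.array([[0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])
print(burbea_rao_centroid(pts, np.ones(3) / 3, gradF, gradF_inv))
```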

Information-geometric computing on statistical manifolds

◮ Cramér-Rao lower bound and information geometry [13] (overview article)
◮ Chernoff information on the statistical exponential family manifold [19]
◮ Hypothesis testing and Bregman Voronoi diagrams [18]
◮ Jeffreys centroid [20]
◮ Learning mixtures with k-MLE [17]
◮ Closed-form divergences for statistical mixtures [16]

Summary
Information-geometric pattern recognition:
◮ Statistical manifold (M, g): Rao's distance and Fisher-Rao curved Riemannian geometry.
◮ Statistical manifold (M, g, ∇, ∇*): dually flat spaces, Bregman divergences; geodesics are straight lines in either the θ- or the η-parameter space. ±1-geometry, a special case of α-geometry.
◮ Clustering in dually flat spaces.
◮ Software libraries: jMEF [8] (Java), pyMEF [31] (Python).
◮ ... but also many other geometries to explore: Hilbertian, Finsler [3], Kähler, Wasserstein, contact, symplectic, etc. (it is easy to require non-Euclidean geometry, but then the space is wide open!)

Edited book (MIG) and proceedings (GSI)

Thank you.

Exponential families & statistical distances
Universal density estimators [2] generalizing Gaussians/histograms (a single EF density can approximate any smooth density).
Explicit formulas for:
◮ Shannon entropy, cross-entropy, and Kullback-Leibler divergence [26];
◮ Rényi/Tsallis entropy and divergence [28];
◮ Sharma-Mittal entropy and divergence [30], a 2-parameter family extending the extensive Rényi (for β → 1) and non-extensive Tsallis (for β → α) entropies:

H_{α,β}(p) = (1/(1−β)) [ (∫ p(x)^α dx)^{(1−β)/(1−α)} − 1 ],   with α > 0, α ≠ 1, β ≠ 1;

◮ skew Jensen and Burbea-Rao divergences [21];
◮ Chernoff information and divergence [15];
◮ mixtures: total least squares, Jensen-Rényi, and Cauchy-Schwarz divergences [16].

Statistical invariance: Markov kernel
Probability family: p(x; θ).
(X, σ) and (X′, σ′): two measurable spaces.
σ: a σ-algebra on X (non-empty, closed under complementation and countable unions).
Markov kernel = transition probability kernel K : X × σ′ → [0, 1]:
◮ ∀E′ ∈ σ′, K(·, E′) is a measurable map,
◮ ∀x ∈ X, K(x, ·) is a probability measure on (X′, σ′).
A probability measure p on (X, σ) induces the probability measure Kp, with

Kp(E′) = ∫_X K(x, E′) p(dx),   ∀E′ ∈ σ′.

Space of Bregman spheres and Bregman balls [5]
Dual Bregman balls (bounding Bregman spheres):

Ball_F^r(c, r) = {x ∈ X | B_F(x : c) ≤ r}   and   Ball_F^l(c, r) = {x ∈ X | B_F(c : x) ≤ r}

Legendre duality:

Ball_F^l(c, r) = (∇F)⁻¹(Ball_{F*}^r(∇F(c), r))

[Illustration for the Itakura-Saito divergence, F(x) = −log x.]
Space of Bregman spheres: Lifting map [5]
F̂ : x → x̂ = (x, F(x)), a hypersurface in R^{d+1}.
H_p: tangent hyperplane at p̂, z = H_p(x) = ⟨x − p, ∇F(p)⟩ + F(p).
◮ A Bregman sphere σ lifts to σ̂ = F̂ ∩ H_σ with supporting hyperplane H_σ : z = ⟨x − c, ∇F(c)⟩ + F(c) + r (parallel to H_c and shifted vertically by r).
◮ Conversely, the intersection of any hyperplane H with F̂ projects onto X as a Bregman sphere:

H : z = ⟨x, a⟩ + b → σ : Ball_F(c = (∇F)⁻¹(a), r = ⟨a, c⟩ − F(c) + b)

Bibliographic references I
[1] Marcel R. Ackermann and Johannes Blömer. Bregman clustering for separable instances. In Scandinavian Workshop on Algorithm Theory (SWAT), pages 212–223, 2010.
[2] Yasemin Altun, Alexander J. Smola, and Thomas Hofmann. Exponential families for conditional random fields. In Uncertainty in Artificial Intelligence (UAI), pages 2–9, 2004.
[3] Marc Arnaudon and Frank Nielsen. Medians and means in Finsler geometry. LMS Journal of Computation and Mathematics, 15, 2012.
[4] Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, 2005.
[5] Jean-Daniel Boissonnat, Frank Nielsen, and Richard Nock. Bregman Voronoi diagrams. Discrete & Computational Geometry, 44(2):281–307, 2010.
[6] Nikolai Nikolaevich Chentsov. Statistical Decision Rules and Optimal Inferences. Translations of Mathematical Monographs, volume 53, 1982. Published in Russian in 1972.
Bibliographic references II
[7] José Manuel Corcuera and Federica Giummolé. A characterization of monotone and regular divergences. Annals of the Institute of Statistical Mathematics, 50(3):433–450, 1998.
[8] Vincent Garcia and Frank Nielsen. Simplification and hierarchical representations of mixtures of exponential families. Signal Processing (Elsevier), 90(12):3197–3212, 2010.
[9] Vincent Garcia, Frank Nielsen, and Richard Nock. Levels of details for Gaussian mixture models. In Asian Conference on Computer Vision (ACCV), volume 2, pages 514–525, 2009.
[10] François Labelle and Jonathan Richard Shewchuk. Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh generation. In Proceedings of the nineteenth annual Symposium on Computational Geometry (SCG '03), pages 191–200, New York, NY, USA, 2003. ACM.
[11] Meizhu Liu, Baba C. Vemuri, Shun-ichi Amari, and Frank Nielsen. Shape retrieval using hierarchical total Bregman soft clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(12):2407–2419, 2012.
[12] Meizhu Liu, Baba C. Vemuri, Shun-ichi Amari, and Frank Nielsen. Total Bregman divergence and its applications to shape retrieval. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3463–3468, 2010.
Bibliographic references III
[13] Frank Nielsen. Cramér-Rao lower bound and information geometry. In Rajendra Bhatia, C. S. Rajan, and Ajit Iqbal Singh, editors, Connected at Infinity II: A selection of mathematics by Indians. Hindustan Book Agency (Texts and Readings in Mathematics, TRIM). arXiv:1301.3578.
[14] Frank Nielsen. Visual Computing: Geometry, Graphics, and Vision. Charles River Media / Thomson Delmar Learning, 2005.
[15] Frank Nielsen. Chernoff information of exponential families. arXiv, abs/1102.2684, 2011.
[16] Frank Nielsen. Closed-form information-theoretic divergences for statistical mixtures. In International Conference on Pattern Recognition (ICPR), pages 1723–1726, 2012.
[17] Frank Nielsen. k-MLE: A fast algorithm for learning statistical mixture models. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). IEEE, 2012. Preliminary technical report on arXiv.

Bibliographic references IV
[18] Frank Nielsen. Hypothesis testing, information divergence and computational geometry. In F. Nielsen and F. Barbaresco, editors, Geometric Science of Information (GSI), volume 8085 of LNCS, pages 241–248. Springer, 2013.
[19] Frank Nielsen. An information-geometric characterization of Chernoff information. IEEE Signal Processing Letters, 20(3):269–272, 2013.
[20] Frank Nielsen. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Processing Letters, PP(99):1–1, 2013.
[21] Frank Nielsen and Sylvain Boltz. The Burbea-Rao and Bhattacharyya centroids. IEEE Transactions on Information Theory, 57(8):5455–5466, August 2011.
[22] Frank Nielsen and Vincent Garcia. Statistical exponential families: A digest with flash cards, 2009. arXiv.org:0911.4863.

Bibliographic references V
[23] Frank Nielsen, Meizhu Liu, Xiaojing Ye, and Baba C. Vemuri. Jensen divergence based SPD matrix means and applications. In International Conference on Pattern Recognition (ICPR), 2012.
[24] Frank Nielsen and Richard Nock. The dual Voronoi diagrams with respect to representational Bregman divergences. In International Symposium on Voronoi Diagrams (ISVD), pages 71–78, 2009.
[25] Frank Nielsen and Richard Nock. Sided and symmetrized Bregman centroids. IEEE Transactions on Information Theory, 55(6):2048–2059, June 2009.
[26] Frank Nielsen and Richard Nock. Entropies and cross-entropies of exponential families. In International Conference on Image Processing (ICIP), pages 3621–3624, 2010.
[27] Frank Nielsen and Richard Nock. Hyperbolic Voronoi diagrams made easy. In International Conference on Computational Science and its Applications (ICCSA), volume 1, pages 74–80, Los Alamitos, CA, USA, March 2010. IEEE Computer Society.
[28] Frank Nielsen and Richard Nock. On Rényi and Tsallis entropies and divergences for exponential families. arXiv, abs/1105.3259, 2011.
Bibliographic references VI
[29] Frank Nielsen and Richard Nock. Skew Jensen-Bregman Voronoi diagrams. Transactions on Computational Science, 14:102–128, 2011.
[30] Frank Nielsen and Richard Nock. A closed-form expression for the Sharma-Mittal entropy of exponential families. Journal of Physics A: Mathematical and Theoretical, 45(3), 2012.
[31] Olivier Schwander and Frank Nielsen. PyMEF - A framework for exponential families in Python. In IEEE/SP Workshop on Statistical Signal Processing (SSP), 2011.
[32] Olivier Schwander and Frank Nielsen. Learning mixtures by simplifying kernel density estimators. In Frank Nielsen and Rajendra Bhatia, editors, Matrix Information Geometry, pages 403–426, 2012.
[33] Jose Seabra, Francesco Ciompi, Oriol Pujol, Josepa Mauri, Petia Radeva, and Joao Sanchez. Rayleigh mixture model for plaque characterization in intravascular ultrasound. IEEE Transactions on Biomedical Engineering, 58(5):1314–1324, 2011.
[34] Baba Vemuri, Meizhu Liu, Shun-ichi Amari, and Frank Nielsen. Total Bregman divergence and its applications to DTI analysis. IEEE Transactions on Medical Imaging, 2011.

 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 

Recently uploaded (20)

UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
Mind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AIMind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AI
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
 
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 
Removing Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software FuzzingRemoving Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software Fuzzing
 
Data structures and Algorithms in Python.pdf
Data structures and Algorithms in Python.pdfData structures and Algorithms in Python.pdf
Data structures and Algorithms in Python.pdf
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 

Pattern learning and recognition on statistical manifolds: An information-geometric review

  • 7. Example I: Information-geometric PR (I). Pattern = Gaussian mixture models (universal class). Statistical (dis)similarity/distance: total Bregman divergence (tBD, tKL). Invariance: p_i(x) ∼ N(µ_i, Σ_i); under y = A(x) = Lx + t we get y_i ∼ N(Lµ_i + t, LΣ_iL⊤) and D(X_1 : X_2) = D(Y_1 : Y_2) (A: any invertible affine transformation, with L its linear part and t a translation). Shape Retrieval using Hierarchical Total Bregman Soft Clustering [11], IEEE PAMI, 2012.
  • 8. Example II: Information-geometric PR. DTI: diffusion ellipsoids, tensor interpolation. Pattern = zero-centered “Gaussians”. Statistical (dis)similarity/distance: total Bregman divergence (tBD, tKL). Invariance: D(A⊤PA : A⊤QA) = D(P : Q) for A ∈ SL(d) (volume/orientation preserving). Illustrated on a 3D rat corpus callosum. Total Bregman Divergence and its Applications to DTI Analysis [34], IEEE TMI, 2011.
  • 9. Statistical mixtures: Generative models of data sets. GMM = feature descriptor for information retrieval (IR) → classification [21], matching, etc. Increase dimension using color image patches. Low-frequency information encoded into a compact statistical model. Generative model → statistical image by GMM sampling. → A mixture Σ_{i=1}^k w_i N(µ_i, Σ_i) is interpreted as a weighted point set in a parameter space: {(w_i, θ_i = (µ_i, Σ_i))}_{i=1}^k.
  • 10. Statistical invariance. Riemannian structure (M, g) on {p(x; θ) | θ ∈ Θ ⊂ R^D}. θ-invariance under non-singular reparameterization: ρ(p(x; θ), p(x; θ′)) = ρ(p(x; λ(θ)), p(x; λ(θ′))); the normal parameterizations (µ, σ) and (µ, σ²) yield the same distance. x-invariance under different x-representations via sufficient statistics (Fisher, 1922): Pr(X = x | T(X) = t, θ) = Pr(X = x | T(X) = t), so all information for θ is contained in T → lossless information data reduction (exponential families). Markov kernel = statistical morphism (Chentsov 1972, [6, 7]); a particular Markov kernel is a deterministic mapping T : X → Y with y = T(x), p_y = p_x T⁻¹. Invariance holds if and only if g ∝ Fisher information matrix.
  • 11. f-divergences (1960s). A statistical non-metric distance between two probability measures: I_f(p : q) = ∫ q(x) f(p(x)/q(x)) dx, with f a continuous convex function such that f(1) = 0, f′(1) = 0, f″(1) = 1. → Asymmetric (not a metric, except TV), modulo affine terms. → Can always be symmetrized using s = f + f*, with f*(x) = x f(1/x). Includes many well-known statistical measures: Kullback-Leibler, α-divergences, Hellinger, chi-squared, total variation (TV), etc. f-divergences are the only statistical divergences that preserve equivalence wrt. sufficient statistic mappings: I_f(p : q) ≥ I_f(p_M : q_M), with equality if and only if M = T (monotonicity property).
  • 12. Information geometry: Dually flat spaces, α-geometry. Statistical invariance is also obtained using (M, g, ∇, ∇*), where ∇ and ∇* are dual affine connections. The Riemannian structure (M, g) is the particular case ∇ = ∇* = ∇⁽⁰⁾, the Levi-Civita connection: (M, g) = (M, g, ∇⁽⁰⁾, ∇⁽⁰⁾). Dually flat spaces (α = ±1) are algorithmically friendly: statistical mixtures of exponential families; learning & simplifying mixtures (k-MLE); Bregman Voronoi diagrams & dually ⊥ triangulations. Goal: algorithmics of Gaussians/histograms wrt. the Kullback-Leibler divergence.
  • 13. Exponential Family Mixture Models (EFMMs). Generalize Gaussian & Rayleigh MMs to many usual distributions: m(x) = Σ_{i=1}^k w_i p_F(x; λ_i) with w_i > 0 for all i and Σ_{i=1}^k w_i = 1, where p_F(x; θ) = exp(⟨t(x), θ⟩ − F(θ) + k(x)). F is the log-Laplace transform (partition, cumulant function): F(θ) = log ∫_{x∈X} exp(⟨t(x), θ⟩ + k(x)) dx, and Θ = {θ | ∫_{x∈X} exp(⟨t(x), θ⟩ + k(x)) dx < ∞} is the natural parameter space. d: dimension of the support X; D: order of the family (= dim Θ); sufficient statistic t(x) : R^d → R^D.
  • 14. Statistical mixtures: Rayleigh MMs [33, 22]. IntraVascular UltraSound (IVUS) imaging. Rayleigh distribution: p(x; λ) = (x/λ²) e^{−x²/(2λ²)}, x ∈ R⁺; d = 1 (univariate), D = 1 (order 1); θ = −1/(2λ²), Θ = (−∞, 0); F(θ) = −log(−2θ); t(x) = x²; k(x) = log x (a Weibull with shape k = 2). Coronary plaques: fibrotic tissues, calcified tissues, lipidic tissues. Rayleigh Mixture Models (RMMs): for segmentation and classification tasks.
  • 15. Statistical mixtures: Gaussian MMs [8, 22, 9]. Gaussian mixture models (GMMs): model low frequency; a color image is interpreted as a 5D xyRGB point set. Gaussian distribution: p(x; µ, Σ) = (1/((2π)^{d/2}√|Σ|)) e^{−D_{Σ⁻¹}(x−µ, x−µ)/2}, with squared Mahalanobis distance D_Q(x, y) = (x − y)⊤Q(x − y); x ∈ R^d (multivariate), D = d(d+3)/2 (order); θ = (Σ⁻¹µ, Σ⁻¹/2) = (θ_v, θ_M), Θ = R^d × S^d_{++}; F(θ) = θ_v⊤θ_M⁻¹θ_v/4 − log|θ_M|/2 + (d/2) log π; t(x) = (x, −xx⊤); k(x) = 0.
  • 16. Sampling from a Gaussian Mixture Model. To sample a variate x from a GMM: choose a component l according to the weight distribution w_1, ..., w_k, then draw a variate x according to N(µ_l, Σ_l). → Sampling is a doubly stochastic process: throw a biased die with k faces to choose the component, l ∼ Multinomial(w_1, ..., w_k) (the multinomial is also an EF, the normalized histogram); then draw at random a variate x from the l-th component, x = µ_l + Cz, with the Cholesky factorization Σ_l = CC⊤ and z = [z_1 ... z_d]⊤ a standard normal random vector, z_i = √(−2 log U_1) cos(2πU_2) (Box-Muller). A code sketch follows.
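A minimal numpy sketch of this doubly stochastic sampler (the function name sample_gmm and the toy mixture are illustrative, not from the talk; numpy's Cholesky and Gaussian generators stand in for the explicit Box-Muller step):

```python
import numpy as np

def sample_gmm(weights, mus, covs, n, rng=None):
    """Doubly stochastic GMM sampling: pick component l ~ Multinomial(w),
    then x = mu_l + C_l z with Sigma_l = C_l C_l^T and z ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    chols = [np.linalg.cholesky(S) for S in covs]         # Sigma = C C^T
    labels = rng.choice(len(weights), size=n, p=weights)  # biased k-faced die
    z = rng.standard_normal((n, len(mus[0])))             # z ~ N(0, I)
    return np.array([mus[l] + chols[l] @ z[i] for i, l in enumerate(labels)])

# Toy 2D mixture with two components.
x = sample_gmm(weights=[0.3, 0.7],
               mus=[np.array([0.0, 0.0]), np.array([4.0, 4.0])],
               covs=[np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])],
               n=1000)
```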
  • 17. Relative entropy for exponential families. Distance between features (e.g., GMMs): the Kullback-Leibler divergence (cross-entropy minus entropy): KL(P : Q) = ∫ p(x) log(p(x)/q(x)) dx ≥ 0 = ∫ p(x) log(1/q(x)) dx − ∫ p(x) log(1/p(x)) dx = H×(P : Q) − H(P), with H(P) = H×(P : P). For an exponential family: KL(P : Q) = F(θ_Q) − F(θ_P) − ⟨θ_Q − θ_P, ∇F(θ_P)⟩ = B_F(θ_Q : θ_P), the Bregman divergence B_F defined for a strictly convex and differentiable function, up to some affine terms. The proof that KL(P : Q) = B_F(θ_Q : θ_P) follows from X ∼ EF(θ) ⇒ E[t(X)] = ∇F(θ). (A numerical check follows.)
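The identity KL(P : Q) = B_F(θ_Q : θ_P) can be verified numerically. A sketch for univariate Gaussians, assuming the standard log-normalizer F(θ) = −θ_1²/(4θ_2) + (1/2) log(−π/θ_2) with natural parameters θ = (µ/σ², −1/(2σ²)) (helper names are ours):

```python
import numpy as np

def F(theta):                      # log-normalizer of the univariate Gaussian EF
    t1, t2 = theta
    return -t1**2 / (4*t2) + 0.5*np.log(-np.pi/t2)

def gradF(theta):                  # eta = E[t(X)] = (E[x], E[x^2])
    t1, t2 = theta
    return np.array([-t1/(2*t2), t1**2/(4*t2**2) - 1/(2*t2)])

def kl_via_bregman(tq, tp):        # B_F(theta_Q : theta_P)
    return F(tq) - F(tp) - (tq - tp) @ gradF(tp)

def natural(mu, s2):
    return np.array([mu/s2, -1/(2*s2)])

kl_bregman = kl_via_bregman(natural(1.0, 2.0), natural(0.0, 1.0))  # KL(N(0,1):N(1,2))
kl_closed = 0.5*(np.log(2.0/1.0) + (1.0 + (0.0 - 1.0)**2)/2.0 - 1)
print(kl_bregman, kl_closed)       # both ~0.3466
```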
  • 18. Convex duality: Legendre transformation. For a strictly convex and differentiable function F : X → R: F*(y) = sup_{x∈X} {⟨y, x⟩ − F(x)} = sup_{x∈X} l_F(y; x). The maximum is obtained for y = ∇F(x): ∇_x l_F(y; x) = y − ∇F(x) = 0 ⇒ y = ∇F(x). The maximum is unique from the convexity of F (∇²F ≻ 0): ∇²_x l_F(y; x) = −∇²F(x) ≺ 0. Convex conjugates: (F, X) ⇔ (F*, Y), with Y = {∇F(x) | x ∈ X}.
  • 19. Legendre duality: Geometric interpretation. Consider the epigraph of F as a convex object: convex hull (V-representation) versus intersection of half-spaces (H-representation). Dual coordinate systems: P = (x_P, z_P = F(x_P)) with tangent line H_P : z = (x − x_P)F′(x_P) + F(x_P), of slope y_P = F′(x_P) and intercept (0, F(x_P) − x_P F′(x_P)) = (0, −F*(y_P)). The Legendre transform is also called the “slope” transform.
  • 20. Legendre duality & canonical divergence. Convex conjugates have functional-inverse gradients: ∇F⁻¹ = ∇F*; ∇F* may require numerical approximation (not always available in analytical closed form). Involution: (F*)* = F with ∇F* = (∇F)⁻¹. The convex conjugate F* expressed using (∇F)⁻¹: F*(y) = ⟨x, y⟩ − F(x) with x = ∇_y F*(y), i.e., F*(y) = ⟨(∇F)⁻¹(y), y⟩ − F((∇F)⁻¹(y)). The Fenchel-Young inequality is at the heart of the canonical divergence: F(x) + F*(y) ≥ ⟨x, y⟩, so A_F(x : y) = A_{F*}(y : x) = F(x) + F*(y) − ⟨x, y⟩ ≥ 0.
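When ∇F* is not available in closed form, F*(y) = sup_x {⟨y, x⟩ − F(x)} can still be approximated numerically, as the slide hints. A 1D scipy sketch (the helper name conjugate and the bounds are ours), tested on F(x) = x log x − x, whose conjugate is exp(y):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate(F, y, bounds=(1e-9, 1e6)):
    """Numerically evaluate F*(y) = sup_x { y*x - F(x) } on a 1D domain."""
    res = minimize_scalar(lambda x: F(x) - y*x, bounds=bounds, method="bounded")
    return -res.fun

F = lambda x: x*np.log(x) - x            # generator of the extended KL divergence
for y in [-1.0, 0.0, 1.0, 2.0]:
    print(conjugate(F, y), np.exp(y))    # numeric vs. closed-form conjugate
```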
  • 21. Dual Bregman divergences & canonical divergence [26]. KL(P : Q) = E_P[log(p(x)/q(x))] ≥ 0 = B_F(θ_Q : θ_P) = B_{F*}(η_P : η_Q) = F(θ_Q) + F*(η_P) − ⟨θ_Q, η_P⟩ = A_F(θ_Q : η_P) = A_{F*}(η_P : θ_Q), with θ_Q the natural parameterization and η_P = E_P[t(X)] = ∇F(θ_P) the moment parameterization. Shannon cross-entropy and entropy of an EF [26]: H×(P : Q) = F(θ_Q) − ⟨θ_Q, ∇F(θ_P)⟩ − E_P[k(x)]; H(P) = F(θ_P) − ⟨θ_P, ∇F(θ_P)⟩ − E_P[k(x)] = −F*(η_P) − E_P[k(x)].
  • 22. Bregman divergence: Geometric interpretation (I). Potential function F, graph plot F : (x, F(x)). D_F(p : q) = F(p) − F(q) − ⟨p − q, ∇F(q)⟩.
  • 23. Bregman divergence: Geometric interpretation (III). Bregman divergence and path integrals: B_F(θ_1 : θ_2) = F(θ_1) − F(θ_2) − ⟨θ_1 − θ_2, ∇F(θ_2)⟩ (1) = ∫_{θ_2}^{θ_1} ⟨∇F(t) − ∇F(θ_2), dt⟩ (2) = ∫_{η_1}^{η_2} ⟨∇F*(t) − ∇F*(η_1), dt⟩ (3) = B_{F*}(η_2 : η_1) (4), with η = ∇F(θ).
  • 24. Bregman divergence: Geometric interpretation (II). Potential function f, graph plot F : (x, f(x)). B_f(p‖q) = f(p) − f(q) − (p − q)f′(q). B_f(·‖q): vertical distance between the hyperplane H_q tangent to F at the lifted point q̂, and the translated hyperplane at p̂. A generic helper is sketched below.
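A hypothetical generic helper, straight from the definition D_F(p : q) = F(p) − F(q) − ⟨p − q, ∇F(q)⟩, instantiated on three classical generators:

```python
import numpy as np

def bregman(F, gradF, p, q):
    """D_F(p : q) = F(p) - F(q) - <p - q, gradF(q)>, for strictly convex F."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return F(p) - F(q) - (p - q) @ gradF(q)

gens = {
    "squared Euclidean": (lambda x: 0.5*x@x, lambda x: x),
    "extended KL":       (lambda x: x@np.log(x) - x.sum(), np.log),
    "Itakura-Saito":     (lambda x: -np.log(x).sum(), lambda x: -1.0/x),
}
p, q = np.array([0.2, 0.8]), np.array([0.5, 0.5])
for name, (F, g) in gens.items():
    print(name, bregman(F, g, p, q), bregman(F, g, q, p))  # asymmetric in general
```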
  • 25. Total Bregman divergence (tBD). By analogy to least squares vs. total least squares: total Bregman divergence (tBD) [11, 34, 12], δ_f(x, y) = b_f(x, y) / √(1 + ‖∇f(y)‖²). Statistical robustness of tBD has been proved. (See the sketch after this slide.)
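A direct sketch of the tBD formula (same generator interface as the Bregman helper above; names ours):

```python
import numpy as np

def total_bregman(F, gradF, x, y):
    """delta_f(x, y) = B_f(x, y) / sqrt(1 + ||gradF(y)||^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b = F(x) - F(y) - (x - y) @ gradF(y)
    return b / np.sqrt(1.0 + gradF(y) @ gradF(y))

# Total squared Euclidean divergence: the gradient at y rescales the divergence.
F, g = lambda v: 0.5*v@v, lambda v: v
print(total_bregman(F, g, np.array([1.0, 2.0]), np.array([0.0, 0.0])))  # 2.5
print(total_bregman(F, g, np.array([1.0, 2.0]), np.array([3.0, 4.0])))  # rescaled
```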
  • 26. Bregman sided centroids [25, 21]. Bregman centroids = unique minimizers of average Bregman divergences (B_F is convex in its right argument): θ̄ = argmin_θ (1/n) Σ_{i=1}^n B_F(θ_i : θ) and θ̄′ = argmin_θ (1/n) Σ_{i=1}^n B_F(θ : θ_i). Right-sided: θ̄ = (1/n) Σ_{i=1}^n θ_i, the center of mass, independent of F. Left-sided: θ̄′ = (∇F)⁻¹((1/n) Σ_{i=1}^n ∇F(θ_i)) → generalized Kolmogorov-Nagumo f-means.
  • 27. Bregman divergences B_F and ∇F-means. Bijection between quasi-arithmetic means (∇F) and Bregman divergences B_F, with f = F′ and f⁻¹ = (F′)⁻¹: squared Euclidean distance (half squared loss), F = x²/2, f(x) = x → arithmetic mean (1/n) Σ_j x_j; Kullback-Leibler divergence (extended negative Shannon entropy), F = x log x − x, f(x) = log x, f⁻¹(x) = exp x → geometric mean (Π_j x_j)^{1/n}; Itakura-Saito divergence (Burg entropy), F = −log x, f(x) = −1/x → harmonic mean n/Σ_j (1/x_j). ∇F is strictly increasing (like a cumulative distribution function). (Sketch below.)
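A sketch of these ∇F-means as Kolmogorov-Nagumo f-means (the right-sided centroid is the plain center of mass; helper names ours):

```python
import numpy as np

def f_mean(gradF, inv_gradF, thetas, weights=None):
    """Left-sided Bregman centroid: (gradF)^{-1}( sum_i w_i gradF(theta_i) )."""
    thetas = np.asarray(thetas, float)
    w = (np.full(len(thetas), 1.0/len(thetas)) if weights is None
         else np.asarray(weights, float))
    return inv_gradF(np.sum(w[:, None] * gradF(thetas), axis=0))

x = np.array([[1.0], [4.0], [16.0]])
print(f_mean(lambda t: t, lambda e: e, x))         # arithmetic mean: 7
print(f_mean(np.log, np.exp, x))                   # geometric mean: 4
print(f_mean(lambda t: -1/t, lambda e: -1/e, x))   # harmonic mean: ~2.29
```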
  • 28. Bregman sided centroids [25]. The two sided centroids C̄ and C̄′ expressed in the two θ/η coordinate systems = 4 equations: C̄ : (θ̄, η̄′) with θ̄ = (1/n) Σ_{i=1}^n θ_i and η̄′ = ∇F(θ̄); C̄′ : (θ̄′, η̄) with η̄ = (1/n) Σ_{i=1}^n η_i and θ̄′ = ∇F*(η̄).
  • 29. Bregman centroids and Jacobian fields. The centroid θ zeroes the left-sided Jacobian vector field ∇_θ Σ_t w_t A(θ : η_t): the sum of tangent vectors of geodesics from the centroid to the points is 0. Since ∇_θ A(θ : η′) = η − η′, we get ∇_θ (Σ_t w_t A(θ : η_t)) = η − η̄ (with Σ_t w_t = 1 and η̄ = Σ_t w_t η_t).
  • 30. Bregman information [25]. Bregman information = minimum of the loss function: I_F(P) = (1/n) Σ_{i=1}^n B_F(θ_i : θ̄) = (1/n) Σ_{i=1}^n [F(θ_i) − F(θ̄) − ⟨θ_i − θ̄, ∇F(θ̄)⟩] = (1/n) Σ_{i=1}^n F(θ_i) − F(θ̄), since (1/n) Σ_{i=1}^n ⟨θ_i − θ̄, ∇F(θ̄)⟩ = 0. This equals J_F(θ_1, ..., θ_n), the Jensen diversity index (e.g., Jensen-Shannon for F(x) = x log x). For the squared Euclidean distance, Bregman information = cluster variance; for the Kullback-Leibler divergence, Bregman information is related to mutual information.
  • 31. Bregman k-means clustering [4]. Bregman k-means: find k centers C = {C_1, ..., C_k} that minimize the loss function L_F(P : C) = Σ_{P∈P} B_F(P : C), with B_F(P : C) = min_{i∈{1,...,k}} B_F(P : C_i) → generalizes Lloyd's quadratic error in Vector Quantization (VQ). L_F(P : C) = I_F(P) − I_F(C), where I_F(P) is the total Bregman information, I_F(C) the between-cluster Bregman information, and L_F(P : C) the within-cluster Bregman information: total Bregman information = within-cluster + between-cluster Bregman information.
  • 32. Bregman k-means clustering [4]. I_F(P) = L_F(P : C) + I_F(C): Bregman clustering amounts to finding the partition C* that minimizes the information loss, L_F* = L_F(P : C*) = min_C (I_F(P) − I_F(C)). Bregman k-means: initialize distinct seeds C_1 = P_1, ..., C_k = P_k; repeat until convergence: assign each point P to its closest centroid, C_i = {P ∈ P | B_F(P : C_i) ≤ B_F(P : C_j) ∀j ≠ i}; update the cluster centroids by taking their centers of mass, C_i = (1/|C_i|) Σ_{P∈C_i} P. The loss function monotonically decreases and converges to a local optimum. (Extends to weighted point sets using barycenters; a sketch follows.)
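A minimal Lloyd-style sketch under stated assumptions (right-sided assignment B_F(p : c), center-of-mass update, random distinct seeds; function names ours):

```python
import numpy as np

def bregman_kmeans(points, k, F, gradF, iters=100, rng=None):
    """Assign with B_F(p : c); update each center as the mean of its cluster,
    the exact right-sided centroid [4]. Converges to a local optimum."""
    rng = np.random.default_rng() if rng is None else rng
    P = np.asarray(points, float)
    C = P[rng.choice(len(P), size=k, replace=False)]      # distinct seeds
    div = lambda p, c: F(p) - F(c) - (p - c) @ gradF(c)   # B_F(p : c)
    for _ in range(iters):
        labels = np.array([np.argmin([div(p, c) for c in C]) for p in P])
        newC = np.array([P[labels == j].mean(axis=0) if np.any(labels == j)
                         else C[j] for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels
```

With F(x) = ⟨x, x⟩/2 this recovers ordinary k-means; swapping in the Itakura-Saito or extended-KL generators changes the geometry of the cells while the loop stays unchanged.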
  • 33. Bregman k-means++ [1]: Careful seeding (only?!) (also called Bregman k-medians since it minimizes Σ_i B_F(p_i : x)). Extends the D²-initialization of k-means++: the seeding stage alone yields a probabilistically guaranteed global approximation factor. Bregman k-means++: choose C = {C_l} for l uniformly random in {1, ..., n}; while |C| < k, choose P ∈ P with probability B_F(P : C) / Σ_{i=1}^n B_F(P_i : C) = B_F(P : C) / L_F(P : C). → Yields an O(log k) approximation factor (with high probability); the constant in O(·) depends on the ratio of the min/max of ∇²F. (Sketch below.)
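A sketch of the seeding stage alone (D²-style sampling with B_F in place of the squared Euclidean distance; names ours):

```python
import numpy as np

def bregman_kmeanspp_seed(points, k, F, gradF, rng=None):
    """Pick each new seed with probability B_F(P : C) / L_F(P : C),
    i.e. proportional to the divergence to its closest current seed."""
    rng = np.random.default_rng() if rng is None else rng
    P = np.asarray(points, float)
    div = lambda p, c: F(p) - F(c) - (p - c) @ gradF(c)
    seeds = [P[rng.integers(len(P))]]                 # first seed: uniform
    while len(seeds) < k:
        d = np.array([min(div(p, c) for c in seeds) for p in P])
        seeds.append(P[rng.choice(len(P), p=d/d.sum())])
    return np.array(seeds)
```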
  • 34. Anisotropic Voronoi diagram (for MVN MMs) [10, 14]. Learning mixtures (talk of O. Schwander on EM, k-MLE). From the source color image (a), we build a 5D GMM with k = 32 components, and color each pixel with the mean color of the anisotropic Voronoi cell it belongs to (∼ a weighted squared Mahalanobis distance per center): log p_F(x; θ_i) ∝ −B_{F*}(t(x) : η_i) + k(x). Most-likely-component segmentation ≡ Bregman Voronoi diagram (squared Mahalanobis for Gaussians).
  • 35. Voronoi diagrams in dually flat spaces: Voronoi diagram and dual ⊥ Delaunay triangulation (general position).
  • 36. Bregman dual bisectors: Hyperplanes & hypersurfaces [5, 24, 27]. Right-sided bisector → hyperplane (θ-hyperplane): H_F(p, q) = {x ∈ X | B_F(x : p) = B_F(x : q)}, H_F : ⟨∇F(p) − ∇F(q), x⟩ + (F(p) − F(q) + ⟨q, ∇F(q)⟩ − ⟨p, ∇F(p)⟩) = 0. Left-sided bisector → hypersurface (η-hyperplane): H′_F(p, q) = {x ∈ X | B_F(p : x) = B_F(q : x)}, H′_F : ⟨∇F(x), q − p⟩ + F(p) − F(q) = 0.
  • 37. Visualizing Bregman bisectors, in primal coordinates θ (natural parameters) and dual coordinates η (expectation parameters). Examples: logistic loss (Bernoulli) in the source and gradient spaces, and Itakura-Saito in the source and dual gradient spaces, with sample points p, q and asymmetric divergence values D(p, q) ≠ D(q, p).
  • 38. Bregman Voronoi diagrams as minimization diagrams [5]. A subclass of affine diagrams which have all non-empty cells. Minimization diagram of the n functions D_i(x) = B_F(x : p_i) = F(x) − F(p_i) − ⟨x − p_i, ∇F(p_i)⟩ ≡ minimization of n linear functions H_i(x) = ⟨p_i − x, ∇F(p_i)⟩ − F(p_i) (the common F(x) term does not change the argmin).
  • 39. Bregman dual Delaunay triangulations: empty Bregman sphere property; Hellinger-like geodesic triangles. BVDs extend Euclidean Voronoi diagrams with similar complexity/algorithms. (Figures: Delaunay and exponential triangulations.)
  • 40. Non-commutative Bregman orthogonality. 3-point property (generalized law of cosines): B_F(p : r) = B_F(p : q) + B_F(q : r) − (p − q)⊤(∇F(r) − ∇F(q)). (pq)_θ is Bregman-orthogonal to (qr)_η iff B_F(p : r) = B_F(p : q) + B_F(q : r) (equivalent to ⟨θ_p − θ_q, η_r − η_q⟩ = 0). Extends the Pythagorean theorem: (pq)_θ ⊥_F (qr)_η. → ⊥_F is not commutative... except in the squared Euclidean/Mahalanobis case.
  • 41. Dually orthogonal Bregman Voronoi diagrams & triangulations. The ordinary Voronoi diagram is perpendicular to the Delaunay triangulation. Dual line-segment geodesics: (pq)_θ = {θ = λθ_p + (1 − λ)θ_q | λ ∈ [0, 1]} and (pq)_η = {η = λη_p + (1 − λ)η_q | λ ∈ [0, 1]}. Bisectors: B_θ(p, q) : ⟨x, θ_q − θ_p⟩ + F(θ_p) − F(θ_q) = 0; B_η(p, q) : ⟨x, η_q − η_p⟩ + F*(η_p) − F*(η_q) = 0. Dual orthogonality: B_η(p, q) ⊥ (pq)_η and (pq)_θ ⊥ B_θ(p, q).
  • 42. Dually orthogonal Bregman Voronoi & triangulations (illustration): B_η(p, q) ⊥ (pq)_η and (pq)_θ ⊥ B_θ(p, q).
  • 43. Simplifying mixtures: Kullback-Leibler projection theorem. For an exponential family mixture model p̃ = Σ_{i=1}^k w_i p_F(x; θ_i), the right-sided KL barycenter p̄* of the components is interpreted as the projection of the mixture model p̃ ∈ P onto the exponential family manifold E_F [32]: p̄* = argmin_{p∈E_F} KL(p̃ : p). (Figure: e-geodesic and m-geodesic projection of p̃, on the mixture sub-manifold inside the manifold of probability distributions.) Right-sided KL centroid = left-sided Bregman centroid.
  • 44. A simple proof: argmin_θ KL(m(x) : p_θ(x)). KL(m : p_θ) = E_m[log m] − E_m[log p_θ] = E_m[log m] − E_m[k(x)] − E_m[⟨t(x), θ⟩ − F(θ)], so the problem is ≡ argmax_θ E_m[⟨t(x), θ⟩] − F(θ) = argmax_θ ⟨E_m[t(x)], θ⟩ − F(θ). Now E_m[t(x)] = Σ_t w_t E_{p_{θ_t}}[t(x)] = Σ_t w_t η_t = η̄, and setting ∇F(θ) = η̄ gives θ̆ = ∇F*(η̄). (A Gaussian instance follows.)
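For a Gaussian mixture, η̄ = Σ_t w_t η_t collects the mixture's expected sufficient statistics (E[x], E[xx⊤]), so the KL projection onto the Gaussian family is plain moment matching. A sketch (helper name ours):

```python
import numpy as np

def kl_project_gmm_to_gaussian(w, mus, covs):
    """argmin_p KL(mixture : p) over single Gaussians = moment matching,
    i.e. theta = gradF*(eta_bar) expressed in (mean, covariance) form."""
    w = np.asarray(w, float)
    mus = np.asarray(mus, float)
    mu = np.sum(w[:, None] * mus, axis=0)                  # E_m[x]
    exx = sum(wi * (Si + np.outer(mi, mi))                 # E_m[x x^T]
              for wi, mi, Si in zip(w, mus, covs))
    return mu, exx - np.outer(mu, mu)

mu, Sigma = kl_project_gmm_to_gaussian(
    [0.5, 0.5], [[-2.0], [2.0]], [np.eye(1), np.eye(1)])
print(mu, Sigma)   # mean 0, variance 1 + 4 = 5
```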
  • 45. Left-sided or right-sided Kullback-Leibler centroids? Left/right Bregman centroids = right/left entropic centroids (for the KL of exponential families). The two sides have different (statistical) properties: the right-sided entropic centroid is zero-avoiding (covers the support of the pdfs); the left-sided entropic centroid is zero-forcing (captures the highest mode). (Figure: left-sided KL centroid (zero-forcing), symmetrized centroid, and right-sided KL centroid (zero-avoiding) for N_1 = N(−4, 4) and N_2 = N(5, 0.64).)
  • 46. Hierarchical clustering of GMMs (Burbea-Rao). Hierarchical clustering of GMMs wrt. the Bhattacharyya distance: simplify the number of components of an initial GMM. (Figures: (a) source, (b) k = 48, (c) k = 16.)
  • 47. Two symmetrizations of Bregman divergences. Jeffreys-Bregman divergences: S_F(p; q) = (B_F(p, q) + B_F(q, p))/2 = (1/2)⟨p − q, ∇F(p) − ∇F(q)⟩. Jensen-Bregman divergences (diversity index): J_F(p; q) = (B_F(p, (p+q)/2) + B_F(q, (p+q)/2))/2 = (F(p) + F(q))/2 − F((p+q)/2) = BR_F(p, q). Skew Jensen divergences [21, 29]: J_F^{(α)}(p; q) = αF(p) + (1 − α)F(q) − F(αp + (1 − α)q) = BR_F^{(α)}(p; q). (Jeffreys and Jensen-Shannon symmetrizations of the Kullback-Leibler divergence.)
  • 48. Burbea-Rao centroids (α-skewed Jensen centroids). Minimum average divergence: OPT : c = argmin_x Σ_{i=1}^n w_i J_F^{(α)}(x, p_i) = argmin_x L(x). Equivalent to minimizing E(c) = (Σ_{i=1}^n w_i α)F(c) − Σ_{i=1}^n w_i F(αc + (1 − α)p_i), a sum E = F + G of a convex F plus a concave G ⇒ Convex-ConCave Procedure (CCCP). Start from an arbitrary c_0 and iteratively update ∇F(c_{t+1}) = −∇G(c_t) ⇒ guaranteed convergence to a local minimum.
  • 49. ConCave Convex Procedure (CCCP): min_x E(x) = F(x) + G(x), iterating ∇F(c_{t+1}) = −∇G(c_t).
  • 50. Iterative algorithm for Burbea-Rao centroids. Apply the CCCP scheme: ∇F(c_{t+1}) = Σ_{i=1}^n w_i ∇F(αc_t + (1 − α)p_i), i.e., c_{t+1} = (∇F)⁻¹(Σ_{i=1}^n w_i ∇F(αc_t + (1 − α)p_i)). Yields arbitrarily fine approximations of the (skew) Burbea-Rao centroids and barycenters. Unique GLOBAL minimum when the divergence is separable [21]; unique GLOBAL minimum for the matrix mean [23] for the logDet divergence. (Sketch below.)
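A sketch of this CCCP fixed-point iteration (assumes ∇F and (∇F)⁻¹ are available in closed form; names ours):

```python
import numpy as np

def burbea_rao_centroid(points, w, gradF, inv_gradF, alpha=0.5,
                        iters=100, tol=1e-10):
    """c_{t+1} = (gradF)^{-1}( sum_i w_i gradF(alpha*c_t + (1-alpha)*p_i) )."""
    P = np.asarray(points, float)
    w = np.asarray(w, float)
    c = np.sum(w[:, None] * P, axis=0)        # start from the center of mass
    for _ in range(iters):
        c_new = inv_gradF(np.sum(w[:, None] * gradF(alpha*c + (1 - alpha)*P),
                                 axis=0))
        if np.linalg.norm(c_new - c) < tol:
            break
        c = c_new
    return c

# F(x) = sum_i (x_i log x_i - x_i): gradF = log, (gradF)^{-1} = exp.
pts = np.array([[0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])
print(burbea_rao_centroid(pts, [1/3, 1/3, 1/3], np.log, np.exp))
```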
  • 51. Information-geometric computing on statistical manifolds: Cramér-Rao lower bound and information geometry [13] (overview article); Chernoff information on the statistical exponential family manifold [19]; hypothesis testing and Bregman Voronoi diagrams [18]; Jeffreys centroid [20]; learning mixtures with k-MLE [17]; closed-form divergences for statistical mixtures [16].
  • 52. Summary. Information-geometric pattern recognition: statistical manifold (M, g): Rao's distance and Fisher-Rao curved Riemannian geometry; statistical manifold (M, g, ∇, ∇*): dually flat spaces, Bregman divergences, geodesics are straight lines in either the θ- or η-parameter space; ±1-geometry, a special case of α-geometry; clustering in dually flat spaces; software libraries: jMEF [8] (Java), pyMEF [31] (Python); ... but also many other geometries to explore: Hilbertian, Finsler [3], Kähler, Wasserstein, contact, symplectic, etc. (It is easy to require non-Euclidean geometry, but then the space is wide open!)
  • 53. Edited book (MIG) and proceedings (GSI).
  • 54. Thank you.
  • 55. Exponential families & statistical distances. Universal density estimators [2] generalizing Gaussians/histograms (a single EF density approximates any smooth density). Explicit formulas for: Shannon entropy, cross-entropy, and Kullback-Leibler divergence [26]; Rényi/Tsallis entropies and divergences [28]; Sharma-Mittal entropy and divergence [30], a 2-parameter family extending the extensive Rényi (for β → 1) and non-extensive Tsallis (for β → α) entropies: H_{α,β}(p) = (1/(1 − β))[(∫ p(x)^α dx)^{(1−β)/(1−α)} − 1], with α > 0, α ≠ 1, β ≠ 1; skew Jensen and Burbea-Rao divergences [21]; Chernoff information and divergence [15]; mixtures: total least squares, Jensen-Rényi, Cauchy-Schwarz divergence [16].
  • 56. Statistical invariance: Markov kernel. Probability family p(x; θ); (X, σ) and (X′, σ′) two measurable spaces; σ: a σ-algebra on X (non-empty, closed under complementation and countable union). Markov kernel = transition probability kernel K : X × σ′ → [0, 1]: for all E′ ∈ σ′, K(·, E′) is a measurable map; for all x ∈ X, K(x, ·) is a probability measure on (X′, σ′). A probability measure p on (X, σ) induces the probability measure Kp, with Kp(E′) = ∫_X K(x, E′) p(dx) for all E′ ∈ σ′.
  • 57. Space of Bregman spheres and Bregman balls [5]. Dual Bregman balls (bounding Bregman spheres): Ball^r_F(c, r) = {x ∈ X | B_F(x : c) ≤ r} and Ball^l_F(c, r) = {x ∈ X | B_F(c : x) ≤ r}. Legendre duality: Ball^l_F(c, r) = (∇F)⁻¹(Ball^r_{F*}(∇F(c), r)). Illustration for the Itakura-Saito divergence, F(x) = −log x.
  • 58. Space of Bregman spheres: Lifting map [5]. F : x → x̂ = (x, F(x)), a hypersurface in R^{d+1}. H_p: tangent hyperplane at p̂, z = H_p(x) = ⟨x − p, ∇F(p)⟩ + F(p). A Bregman sphere σ lifts to σ̂ with supporting hyperplane H_σ : z = ⟨x − c, ∇F(c)⟩ + F(c) + r (parallel to H_c and shifted vertically by r); σ̂ = F ∩ H_σ. Conversely, the intersection of any hyperplane H with F projects onto X as a Bregman sphere: H : z = ⟨x, a⟩ + b → σ : Ball_F(c = (∇F)⁻¹(a), r = ⟨a, c⟩ − F(c) + b).
  • 59. Bibliographic references I. [1] Marcel R. Ackermann and Johannes Blömer. Bregman clustering for separable instances. In Scandinavian Workshop on Algorithm Theory (SWAT), pages 212–223, 2010. [2] Yasemin Altun, Alexander J. Smola, and Thomas Hofmann. Exponential families for conditional random fields. In Uncertainty in Artificial Intelligence (UAI), pages 2–9, 2004. [3] Marc Arnaudon and Frank Nielsen. Medians and means in Finsler geometry. LMS Journal of Computation and Mathematics, 15, 2012. [4] Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, 2005. [5] Jean-Daniel Boissonnat, Frank Nielsen, and Richard Nock. Bregman Voronoi diagrams. Discrete & Computational Geometry, 44(2):281–307, 2010. [6] Nikolai Nikolaevich Chentsov. Statistical Decision Rules and Optimal Inferences. Transactions of Mathematics Monographs, number 53, 1982. Published in Russian in 1972.
  • 60. Bibliographic references II. [7] José Manuel Corcuera and Federica Giummolé. A characterization of monotone and regular divergences. Annals of the Institute of Statistical Mathematics, 50(3):433–450, 1998. [8] Vincent Garcia and Frank Nielsen. Simplification and hierarchical representations of mixtures of exponential families. Signal Processing (Elsevier), 90(12):3197–3212, 2010. [9] Vincent Garcia, Frank Nielsen, and Richard Nock. Levels of details for Gaussian mixture models. In Asian Conference on Computer Vision (ACCV), volume 2, pages 514–525, 2009. [10] François Labelle and Jonathan Richard Shewchuk. Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh generation. In Proceedings of the nineteenth annual Symposium on Computational Geometry (SCG '03), pages 191–200, New York, NY, USA, 2003. ACM. [11] Meizhu Liu, Baba C. Vemuri, Shun-ichi Amari, and Frank Nielsen. Shape retrieval using hierarchical total Bregman soft clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(12):2407–2419, 2012. [12] Meizhu Liu, Baba C. Vemuri, Shun-ichi Amari, and Frank Nielsen. Total Bregman divergence and its applications to shape retrieval. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3463–3468, 2010.
  • 61. Bibliographic references III. [13] Frank Nielsen. Cramér-Rao lower bound and information geometry. In Rajendra Bhatia, C. S. Rajan, and Ajit Iqbal Singh, editors, Connected at Infinity II: A selection of mathematics by Indians. Hindustan Book Agency (Texts and Readings in Mathematics, TRIM). arXiv 1301.3578. [14] Frank Nielsen. Visual Computing: Geometry, Graphics, and Vision. Charles River Media / Thomson Delmar Learning, 2005. [15] Frank Nielsen. Chernoff information of exponential families. arXiv, abs/1102.2684, 2011. [16] Frank Nielsen. Closed-form information-theoretic divergences for statistical mixtures. In International Conference on Pattern Recognition (ICPR), pages 1723–1726, 2012. [17] Frank Nielsen. k-MLE: A fast algorithm for learning statistical mixture models. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2012. Preliminary technical report on arXiv.
  • 62. Bibliographic references IV. [18] Frank Nielsen. Hypothesis testing, information divergence and computational geometry. In F. Nielsen and F. Barbaresco, editors, Geometric Science of Information (GSI), volume 8085 of LNCS, pages 241–248. Springer, 2013. [19] Frank Nielsen. An information-geometric characterization of Chernoff information. IEEE Signal Processing Letters, 20(3):269–272, 2013. [20] Frank Nielsen. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Processing Letters, PP(99):1–1, 2013. [21] Frank Nielsen and Sylvain Boltz. The Burbea-Rao and Bhattacharyya centroids. IEEE Transactions on Information Theory, 57(8):5455–5466, August 2011. [22] Frank Nielsen and Vincent Garcia. Statistical exponential families: A digest with flash cards, 2009. arXiv:0911.4863.
  • 63. Bibliographic references V. [23] Frank Nielsen, Meizhu Liu, Xiaojing Ye, and Baba C. Vemuri. Jensen divergence based SPD matrix means and applications. In International Conference on Pattern Recognition (ICPR), 2012. [24] Frank Nielsen and Richard Nock. The dual Voronoi diagrams with respect to representational Bregman divergences. In International Symposium on Voronoi Diagrams (ISVD), pages 71–78, 2009. [25] Frank Nielsen and Richard Nock. Sided and symmetrized Bregman centroids. IEEE Transactions on Information Theory, 55(6):2048–2059, June 2009. [26] Frank Nielsen and Richard Nock. Entropies and cross-entropies of exponential families. In International Conference on Image Processing (ICIP), pages 3621–3624, 2010. [27] Frank Nielsen and Richard Nock. Hyperbolic Voronoi diagrams made easy. In International Conference on Computational Science and its Applications (ICCSA), volume 1, pages 74–80, Los Alamitos, CA, USA, March 2010. IEEE Computer Society. [28] Frank Nielsen and Richard Nock. On Rényi and Tsallis entropies and divergences for exponential families. arXiv, abs/1105.3259, 2011.
  • 64. Bibliographic references VI. [29] Frank Nielsen and Richard Nock. Skew Jensen-Bregman Voronoi diagrams. Transactions on Computational Science, 14:102–128, 2011. [30] Frank Nielsen and Richard Nock. A closed-form expression for the Sharma-Mittal entropy of exponential families. Journal of Physics A: Mathematical and Theoretical, 45(3), 2012. [31] Olivier Schwander and Frank Nielsen. PyMEF — A framework for exponential families in Python. In IEEE/SP Workshop on Statistical Signal Processing (SSP), 2011. [32] Olivier Schwander and Frank Nielsen. Learning mixtures by simplifying kernel density estimators. In Frank Nielsen and Rajendra Bhatia, editors, Matrix Information Geometry, pages 403–426, 2012. [33] Jose Seabra, Francesco Ciompi, Oriol Pujol, Josepa Mauri, Petia Radeva, and Joao Sanchez. Rayleigh mixture model for plaque characterization in intravascular ultrasound. IEEE Transactions on Biomedical Engineering, 58(5):1314–1324, 2011. [34] Baba Vemuri, Meizhu Liu, Shun-ichi Amari, and Frank Nielsen. Total Bregman divergence and its applications to DTI analysis. IEEE Transactions on Medical Imaging, 2011.