On the Smallest Enclosing Riemannian Balls 
— On Approximating the Riemannian 1-Center — 
http://www.sonycsl.co.jp/person/nielsen/infogeo/RiemannMinimax/ 
Marc Arnaudon¹  Frank Nielsen²
¹Université de Bordeaux, France
²École Polytechnique & Sony CSL
e-mail: Frank.Nielsen@acm.org
Computational Geometry 46(1): 93-104 (2013) 
arXiv 1101.4718 
© 2013-14 Frank Nielsen, École Polytechnique & Sony Computer Science Laboratories
Introduction: Euclidean Smallest Enclosing Balls 
Given a d-dimensional point set P = {p1, ..., pn}, find the “smallest”
(with respect to volume ≡ radius ≡ inclusion) ball B = Ball(c, r) fully covering P:
$$c^* = \arg\min_{c \in \mathbb{R}^d} \max_{i=1,\dots,n} \|c - p_i\|.$$
◮ unique Euclidean circumcenter c*, the SEB [19]
◮ non-differentiable optimization problem [10]; c* lies on the farthest-point Voronoi diagram
Euclidean smallest enclosing balls (SEBs) 
◮ 1857: d = 2, smallest enclosing ball of P = {p1, ..., pn} (Sylvester [16])
◮ Randomized expected linear-time algorithm [19, 5] in fixed dimension (but hidden constant exponential in d)
◮ Core-set [3] approximation: $(1+\epsilon)$-approximation in $O(\frac{dn}{\epsilon} + \frac{1}{\epsilon^2})$ time in arbitrary dimension, $O(\frac{dn}{\epsilon^{4.5}} \log \frac{1}{\epsilon})$ [7]
◮ Many other algorithms and heuristics [14, 9, 17], etc. 
SEB also known as Minimum Enclosing Ball (MEB), minimax 
center, 1-center, bounding (hyper)sphere, etc. 
→ Applications in computer graphics (collision detection with ball 
cover proxies [15]), in machine learning (Core Vector 
Machines [18]), etc. 
Optimization and core-sets [3] 
Let c(P) denote the circumcenter of the SEB of P and r(P) its radius.
Given ε > 0, an ε-core-set is a subset C ⊂ P such that
P ⊆ Ball(c(C), (1 + ε) r(C))
⇔ expanding SEB(C) by a factor 1 + ε fully covers P.
Core-sets of optimal size ⌈1/ε⌉ exist, independent of the dimension d and of n. Note that a combinatorial basis for the SEB has size between 2 and d + 1 [19].
→ Core-sets find many applications for problems in high dimensions.
Euclidean SEBs from core-sets [2] 
Bădoiu-Clarkson algorithm based on core-sets [2, 3]:
BCA:
◮ Initialize the center c1 ∈ P = {p1, ..., pn}, and
◮ Iteratively update the current center using the rule
$$c_{i+1} \leftarrow c_i + \frac{f_i - c_i}{i+1},$$
where fi denotes the farthest point of P to ci:
$$f_i = p_s, \qquad s = \arg\max_{j=1,\dots,n} \|c_i - p_j\|.$$
⇒ gradient-descent method
⇒ (1 + ε)-approximation after ⌈1/ε²⌉ iterations: O(dn/ε²) time
⇒ Core-set: f1, ..., fl with l = ⌈1/ε²⌉
Euclidean SEBs from core-sets: Rewriting with # 
a #_t b: the point (1 − t)a + tb = a + t(b − a) on the line segment [ab].
D(x, y) = ‖x − y‖², D(x, P) = max_{y∈P} D(x, y)

Algorithm 1: BCA(P, l).
c1 ← choose a point of P at random;
for i = 1 to l − 1 do
    // farthest point from ci
    si ← argmax_{j=1..n} D(ci, pj);
    // update the center: walk on the segment [ci, p_si]
    c_{i+1} ← ci #_{1/(i+1)} p_si;
end
// Return the SEB approximation
return Ball(cl, r²_l = D(cl, P));

⇒ (1 + ε)-approximation after l = ⌈1/ε²⌉ iterations.
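A minimal Python sketch of this update rule (an illustration, not the authors' code; NumPy and the synthetic data in the usage comment are assumptions):

```python
import numpy as np

def bca(P, l):
    """Badoiu-Clarkson sketch: approximate the Euclidean 1-center of the
    rows of P (an n x d array) using l iterations."""
    rng = np.random.default_rng(0)
    c = P[rng.integers(len(P))]                # c_1: a random point of P
    for i in range(1, l):
        d2 = np.sum((P - c) ** 2, axis=1)      # squared distances to the current center
        f = P[int(np.argmax(d2))]              # farthest point f_i
        c = c + (f - c) / (i + 1)              # c_{i+1} = c_i #_{1/(i+1)} f_i
    r = np.sqrt(np.max(np.sum((P - c) ** 2, axis=1)))
    return c, r                                # Ball(c_l, r_l)

# Usage on synthetic data (eps = 0.1 gives l = ceil(1/eps^2) = 100 iterations):
# P = np.random.default_rng(1).normal(size=(1000, 5))
# c, r = bca(P, l=int(np.ceil(1 / 0.1 ** 2)))
```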
Bregman divergences (incl. squared Euclidean distance) 
SEB extended to Bregman divergences B_F(· : ·) [13]:
$$B_F(c : x) = F(c) - F(x) - \langle c - x, \nabla F(x)\rangle, \qquad B_F(c : \mathcal{X}) = \max_{x \in \mathcal{X}} B_F(c : x)$$
(Figure: B_F(p, q) = H_q − H′_q, the vertical gap at p̂ between the graph of F and its tangent hyperplane at q̂.)
⇒ Bregman divergence = remainder of a first order Taylor 
expansion. 
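As a quick illustration (a sketch, not taken from the slides; NumPy and the chosen generators are assumptions), the divergence can be coded directly from the definition; the generator F(x) = ‖x‖² recovers the squared Euclidean distance, and F(x) = Σ xᵢ log xᵢ − xᵢ recovers the extended Kullback-Leibler divergence:

```python
import numpy as np

def bregman(F, gradF, c, x):
    """B_F(c : x) = F(c) - F(x) - <c - x, grad F(x)>."""
    return F(c) - F(x) - np.dot(c - x, gradF(x))

# F(x) = ||x||^2 recovers the squared Euclidean distance:
F_sq, grad_sq = lambda x: float(np.dot(x, x)), lambda x: 2.0 * x
c, x = np.array([1.0, 2.0]), np.array([0.5, -1.0])
assert np.isclose(bregman(F_sq, grad_sq, c, x), np.sum((c - x) ** 2))

# F(x) = sum_i x_i log x_i - x_i (positive orthant) recovers the extended
# Kullback-Leibler divergence:
F_kl, grad_kl = lambda x: float(np.sum(x * np.log(x) - x)), np.log
p, q = np.array([0.2, 0.8]), np.array([0.5, 0.5])
assert np.isclose(bregman(F_kl, grad_kl, p, q),
                  np.sum(p * np.log(p / q) - p + q))
```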
Smallest enclosing Bregman ball [13] 
F* = convex conjugate of F, with (∇F)^{-1} = ∇F*.

Algorithm 2: MBC(P, l).
// Create the gradient point set (η-coordinates)
P′ ← {∇F(p) : p ∈ P};
g ← BCA(P′, l);
return Ball(cl = ∇F^{-1}(c(g)), rl = B_F(cl : P));

Guaranteed approximation algorithm, with an approximation factor depending on 1 / min_{x∈X} ‖∇²F(x)‖, ... but poor in practice:
$$\forall x \in \mathcal{X}, \quad S_F(x; \nabla F^{-1}(c(g))) \le \frac{(1+\epsilon)^2\, r'^*}{\min_{x \in \mathcal{X}} \|\nabla^2 F(x)\|},$$
with S_F(c; x) = B_F(c : x) + B_F(x : c) the symmetrized divergence.
Smallest enclosing Bregman ball [13] 
A better approximation in practice... 
Algorithm 3: BBCA(P, l).
c1 ← choose a point of P at random;
for i = 1 to l − 1 do
    // farthest point from ci wrt. B_F
    si ← argmax_{j=1..n} B_F(ci : pj);
    // update the center: walk on the η-segment [ci, p_si]_η
    c_{i+1} ← ∇F^{-1}(∇F(ci) #_{1/(i+1)} ∇F(p_si));
end
// Return the SEBB approximation
return Ball(cl, rl = B_F(cl : P));
θ-, η-geodesic segments in dually flat geometry. 
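A hedged Python sketch of BBCA (my own illustration, not the authors' implementation): the walk is a straight interpolation in the gradient (η) coordinates, mapped back through (∇F)^{-1} = ∇F*. The KL generator in the usage comment is just an example choice.

```python
import numpy as np

def bbca(P, l, gradF, gradF_inv, BF):
    """Bregman Badoiu-Clarkson sketch: the center walks on eta-geodesics,
    i.e., straight segments in the gradient (eta) coordinates, and is mapped
    back with (grad F)^{-1} = grad F*."""
    rng = np.random.default_rng(0)
    c = P[rng.integers(len(P))]                               # c_1
    for i in range(1, l):
        s = int(np.argmax([BF(c, p) for p in P]))             # farthest point wrt B_F(c : .)
        eta = gradF(c) + (gradF(P[s]) - gradF(c)) / (i + 1)   # eta-segment walk
        c = gradF_inv(eta)
    r = max(BF(c, p) for p in P)
    return c, r

# Example instantiation with the extended KL generator F(x) = sum x log x - x
# (an illustrative choice): grad F = log, (grad F)^{-1} = exp.
# BF_kl = lambda c, x: float(np.sum(c * np.log(c / x) - c + x))
# c, r = bbca(P_positive, l=100, gradF=np.log, gradF_inv=np.exp, BF=BF_kl)
```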
Basics of Riemannian geometry 
◮ (M, g): Riemannian manifold
◮ ⟨·, ·⟩_x: Riemannian metric tensor g, a positive-definite bilinear form on each tangent space T_xM (depending smoothly on x)
◮ ‖·‖_x: ‖u‖_x = ⟨u, u⟩_x^{1/2}, the associated norm on T_xM
◮ ρ(x, y): metric distance between two points of the manifold M (length space):
$$\rho(x, y) = \inf \int_0^1 \|\dot\varphi(t)\|\,\mathrm{d}t, \qquad \varphi \in C^1([0,1], M),\ \varphi(0) = x,\ \varphi(1) = y$$
Parallel transport wrt. the Levi-Civita metric connection ∇: ∇g = 0.
Basics of Riemannian geometry: Exponential map 
◮ Local map from the tangent space T_xM to the manifold, defined with geodesics (wrt ∇):
$$\forall x \in M,\ \mathcal{D}(x) \subseteq T_xM: \quad \mathcal{D}(x) = \{v \in T_xM : \gamma_v(1) \text{ is defined}\},$$
with γ_v the maximal (i.e., largest-domain) geodesic such that γ_v(0) = x and γ'_v(0) = v.
◮ Exponential map:
$$\exp_x(\cdot): \mathcal{D}(x) \subseteq T_xM \to M, \qquad \exp_x(v) = \gamma_v(1).$$
D(x) is star-shaped.
Basics of Riemannian geometry: Geodesics 
◮ Geodesic: smooth path which locally minimizes the distance between two points. (In general such a curve does not minimize it globally.)
◮ Given a vector v ∈ T_xM with base point x, there is a unique geodesic starting at x with speed v at time 0: t ↦ exp_x(tv), also written t ↦ γ_t(v).
◮ A geodesic on [a, b] is minimal if its length is less than or equal to that of any other curve joining its endpoints. For complete M (i.e., exp_x(v) defined for all v ∈ T_xM), for any x, y ∈ M there exists a minimal geodesic from x to y in time 1:
γ_·(x, y): [0, 1] → M, t ↦ γ_t(x, y), with γ_0(x, y) = x and γ_1(x, y) = y.
◮ U ⊆ M is convex if for any x, y ∈ U there exists a unique minimal geodesic γ_·(x, y) in M from x to y; this geodesic fully lies in U and depends smoothly on x, y, t.
Basics of Riemannian geometry: Geodesics 
◮ Geodesic γ(x, y): locally distance-minimizing curve linking x to y
◮ Speed vector γ'(t) parallel along γ:
$$\frac{D\gamma'(t)}{\mathrm{d}t} = \nabla_{\gamma'(t)}\gamma'(t) = 0$$
◮ When the manifold M is embedded in ℝ^d, the acceleration is normal to the tangent plane: γ''(t) ⊥ T_{γ(t)}M
◮ ‖γ'(t)‖ = c, a constant (say, unit).
⇒ Parameterization of curves with constant speed...
Basics of Riemannian geometry: Geodesics 
Constant-speed geodesic γ(t) with γ(0) = x and γ(ρ(x, y)) = y (constant speed 1, the unit of length):
x #_t y = m, the point on the geodesic from x to y such that ρ(x, m) = t × ρ(x, y).
For example, in Euclidean space:
x #_t y = (1 − t)x + ty = x + t(y − x) = m,
ρ_E(x, m) = ‖t(y − x)‖ = t‖y − x‖ = t × ρ(x, y), t ∈ [0, 1].
⇒ m interpreted as a mean (barycenter) between x and y.
Basics of Riemannian geometry: Injectivity radius 
Diffeomorphism from the tangent space to the manifold:
◮ Injectivity radius inj(M): largest r > 0 such that for all x ∈ M, the map exp_x(·) restricted to the open ball of radius r in T_xM is an embedding.
◮ Global injectivity radius: infimum of the injectivity radii over all points of the manifold.
Basics of Riemannian geometry: Sectional curvature 
Given x ∈ M and two non-collinear vectors u, v ∈ T_xM, the sectional curvature Sect(u, v) = K is a number which tells how the geodesics issued from x behave near x.
More precisely, the image by exp_x(·) of the circle centered at 0 of radius r > 0 in Span(u, v) has length
$$2\pi S_K(r) + o(r^3) \quad \text{as } r \to 0,$$
with
$$S_K(r) = \begin{cases} \dfrac{\sin(\sqrt{K}\, r)}{\sqrt{K}} & \text{if } K > 0, \\ r & \text{if } K = 0, \\ \dfrac{\sinh(\sqrt{-K}\, r)}{\sqrt{-K}} & \text{if } K < 0. \end{cases}$$
Positive, zero or negative curvatures...
Basics of Riemannian geometry: Alexandrov’s theorem 
Given an upper bound α² for the sectional curvatures, compare geodesic triangles with Alexandrov's theorem:
Let x1, x2, x3 ∈ M satisfy x1 ≠ x2, x1 ≠ x3 and
$$\rho(x_1, x_2) + \rho(x_2, x_3) + \rho(x_3, x_1) < 2 \min\left\{\mathrm{inj}(M), \frac{\pi}{\alpha}\right\},$$
where α > 0 is such that α² is an upper bound of the sectional curvatures. Let the minimizing geodesic from x1 to x2 and the minimizing geodesic from x1 to x3 make an angle θ at x1.
Denoting by S²_{α²} the 2-dimensional sphere of constant curvature α² (hence of radius 1/α) and ρ̃ the distance in S²_{α²}, consider points x̃1, x̃2, x̃3 ∈ S²_{α²} such that ρ(x1, x2) = ρ̃(x̃1, x̃2) and ρ(x1, x3) = ρ̃(x̃1, x̃3). Assume that the minimizing geodesic from x̃1 to x̃2 and the minimizing geodesic from x̃1 to x̃3 also make an angle θ at x̃1.
Then: ρ(x2, x3) ≥ ρ̃(x̃2, x̃3).
Basics of Riemannian geometry: Toponogov's theorem
Assume β > 0 is such that −β² is a lower bound for the sectional curvatures in M. Let x1, x2, x3 ∈ M satisfy x1 ≠ x2, x1 ≠ x3. Let the minimizing geodesic from x1 to x2 and the minimizing geodesic from x1 to x3 make an angle θ at x1.
Denoting by H²_{−β²} the hyperbolic 2-dimensional space of constant curvature −β² and ρ̃ the distance in H²_{−β²}, consider points x̃1, x̃2, x̃3 ∈ H²_{−β²} such that ρ(x1, x2) = ρ̃(x̃1, x̃2) and ρ(x1, x3) = ρ̃(x̃1, x̃3). Assume that the minimizing geodesic from x̃1 to x̃2 and the minimizing geodesic from x̃1 to x̃3 also make an angle θ at x̃1.
Then: ρ(x2, x3) ≤ ρ̃(x̃2, x̃3).
Basics of Riemannian geometry: First law of cosines 
In spherical/hyperbolic geometries: 
◮ If θ1, θ2, θ3 are the angles of a triangle in S²_{α²} and l1, l2, l3 are the lengths of the opposite sides, then
$$\cos\theta_3 = \frac{\cos(\alpha l_3) - \cos(\alpha l_1)\cos(\alpha l_2)}{\sin(\alpha l_1)\sin(\alpha l_2)}$$
◮ If θ1, θ2, θ3 are the angles of a triangle in H²_{−β²} and l1, l2, l3 are the lengths of the opposite sides, then
$$\cos\theta_3 = \frac{\cosh(\beta l_1)\cosh(\beta l_2) - \cosh(\beta l_3)}{\sinh(\beta l_1)\sinh(\beta l_2)}$$
Now ready for the “Smallest enclosing Riemannian ball” 
(M, g): complete Riemannian manifold
ν: probability measure on M
ρ(x, y): Riemannian metric distance
Assume the support supp(ν) is contained in a geodesic ball B(o, R).
For a measurable function f: M → ℝ,
$$\|f\|_{L^\infty(\nu)} = \inf\{a > 0 : \nu(\{y \in M : |f(y)| > a\}) = 0\}.$$
Let α > 0 be such that α² upper bounds the sectional curvatures in M, and set
$$R_\alpha = \frac{1}{2}\min\left\{\mathrm{inj}(M), \frac{\pi}{\alpha}\right\},$$
where inj(M) is the injectivity radius.
Riemannian SEB: Existence and uniqueness [1] 
Assume R < R_α. Consider the farthest-point map:
$$H: M \to [0, \infty], \qquad x \mapsto \|\rho(x, \cdot)\|_{L^\infty(\nu)}. \qquad (1)$$
Then H admits a unique minimizer c, with c ∈ B(o, R).
→ Moreover c ∈ CH(supp(ν)) [1] (convex hull).
⇒ For a measure: notion of centrality of the measure.
⇒ For a point set (discrete measure): the center is the circumcenter.
Example of Riemannian manifold: SPD space 
Space of Symmetric Positive Definite (SPD) matrices with:
◮ Riemannian distance
$$\rho(P, Q) = \|\log(P^{-1}Q)\|_F = \sqrt{\sum_{i=1}^d \log^2 \lambda_i},$$
where the λi are the eigenvalues of the matrix P^{-1}Q.
◮ Non-compact Riemannian symmetric space of non-positive curvature (aka. Cartan-Hadamard manifold).
◮ Any measure ν with bounded support satisfies R < R_α (choose α > 0 small enough).
⇒ The minimizer c of the farthest-point map H exists and is unique: the 1-center or minimax center of ν.
Generalizing BCA to Riemannian manifolds 
GeoA:
◮ Initialize the center with c1 ∈ P, and
◮ Iteratively update the current minimax center as
$$c_{i+1} = \mathrm{Geodesic}\left(c_i, f_i, \frac{1}{i+1}\right),$$
where fi denotes the farthest point of P to ci, and Geodesic(p, q, t) denotes the intermediate point m on the geodesic passing through p and q such that ρ(p, m) = t × ρ(p, q).
Generalizing BCA to Riemannian manifolds 
a #^M_t b: the point γ(t) on the geodesic line segment [ab] wrt M.

Algorithm 4: GeoA(P, l).
c1 ← choose a point of P at random;
for i = 1 to l − 1 do
    // farthest point from ci
    si ← argmax_{j=1..n} ρ(ci, pj);
    // update the center: walk on the geodesic line segment [ci, p_si]
    c_{i+1} ← ci #^M_{1/(i+1)} p_si;
end
// Return the SEB approximation
return Ball(cl, rl = ρ(cl, P) = max_j ρ(cl, pj));
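A generic Python sketch of GeoA (an illustration, not the authors' reference code; the caller is assumed to supply the distance and geodesic-interpolation callables):

```python
import numpy as np

def geoA(P, l, dist, geodesic):
    """Generic GeoA sketch: P is a list of manifold points, dist(p, q) the
    Riemannian distance, and geodesic(p, q, t) the point m with
    rho(p, m) = t * rho(p, q)."""
    rng = np.random.default_rng(0)
    c = P[rng.integers(len(P))]                 # c_1: a random point of P
    for i in range(1, l):
        d = [dist(c, p) for p in P]
        f = P[int(np.argmax(d))]                # farthest point from the current center
        c = geodesic(c, f, 1.0 / (i + 1))       # c_{i+1} = c_i #^M_{1/(i+1)} f_i
    r = max(dist(c, p) for p in P)
    return c, r                                 # Ball(c_l, r_l)

# Euclidean sanity check (geodesics are straight segments):
# pts = list(np.random.default_rng(1).normal(size=(100, 3)))
# c, r = geoA(pts, l=200,
#             dist=lambda p, q: float(np.linalg.norm(p - q)),
#             geodesic=lambda p, q, t: (1 - t) * p + t * q)
```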
Proof sketch 
Assume supp(ν) ⊂ B(o, R) and
$$R < R_\alpha = \frac{1}{2}\min\left\{\mathrm{inj}(M), \frac{\pi}{\alpha}\right\},$$
with α > 0 such that α² is an upper bound for the sectional curvatures in M.

Lemma. There exists τ > 0 such that for all x ∈ B(o, R),
$$H(x) - H(c) \ge \tau\,\rho^2(x, c).$$
Stochastic approximation for measures 
For x ∈ B(o, R), let t ↦ γ_t(v(x, ν)) be a unit-speed geodesic from γ_0(v(x, ν)) = x to a point y = γ_{H(x)}(v(x, ν)) of supp(ν) which realizes the maximum of the distance from x to supp(ν), i.e.,
$$v = v(x, \nu) = \frac{1}{H(x)}\exp_x^{-1}(y).$$
RieA:
Fix some δ > 0.
◮ Step 1: Choose a starting point x0 ∈ supp(ν) and let k = 0.
◮ Step 2: Choose a step size t_{k+1} ∈ (0, δ] and let x_{k+1} = γ_{t_{k+1}}(v(x_k, ν)); then repeat Step 2 with k ← k + 1.
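A Python sketch of RieA for a discrete measure (an illustration under stated assumptions: exp_map and log_map are hypothetical callables for the Riemannian exponential at x and its inverse, not part of the slides); the schedule t_k = δ/k satisfies the step-size conditions of the convergence theorem on the next slide:

```python
import numpy as np

def rieA(support, n_steps, dist, exp_map, log_map, delta):
    """RieA sketch for a measure supported on `support` (a list of manifold
    points).  Step sizes t_k = delta / k lie in (0, delta], sum to infinity,
    and have a finite sum of squares."""
    rng = np.random.default_rng(0)
    x = support[rng.integers(len(support))]          # x_0 in supp(nu)
    for k in range(1, n_steps + 1):
        d = [dist(x, y) for y in support]
        H = max(d)                                   # H(x_k): distance to the farthest point
        if H == 0.0:
            break
        y = support[int(np.argmax(d))]               # a farthest support point
        v = log_map(x, y) / H                        # unit-speed direction v(x_k, nu)
        t = delta / k                                # step size t_{k+1}
        x = exp_map(x, t * v)                        # x_{k+1} = gamma_{t_{k+1}}(v(x_k, nu))
    return x
```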
Convergence theorem for RieA 
a ∧ b: minimum operator, a ∧ b = min(a, b). Set
$$R_0 = \frac{R_\alpha - R}{2} \wedge \frac{R}{2}.$$
Assume α, β > 0 are such that −β² is a lower bound and α² an upper bound of the sectional curvatures in M. If δ satisfies
$$\delta \le \frac{R_0}{2} \wedge \frac{2}{\beta}\,\mathrm{arctanh}\left(\tanh(\beta R_0/2)\,\cos(\alpha R)\,\tan(\alpha R_0/4)\right)$$
and the step sizes (t_k)_{k≥1} ⊂ (0, δ] satisfy
$$\lim_{k\to\infty} t_k = 0, \qquad \sum_{k=1}^\infty t_k = +\infty \qquad \text{and} \qquad \sum_{k=1}^\infty t_k^2 < \infty,$$
then the sequence (x_k)_{k≥1} generated by the algorithm satisfies
$$\lim_{k\to\infty} \rho(x_k, c) = 0.$$
Case study I: Hyperbolic planar manifold 
In the Klein disk (projective model), geodesics are straight (Euclidean) line segments [11].
$$\rho(p, q) = \mathrm{arccosh}\left(\frac{1 - p^\top q}{\sqrt{(1 - p^\top p)(1 - q^\top q)}}\right),$$
where arccosh(x) = log(x + √(x² − 1)).
Here, we choose a non-constant-speed curve parameterization (not the constant-speed geodesic):
$$\tilde\gamma_t(p, q) = (1 - t)p + tq, \qquad t \in [0, 1].$$
⇒ Implement a dichotomy on γ̃_t(p, q) to get #_t.
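A possible Python realization of this dichotomy (a sketch under the Klein-distance formula above; the function names and tolerance are my own choices):

```python
import numpy as np

def klein_dist(p, q):
    """Riemannian distance between two points of the open unit ball (Klein model)."""
    num = 1.0 - np.dot(p, q)
    den = np.sqrt((1.0 - np.dot(p, p)) * (1.0 - np.dot(q, q)))
    return float(np.arccosh(num / den))

def klein_interp(p, q, t, tol=1e-10):
    """p #_t q in the Klein disk by dichotomy on the Euclidean-straight (but
    non-constant-speed) chord gamma~_s(p, q) = (1 - s) p + s q."""
    target = t * klein_dist(p, q)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if klein_dist(p, (1.0 - mid) * p + mid * q) < target:
            lo = mid
        else:
            hi = mid
    return (1.0 - lo) * p + lo * q

# Plugged into the geoA sketch above (points must lie inside the unit disk):
# c, r = geoA(points, l=200, dist=klein_dist, geodesic=klein_interp)
```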
(Figure: snapshots of GeoA in the Klein disk — initialization, first, second, third and fourth iterations, and the result after 10⁴ iterations.)
http://www.sonycsl.co.jp/person/nielsen/infogeo/RiemannMinimax/
Performance 
(Figure: two plots — (a) Klein distance between the current center and the minimax center, (b) radius of the smallest enclosing Klein ball anchored at the current center.)
Convergence rate of the GeoA algorithm on the hyperbolic disk for the first 200 iterations. Horizontal axis: number of iterations. Vertical axis: (a) the relative Klein distance between the current center and the optimal 1-center, (b) the radius of the smallest enclosing ball anchored at the current center.
Case study II: Space of SPD matrices 
◮ A d × d matrix M is Symmetric Positive Definite (SPD) ⇔ M = M⊤ and, for all x ≠ 0, x⊤Mx > 0.
◮ The set of d × d SPD matrices is a manifold of dimension d(d+1)/2 [8].
◮ The geodesic linking (matrix) point P to point Q:
$$\gamma_t(P, Q) = P^{\frac{1}{2}}\left(P^{-\frac{1}{2}} Q P^{-\frac{1}{2}}\right)^t P^{\frac{1}{2}},$$
where a matrix function h(M) is computed from the singular value decomposition M = UDV⊤ (with U and V unitary matrices and D = diag(λ1, ..., λd) a diagonal matrix of eigenvalues) as h(M) = U diag(h(λ1), ..., h(λd)) V⊤. For example, the matrix square root is computed as M^{1/2} = U diag(√λ1, ..., √λd) V⊤.
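A Python sketch of these SPD primitives (NumPy/SciPy assumed; for a symmetric positive-definite matrix the eigendecomposition plays the role of the SVD above), which plug directly into the geoA sketch given earlier:

```python
import numpy as np
from scipy.linalg import eigh

def spd_power(M, t):
    """Matrix power M^t of a symmetric positive-definite matrix, obtained by
    applying h(x) = x^t to the eigenvalues."""
    w, U = eigh(M)                        # M = U diag(w) U^T
    return (U * w ** t) @ U.T

def spd_geodesic(P, Q, t):
    """gamma_t(P, Q) = P^{1/2} (P^{-1/2} Q P^{-1/2})^t P^{1/2}."""
    P_half, P_ihalf = spd_power(P, 0.5), spd_power(P, -0.5)
    return P_half @ spd_power(P_ihalf @ Q @ P_ihalf, t) @ P_half

def spd_dist(P, Q):
    """rho(P, Q) = ||log(P^{-1} Q)||_F = sqrt(sum_i log^2 lambda_i(P^{-1} Q))."""
    lam = eigh(Q, P, eigvals_only=True)   # generalized eigenvalues of (Q, P), i.e. of P^{-1} Q
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

# Usage with the geoA sketch from Algorithm 4:
# c, r = geoA(spd_points, l=200, dist=spd_dist, geodesic=spd_geodesic)
```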
SPD space: Splitting the geodesic for operator #t 
In this case, finding t such that
$$\|\log((P^{-1}Q)^t)\|_F^2 = r^2\,\|\log(P^{-1}Q)\|_F^2, \qquad (2)$$
where ‖·‖_F denotes the Frobenius norm, yields t = r. Indeed, let λ1, ..., λd be the eigenvalues of P^{-1}Q; then ρ(P, Q) = ‖log(P^{-1}Q)‖_F = √(Σ_i log² λi), and equation (2) amounts to finding t such that
$$\sum_{i=1}^d \log^2 \lambda_i^t = t^2 \sum_{i=1}^d \log^2 \lambda_i = r^2 \sum_{i=1}^d \log^2 \lambda_i.$$
That is, t = r.
Case study II: Performance 
(Figure: two plots — (a) Riemannian distance between the current SPD center and the minimax SPD center, (b) radius of the smallest enclosing Riemannian ball anchored at the current SPD center.)
Convergence rate of the GeoA algorithm on the SPD Riemannian manifold (dimension 5) for the first 200 iterations.
Horizontal axis: number of iterations i.
Vertical axis:
◮ (a) the relative Riemannian distance ρ(c*, ci)/r* between the current center ci and the optimal 1-center c*,
◮ (b) the radius ri of the smallest enclosing SPD ball anchored at the current center.
Remark on SPD spaces and hyperbolic geometry 
◮ The space SPD(2) of 2 × 2 SPD matrices has dimension d = 3: a positive cone {(a, b, c) : a > 0, ab − c² > 0}.
◮ It can be peeled into sheets of dimension 2, each sheet corresponding to a constant value of the determinant of its elements [4]:
SPD(2) = SSPD(2) × ℝ⁺, where SSPD(2) = {(a, b, c) : a > 0, ab − c² = 1}.
◮ Map (a, b, c) to (x0 = (a+b)/2 ≥ 1, x1 = (a−b)/2, x2 = c) in the hyperboloid model [12], and to z = (a − b + 2ic)/(2 + a + b) in the Poincaré disk [12].
Conclusion: Smallest Riemannian Enclosing Ball 
◮ Generalized the Euclidean 1-center algorithm of [2] to Riemannian geometry
◮ Proved convergence under mild assumptions (for measures/point sets)
◮ Existence of Riemannian core-sets for optimization
◮ 1-center building block for k-center clustering [6]
◮ Can be extended to sets of Riemannian (geodesic) balls
Reproducible research code with interactive demos:
http://www.sonycsl.co.jp/person/nielsen/infogeo/RiemannMinimax/
Bibliographic references I 
Bijan Afsari.
Riemannian L^p center of mass: existence, uniqueness, and convexity.
Proceedings of the American Mathematical Society, 139:655–674, February 2011.
Mihai Bădoiu and Kenneth L. Clarkson.
Smaller core-sets for balls.
In SODA ’03: Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 801–802, Philadelphia, PA, USA, 2003. Society for Industrial and Applied Mathematics.
Mihai Bădoiu and Kenneth L. Clarkson.
Optimal core-sets for balls.
Computational Geometry: Theory and Applications, 40:14–22, May 2008.
Pascal Chossat and Olivier P. Faugeras.
Hyperbolic planforms in relation to visual edges and textures perception.
PLoS Computational Biology, 5(12), 2009.
Kaspar Fischer and Bernd Gärtner.
The smallest enclosing ball of balls: combinatorial structure and algorithms.
Int. J. Comput. Geometry Appl., 14(4-5):341–378, 2004.
Teofilo F. Gonzalez.
Clustering to minimize the maximum intercluster distance.
Theoretical Computer Science, 38(0):293–306, 1985.
Bibliographic references II 
Piyush Kumar, Joseph S. B. Mitchell, and E. Alper Yildirim. 
Approximate minimum enclosing balls in high dimensions using core-sets. 
ACM Journal of Experimental Algorithmics, 8, 2003. 
Serge Lang. 
Fundamentals of differential geometry, volume 191 of Graduate Texts in Mathematics. 
Springer-Verlag, New York, 1999. 
Thomas Larsson. 
Fast and tight fitting bounding spheres. 
In Proceedings of the Annual SIGRAD Conference, Stockholm, 2008. 
Frank Nielsen and Richard Nock. 
Approximating smallest enclosing balls with applications to machine learning. 
Int. J. Comput. Geometry Appl., 19(5):389–414, 2009. 
Frank Nielsen and Richard Nock. 
Hyperbolic Voronoi diagrams made easy. 
In International Conference on Computational Science and its Applications (ICCSA), volume 1, pages 74–80, Los Alamitos, CA, USA, March 2010. IEEE Computer Society.
Bibliographic references III 
Frank Nielsen and Richard Nock. 
Visualizing hyperbolic Voronoi diagrams. 
In Proceedings of the Thirtieth Annual Symposium on Computational Geometry, SOCG’14, pages 
90:90–90:91, New York, NY, USA, 2014. ACM. 
Richard Nock and Frank Nielsen. 
Fitting the smallest enclosing Bregman balls.
In 16th European Conference on Machine Learning (ECML), pages 649–656, October 2005. 
Jack Ritter. 
An efficient bounding sphere. 
In Graphics gems, pages 301–303. Academic Press Professional, Inc., 1990. 
Jonas Spillmann, Markus Becker, and Matthias Teschner. 
Efficient updates of bounding sphere hierarchies for geometrically deformable models. 
Journal of Visual Communication and Image Representation, 18(2):101–108, 2007. 
J. J. Sylvester. 
A question in the geometry of situation. 
Quarterly Journal of Pure and Applied Mathematics, 1:79, 1857. 
Bo Tian. 
Bouncing Bubble: A fast algorithm for Minimal Enclosing Ball problem. 
2012. 
Bibliographic references IV 
Ivor W. Tsang, Andras Kocsor, and James T. Kwok. 
Simpler core vector machines with enclosing balls. 
In Proceedings of the 24th International Conference on Machine Learning (ICML), pages 911–918, New 
York, NY, USA, 2007. ACM. 
Emo Welzl. 
Smallest enclosing disks (balls and ellipsoids). 
In H. Maurer, editor, New Results and New Trends in Computer Science, LNCS. Springer, 1991. 
