
# 情報幾何の応用と最近の機械学習の動向 (Applications of Information Geometry and Recent Trends in Machine Learning)


Published on November 3, 2018.

Slides from a talk given at the "Mathematics of Artificial Intelligence" (人工知能の数理) study-group meeting on 2018/11/3.

Published in: Education

### 情報幾何の応用と最近の機械学習の動向 (Applications of Information Geometry and Recent Trends in Machine Learning)

1. A space curve parametrized by arc length: $p(s) = (x(s), y(s), z(s))$, $a \le s \le b$, with unit tangent $e_1(s) = p'(s) = (x'(s), y'(s), z'(s))$ satisfying $e_1(s) \cdot e_1(s) = x'(s)^2 + y'(s)^2 + z'(s)^2 = 1$.
2. Differentiating the unit-norm condition, $0 = \frac{d}{ds}\bigl(e_1(s) \cdot e_1(s)\bigr) = 2\, e_1'(s) \cdot e_1(s)$, so $e_1'(s)$ is orthogonal to $e_1(s)$. The curvature is $\kappa(s) = \sqrt{e_1'(s) \cdot e_1'(s)} = \sqrt{x''(s)^2 + y''(s)^2 + z''(s)^2}$.
3. Frenet–Serret formulas: $\begin{pmatrix} e_1' \\ e_2' \\ e_3' \end{pmatrix} = \begin{pmatrix} 0 & \kappa & 0 \\ -\kappa & 0 & \tau \\ 0 & -\tau & 0 \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}$, where $\tau$ is the torsion.
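As a quick numerical check of the formulas above, the sketch below uses a circular helix (a standard example, not taken from the slides) parametrized by arc length, and verifies that $|e_1(s)| = 1$ and that $\kappa(s) = |e_1'(s)|$ matches the known constant value $r / (r^2 + h^2)$:

```python
import numpy as np

# Circular helix p(s) = (r cos(s/c), r sin(s/c), h s/c) with c = sqrt(r^2 + h^2)
# is parametrized by arc length; its curvature is the constant r / c^2.
r, h = 2.0, 1.0
c = np.hypot(r, h)  # sqrt(r^2 + h^2)

def p(s):
    return np.array([r * np.cos(s / c), r * np.sin(s / c), h * s / c])

# Finite-difference derivatives of p at a sample point s0.
eps, s0 = 1e-4, 0.7
e1 = (p(s0 + eps) - p(s0 - eps)) / (2 * eps)                 # e1(s) = p'(s)
e1_prime = (p(s0 + eps) - 2 * p(s0) + p(s0 - eps)) / eps**2  # e1'(s) = p''(s)

tangent_norm = np.linalg.norm(e1)   # should be 1 (arc-length parametrization)
kappa = np.linalg.norm(e1_prime)    # kappa(s) = sqrt(e1'(s) . e1'(s))
print(tangent_norm, kappa, r / c**2)
```

Here $r/c^2 = 2/5 = 0.4$, so the numerical curvature should agree with that value to finite-difference accuracy.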
4. Dual connections on a Riemannian manifold $(M, g)$: $X g(Y, Z) = g(\nabla_X Y, Z) + g(Y, \nabla^*_X Z)$ for all $X, Y, Z \in \mathcal{X}(M)$, where $\nabla^*$ is the connection dual to $\nabla$ with respect to $g$.
5.
6. Multivariate normal model: $y \sim N_d(y; \mu, \Sigma)$, where $N_d$ is the $d$-dimensional Gaussian density.
7. Reproducing property of an RKHS $\mathcal{H}$: $\langle f, k(\cdot, x) \rangle_{\mathcal{H}} = f(x)$ for all $x \in \mathcal{X}$ and all $f \in \mathcal{H}$.
8. A kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ on a set $\mathcal{X}$ is positive definite if it is symmetric, $k(x, y) = k(y, x)$ for all $x, y \in \mathcal{X}$, and for every $n \in \mathbb{N}$, $x_1, \dots, x_n \in \mathcal{X}$, $c_1, \dots, c_n \in \mathbb{R}$, $\sum_{i,j=1}^{n} c_i c_j\, k(x_i, x_j) \ge 0$.
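Both defining conditions can be checked numerically for a concrete kernel. The sketch below (an illustration assuming a Gaussian kernel, which is a standard positive-definite kernel) builds a Gram matrix and verifies symmetry and positive semidefiniteness:

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel k(x, y) = exp(-(x - y)^2 / (2 sigma^2)).
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
xs = rng.normal(size=8)

# Gram matrix K_ij = k(x_i, x_j) via broadcasting.
K = gauss_kernel(xs[:, None], xs[None, :])

symmetric = np.allclose(K, K.T)              # k(x, y) = k(y, x)
min_eig = np.linalg.eigvalsh(K).min()        # all sums c^T K c >= 0
print(symmetric, min_eig)
```

Non-negative eigenvalues of the Gram matrix are equivalent to the quadratic-form condition $\sum_{i,j} c_i c_j k(x_i, x_j) \ge 0$ holding for every coefficient vector.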
9. A symmetric matrix $M$ is positive definite if $z^T M z > 0$, and positive semidefinite if $z^T M z \ge 0$, for all nonzero $z$, where $z \equiv (z_1, z_2, \dots, z_n)^T$, $M \equiv (m_{ij})_{i,j=1}^{n}$, and $z^T M z = [z_1, z_2, \cdots, z_n] \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & & \ddots & \\ m_{n1} & & \cdots & m_{nn} \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{pmatrix}$.
10. Poster: "A Linear-Time Kernel Goodness-of-Fit Test", Wittawat Jitkrittum (Gatsby Unit, University College London), Wenkai Xu (Gatsby Unit, UCL), Zoltán Szabó (CMAP, École Polytechnique), Kenji Fukumizu (The Institute of Statistical Mathematics), Arthur Gretton (Gatsby Unit, UCL).
    - Summary: given $\{x_i\}_{i=1}^{n} \sim q$ (unknown) and a density $p$, test $H_0 : p = q$ vs. $H_1 : p \ne q$ quickly. The new multivariate goodness-of-fit test (FSSD) is (1) nonparametric: arbitrary, unnormalized $p$, $x \in \mathbb{R}^d$; (2) linear-time: $O(n)$ runtime; (3) interpretable: it tells where $p$ does not fit the data.
    - Previous work, Kernel Stein Discrepancy (KSD): let $\xi(x, v) := \frac{1}{p(x)} \nabla_x [k(x, v) p(x)] \in \mathbb{R}^d$ and define the Stein witness function $g(v) = \mathbb{E}_{x \sim q}[\xi(x, v)]$, where $g = (g_1, \dots, g_d)$ and each $g_i \in \mathcal{F}$, an RKHS with kernel $k$. Under some conditions, $\|g\|_{\mathcal{F}^d} = 0 \iff p = q$ [Chwialkowski et al., 2016; Liu et al., 2016]. The statistic $\mathrm{KSD}^2 = \|g\|^2_{\mathcal{F}^d} = \mathbb{E}_{x \sim q}\mathbb{E}_{y \sim q}\, h_p(x, y) \approx \frac{2}{n(n-1)} \sum_{i<j} h_p(x_i, x_j)$, where $h_p(x, y) := [\nabla_x \log p(x)]^T k(x, y) [\nabla_y \log p(y)] + \nabla_x \cdot \nabla_y k(x, y) + [\nabla_y \log p(y)]^T \nabla_x k(x, y) + [\nabla_x \log p(x)]^T \nabla_y k(x, y)$. KSD is nonparametric and needs no normalizer of $p$, but its $O(n^2)$ runtime is computationally expensive. The linear-time KSD (LKS) test [Liu et al., 2016], $\|g\|^2_{\mathcal{F}^d} \approx \frac{2}{n} \sum_{i=1}^{n/2} h_p(x_{2i-1}, x_{2i})$, runs in $O(n)$ but has high variance and low test power.
    - The Finite Set Stein Discrepancy (FSSD): evaluate the witness $g$ at $J$ locations $\{v_1, \dots, v_J\}$, giving $\mathrm{FSSD}^2 = \frac{1}{dJ} \sum_{j=1}^{J} \|g(v_j)\|_2^2$. Proposition (FSSD is a discrepancy measure), main conditions: (1) nice kernel: $k$ is $C_0$-universal and real analytic (Taylor series at any point converges), e.g., a Gaussian kernel; (2) vanishing boundary: $\lim_{\|x\| \to \infty} p(x) g(x) = 0$; (3) avoiding "blind spots": the locations are drawn from a distribution with a density. Then, for any $J \ge 1$, almost surely $\mathrm{FSSD}^2 = 0 \iff p = q$. FSSD is nonparametric, needs no normalizer of $p$, runs in $O(n)$, and has higher test power than LKS.
    - Model criticism with FSSD: optimize the locations $\{v_1, \dots, v_J\}$ and kernel bandwidth by $\arg\max\, \mathrm{FSSD}^2 / \sigma_{H_1}$ (runtime still $O(n)$); this procedure maximizes the true positive rate $P(\text{detect difference} \mid p \ne q)$. On 12K robbery events in Chicago in 2016, with the model $p$ a 10-component Gaussian mixture, the optimized location $v^*$ points to where the model does not fit well and is highly interpretable.
    - Bahadur slope and efficiency: the Bahadur slope measures how fast the p-value of a statistic goes to 0 under $H_1$ (higher is better), and the Bahadur efficiency is the ratio of two tests' slopes ($> 1$ means the first test is better). For $p = N(0, 1)$ and $q = N(\mu_q, 1)$, with the FSSD bandwidth fixed at $\sigma_k^2 = 1$: for all $\mu_q \ne 0$ there exists $v \in \mathbb{R}$ such that for all LKS bandwidths $\kappa^2 > 0$, the Bahadur efficiency $\mathrm{slope}^{(\mathrm{FSSD})}(\mu_q, v, \sigma_k^2) / \mathrm{slope}^{(\mathrm{LKS})}(\mu_q, \kappa^2) > 2$; FSSD is statistically more efficient than LKS.
    - Experiment, Restricted Boltzmann Machine: 40 binary hidden units, $d = 50$ visible units, significance level $\alpha = 0.05$; one weight of the model $p$ is perturbed to obtain $q$. Compared tests: FSSD-opt and FSSD-rand (proposed, $J = 5$ optimized/random locations), MMD-opt [Gretton et al., 2012] (state-of-the-art quadratic-time two-sample test), ME-opt [Jitkrittum et al., 2016] (linear-time two-sample test with optimized locations), KSD, and LKS. Key finding: FSSD ($O(n)$) and KSD ($O(n^2)$) have comparable power, and FSSD is much faster. Code: github.com/wittawatj/kernel-gof
11. Linear basis-function model: $y = w^T \phi(x)$. Example, a cubic polynomial $y = w_0 + w_1 x + w_2 x^2 + w_3 x^3$ with $\phi(x) = (1, x, x^2, x^3)^T$ and $w = (w_0, w_1, w_2, w_3)^T$; the model is linear in $w$ even though it is nonlinear in $x$.
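Because the model is linear in $w$, the cubic example above can be fit by ordinary least squares on the feature vectors $\phi(x)$. A minimal sketch (the true weights and noise level are illustrative choices, not from the slides):

```python
import numpy as np

# Linear basis-function model y = w^T phi(x), phi(x) = (1, x, x^2, x^3)^T.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 0.3, 2.0])   # (w0, w1, w2, w3), toy choice

x = rng.uniform(-1, 1, size=50)
Phi = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)   # N x 4 design matrix
y = Phi @ w_true + 0.01 * rng.normal(size=x.size)          # noisy targets

# Least squares in w: the cubic-in-x model is still a linear model in w.
w_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(w_hat)   # close to w_true
```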
12. Gaussian basis functions $\phi(x) = \exp\!\left(-\dfrac{(x - \mu)^2}{\sigma^2}\right)$, as an alternative to the polynomial basis $\phi_0(x) = 1$, $\phi_1(x) = x$, $\phi_2(x) = x^2$, $\phi_3(x) = x^3$.
13. Placing Gaussian basis functions at centers $\mu_m$ and fitting $y = w^T \phi(x)$ by choosing the weights $w$.
14. Curse of dimensionality: for multivariate inputs, $y = w^T \phi(x_1, x_2)$, the number of basis functions on a grid of 21 points per dimension grows as $21^2 = 441$, $21^3 = 9261$, $\dots$, $21^{10} = 16{,}679{,}880{,}978{,}201$.
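The $21^d$ growth quoted above is easy to verify directly:

```python
# Number of grid basis functions with 21 values per input dimension: 21^d.
counts = {d: 21 ** d for d in (2, 3, 10)}
print(counts)   # {2: 441, 3: 9261, 10: 16679880978201}
```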
15. In matrix form, the predictions over the training set are $\begin{pmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \vdots \\ \hat{y}_N \end{pmatrix} = \begin{pmatrix} \phi_0(x_1) & \phi_1(x_1) & \cdots & \phi_M(x_1) \\ \vdots & & \ddots & \vdots \\ \phi_0(x_N) & \phi_1(x_N) & \cdots & \phi_M(x_N) \end{pmatrix} \begin{pmatrix} w_0 \\ w_1 \\ \vdots \\ w_M \end{pmatrix}$.
16. Collecting $\hat{y} = (\hat{y}_1, \dots, \hat{y}_N)^T$, the design matrix $\Phi$ with entries $\Phi_{nj} = \phi_j(x_n)$, and $w = (w_0, \dots, w_M)^T$, this is compactly $\hat{y} = \Phi w$; the target model is likewise $y = \Phi w$.
17. Placing a prior $w \sim \mathcal{N}(0, \lambda^2 I)$ on the weights ($I$ the identity matrix), the covariance of $y = \Phi w$ is $\mathbb{E}[y y^T] - \mathbb{E}[y]\mathbb{E}[y]^T = \mathbb{E}[(\Phi w)(\Phi w)^T] = \Phi\, \mathbb{E}[w w^T]\, \Phi^T = \lambda^2 \Phi \Phi^T$, hence $y \sim \mathcal{N}(0, \lambda^2 \Phi \Phi^T)$.
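The identity $\mathrm{Cov}[y] = \lambda^2 \Phi \Phi^T$ can be confirmed by Monte Carlo sampling. A minimal sketch, assuming a small polynomial design matrix and a particular $\lambda$ (both toy choices for illustration):

```python
import numpy as np

# Prior w ~ N(0, lam^2 I) on the weights; then y = Phi w has
# covariance E[y y^T] = lam^2 Phi Phi^T (zero mean).
rng = np.random.default_rng(2)
lam = 0.7
x = np.linspace(-1, 1, 5)
Phi = np.stack([np.ones_like(x), x, x**2], axis=1)      # toy 5 x 3 design matrix

n_samples = 200_000
W = lam * rng.normal(size=(n_samples, Phi.shape[1]))    # draws of w ~ N(0, lam^2 I)
Y = W @ Phi.T                                           # each row is one y = Phi w

emp_cov = Y.T @ Y / n_samples                           # empirical E[y y^T]
max_err = np.max(np.abs(emp_cov - lam**2 * Phi @ Phi.T))
print(max_err)   # small Monte Carlo error
```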
18. Given inputs $(x_1, x_2, \cdots, x_N)$ and outputs $y = (y_1, y_2, \cdots, y_N)$, we work with the distribution $p(y)$.
19. The Gram matrix $K = \lambda^2 \Phi \Phi^T$, with $\phi(x) = (\phi_0(x), \cdots, \phi_M(x))^T$, has entries $K_{nm} = \lambda^2\, \phi(x_n)^T \phi(x_m)$.
20. Interpretation of $K = \lambda^2 \Phi \Phi^T$: the entry $K_{nm}$ measures the similarity of the feature vectors $\phi(x_n)$ and $\phi(x_m)$, so when $x_n$ and $x_m$ are similar, the outputs $y_n$ and $y_m$ are strongly correlated.
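The matrix identity and the entrywise form agree: computing $K = \lambda^2 \Phi \Phi^T$ in one shot gives the same result as evaluating $K_{nm} = \lambda^2 \phi(x_n)^T \phi(x_m)$ pair by pair, since each entry depends only on an inner product of feature vectors. A sketch with an assumed cubic feature map and toy $\lambda$:

```python
import numpy as np

lam = 0.5
x = np.linspace(-2, 2, 6)

def phi(t):
    # Feature map phi(t) = (1, t, t^2, t^3)^T, toy choice for illustration.
    return np.array([1.0, t, t**2, t**3])

Phi = np.stack([phi(t) for t in x])          # N x (M+1) design matrix

K = lam**2 * Phi @ Phi.T                     # K = lam^2 Phi Phi^T in one shot
K_pairwise = np.array([[lam**2 * phi(a) @ phi(b) for b in x] for a in x])

same = np.allclose(K, K_pairwise)            # entrywise K_nm = lam^2 phi(x_n)^T phi(x_m)
print(same)
```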