
[DL輪読会] Deep Learning with Implicit Gradients


Published on 2019/09/27
Deep Learning JP:
http://deeplearning.jp/seminar-2/



  1. 1. Deep Learning with Implicit Gradients
 Shohei Taniguchi, Matsuo Lab (M1)
  2. 2. Papers covered today
 • Meta-Learning with Implicit Gradients
 ‣ iMAML: applies implicit differentiation to the MAML inner update
 • RNNs Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?
 ‣ ERNN
  3. 3. Outline
 1. Implicit functions and the implicit function theorem
 2. Warm-up: implicit differentiation in an existing method
 ‣ Implicit Reparameterization Gradients
 3. Meta-Learning with Implicit Gradients
 4. RNNs Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?
  4. 4. (section divider, no text)
  5. 5. Explicit vs. implicit functions
 • Explicit function: y is given directly as y = f(x), e.g. the quadratic y = ax² + bx + c; an ordinary NN forward pass has this form.
 • Implicit function: x and y are tied together by an equation f(x, y) = 0, e.g. the circle x² + y² = r², which determines y as a function of x only implicitly.
  6. 6. Implicit differentiation and the implicit function theorem
 • Implicit differentiation: if f(x, y) = 0, then dy/dx = −(∂f/∂x)/(∂f/∂y) = −fx/fy, so the derivative is available without solving for y explicitly.
 • Implicit function theorem: if f(x₀, y₀) = 0 and fy(x₀, y₀) ≠ 0, then there are neighborhoods U ∋ x₀ and V ∋ y₀ and a function g : U → V such that {(x, g(x)) | x ∈ U} = {(x, y) ∈ U × V | f(x, y) = 0}.
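A quick numerical check of the formula above, as a minimal Python sketch (the circle and the chosen point are arbitrary):

```python
import numpy as np

# Implicit differentiation on the circle f(x, y) = x^2 + y^2 - r^2 = 0:
# dy/dx = -f_x / f_y = -(2x) / (2y) = -x / y, which matches differentiating
# y = sqrt(r^2 - x^2) directly.

r, x = 2.0, 1.0
y = np.sqrt(r**2 - x**2)                      # a point on the upper half of the circle
dy_dx_implicit = -x / y                       # from implicit differentiation
dy_dx_explicit = -x / np.sqrt(r**2 - x**2)    # from the explicit formula for y
print(np.isclose(dy_dx_implicit, dy_dx_explicit))  # True
```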
  7. 7. Remarks on the implicit function theorem
 • Remark 1: the condition fy(x₀, y₀) ≠ 0 is essential. For x² + y² − r² = 0, around most points we can take y = √(r² − x²) (or its negative branch), but at (r, 0) we have fy(r, 0) = 2 × 0 = 0 and y = ±√(r² − x²) cannot be resolved into a single function of x.
 • Remark 2: in the multivariate case fy becomes a Jacobian matrix, and the nonzero condition becomes invertibility of that Jacobian.
  8. 8. How implicit differentiation is used (in today's papers)
 1. Differentiating through an inner optimization problem: treat its optimality condition as an implicit equation and differentiate it (iMAML).
 2. Defining a recurrence by an equilibrium condition and differentiating through that equation (ERNN).
  9. 9. How implicit differentiation is used (repeated roadmap)
 1. Differentiating through an inner optimization problem (iMAML), covered first
 2. Defining a recurrence by an equilibrium condition (ERNN)
  10. 10. Implicit Reparameterization Gradients
  11. 11. Implicit Reparameterization Gradients: overview
 • Accepted at NeurIPS 2018
 • Authors: Michael Figurnov, Shakir Mohamed, Andriy Mnih (DeepMind)
 • Extends the reparameterization trick to distributions without an invertible sampler, via implicit differentiation
 • A good warm-up before iMAML and ERNN
  12. 12. Reparameterization Trick
 • In a VAE we maximize 𝔼_q(z;ϕ)[log p(x|z)] − KL(q(z;ϕ) ‖ p(z)) with respect to ϕ, which requires gradients of an expectation under q.
 • The reparameterization trick standardizes the sample: ϵ = f(z; ϕ) = (z − μϕ)/σϕ with ϵ ∼ 𝒩(0, 1), i.e. z = f⁻¹(ϵ; ϕ) = μϕ + σϕ ϵ, so the randomness no longer depends on ϕ.
 • Then ∇ϕ 𝔼_q(z;ϕ)[log p(x|z)] = 𝔼_p(ϵ)[∇ϕ log p(x|z) |_{z = f⁻¹(ϵ; ϕ)}].
 • This relies on the standardization function f having a tractable inverse f⁻¹.
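For reference, a minimal PyTorch sketch of the standard trick on a toy Gaussian model; the parameters and likelihood are illustrative, not the paper's setup:

```python
import torch

# Standard reparameterization trick: sampling z = mu + sigma * eps with eps ~ N(0, 1)
# lets gradients with respect to the variational parameters flow through the sample.

mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(-1.0, requires_grad=True)
x = torch.tensor(1.0)

eps = torch.randn(1000)                      # eps ~ N(0, 1), independent of the parameters
z = mu + torch.exp(log_sigma) * eps          # z = f^{-1}(eps; phi)
log_p = -0.5 * (x - z) ** 2                  # toy log p(x|z) (Gaussian, up to a constant)
log_p.mean().backward()                      # Monte Carlo estimate of grad_phi E_q[log p(x|z)]
print(mu.grad, log_sigma.grad)
```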
  13. 13. Implicit Reparameterization Gradients
 • Take the CDF as the standardization function f, so ϵ ∼ U(0, 1); for many distributions (e.g. Gamma) the inverse z = f⁻¹(ϵ; ϕ) has no closed form, so the usual trick does not apply directly.
 • Expand the gradient with the chain rule instead:
 ∇ϕ 𝔼_q(z;ϕ)[log p(x|z)] = 𝔼_p(ϵ)[∇ϕ log p(x|z)] = 𝔼_p(ϵ)[∇z log p(x|z) ∇ϕ z]
 • The only missing piece is ∇ϕ z, the derivative of the sample with respect to the distribution parameters.
  14. 14. Implicit Reparameterization Gradients
 • View ϵ = f(z; ϕ), i.e. f(z; ϕ) − ϵ = 0, as an implicit equation defining z as a function of ϕ.
 • Implicit differentiation then gives ∇ϕ z = −∇ϕ f(z; ϕ) / ∇z f(z; ϕ) = −∇ϕ f(z; ϕ) / q(z; ϕ), because the derivative of the CDF with respect to z is the density q(z; ϕ).
 • Only forward evaluation and differentiation of f are needed; the inverse f⁻¹ never has to be computed.
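A minimal sketch of the resulting gradient for a Gamma sample, where the standardization function is the CDF; the finite-difference derivative of the CDF stands in for the closed-form expressions derived in the paper:

```python
import numpy as np
from scipy import stats

# Implicit reparameterization for a Gamma(alpha, 1) sample z:
# with the CDF F as standardization function, F(z; alpha) - eps = 0 defines z implicitly, so
# dz/dalpha = -(dF/dalpha) / (dF/dz) = -(dF/dalpha) / pdf(z; alpha).

def dz_dalpha(z, alpha, h=1e-5):
    dF_dalpha = (stats.gamma.cdf(z, alpha + h) - stats.gamma.cdf(z, alpha - h)) / (2 * h)
    return -dF_dalpha / stats.gamma.pdf(z, alpha)

alpha = 2.0
z = stats.gamma.rvs(alpha, random_state=0)
print(dz_dalpha(z, alpha))   # pathwise derivative of the sample w.r.t. alpha
```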
  15. 15. Meta-Learning with Implicit Gradients
  16. 16. Meta-Learning with Implicit Gradients: overview
 • Accepted at NeurIPS 2019
 • Authors: Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine (including an original MAML author)
 • Uses implicit gradients to avoid backpropagating through MAML's inner optimization
  17. 17. Model-Agnostic Meta-Learning (MAML)
 • Learn an initialization θ that adapts to each task i with one gradient step on its training set (one-step adaptation) and performs well on its test set:
 θ*_ML := argmin_{θ∈Θ} F(θ),  where F(θ) = (1/M) Σ_{i=1}^{M} ℒ(𝒜lg_i(θ), 𝒟_i^test)
 𝒜lg_i(θ) = θ − α ∇θ ℒ(θ, 𝒟_i^tr)
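A minimal PyTorch sketch of this one-step adaptation on a toy quadratic task (the "datasets" are just target vectors); it shows why the meta-gradient requires differentiating through 𝒜lg_i(θ):

```python
import torch

# MAML one-step adaptation: Alg_i(theta) = theta - alpha * grad_theta L(theta, D_tr),
# and the meta-loss L(Alg_i(theta), D_test) is differentiated through that update.

alpha = 0.1
theta = torch.zeros(3, requires_grad=True)

def task_loss(params, target):
    return ((params - target) ** 2).sum()

d_tr = torch.tensor([1.0, 0.0, -1.0])
d_test = torch.tensor([1.2, 0.1, -0.9])

grad_tr = torch.autograd.grad(task_loss(theta, d_tr), theta, create_graph=True)[0]
phi = theta - alpha * grad_tr            # Alg_i(theta), kept differentiable w.r.t. theta
meta_loss = task_loss(phi, d_test)       # outer objective L(Alg_i(theta), D_test_i)
meta_loss.backward()                     # meta-gradient, including second-order terms
print(theta.grad)
```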
  18. 18. Problems with MAML
 • Computing the meta-gradient ∇θ F(θ) requires differentiating through the adaptation procedure 𝒜lg_i(θ), which is costly in memory and computation.
 • The first-order approximation FOMAML simply drops these terms, but it is a biased approximation of the true meta-gradient.
 ‣ See also the earlier reading-group slides on FOMAML: https://www.slideshare.net/DeepLearningJP2016/dl1maml
 • iMAML instead computes ∇θ F(θ) via implicit differentiation, without backpropagating through 𝒜lg_i(θ).
  19. 19. iMAML: Inner Loop
 • Instead of a fixed number of gradient steps, the inner loop is defined as an optimization problem with a proximal regularizer that keeps the solution close to θ:
 𝒜lg⋆_i(θ) = argmin_{ϕ′∈Φ} G_i(ϕ′, θ),  where G_i(ϕ′, θ) = ℒ̂(ϕ′) + (λ/2) ‖ϕ′ − θ‖²
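A sketch of this proximally regularized inner loop on a toy quadratic task loss; λ, the optimizer, and the step counts are illustrative:

```python
import torch

# Inner loop: minimize G_i(phi, theta) = L_hat(phi) + (lam / 2) * ||phi - theta||^2 over phi.

def inner_loop(theta, task_loss, lam=1.0, steps=100, lr=0.1):
    phi = theta.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([phi], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        g = task_loss(phi) + 0.5 * lam * (phi - theta.detach()).pow(2).sum()
        g.backward()
        opt.step()
    return phi.detach()

theta = torch.zeros(3)
task_loss = lambda p: ((p - torch.tensor([1.0, -1.0, 0.5])) ** 2).sum()   # toy L_hat
phi_star = inner_loop(theta, task_loss)
print(phi_star)   # pulled toward the task optimum but regularized toward theta
```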
  20. 20. iMAML: Outer Loop
 • As in MAML, the outer loop updates θ with the meta-gradient:
 θ ← θ − η dF(θ)/dθ = θ − η (1/M) Σ_{i=1}^{M} (d𝒜lg_i(θ)/dθ) ∇ϕ ℒ_i(𝒜lg_i(θ)),  where ϕ = 𝒜lg_i(θ)
 • The hard part is the Jacobian d𝒜lg_i(θ)/dθ of the inner-loop solution; naively it requires backpropagating through the whole inner optimization ➡ compute it implicitly instead.
  21. 21. iMAML: Outer Loop (implicit gradient)
 • The adapted parameters ϕ_i ≡ 𝒜lg⋆_i(θ) = argmin_{ϕ′∈Φ} G_i(ϕ′, θ) satisfy the stationarity condition ∇_{ϕ′} G_i(ϕ′, θ) |_{ϕ′=ϕ_i} = 0, i.e. ∇ℒ̂(ϕ_i) + λ(𝒜lg⋆_i(θ) − θ) = 0.
 • Treating this as an implicit equation relating θ and 𝒜lg⋆_i(θ) and differentiating with respect to θ gives
 d𝒜lg⋆_i(θ)/dθ = (I + (1/λ) ∇²ℒ̂(ϕ_i))⁻¹
 • Only the adapted point ϕ_i is needed; nothing from the inner optimization path has to be stored.
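A small numerical check of this Jacobian formula on a toy quadratic task loss, where the inner solution is available in closed form:

```python
import numpy as np

# Check d Alg*(theta) / d theta = (I + (1/lam) * Hessian)^{-1} for
# L_hat(phi) = ||phi - a||^2 (Hessian = 2I), where the regularized inner problem
# has the closed-form solution phi* = (lam * theta + 2a) / (lam + 2).

lam = 1.0
a = np.array([1.0, -1.0, 0.5])
phi_star = lambda theta: (lam * theta + 2 * a) / (lam + 2)

theta = np.zeros(3)
implicit_jac = np.linalg.inv(np.eye(3) + (1 / lam) * 2 * np.eye(3))   # (I + H / lam)^{-1}

h = 1e-6                                                              # finite-difference check
fd_col0 = (phi_star(theta + h * np.eye(3)[0]) - phi_star(theta)) / h
print(np.allclose(implicit_jac[:, 0], fd_col0, atol=1e-4))            # True
```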
  22. 22. iMAML: Outer Loop (remaining issues)
 • Two practical problems remain:
 ① the inner loop only returns an approximate adapted point ϕ_i (e.g. a few steps of SGD), not the exact minimizer;
 ② the meta-gradient needs (I + (1/λ) ∇²ℒ̂(ϕ_i))⁻¹ ∇ϕ ℒ_i(𝒜lg_i(θ)), and explicitly forming and inverting this matrix is prohibitive for large networks (matrix inversion scales cubically).
 • Both are handled approximately, as shown on the next slides.
  23. 23. Conjugate gradient (CG) method
 • An iterative method for solving a linear system Ax = b ⋯(1) with A symmetric positive definite; equivalently, it minimizes f(x) = (1/2) xᵀAx − bᵀx, whose minimizer solves (1).
 • Iteration: x₀ = 0, r₀ = b − Ax₀, p₀ = r₀, and for each step
 α_k = r_kᵀ p_k / (p_kᵀ A p_k),  x_{k+1} = x_k + α_k p_k,  r_{k+1} = r_k − α_k A p_k,  p_{k+1} = r_{k+1} + (r_{k+1}ᵀ r_{k+1} / r_kᵀ r_k) p_k
 • Each step only needs the product A p_k, never A itself or its inverse.
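A minimal matrix-free implementation of the recursion above; only the product A p_k is ever computed:

```python
import numpy as np

# Conjugate gradient for Ax = b with A symmetric positive definite.
# `matvec` only has to return A @ v, which is what lets iMAML plug in
# Hessian-vector products instead of forming the Hessian.

def conjugate_gradient(matvec, b, n_steps=5, tol=1e-10):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    for _ in range(n_steps):
        Ap = matvec(p)
        alpha = (r @ p) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(lambda v: A @ v, b), np.linalg.solve(A, b))   # should agree
```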
  24. 24. Computing the meta-gradient with CG
 • The per-task meta-gradient g_i = (I + (1/λ) ∇²ℒ̂(ϕ_i))⁻¹ ∇ϕ ℒ_i(𝒜lg_i(θ)) is obtained by solving the linear system (I + (1/λ) ∇²ℒ̂(ϕ_i)) g_i = ∇ϕ ℒ_i(𝒜lg_i(θ)) with CG.
 • CG only needs Hessian-vector products, and a small number of iterations (around 5) is reported to suffice.
 • The errors introduced by the approximate adapted point 𝒜lg_i(θ) (issue ① on slide 22) and by the CG residual r_k are analyzed in Appendix E of the paper.
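A sketch of the matrix-free matvec this solve needs, using a Hessian-vector product on a toy quadratic task loss; a CG routine written for torch tensors would take this matvec and the test-loss gradient and return g_i:

```python
import torch

# (I + (1/lam) * H) u, where H is the Hessian of the inner task loss at the adapted
# parameters, computed via a Hessian-vector product; the matrix is never formed.

def hvp(loss_fn, params, vec):
    grad = torch.autograd.grad(loss_fn(params), params, create_graph=True)[0]
    return torch.autograd.grad(grad @ vec, params)[0]

lam = 1.0
phi = torch.tensor([0.2, -0.3, 0.1], requires_grad=True)          # adapted parameters
task_loss = lambda p: ((p - torch.tensor([1.0, -1.0, 0.5])) ** 2).sum()
v = torch.tensor([1.0, 0.0, 0.0])                                 # stand-in for the test-loss gradient

matvec = lambda u: u + hvp(task_loss, phi, u) / lam               # (I + H / lam) u, matrix-free
print(matvec(v))   # for this quadratic loss H = 2I, so the result is 3 * v
```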
  25. 25. iMAML: summary of the algorithm
 • Inner loop: adapt to each task by (approximately) solving the proximally regularized problem.
 • Outer loop: the meta-gradient depends only on the adapted point, not on the path taken by the inner loop.
 ‣ MAML must use a differentiable inner procedure such as a few steps of gradient descent.
 ‣ iMAML can adapt with any optimizer, including Hessian-free second-order methods.
  26. 26. Computational and memory cost
 • iMAML's memory cost is O(1) in the number of inner-loop steps: only the adapted point is needed, not the optimization path.
 • Its compute overhead relative to FOMAML is the extra CG iterations for the linear solve.
 • MAML's memory grows with the number of inner-loop steps, since the whole trajectory must be stored (the slide marks the FOMAML entry with "??").
  27. 27. Experiments: Omniglot
 • iMAML is run with both gradient-descent and Hessian-free inner loops.
 • iMAML compares favorably with MAML, especially as the number of ways (classes) grows.
 • FOMAML falls behind in those harder settings.
  28. 28. Experiments: Mini-ImageNet
 • On Mini-ImageNet, Reptile (a first-order method like FOMAML) remains competitive.
 • The presenter marks this comparison with "??": the benefit of iMAML is less clear-cut here.
  29. 29. iMAML: summary
 • iMAML obtains the MAML meta-gradient via the implicit function theorem, using only the adapted parameters.
 • Compared with MAML, this avoids backpropagating through the inner loop, reduces memory cost, and allows arbitrary inner optimizers.
  30. 30. How implicit differentiation is used (recap)
 1. Differentiating through an inner optimization problem (iMAML), covered above
 2. Defining a recurrence by an equilibrium condition (ERNN), covered next
  31. 31. RNNs Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?
  32. 32. RNNs Evolving on an Equilibrium Manifold: overview
 • Authors: Anil Kag, Ziming Zhang, Venkatesh Saligrama (affiliations include MERL)
 • Rejected from NeurIPS 2019 (at the time of this talk)
 • Proposes a new RNN formulation aimed at the vanishing / exploding gradient problem
  33. 33. Vanishing / exploding gradients in RNNs
 • A standard RNN updates h_k = ϕ(U h_{k−1} + W x_k + b), with ϕ a saturating nonlinearity such as sigmoid or tanh.
 • Backpropagation through time multiplies the step Jacobians: ∂h_m/∂h_n = ∏_{m≥k>n} ∂h_k/∂h_{k−1} = ∏_{m≥k>n} ∇ϕ(U h_{k−1} + W x_k + b) U, so gradients vanish or explode over long sequences depending on ϕ and U.
 • LSTM and GRU alleviate the problem with gating, but do not remove it.
  34. 34. RNNs as ODEs
 • An RNN with a skip connection can be read as an Euler discretization of an ordinary differential equation (ODE):
 dh(t)/dt ≜ h′(t) = ϕ(U h(t) + W x_k + b) ⟹ h_k = h_{k−1} + η ϕ(U h_{k−1} + W x_k + b)
 • This is the same viewpoint as Neural ODEs; see the earlier reading-group slides: https://www.slideshare.net/DeepLearningJP2016/dlneural-ordinary-differential-equations
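A minimal sketch of this Euler view with made-up weights and sizes:

```python
import numpy as np

# An RNN step with a skip connection as one Euler step of
# dh/dt = phi(U h + W x_k + b) with step size eta.

rng = np.random.default_rng(0)
d_h, d_x, eta = 8, 4, 0.1
U = 0.1 * rng.normal(size=(d_h, d_h))
W = 0.1 * rng.normal(size=(d_h, d_x))
b = np.zeros(d_h)

def euler_rnn_step(h_prev, x_k):
    return h_prev + eta * np.tanh(U @ h_prev + W @ x_k + b)

h = np.zeros(d_h)
for x_k in rng.normal(size=(5, d_x)):      # a length-5 input sequence
    h = euler_rnn_step(h, x_k)
print(h)
```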
  35. 35. Equilibrium points of the ODE
 • Consider the equilibria of dh/dt = f(h, x), i.e. the solutions of f(h, x) = 0 ⋯(1).
 • By the implicit function theorem ➡ around a point (h₀, x₀) satisfying (1) with f_h(h₀, x₀) invertible, equation (1) locally defines h = g(x).
 • ERNN makes the hidden state evolve on this equilibrium manifold.
  36. 36. ERNN
 • ERNN uses the ODE h′(t) = ϕ(U (h(t) + h_{k−1}) + W x_k + b) − γ (h(t) + h_{k−1}) and defines the new state h_k by the equilibrium condition h′(t) = 0.
 • That condition is the implicit equation f(h_{k−1}, h) = ϕ(U (h + h_{k−1}) + W x_k + b) − γ (h + h_{k−1}) = 0.
 • Since f depends on h and h_{k−1} only through their sum, implicit differentiation gives ∂h/∂h_{k−1} = −(∂f/∂h_{k−1}) / (∂f/∂h) = −I whenever ∂f/∂h is invertible.
 ➡ The step-to-step Jacobian no longer involves repeated products of ∇ϕ and U, which is the source of vanishing / exploding gradients.
  37. 37. When is ∂f/∂h invertible?
 • ∂f/∂h = ∇ϕ(U (h + h_{k−1}) + W x_k + b) U − γI, so invertibility can be ensured by:
 1. using an activation ϕ with bounded derivative (sigmoid and tanh are fine), and
 2. constraining the weight matrix U (relative to γ) so that the first term cannot cancel the −γI term.
  38. 38. Computing the equilibrium in practice
 • The equilibrium is not solved exactly; a few damped fixed-point iterations (about 5) are enough:
 h_k^{(0)} = 0,  h_k^{(i+1)} = h_k^{(i)} + η_k^{(i)} [ϕ(U (h_k^{(i)} + h_{k−1}) + W x_k + b) − γ (h_k^{(i)} + h_{k−1})]
 • The step sizes η_k^{(i)} control the damping of the iteration.
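A minimal sketch of one ERNN step following this update, with made-up weights, γ, and a fixed step size η:

```python
import numpy as np

# One ERNN step: starting from h_k^(0) = 0, run a few damped iterations toward the
# equilibrium of f(h) = phi(U (h + h_prev) + W x_k + b) - gamma * (h + h_prev).

rng = np.random.default_rng(0)
d_h, d_x, gamma, eta = 8, 4, 1.0, 0.5
U = 0.1 * rng.normal(size=(d_h, d_h))
W = 0.1 * rng.normal(size=(d_h, d_x))
b = np.zeros(d_h)

def ernn_step(h_prev, x_k, n_iters=5):
    h = np.zeros(d_h)                                   # h_k^(0) = 0
    for _ in range(n_iters):
        f = np.tanh(U @ (h + h_prev) + W @ x_k + b) - gamma * (h + h_prev)
        h = h + eta * f                                 # h_k^(i+1) = h_k^(i) + eta * f
    return h

h = np.zeros(d_h)
for x_k in rng.normal(size=(5, d_x)):                   # a length-5 input sequence
    h = ernn_step(h, x_k)
print(h)
```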
  39. 39. Gradient behaviour on HAR-2 (figure: ‖∂h_T/∂h_1‖ for RNN vs. ERNN, log scale)
 • The standard RNN's gradient through time decays rapidly.
 • ERNN's gradient stays close to 1 across the whole sequence.
  40. 40. (figure comparing RNN and ERNN)
  41. 41. Benchmark results
 • ERNN reaches SoTA-level accuracy on the reported benchmarks.
  42. 42. ERNN: summary
 • An RNN whose hidden state is defined implicitly by an equilibrium condition rather than by an explicit update.
 • Directly targets the vanishing / exploding gradient problem and reports SoTA-level results.
 • Rejected from NeurIPS 2019 at the time of this talk; whether a revised version gets accepted is left open by the presenter.
  43. 43. Summary & impressions
 • Implicit differentiation provides gradients through equations that are never solved explicitly.
 • iMAML and ERNN are two recent examples of putting this classical tool to work in deep learning.
