
Interaction Networks for Learning about Objects, Relations and Physics

Slides for my presentation at a reading group. I did not contribute to this study in any way; it was done by the researchers named on the first slide.
https://papers.nips.cc/paper/6418-interaction-networks-for-learning-about-objects-relations-and-physics



  1. 1. Interaction Networks for Learning about Objects, Relations and Physics Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, Koray Kavukcuoglu (Google DeepMind) NIPS 2016 Reading Club
 Presenter: Ken Kuroki (@enuroi)
  2. 2. Background & Purpose • There have been several attempts to learn physical dynamics.
 (rigid bodies, fluid dynamics, 3D trajectories, etc.) • This study aims to construct a general-purpose, learnable physics engine.
 (one that can learn novel physical systems)
  3. 3. Model at a Glance
 [Diagram: the states o1,t and o2,t of objects O1 and O2, together with their relation r, are fed to fR, which outputs an effect et+1; fO then takes o2,t and et+1 and predicts the next state o2,t+1]
  4. 4. Model in Detail 1
 Example with 3 objects and 2 relations (each row is one object, each column one relation):
 Rr = [0 0; 1 1; 0 0] (receiver matrix)
 Rs = [1 0; 0 0; 0 1] (sender matrix)
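The one-hot matrices above can be built mechanically from lists of receiver and sender indices. A minimal NumPy sketch (the function name and argument layout are my own, not from the paper):

```python
import numpy as np

def relation_matrices(receivers, senders, n_objects):
    """Build the one-hot receiver (Rr) and sender (Rs) matrices.

    receivers, senders: lists of object indices, one entry per relation.
    Returns two (n_objects x n_relations) binary matrices.
    """
    n_relations = len(receivers)
    Rr = np.zeros((n_objects, n_relations))
    Rs = np.zeros((n_objects, n_relations))
    for k, (r, s) in enumerate(zip(receivers, senders)):
        Rr[r, k] = 1.0  # relation k is received by object r
        Rs[s, k] = 1.0  # relation k is sent by object s
    return Rr, Rs

# Example matching the slide: 3 objects, 2 relations,
# object 1 receives both; objects 0 and 2 are the senders.
Rr, Rs = relation_matrices(receivers=[1, 1], senders=[0, 2], n_objects=3)
```

Each column of Rr (Rs) selects the receiver (sender) of one relation, so the products O·Rr and O·Rs later pick out per-relation state columns.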
  5. 5. Model in Detail 2
 NO: number of objects; NR: number of relations
 bk = <oi, oj, rk> (the marshalling function m rearranges the objects and relations into per-relation interaction terms)
 e: effects, possibly multiple per object; c: effects aggregated by a
  6. 6. Implementation 1
 O: Ds × NO matrix whose columns are the objects' state vectors
 Rr, Rs: NO × NR binary receiver and sender matrices
 Ra: DR × NR matrix of relation attributes
  7. 7. Implementation 2
 m(G) = [O·Rr; O·Rs; Ra] = B = [b1, b2, ..., bNR], a (2Ds + DR) × NR matrix
 Applying fR to each column of B gives the effects E = [e1, e2, ..., eNR]
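The marshalling step amounts to two matrix products and a stack. A sketch under the shape conventions on this slide (variable names are mine):

```python
import numpy as np

def marshall(O, Rr, Rs, Ra):
    """m(G): stack receiver states, sender states, and relation
    attributes into B, a ((2*Ds + DR) x NR) matrix whose k-th column
    is the interaction term b_k = <o_receiver(k); o_sender(k); r_k>."""
    return np.concatenate([O @ Rr, O @ Rs, Ra], axis=0)

# Toy shapes: Ds=5 state dims, 3 objects, 2 relations, DR=1 attribute
O = np.arange(15.0).reshape(5, 3)               # column j = state of object j
Rr = np.array([[0, 0], [1, 1], [0, 0]], float)  # both relations -> object 1
Rs = np.array([[1, 0], [0, 0], [0, 1]], float)  # senders: objects 0 and 2
Ra = np.zeros((1, 2))
B = marshall(O, Rr, Rs, Ra)                     # shape (2*5 + 1, 2)
```

Because Rr and Rs are one-hot per column, O @ Rr simply copies each relation's receiver state into the corresponding column of B.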
  8. 8. Implementation 3
 a(G, X, E): Ē = E·Rr^T sums each object's incoming effects; C = [O; X; Ē], a (Ds + DX + DE) × NO matrix
 Applying fO to each column of C gives the predictions P = Ot+1
 fA reads out a DA-length vector of abstract quantities (e.g. free energy)
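The aggregation a can likewise be sketched with a single matrix product: multiplying E by Rr transposed sums, for every object, the effects of all relations it receives (function and variable names are mine):

```python
import numpy as np

def aggregate(O, X, E, Rr):
    """a(G, X, E): build C = [O; X; E_bar], where E_bar = E @ Rr.T
    sums each object's incoming effects.  fO is then applied to each
    of C's NO columns to predict the next object states."""
    E_bar = E @ Rr.T    # (DE x NO): per-object sum of received effects
    return np.concatenate([O, X, E_bar], axis=0)

# Toy shapes: Ds=2, DX=1, DE=1, 3 objects, 2 relations (both received
# by object 1, carrying effects 1.0 and 2.0)
O = np.zeros((2, 3))
X = np.zeros((1, 3))
E = np.array([[1.0, 2.0]])
Rr = np.array([[0, 0], [1, 1], [0, 0]], float)
C = aggregate(O, X, E, Rr)
```

In the toy example, object 1's aggregated effect is 1.0 + 2.0 = 3.0 while the other objects receive nothing, which is exactly what the one-hot structure of Rr encodes.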
  9. 9. Architecture • MLPs (with bias and ReLU) Found by hyperparameter search:
 • fR: four 150-unit hidden layers, output length 50
 • fO: one 100-unit hidden layer, output length 2 (the x and y velocities)
 • fA: one 25-unit hidden layer
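As a concrete illustration of those shapes, here is a randomly initialized (untrained) ReLU MLP in NumPy with fR's stated architecture; only the layer sizes come from the slide, everything else is a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random weights/biases for an MLP with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros((m, 1)))
            for n, m in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = W @ x + b
        if i < len(params) - 1:          # ReLU on all but the last layer
            x = np.maximum(x, 0.0)
    return x

# f_R: input is one interaction term b_k of length 2*Ds + DR,
# four 150-unit hidden layers, 50-dimensional effect output
Ds, DR = 5, 1
f_R = mlp([2 * Ds + DR, 150, 150, 150, 150, 50])
e_k = forward(f_R, np.ones((2 * Ds + DR, 1)))   # one effect column
```

Applying `forward` to every column of B (from the previous slide) would produce the effect matrix E.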
  10. 10. Optimization • Used Adam
 Learning rate 0.001, scaled down by a factor of 0.8 every 40 epochs • L2 regularization
 (penalty factor chosen by grid search)
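Reading the slide's schedule as a step decay applied every 40 epochs (my interpretation of "downscaled by 0.8 for 40 epochs"), the learning rate would evolve as:

```python
def learning_rate(epoch, base_lr=0.001, factor=0.8, every=40):
    """Step schedule: multiply the base rate by `factor`
    once per `every` completed epochs."""
    return base_lr * factor ** (epoch // every)
```

So epochs 0-39 train at 0.001, epochs 40-79 at 0.0008, and so on.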
  11. 11. Training Simulated 2,000 scenes over 1,000 time steps each • Training: 1 million samples, for 2,000 epochs (mini-batches of 100 to balance the distributions) • Validation: 200k samples • Test: 200k samples
  12. 12. Experiments 1. N-body 2. Bouncing balls 3. String
  13. 13. Comparison Alternative models: 1. Constant velocity (output = input) 2. MLP (two 300-unit hidden layers)
 input: a flattened vector of all the input data 3. Interaction Network without E (the interaction term)
  14. 14. Results
  15. 15. Discussion 1. Performed better than the alternatives 2. The baseline MLP could not effectively learn interactions 3. May help in understanding the "intuitive physics engine" in humans 4. Potential to extend the model
  16. 16. Presenter's Comments 1. Can it be applied to larger systems?
 (in terms of time and memory) 2. It could probably be parallelized 3. Is it really advantageous over the alternatives?
