
- 1. How should we represent visual scenes? Common-Sense Core, Probabilistic Programs. Josh Tenenbaum, MIT Brain and Cognitive Sciences & CSAIL. Joint work with Noah Goodman, Chris Baker, Rebecca Saxe, Tomer Ullman, Peter Battaglia, Jess Hamrick, and others.
- 2. Core of common-sense reasoning. Human thought is structured around a basic understanding of physical objects, intentional agents, and their relations. “Core knowledge” (Spelke, Carey, Leslie, Baillargeon, Gergely…); intuitive theories (Carey, Gopnik, Wellman, Gelman, Gentner, Forbus, McCloskey…); primitives of lexical semantics (Pinker, Jackendoff, Talmy, Pustejovsky); visual scene understanding (everyone here…). From scenes to stories… The key questions: (1) What is the form and content of human common-sense theories of the physical world, intentional agents, and their interaction? (2) How are these theories used to parse visual experience into representations that support reasoning, planning, and communication?
- 3. A developmental perspective. A 3-year-old and her dad: Dad: “What’s this a picture of?” Sarah: “A bear hugging a panda bear.” … Dad: “What is the second panda bear doing?” Sarah: “It’s trying to hug the bear.” Dad: “What about the third bear?” Sarah: “It’s walking away.” But this feels too hard to approach now, so what about looking at younger children (e.g. 12 months or younger)?
- 4. Intuitive physics and psychology. Southgate and Csibra, 2009 (13-month-olds); Heider and Simmel, 1944.
- 5. Intuitive physics (Gupta, Efros, Hebert; Whiting et al.)
- 6. Intuitive psychology
- 7. Probabilistic generative models • Early 1990s to early 2000s – Bayesian networks: model the causal processes that give rise to observations; perform reasoning, prediction, and planning via probabilistic inference. – The problem: not sufficiently flexible or expressive.
- 8. Scene understanding as an inverse problem. The “inverse Pixar” problem: World state (t) → graphics → Image (t).
- 9. Scene understanding as an inverse problem. The “inverse Pixar” problem: physics: … World state (t-1) → World state (t) → World state (t+1) …; graphics: World state (t) → Image (t) for each t.
- 10. Probabilistic programs • Probabilistic models à la Laplace. – The world is fundamentally deterministic (described by a program), and perfectly predictable if we could observe all relevant variables. – Observations are always incomplete or indirect, so we put probability distributions on what we can’t observe. • Compare with Bayesian networks. – Thick nodes: programs defined over unbounded sets of objects, their properties, states, and relations, rather than traditional finite-dimensional random variables. – Thick arrows: programs capture fine-grained causal processes unfolding over space and time, not simply directed statistical dependencies. – Recursive: probabilistic programs can be arbitrarily manipulated inside other programs (e.g. perceptual inferences about entities that make perceptual inferences; entities with goals and plans regarding other agents’ goals and plans). • Compare with grammars or logic programs.
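The "thick nodes, thick arrows, inference by inversion" contrast with Bayesian networks can be illustrated with a toy probabilistic program. This is a minimal sketch, not code from the talk: the generative program samples an unbounded number of objects (a geometric prior plays the role of a "thick node"), renders them through a causal occlusion process (a "thick arrow"), and is inverted by conditioning on an observed image via rejection sampling. All function names and the choice of priors are illustrative assumptions.

```python
import random

def sample_scene():
    # "Thick node": the scene holds an unbounded number of objects,
    # drawn from a geometric prior rather than a fixed-size vector.
    n = 1
    while random.random() < 0.5:
        n += 1
    return [random.uniform(0, 10) for _ in range(n)]  # latent positions

def render(scene):
    # Stand-in "graphics" (the causal "thick arrow"): a coarse image
    # is just the set of occupied cells, so two objects falling in
    # the same cell occlude each other.
    return frozenset(int(x) for x in scene)

def infer_object_count(image, n_samples=20000):
    # Inference by inverting the program: run it forward and keep
    # only runs whose rendered image matches the observation.
    counts = []
    for _ in range(n_samples):
        scene = sample_scene()
        if render(scene) == image:
            counts.append(len(scene))
    return counts
```

Because occlusion loses information, the posterior over object count stays genuinely uncertain: an image with two occupied cells is most often explained by two objects, but three or more remain possible.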
- 11. Probabilistic programs for “inverse Pixar” scene understanding • World state: CAD++ • Graphics – Approximate rendering: simple surface primitives; rasterization rather than ray tracing (for each primitive, which pixels does it affect?); image features rather than pixels. – Probabilities: image noise, image features; unseen objects (e.g., due to occlusion).
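A hedged sketch of what "approximate rendering plus probabilities" could look like: rasterize simple rectangle primitives onto a pixel grid (for each primitive, mark which pixels it affects), then score an observed image under Gaussian pixel noise. The rectangle primitives, grid size, and Gaussian noise model are assumptions of this sketch, not the talk's implementation.

```python
import math

def rasterize(rects, width=8, height=8):
    # Approximate rendering: for each primitive (an axis-aligned
    # rectangle (x0, y0, x1, y1)), mark the pixels it affects --
    # rasterization rather than ray tracing.
    img = [[0.0] * width for _ in range(height)]
    for (x0, y0, x1, y1) in rects:
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                img[y][x] = 1.0
    return img

def log_likelihood(observed, rects, noise_sd=0.1):
    # Probabilistic part: each observed pixel is the rendered pixel
    # plus Gaussian image noise (an assumption of this sketch).
    rendered = rasterize(rects)
    ll = 0.0
    for row_o, row_r in zip(observed, rendered):
        for o, r in zip(row_o, row_r):
            ll += (-0.5 * ((o - r) / noise_sd) ** 2
                   - math.log(noise_sd * math.sqrt(2 * math.pi)))
    return ll
```

With a likelihood like this, candidate world states can be compared directly: the hypothesis whose rendering matches the observed pixels scores higher than a misplaced one.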
- 12. Probabilistic programs for “inverse Pixar” scene understanding • World state: CAD++ • Graphics • Physics – Approximate Newton (physical simulation toolkit, e.g. ODE): collision detection via a zone of interaction; collision response via transient springs; dynamics simulation only for objects in motion. – Probabilities: latent properties (e.g., mass, friction); latent forces.
- 13. Modeling stability judgments
- 14. Modeling stability judgments. physics: … World state (t-1) → World state (t) → World state (t+1) …; graphics: World state (t) → Image (t).
- 15. Modeling stability judgments. Same diagram, with graphics replaced by probabilistic approximate rendering.
- 16. Modeling stability judgments. (Build slide; same diagram as 15.)
- 17. Modeling stability judgments. Physics replaced by probabilistic approximate Newton; graphics by probabilistic approximate rendering.
- 18. Modeling stability judgments. As in 17, with the rendering noise interpreted as perceptual uncertainty.
- 19. Modeling stability judgments (Hamrick, Battaglia, Tenenbaum, CogSci 2011). Perception: approximate posterior with block positions normally distributed around ground truth, subject to global stability. Reasoning: draw multiple samples from perception; simulate forward with deterministic approximate Newton (ODE). Decision: expectations of various functions evaluated on simulation outputs.
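The perception-reasoning-decision pipeline above can be sketched in a few lines. This toy version replaces the ODE rigid-body simulator with a trivial support rule and omits the global-stability constraint on perceptual samples; it only illustrates the structure (sample noisy block positions, simulate forward deterministically, average an outcome statistic).

```python
import random

def perceive(true_positions, noise=0.2):
    # Perception: one posterior sample, block positions normally
    # distributed around ground truth. (The global-stability
    # constraint on samples is omitted here for brevity.)
    return [(x + random.gauss(0, noise), y) for (x, y) in true_positions]

def proportion_fallen(blocks):
    # Stand-in for deterministic forward simulation (e.g. ODE):
    # this toy rule marks a block as falling when it lacks
    # sufficient horizontal overlap with any block one level below.
    fallen = 0
    for (x, y) in blocks:
        below = [bx for (bx, by) in blocks if by == y - 1]
        supported = (y == 0) or any(abs(x - bx) < 0.5 for bx in below)
        if not supported:
            fallen += 1
    return fallen / len(blocks)

def stability_judgment(true_positions, n_samples=200):
    # Decision: the expectation of "proportion fallen" over
    # perceptual samples pushed through the simulator.
    return sum(proportion_fallen(perceive(true_positions))
               for _ in range(n_samples)) / n_samples
```

Even with a deterministic simulator, perceptual noise alone makes precariously offset towers score as much less stable than neatly stacked ones, which is the core of the model's graded judgments.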
- 20. Results: mean human stability judgment vs. model prediction (expected proportion of tower that will fall).
- 21. Simpler alternatives?
- 22. The flexibility of common sense(“infinite use of finite means”, “visual Turing test”)• Which way will the blocks fall?• How far will the blocks fall?• If this tower falls, will it knock that one over?• If you bump the table, will more red blocks or yellow blocks fall over?• If this block had (not) been present, would the tower (still) have fallen over?• Which of these blocks is heavier or lighter than the others?• …
- 23. Direction of fall
- 24. Direction and distance of fall
- 25. If you bump the table…
- 26. If you bump the table… (Battaglia & Tenenbaum, in prep). Mean human judgment vs. model prediction (expected proportion of red vs. yellow blocks that fall).
- 27. Experiment 1: Cause/Prevention judgments (Gerstenberg, Tenenbaum, Goodman, et al., in prep)
- 28. Modeling people’s cause/prevention judgments • Physics simulation model: causal judgment ∝ P(B|A) − P(B|¬A), where P(B|A) = 0 if the ball misses and 1 if the ball goes in; for P(B|¬A), assume sparse latent Gaussian perturbations on B’s velocity.
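A minimal Monte Carlo version of the P(B|A) − P(B|¬A) computation, under the stated assumption of Gaussian perturbations on B's velocity. The 1-D "gate" geometry and all numeric values here are illustrative assumptions, not the experiment's stimuli.

```python
import random

def goes_in(velocity):
    # Outcome B: the ball ends up in the gate if its final position
    # (here simply proportional to velocity) lands inside [4, 6].
    return 4.0 <= velocity * 5.0 <= 6.0

def p_outcome(base_velocity, perturb_sd=0.1, n=5000):
    # Monte Carlo estimate of P(B) under sparse Gaussian
    # perturbations on the ball's velocity (the model's noise source).
    hits = sum(goes_in(base_velocity + random.gauss(0, perturb_sd))
               for _ in range(n))
    return hits / n

def cause_judgment(velocity_with_A, velocity_without_A):
    # Causal support: P(B | A) - P(B | not A). Positive values read
    # as "A caused B", negative as "A prevented B".
    return p_outcome(velocity_with_A) - p_outcome(velocity_without_A)
```

For example, if the collision with A gives B a velocity that puts it in the gate while the counterfactual velocity misses, the judgment is near +1; swapping the two gives a strong prevention judgment near −1.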
- 29. Simulation Model
- 30. Intuitive psychology. Beliefs (B), Desires (D) → Actions (A). Heider and Simmel, 1944.
- 31. Intuitive psychology. Beliefs (B), Desires (D) → Actions (A); Pr(A|B,D); Beliefs (B)…; Desires (D)… Heider and Simmel, 1944.
- 32. Intuitive psychology. Beliefs (B), Desires (D) → probabilistic approximate planning → Actions (A): a probabilistic program. Heider and Simmel, 1944.
- 33. Intuitive psychology. Beliefs (B), Desires (D) → probabilistic approximate planning → Actions (A). Over actions i and states j, choose action i* = arg max_i Σ_j p_ij u_j, where p_ij is the probability that action i leads to state j and u_j is the utility of state j. “Inverse economics”, “inverse optimal control”, “inverse reinforcement learning”, “inverse Bayesian decision theory” (Lucas & Griffiths; Jern & Kemp; Tauber & Steyvers; Rafferty & Griffiths; Goodman & Baker; Goodman & Stuhlmüller; Bergen, Evans & Tenenbaum; … Ng & Russell; Todorov; Rao; Ziebart, Dey & Bagnell…). A probabilistic program.
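The arg-max formula and its inversion can be sketched as follows: forward planning picks i* = arg max_i Σ_j p_ij u_j, and inverse planning applies Bayes' rule to recover the goal from an observed action. The softmax ("noisily rational") likelihood and uniform goal prior below are standard modeling assumptions in this literature, not details stated on the slide.

```python
import math

def choose_action(transition, utility):
    # Forward planning: i* = argmax_i sum_j p_ij * u_j.
    # transition[a] maps each state j to p_ij; utility[j] is u_j.
    def expected_utility(action):
        return sum(p * utility[j] for j, p in transition[action].items())
    return max(transition, key=expected_utility)

def infer_goal(observed_action, transition, goals, beta=2.0):
    # Inverse planning: P(goal | action) ∝ P(action | goal) P(goal),
    # with a softmax likelihood over expected utilities and a
    # uniform prior over goals (both assumptions of this sketch).
    posterior = {}
    for goal, utility in goals.items():
        eus = {a: sum(p * utility[j] for j, p in transition[a].items())
               for a in transition}
        z = sum(math.exp(beta * v) for v in eus.values())
        posterior[goal] = math.exp(beta * eus[observed_action]) / z
    total = sum(posterior.values())
    return {g: v / total for g, v in posterior.items()}
```

Watching an agent walk left, the inverse planner concludes it probably wants whatever lies to the left, and the softmax temperature beta controls how strongly a single action pins down the goal.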
- 34. Goal inference as inverse probabilistic planning (Baker, Tenenbaum & Saxe, Cognition, 2009): goals → rational planning (MDP) → actions (Agent). People vs. model correlation: r = 0.98.
- 35. Theory of mind: joint inferences about beliefs and preferences (Baker, Saxe & Tenenbaum, CogSci 2011). Agent: Environment state → rational perception → Beliefs; Beliefs + Preferences → rational planning → Actions. Food truck scenarios: infer Preferences and Initial Beliefs from Actions.
- 36. Goal inference with constraints; multiple agents (Baker, Goodman & Tenenbaum, CogSci 2008, in prep). For each agent: constraints, goals → rational planning (MDP) → actions. Southgate & Csibra stimuli: people vs. model.
- 37. Inferring social goals (Baker, Goodman & Tenenbaum, CogSci 2008; Ullman, Baker, Evans, Macindoe & Tenenbaum, NIPS 2009). For each agent: constraints, goals → rational planning (MDP) → actions. Hamlin, Kuhlmeier, Wynn & Bloom stimuli: subject ratings vs. model predictions.
- 38. Conclusions. From scenes to stories… What contents of stories are routinely accessed through visual scenes? How can we represent that content for reasoning, communication, prediction, and planning? Focus on core knowledge present in preverbal infants: intuitive physics, intuitive psychology. Representations using probabilistic programs: thick nodes (e.g. CAD++), thick arrows (physics, graphics, planning), recursion (inference about inference, goals about goals). Challenges for future work: (1) integrating physics and psychology; (2) efficient inference; (3) learning.
