Biological Foundations for Deep Learning: Towards Decision Networks

Nathan Wilson (Nara Logics, Inc.), presentation for the Cognitive Systems Institute Speaker Series


  1. Biological Foundations for Deep Learning: Towards Decision Networks
     Nathan R. Wilson, Ph.D., CSIG Speaker Series, June 23, 2016
  2. Today: Neural Networks and Neuroscience
     • Benefits of Cross-Pollination
     • New Learning Rules and Emerging Analogies
     • "Recommendations and Decision Support" as an Enriched Domain for Both
  3. The Interface of Neuroscience + Computer Science
  4. Benefits of Cross-Pollination: Overcoming Stereotypes
     The neuroscientist (stereotype):
     - Focused on insignificant details
     - Loves chemicals, ice, rodents
     - Has trouble seeing the big picture
     Reality: has many of the same goals as DL researchers.
     The deep learning researcher (stereotype):
     - Doesn't pay any dues
     - Disembodied from traditional fields
     - Works out of coffee shops
     Reality: establishing one of the most important disciplines of the 21st century.
  5. Benefits of Cross-Pollination: Neural networks and neuroscience share the same core orientation
     • Same goal: an overarching understanding of a general algorithm
     • Same structure: connectionist, not von Neumann; pathways, not rules; connection weights are the key; structure is function
     • Same puzzles: What is the optimal transfer function? Is the code distributed or localized? How are sequences learned?
  6. Benefits of Cross-Pollination: So why study nature? Why biological neural networks?
     A lesson from the 20th century: aviation. The Wright Brothers spent a great deal of time observing birds in flight. [Image: Otto Lilienthal, foundations of modern aviation]
     Common retort: "Of course, modern aircraft look nothing like birds."
     "AI is to the brain as airplanes are to birds. The details are different, but the underlying principles are the same." -- Yann LeCun, 2015
  7. Benefits of Cross-Pollination: Stabilizing Similar Ideas Through Cross-Linking
     Neuroscience has spent a century mapping connectionist frameworks directly to cognitive, psychological, and social principles, which is where AI is headed.
     Neural networks are emerging as the dominant framework for machine learning, and will inherit and reconcile mappings from other adjacent fields of AI.
  8. Benefits of Cross-Pollination: Why Now?
     Neuroscience has: new tools to interact with cells at the "network" level (Zhang et al., 2010; Wilson et al., 2013, Nature Protocols) and uncover insights.
     Neural networks have: rigorous frameworks and data sets for evaluating network intelligence, and software for identifying and optimizing key parameters of learning.
  9. Learning Rules and Analogies: A difference of terminology, but not concepts
     What is the goal?
     • Neuroscience: "maximize reward"
     • Neural networks: "minimize loss"
     What is the system doing?
     • Neuroscience: "learning from local micro-successes" (Hebb)
     • Neural networks: "globally optimizing a function" (backprop)
     What are additional parameters for?
     • Neuroscience: stabilizing firing rates
     • Neural networks: regularization
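To make the terminological mapping concrete, here is a minimal sketch (not from the talk; all names and values are illustrative) contrasting a local Hebbian update, which uses only the activity of the two cells a weight connects, with a gradient step that descends a global loss:

    # Illustrative sketch, not the speaker's code: two update styles
    # applied to one linear layer.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(4, 3))   # weights: 3 inputs -> 4 outputs
    x = rng.normal(size=3)                   # presynaptic activity
    target = rng.normal(size=4)              # desired postsynaptic activity
    lr = 0.01

    # Neuroscience framing: Hebbian update from local "micro-successes".
    # Each weight changes using only its own pre- and post-synaptic activity.
    y = W @ x
    dW_hebb = lr * np.outer(y, x)

    # Neural-network framing: gradient descent on a global loss.
    # Each weight changes to reduce a network-wide squared error.
    grad_y = 2 * (y - target)                # dL/dy for L = ||Wx - target||^2
    dW_grad = -lr * np.outer(grad_y, x)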
  10. Learning Rules and Analogies: Synaptic Plasticity and Backpropagation
     "Feedforward" transmission is electrical and easy to measure in the brain. "Retrograde" signals are more "invisible," but candidates exist:
     • Signals like NGF, BDNF, cannabinoids, NO
     • Some are released in proportion to synapse strength
     • They can travel back through vesicular uptake and cytoplasmic transport
     Hebbian or STDP learning could provide the mechanics for gradient descent (Markram et al., 1997; Bi and Poo, 1998; Xie and Seung, 2003; Bengio et al., 2016): a mismatch between pairs of neurons could be construed by the cells as a local error signal, which could then propagate further. Methods are emerging that will explain how, and if, backpropagation happens in the brain.
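As one concrete candidate mechanism, a hedged sketch of the classic pair-based STDP rule (parameter values are illustrative, in the spirit of Bi and Poo, 1998): the sign of the weight change depends on relative spike timing, giving each synapse a local, signed learning signal:

    # Illustrative pair-based STDP: weight change for one pre/post spike pair.
    import numpy as np

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """dt = t_post - t_pre in ms.

        Positive dt (pre fires before post) potentiates the synapse;
        negative dt depresses it, with exponentially decaying windows.
        """
        if dt >= 0:
            return a_plus * np.exp(-dt / tau_plus)    # potentiation
        return -a_minus * np.exp(dt / tau_minus)      # depression

    print(stdp_dw(5.0))    # pre -> post: strengthen
    print(stdp_dw(-5.0))   # post -> pre: weaken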
  11. Learning Rules and Analogies: Synaptic Plasticity and Regularization
     Resisting "out of range" connections: neurons will "auto-tune" at different scales:
     • Inter-synaptic competition (Fonseca, 2002)
     • Single neuron within a network (Murthy, 2003)
     • Trading strength for more partners (Wilson, 2007)
     • Whole-network scaling (Turrigiano, 1998, 2008)
     Neurons also undergo forms of "dropout":
     • Sparse coding and decorrelation (Olshausen, 2004)
     • Stochastic firing; stochastic synapses (Zador, 1999; Abbott, 2004)
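The analogy can be made concrete with a minimal sketch (the correspondence is assumed and the values illustrative) mapping per-neuron synaptic scaling and stochastic silencing onto their machine-learning counterparts:

    # Illustrative sketch: biological homeostasis vs. familiar regularizers.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.5, size=(4, 3))   # each row: one neuron's inputs

    # Synaptic scaling (cf. Turrigiano): multiplicatively rescale each
    # neuron's incoming weights so its total strength stays near a set point.
    set_point = 1.0
    W *= set_point / np.abs(W).sum(axis=1, keepdims=True)

    # Dropout-like stochastic silencing (cf. stochastic synapses): zero each
    # unit's activity with probability p at train time, rescaling the rest.
    def dropout(activity, p=0.5):
        mask = rng.random(activity.shape) >= p
        return activity * mask / (1.0 - p)   # "inverted dropout" scaling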
  12. Learning Rules and Analogies: Negative Weights in Networks
     Inhibitory connections: more than meets the eye.
     • Cells seem to "want" an excitatory/inhibitory balance (Liu, Nature Neuroscience, 2004; Denève et al., Nature Neuroscience, 2016)
     • Inhibition is the basis for network gain control (Wilson et al., Nature, 2012; Carandini et al., Nature Reviews Neuroscience, 2012)
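One way inhibition implements gain control is divisive normalization, in the style of the Carandini model cited above. A minimal sketch (assumed canonical form, illustrative values):

    # Divisive normalization: pooled, inhibition-like activity divides each
    # excitatory response rather than merely subtracting from it.
    import numpy as np

    def normalize(drive, sigma=1.0):
        """r_i = d_i / (sigma + sum_j d_j): gain falls as pooled drive grows."""
        return drive / (sigma + drive.sum())

    drive = np.array([2.0, 4.0, 8.0])
    print(normalize(drive))   # relative responses saturate with total input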
  13. Neuroscience can inform us on how networks learn in practice. Nowhere is this more true than in…
  14. Recommendations / Decision Support: An Interesting Network Learning Problem
     Multi-level learning problems: games (which move to make) and recommendations / decision support.
     Interesting aspects of recommendations / decision support:
     • As with games, it connects perception to cognition and action
     • It remains an original commercial justification of machine learning
     • Highly structured data sets and goals; rigorous arenas for success
  15. Recommendations / Decision Support: An Interesting Network Learning Problem
     Our networks learn to "match" contexts to decisions: evaluate many criteria, and "match" a recommended decision (a toy scoring sketch follows this list):
     • Travel: which spots should I visit when my plane lands?
     • Entertainment: which movie is right for me and my friends?
     • Medicine: which available doctor is right for this patient?
     • Crime: which events could be related to this incident?
     • Fraud: which recent behavior doesn't match the others?
     • Supply Chain: which component is exhibiting fault-predictive traits?
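To illustrate the "evaluate many criteria, then match" pattern, a toy sketch; every name and weight here is hypothetical, and this is not Nara Logics' actual algorithm:

    # Toy criteria-matching: score each candidate decision against a context.
    import numpy as np

    criteria = ["price", "distance", "rating"]       # hypothetical criteria
    weights = np.array([0.2, 0.3, 0.5])              # this context's priorities

    candidates = {                                   # normalized criterion scores
        "option_a": np.array([0.9, 0.4, 0.8]),
        "option_b": np.array([0.5, 0.9, 0.6]),
    }

    scores = {name: float(weights @ feats) for name, feats in candidates.items()}
    best = max(scores, key=scores.get)
    print(best, scores[best])                        # recommended decision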
  16. Elucidating Pathways for Recommendations Through Network Learning
     Brain-like algorithms can construct networks of knowledge in any domain to power real-time recommendations and decisions.
  17. Elucidating Pathways for Recommendations Through Network Learning
  18. Learning Representations Using "Perceptual" vs. "Cognitive" Structures
     ("Mommy's hair is melting")
  19. Challenge 1: Recommendations need to cold start and generalize to new things, but then also to hyper-optimize
     Bottom-up representations via sparse, "one shot" learning, akin to PageRank:
     • Work in cold-start conditions
     • Can trace back the reasons for answers
     Deep learning techniques can then further optimize recommendations when historical data is available to support them (see a paper on this, appearing soon). A sketch of PageRank-style propagation follows below.
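A hedged sketch of the PageRank-like, bottom-up scoring idea (the graph, damping value, and names are illustrative): scores propagate over explicit links, so the method needs no usage history, and answers can be traced back along the links that produced them:

    # Power iteration over a small association graph, PageRank-style.
    import numpy as np

    # Column-stochastic adjacency: A[i, j] = weight of the link j -> i.
    A = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])
    damping = 0.85
    score = np.ones(3) / 3                 # uniform start: no history needed

    for _ in range(50):
        score = (1 - damping) / 3 + damping * (A @ score)

    print(score)                           # relative importance of each item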
  20. Challenge 2: The number of nuanced features gathered is pivotal to your recommendation success, across all algorithms
     [Figures: Andrew Ng on the importance of data; plot-concept connectivity for Liam Neeson in the 1990s vs. the 2010s]
  21. Challenge 3: Effective learning stacks are multi-component and must be kept organized (an illustrative configuration sketch follows below)
     III. Data Organization Engine
     • Structured, unstructured
     • Cross-source, un-harmonized
     • Feature engineering pathways
     II. Learning Platform
     • Unsupervised
     • Supervised
     I. Interfaces and APIs
     • Recommendations
     • Profiles / Analytics
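To make the three-layer organization concrete, a hypothetical configuration sketch; this is not Nara Logics' actual stack, and every key below is illustrative:

    # Hypothetical description of the three stack layers named on the slide.
    stack = {
        "data_organization_engine": {
            "sources": ["structured", "unstructured"],
            "cross_source": True,            # initially un-harmonized inputs
            "feature_engineering": ["clean", "normalize", "link"],
        },
        "learning_platform": {
            "unsupervised": {"method": "association_network"},
            "supervised": {"method": "deep_net", "needs_history": True},
        },
        "interfaces_and_apis": ["recommendations", "profiles", "analytics"],
    }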
  22. Deep Learning for Recommendations Is Now an Important Part of Our General Process
     Observations around deep learning at Nara Logics:
     • General-purpose recommendations pushed us to create a general implementation that worked across many domains and contexts.
     • Deep learning is compatible with our neuroscience-inspired association networks, and we continue to work on this convergence.
     • Analytics and interfaces into these networks are as important as the learning itself for maintaining and extending performance.
     Special thanks: Sahil Zubair, Denise Ichinco, Raymond Plante, Jana Eggers
  23. Summary: Today
     Neuroscience and deep learning research offer complementary insights that can be utilized in practice. For galvanizing this work, "recommendations" offers a particularly rich and well-structured domain for exploring the relationship between data and decisions.
  24. Closing Quote; Thanks
     "Good luck with what you may build, and never forget where you came from."
  25. Biological Foundations for Deep Learning: Towards Decision Networks
     Nathan R. Wilson, Ph.D., CSIG Speaker Series, June 23, 2016
