"Kate, a Platform for Machine Intelligence" by Wayne Imaino, IBM Research

Wayne Imaino, Distinguished Research Staff Member at IBM Almaden Research Center, currently working to develop machine intelligence, made this presentation as part of the Cognitive Systems Institute Speaker Series on Jan 28, 2016.


"Kate, a Platform for Machine Intelligence" by Wayne Imaino, IBM Research

  1. Why we’re building KATE*
     • Goal: Machine Intelligence (as distinct from Machine Learning)
       - Machine Learning: solving a specific task on labeled data by defining and optimizing an objective function
       - Machine Intelligence: flexible systems that continuously learn from unlabeled data, perform (motor) actions, predict the consequences of those actions, and plan ahead to reach goals
     • Kate is a platform for Machine Intelligence development and demonstration
     • Uses the HTM paradigm
     • Motor actions / sensor inputs are critical to machine intelligence
     INPUTS: spatial-temporal data stream
     OUTPUTS: 1) make forecasts, 2) recognize anomalies, 3) control actuators (with feedback)
     * Kognitiv Anthropomorphic Temporally Enabled
  2. Learning Bipedal Locomotion
     • Kate follows a biological architecture / control structure
       - Central pattern generator for low-level control
       - Assisted by a cerebellum for coordination
     • Muscle-like actuation / feedback
       - Back-drivable motors
       - Spring-extended actuators
     • Sensor-motor sequences should predict an expectation
       - Given a motor effort, what sensor input does HTM expect? (e.g., after a step, HTM will expect foot pressure)
       - Given a sensor input, what motor action should HTM initiate? (e.g., if the torso is pitched forward, HTM will initiate a leg swing)
     • Temporal sequences encode context
     (Video: slow walk)
  3. HTM Algorithm
     © 2014 IBM Corporation, IBM Research | Science & Technology
     • Spatial pooler regulates the connection of the inputs to the cell columns
       - Column activity determined by thresholding and inhibition
       - Input is represented as a sparse activity of columns: a Sparse Distributed Representation, or SDR
     • Temporal memory encodes sequences through cell activity
       - Predictive capability
     • Temporal pooler identifies sequences
       - Enables the hierarchy
     (Diagram: block diagram of one region: input, spatial pooler, temporal memory, temporal pooler; regions 1-3 of cell columns stacked in a hierarchy, each region’s output feeding the next as input.)
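The SDR idea on this slide can be sketched in a few lines of plain Python. The function names (`sdr`, `similarity`) and the toy overlap values are illustrative, not from the deck: an input is represented by the small set of columns whose overlap with it is highest, and similar inputs end up sharing active columns.

```python
# Minimal SDR sketch: represent an input as the indices of the
# top-k most-overlapping columns (a Sparse Distributed Representation).

def sdr(overlaps, k):
    """Return the indices of the k columns with the highest overlap."""
    ranked = sorted(range(len(overlaps)), key=lambda i: overlaps[i], reverse=True)
    return set(ranked[:k])

def similarity(a, b):
    """Number of active columns shared by two SDRs."""
    return len(a & b)

cols_a = sdr([5, 1, 4, 0, 3, 2], k=2)   # -> {0, 2}
cols_b = sdr([5, 0, 4, 1, 2, 3], k=2)   # -> {0, 2}
print(similarity(cols_a, cols_b))       # shared columns => similar inputs
```

Because only a few of many columns are active, two unrelated inputs almost never collide, while near-identical inputs overlap heavily; that is the property the spatial pooler is built to preserve.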
  4. Numenta Anomaly Benchmark
  5. Kate: Strategic Direction
     • Future conduit for IBM’s cloud-based cognitive services
       - Ground-up integration
     • Watson services
       - Speech recognition / generation
       - Parts-of-speech parsing
     • Key parts of Machine Intelligence based on HTM
       - Learning required: impossible to account for all contexts
       - Contextual control
       - Learning through demonstration
       - SDR formation of objects / actions
     (Photo: Kate walking in Austin lab)
  6. Kate: Collaborative Platform for Machine Intelligence
     • Kate is an open robotic platform for IBM’s cognitive services
     • IBM’s value is in the services provided
       - Existing Watson services: speech recognition / generation, parts-of-speech parsing
       - New services based on HTM
     • Low cost for wide deployability
       - Easily fabricated: 3D-printed parts, commonly available parts
     (Images: early concept; student version with iPad and motor controller)
  7. Kate: Bipedal Locomotion and HTM
     • Traditional control metrics are not applicable to locomotion
       - Control error, speed, bandwidth (more applicable to robot arms, where placement is important)
     • Traditional control, i.e. kinematic path design and following, is brittle
       - Works well only in well-defined environments
     • The metric in walking is NOT FALLING, given any environment
     • Online learning is vital: a key demonstration of HTM
       - Learning / recognition accuracy
       - Capacity
     • HTM sequence memory will learn and recognize all contexts to which Kate is exposed
     • HTM will recognize contexts and modify control actions
       - Through the central pattern generator (Galil controller)
     • Walking is a microcosm of intelligence without HTM
  8. Appendix: Brief Description of the HTM Algorithm
     • Spatial pooler regulates the connection of the inputs to the cell columns
       - Column activity determined by thresholding and inhibition
       - Input is represented as a sparse activity of columns: a Sparse Distributed Representation, or SDR
     • Temporal memory encodes sequences through cell activity
       - Predictive capability
     • Temporal pooler identifies sequences
       - Enables the hierarchy
     (Diagram: same region block diagram as slide 3.)
  9. HTM Terminology
     • Proximal dendrites connect inputs to cell columns
       - Only inputs cause cell columns or cells to be active
     • Distal dendrites connect cells to cells
       - Capture sequences
       - Incorporate predictive capability
     • Cells have 4 states
       - Active (from inputs)
       - Inactive
       - Predictive (from other cells)
       - Active / predictive
     • Thresholds: proximal threshold τo, permanence threshold τp, distal threshold τd
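The four cell states listed above are really the cross product of two independent bits: "active" (driven by inputs via proximal dendrites) and "predictive" (driven by other cells via distal dendrites). A minimal sketch using Python's `Flag` enum (the class and member names are illustrative, not from the deck):

```python
from enum import Flag, auto

class CellState(Flag):
    """The two bits combine into the slide's four states:
    INACTIVE, ACTIVE, PREDICTIVE, and ACTIVE | PREDICTIVE."""
    INACTIVE = 0
    ACTIVE = auto()      # from inputs (proximal dendrites)
    PREDICTIVE = auto()  # from other cells (distal dendrites)

# A cell that is firing now AND predicted to fire next step:
s = CellState.ACTIVE | CellState.PREDICTIVE
print(CellState.ACTIVE in s)      # True
print(CellState.PREDICTIVE in s)  # True
```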
  10. HTM: Spatial Pooler
     Step 1. Calculate overlap: V = C·I, where C is the connectivity matrix and I the input
     Step 2. Threshold: S = V′ > τo
     Step 3. Enforce inhibition among the cell columns
     (Diagram: cell columns with example overlaps V = (2, 3, 3) and connectivity C.)
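Steps 1-3 can be sketched in plain Python under simplifying assumptions: binary connectivity, and inhibition modeled as a global top-k winner selection. The function name and the toy matrices are illustrative, not from the deck.

```python
# Sketch of spatial-pooler activation (steps 1-3).
# C: column-by-input connectivity matrix (0/1); I: binary input vector;
# tau_o: proximal (overlap) threshold; k: number of columns that survive inhibition.

def spatial_pooler_activate(C, I, tau_o, k):
    # Step 1: overlap V = C . I
    V = [sum(c * x for c, x in zip(row, I)) for row in C]
    # Step 2: threshold out columns at or below tau_o
    V_thr = [v if v > tau_o else 0 for v in V]
    # Step 3: inhibition - only the k best-overlapping columns stay active
    winners = sorted(range(len(V_thr)), key=lambda i: V_thr[i], reverse=True)[:k]
    return [1 if i in winners and V_thr[i] > 0 else 0 for i in range(len(C))]

C = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
I = [1, 1, 1, 0]
print(spatial_pooler_activate(C, I, tau_o=1, k=1))  # [1, 0, 0]
```

The output is the column state S: a sparse 0/1 vector over the columns, i.e. the SDR of the input.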
  11. HTM: Spatial Pooler (learning)
     Step 4. Learning, for active columns only:
       P_i,j = P_i,j + δ if I_j is true
       P_i,j = P_i,j - δ if I_j is false
     Step 5. Update connectivity: C = P > τp (over all possible connections)
     (Diagram: permanence matrix P and column state S over the cell columns.)
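Steps 4-5 are the Hebbian-style permanence update: each active column nudges its permanences toward inputs that were on and away from inputs that were off, then connectivity is re-derived by thresholding at τp. A sketch with illustrative names and toy values:

```python
# Sketch of spatial-pooler learning (steps 4-5).
# P: column-by-input permanence matrix; I: binary input;
# S: column state from the activation step; delta: learning increment.

def learn(P, I, S, delta, tau_p):
    for i, active in enumerate(S):
        if not active:
            continue  # step 4 applies to active columns only
        for j, on in enumerate(I):
            P[i][j] += delta if on else -delta
    # Step 5: connectivity over all possible connections
    C = [[1 if p > tau_p else 0 for p in row] for row in P]
    return P, C

P = [[0.48, 0.52], [0.50, 0.50]]
I = [1, 0]
S = [1, 0]               # only column 0 was active
P, C = learn(P, I, S, delta=0.05, tau_p=0.5)
print(P[0])              # input 0 reinforced above 0.5, input 1 pushed below
print(C)                 # [[1, 0], [0, 0]]
```

Because C is recomputed from P each time, connections can form and decay as the permanences drift across τp, which is how the pooler keeps adapting online.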
  12. HTM: Temporal Memory
     Step 1. Calculate the active state A from the column state S
     Step 2. Calculate the distal dendrite overlap: J = D·A, where D is the distal connectivity
     Step 3. Threshold to obtain the dendrite state: K = J > τd
     Step 4. Cells with active dendrites are predictive
     (Diagram: distal connectivity D, column state S = (0, 1, 1), example overlaps J = (3, 2, 1).)
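Steps 2-4 can be sketched under the simplifying assumption of one distal dendrite per cell, so D collapses to a cell-by-cell 0/1 matrix. The function name and toy matrices are illustrative, not from the deck.

```python
# Sketch of temporal-memory prediction (steps 2-4).
# D: distal connectivity (cell-by-cell, 0/1, one dendrite per cell);
# A: currently active cells (step 1); tau_d: distal threshold.

def predictive_cells(D, A, tau_d):
    # Step 2: distal dendrite overlap J = D . A
    J = [sum(d * a for d, a in zip(row, A)) for row in D]
    # Steps 3-4: dendrite state K = J > tau_d marks the cell predictive
    return [1 if j > tau_d else 0 for j in J]

D = [[0, 1, 1],
     [0, 0, 1],
     [1, 0, 0]]
A = [0, 1, 1]                           # cells 1 and 2 are active now
print(predictive_cells(D, A, tau_d=1))  # [1, 0, 0]
```

Cell 0 becomes predictive because both of the cells it connects to are active; that prediction is what lets the temporal memory anticipate the next element of a learned sequence.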
  13. HTM: Temporal Memory (learning)
     Step 5. Learning, for active dendrites only:
       Q_i,j = Q_i,j + δ if A_j is true
       Q_i,j = Q_i,j - δ if A_j is false
     Step 6. Update connectivity: D = Q > τp
     (Diagram: distal permanences Q over all possible connections; column state S.)
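Steps 5-6 mirror the spatial pooler's permanence update, but operate on the distal permanences Q, are driven by the active cells A, and apply only to active dendrites. A sketch with illustrative names and toy values (again one dendrite per cell):

```python
# Sketch of temporal-memory learning (steps 5-6).
# Q: distal permanence matrix (one row per dendrite); A: active cells;
# active_dendrites: indices of dendrites that were active this step.

def learn_distal(Q, A, active_dendrites, delta, tau_p):
    for i in active_dendrites:           # step 5: active dendrites only
        for j, on in enumerate(A):
            Q[i][j] += delta if on else -delta
    # Step 6: update connectivity D = Q > tau_p
    D = [[1 if q > tau_p else 0 for q in row] for row in Q]
    return Q, D

Q = [[0.49, 0.51, 0.30]]
A = [1, 0, 1]
Q, D = learn_distal(Q, A, active_dendrites=[0], delta=0.05, tau_p=0.5)
print(D)  # [[1, 0, 0]]
```

The symmetry with steps 4-5 of the spatial pooler is the point: the same permanence mechanism learns feed-forward (proximal) and sequence (distal) structure.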
  14. References
     • J. Hawkins and S. Blakeslee, On Intelligence, Henry Holt and Company, New York, 2004.
     • Numenta white paper, “Hierarchical Temporal Memory including HTM Cortical Learning Algorithms”.
