2. Machine Learning vs. Machine Intelligence
• Machine Learning (e.g. Deep Learning)
“Solve a specific task by defining and optimizing an objective function” (Yann LeCun)
Training is supervised using labeled datasets (“this is a gorilla”)
Training and execution are distinct phases
• Machine Intelligence (Numenta’s HTM or IBM’s CAL)
Systems which continuously and on their own detect and
predict patterns and sequences in sensory data streams,
act on these predictions
System must have a notion of time
Integration of sensory and motor functions ⬄ robots
5. Sparse Distributed Representations (SDR) – What and Why?
• Dense Representations
Few bits (8-128); example: ASCII “m” = 01101101
Efficient but no semantic meaning
• Sparse Representations
Many bits (thousands), few 1’s, mostly 0’s
Appears inefficient but evolution has picked it!
Each bit has semantic meaning
• Example of SDR uses: Union of Properties
Color 00000010001000000001100000000100 (‘red’)
Shape 00001000100010100000100000000000 (‘sphere’)
Union 00001010101010100001100000000100 (‘red sphere’)
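The union of properties above is just a bitwise OR of the two SDRs. A minimal sketch (representing SDRs as Python sets of active bit indices; this is an illustration, not Numenta's actual implementation) reproduces the ‘red sphere’ union:

```python
# Sketch: SDRs as sets of active-bit indices. Union of properties is a set
# union (bitwise OR); shared active bits measure semantic overlap.

def sdr_from_bits(bitstring):
    """Convert a 0/1 string to the set of indices of active (1) bits."""
    return {i for i, b in enumerate(bitstring) if b == "1"}

color = sdr_from_bits("00000010001000000001100000000100")  # 'red'
shape = sdr_from_bits("00001000100010100000100000000000")  # 'sphere'

union = color | shape  # 'red sphere': every active bit of both properties

def overlap(a, b):
    """Count of shared active bits; high overlap => semantic similarity."""
    return len(a & b)

# The union still 'contains' each property: all of its bits are present.
assert color <= union and shape <= union
```

Because each bit carries semantic meaning, membership of a property in the union can be tested simply by checking that all of its active bits are present.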
7. ESCAPE
• 1000 node parallel machine intelligence system
– per node: Xilinx Zynq dual A9 core + FPGA, 1 GB RAM, 6x2 bi-di high-speed links
– system topology: 3D mesh
– very high bandwidth
• Dual purpose
– will scale up CAL simulations to > 10^8 realistic neurons
– platform for design of waferscale system
8. Context Aware Learning
• Based on recent understanding of the neocortex
– Quite realistic neuron models
• Learning via formation of new synapses
– Dynamically changing network topology => deep hardware implications
• Unsupervised learning from raw data streams
– No labels required
• Detects patterns, makes predictions, (may) take actions
– everything is temporal
• Universality
• Closed loop: Sensors – universal engine – actuators
– Robots
9. Embodied Cognition
• Intelligence starts with understanding sensor-motor interactions *
• “I believe that mobility, acute vision and the ability to carry out
survival related tasks in a dynamic environment provide a
necessary basis for the development of true intelligence.”
• Human cognition is shaped by the motor and perceptual system
• Intelligence emerges from interactions with the world
• Time is a critical factor
• Limited knowledge of the world; rely on context
• Noise and uncertainty are present
• Real world possesses a continuum of states
• Developmental approach to intelligence
• Walk before we run
• Grasp before we catch
* R. Brooks, “Intelligence Without Representation”, 1991
Director, MIT Artificial Intelligence Laboratory, 545 Technology Square, Rm. 836, Cambridge, MA 02139, USA
Founder and CTO, iRobot
Chairman and CTO, Rethink Robotics
10. How Kate learns to walk farther, unsupervised
• Focus: Robot that learns to walk robustly
• Biological architecture:
• Central Pattern Generator (CPG) coordinates actuation
• Contextual control to predict / provide appropriate mitigation
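A Central Pattern Generator can be sketched as coupled phase oscillators that lock into an alternating (antiphase) stepping rhythm. The toy model below is an illustrative assumption, not Kate's actual controller; `cpg_step`, its gains, and the phase-to-command mapping are invented for the sketch:

```python
import math

# Toy CPG: one phase oscillator per leg, coupled so the legs settle into
# antiphase (half a cycle apart), producing an alternating stepping rhythm.

def cpg_step(phases, dt=0.01, omega=2 * math.pi, k=2.0):
    """Advance both leg phases by one Euler step.

    omega: intrinsic stepping frequency (rad/s)
    k:     coupling gain pulling the legs toward antiphase
    """
    l, r = phases
    dl = omega + k * math.sin(r - l - math.pi)  # left pulled to lag right by pi
    dr = omega + k * math.sin(l - r - math.pi)
    return (l + dl * dt, r + dr * dt)

def leg_command(phase):
    """Map oscillator phase to a normalized actuation command."""
    return math.sin(phase)

phases = (0.0, 1.0)  # start well away from antiphase
for _ in range(5000):
    phases = cpg_step(phases)

# After settling, the phase difference should be (near) pi: the legs
# alternate, so their commands roughly cancel.
diff = (phases[1] - phases[0]) % (2 * math.pi)
```

The point of the CPG is that the rhythm is self-sustaining; a contextual layer (as on this slide) only needs to modulate or pause it, not generate every actuation itself.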
11. Team Brazil
V. Albouy, A. Asseman, D. Barros, H. Carbone, I. Carvalho, C. Chaves, M. Desta, R. Gaspar, I. Godoy, P. Ludwig, B. Lyo, T. Mantelato and L. Munhoz
12. Bipedal robots
KAIST DRC-HUBO, Boston Dynamics Atlas, Honda Asimo, TU Delft, NASA Valkyrie, Lola, Toro, ATRIAS, HRP-4, HRP-4C, Dr. Guero
14. Low Level Walking Controls
[Figure: “Slow walk data”, amplitude (arb.) vs. time (sec), traces of roll acceleration, pitch acceleration, hip motor position and leg motor position]
15. Kate Control Structure
[Diagram: iPad (STT, TTS; accel, gyro, voice, video, touch) linked via tcp/ip to a Mac running CAL; controller linked by USB to the motors; sensor inputs: foot sensors, inclinometer, motor torque and position; mitigation path back to actuation]
16. How we learn to walk farther
• Learn contexts
• Examples:
• Time sequence of angular attitude, e.g. roll, pitch
• Time sequence of motor torques
• Time sequence of foot lift durations
• Context can be any or all of the above, but for this study we used roll
• Develop expectations based on context
• Discern contexts as known or novel sequences
• If in a known sequence: are the expectations fulfilled?
• If in a period of novel sequences, learn the sequence
• If in a period of known sequences, flag unexpected input as an anomaly
• Provide appropriate actuation
• No anomaly – no action
• Anomaly triggers mitigation (pause)
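The decision loop sketched above can be written down roughly as follows. This is a deliberate simplification, not the actual CAL engine; `ContextLearner` and the discretized roll symbols are hypothetical stand-ins:

```python
# Simplified sketch of the slide's decision loop: contexts are discretized
# roll observations; the system learns which transitions (prev -> next) are
# known, treats violations of learned expectations as anomalies, and
# triggers mitigation (a pause) when one occurs.

class ContextLearner:
    def __init__(self):
        self.known = {}  # context -> set of expected next contexts

    def observe(self, prev, nxt):
        """Return the action for one step of the sensor stream."""
        if prev not in self.known:
            # Novel context: learn the sequence, no anomaly.
            self.known.setdefault(prev, set()).add(nxt)
            return "learn"
        if nxt in self.known[prev]:
            return "no action"          # expectation fulfilled
        # Known context, unfulfilled expectation -> anomaly.
        self.known[prev].add(nxt)       # keep learning while mitigating
        return "mitigate (pause)"

learner = ContextLearner()
good_steps = ["roll+", "roll-"] * 10   # alternating roll sequence of good steps
actions = [learner.observe(a, b) for a, b in zip(good_steps, good_steps[1:])]
stumble = learner.observe("roll+", "roll++")  # unexpected roll spike
```

After the first few novel transitions are learned, the regular gait produces no action, and only a deviation from the learned roll sequence triggers mitigation.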
22. Summary
First results to extend MFPT (mean first passage time) with context aware learning
Learning contexts for good steps
Discerning anomalies and mitigating
Robots will provide large, correlated datasets
Significant opportunity for unsupervised learning
24. Prior work
• R. Tedrake - MIT: Atlas, Valkyrie
• K. Byl - UCSB (student of Tedrake)
• T. McGeer: passive dynamic walking
• M. Vukobratovic: ZMP
• M. Grizzle - U. Michigan: limit cycle analysis
• Ames - Oregon State Univ: Atrias, Mabel, Thumper
• Hobbelen - TU Delft: limit cycle walking
• J. Pratt: virtual model control
Approaches:
• Statically stable - used in early robots, slow
• Zero Moment Point (ZMP) - stance foot is always flat on ground
• Limit cycle walking - only dynamically stable, most efficient
• Hybrid zero dynamics - holonomically constrained knee / ankle
26. SDR Example: Find semantic similarities of words in Wikipedia
Document corpus (e.g. Wikipedia) → 100K “Word SDRs” (128 x 128)
Apple minus Fruit = Computer
Runners-up were: Macintosh, Microsoft, Mac, Linux, Operating system, …
see http://www.cortical.io
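The “Apple minus Fruit” operation can be illustrated with set arithmetic on toy fingerprints. The real cortical.io word SDRs are 128 x 128 bit patterns learned from Wikipedia; the tiny hand-made sets below are invented stand-ins to show the mechanics:

```python
# Toy word 'fingerprints' as sets of active-bit indices (hypothetical data,
# not real cortical.io SDRs). Subtracting 'fruit' bits from 'apple' leaves
# the computer-related part of apple's meaning.

word_sdr = {
    "apple":     {1, 2, 3, 10, 11, 12},   # fruit bits + computer bits
    "fruit":     {1, 2, 3, 4},
    "computer":  {10, 11, 12, 13},
    "microsoft": {10, 11, 20},
}

def minus(a, b):
    """Remove the bits a word shares with another, e.g. apple - fruit."""
    return a - b

def similarity(a, b):
    """Overlap count: shared active bits measure semantic closeness."""
    return len(a & b)

residue = minus(word_sdr["apple"], word_sdr["fruit"])
# Rank candidate words by overlap with the residue.
best = max(("computer", "microsoft"),
           key=lambda w: similarity(residue, word_sdr[w]))
```

With these stand-in fingerprints the residue overlaps “computer” most strongly, mirroring the slide's result; with the real 100K word SDRs the ranking would also surface the runner-up terms.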