
WBA hackathon 2018 Orientation

The Fourth Whole Brain Architecture Hackathon Orientation
Hackathon CFP: https://wba-initiative.org/en/3151/
Task videos: https://www.youtube.com/channel/UCT708fP0Tj38-PDV3t8wveA/videos

  1. The 4th Whole Brain Architecture Hackathon Orientation, 2018-08-19, The Whole Brain Architecture Initiative
  2. Agenda • Background and Purposes • Tasks • Evaluation Criteria • Samples explained • What the participants should do
  3. Background and Purposes
  4. WBA Hackathons so far. 2015: key concept "The Whole Brain Architecture Core Hypothesis", theme "Combined ML Cognitive Architecture with LIS". 2016: key concept "Open platform strategy", theme "Tactile mini-Hackathon". 2017: key concept "Start learning from the Brain", theme "Hippocampus Hackathon".
  5. Brain Organ Framework (standard external spec.): a recent activity at WBAI is organizing the connectivity, I/F, capabilities, and tasks (tests) of brain organs as the specification of brain-inspired AGI. [Diagram: an agent built from stub (St.) and ML modules interacting with an environment, annotated with tasks (tests), brain-organ I/F (information-processing semantics), WBCA (connectome), and the capabilities of brain organs.]
  6. 6. R&D Scenario from now on 追加 Developme nt プロト Developme nt プロト Developme nt マージ Developme nt プロト Developme nt ML St. St. St. Environment St. ML St. St. Environment St. St. ML St. Environment ML ML St. St. Environment St. St. ML ML Environment 追加 Developme nt 改良 Developme nt ML ML St. St. Environment St. ML ML ML Environment マージ Developme nt ML ML ML ML Environment Replacing with ML:Expanding inductive reasoning Generality of Brain- inspired Architecture① Brain organ Framework design (I/F, capability) ② Brain- constrained refactoring Expansion ML ML ML ML Environment ③ Meta-level mechanism for operating representations (Theory) Complete WBA :Stub :ML[WBA Development] :Brain organs I/F St. ML Entire Architecture Add Prototype Prototype Merge Prototype ML St. St. St. Environment St. ML St. St. Environment St. St. ML St. Environment ML ML St. St. Environment St. St. ML ML Environment Add Improvement ML ML St. St. Environment St. ML ML ML Environment Merge ML ML ML ML Environment
  7. The purpose & policy for the 4th Hackathon: you'll develop a prototype through stub-centered, sample-based, modular R&D, guided by specs and knowledge of the brain. • The Brain Organ Framework is yet to be completed. • The sample defines the connectivity and capabilities of the brain organs and outlines the tasks (tests). • Features learned by deep learning may not be interpretable, so we start from the motor system when using neural networks; sensory features are to be learned with DL later. • The basal ganglia are involved in many cognitive tasks, which makes them a good candidate for additional development.
  8. Tasks
  9. Types of Eye Movement. Saccade: movement to bring an object seen in the peripheral visual field (which has poor resolution) into the central visual field. Pursuit: smooth movement that occurs when consciously tracking a moving object in the central visual field. VOR (vestibulo-ocular reflex): stabilizes images on the retinas during head movement by producing eye movements in the direction opposite to the head movement. OKN (optokinetic nystagmus): repeated pursuit and reset, e.g., when watching moving scenery from a vehicle. Vergence: aligns the visual fields of both (right & left) eyes with a target. Fixation: maintains the gaze on a single location (ocular drifts also occur to avoid habituation during fixation).
  10. Central & Peripheral Vision: does the difference in resolution cause eye movement? [Plot: visual acuity (1/minutes of arc) versus location on the retina in degrees, peaking sharply at the fovea centralis.] Saccade: a salient object is detected in the periphery and moved to the center of the visual field, where it is perceived with central vision. Pursuit: when the object drifts out of the center, it is tracked so that it comes back to the center.
  11. Saliency: roughly speaking, attention is attracted to things that stick out (cf. pop-out in visual search). Sticking out by color or by orientation pops out; otherwise a serial (total) search is required. The saliency map is built from the input movie in three stages of parallel processing: (1) feature analysis (color, brightness, orientation, motion), (2) feature maps with lateral inhibition, and (3) the saliency map obtained by adding up the feature maps. A rough code sketch follows below.
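The following is a minimal sketch of that three-stage pipeline in Python with NumPy and OpenCV. It is not the hackathon sample's implementation: the function names, the Sobel-based orientation feature, and the Gaussian center-surround stand-in for lateral inhibition are simplifying assumptions, and the motion channel is omitted because it needs a previous frame.

```python
import cv2
import numpy as np

def center_surround(feature, sigma_center=2, sigma_surround=8):
    """Crude stand-in for lateral inhibition: fine-scale response minus coarse-scale response."""
    center = cv2.GaussianBlur(feature, (0, 0), sigma_center)
    surround = cv2.GaussianBlur(feature, (0, 0), sigma_surround)
    return np.abs(center - surround)

def saliency_map(frame_bgr):
    """(1) feature analysis -> (2) feature maps -> (3) saliency map by summation."""
    img = frame_bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)

    # (1) Feature analysis: brightness, color opponency, and a simple orientation/edge feature.
    brightness = (r + g + b) / 3.0
    red_green = r - g
    blue_yellow = b - (r + g) / 2.0
    gx = cv2.Sobel(brightness, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(brightness, cv2.CV_32F, 0, 1, ksize=3)
    orientation = np.sqrt(gx ** 2 + gy ** 2)

    # (2) Feature maps: center-surround filtering approximates lateral inhibition.
    maps = [center_surround(f) for f in (brightness, red_green, blue_yellow, orientation)]

    # (3) Saliency map: normalize each feature map to [0, 1] and add them up.
    maps = [(m - m.min()) / (m.max() - m.min() + 1e-8) for m in maps]
    return sum(maps) / len(maps)
```

Regions that stick out in any single feature map then dominate the summed map, which is why color and orientation singletons pop out without a serial search.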
  12. Point To Target. Step 1: a red cross cursor is displayed (the center of the display is the center of the visual field). Step 2: proceed when the agent looks at the cross cursor. Step 3: a big and a small 'E' are displayed. Step 4: the agent gets 1 reward point when it looks at the big E and 2 points when it looks at the small E. (A minimal environment sketch follows below.)
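As an illustration only (this is not the actual hackathon environment; the class name, target positions, and fixation tolerance are made up), the phase/reward structure above could be sketched like this:

```python
import numpy as np

class PointToTargetSketch:
    """Hypothetical sketch of the Point To Target phases described above."""

    def __init__(self):
        self.phase = "cross"                 # Steps 1-2: red cross at the display center
        self.big_e_pos = np.array([0.3, 0.2])
        self.small_e_pos = np.array([-0.4, -0.1])

    def step(self, gaze, tolerance=0.05):
        """gaze: (x, y) point the agent is currently looking at."""
        reward, done = 0.0, False
        if self.phase == "cross":
            # Step 2: advance once the agent fixates the cross at the center.
            if np.linalg.norm(gaze) < tolerance:
                self.phase = "targets"       # Step 3: big and small 'E' appear
        else:
            # Step 4: 1 point for the big E, 2 points for the small E.
            if np.linalg.norm(gaze - self.big_e_pos) < tolerance:
                reward, done = 1.0, True
            elif np.linalg.norm(gaze - self.small_e_pos) < tolerance:
                reward, done = 2.0, True
        return reward, done

# Hypothetical usage: the agent fixates the cross, then the small E, earning 2 points.
env = PointToTargetSketch()
env.step(np.array([0.0, 0.0]))               # fixate the cross -> targets appear
print(env.step(np.array([-0.4, -0.1])))      # (2.0, True)
```

The remaining tasks below follow the same pattern: a fixation phase on the cross cursor, a stimulus phase, and a gaze-based response that is rewarded when correct.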
  13. Random Dot. Step 1: a red cross cursor is displayed (the center of the display is the center of the visual field). Step 2: proceed when the agent looks at the cross cursor. Step 3: randomly flashing points, moving points, and eight direction arrows are displayed. Step 4: the agent has to find the direction in which the points are moving. Step 5: the agent gazes in that direction and gets 1 point if correct.
  14. Odd One Out. Step 1: a red cross cursor is displayed (the center of the display is the center of the visual field). Step 2: proceed when the agent looks at the cross cursor. Step 3: objects including an 'odd one' are displayed. Step 4: the agent has to find the 'odd one'. Step 5: the objects stay on screen until the agent looks at the 'odd one'.
  15. Visual Search. Step 1: a red cross cursor is displayed (the center of the display is the center of the visual field). Step 2: proceed when the agent looks at the cross cursor. Step 3: objects are displayed. Step 4: the agent has to find the 'magenta T'. Step 5: the agent looks at the black box on the right if it finds the magenta T, otherwise at the black box on the left, and gets 1 point if correct.
  16. Change Detection. Step 1: a red cross cursor is displayed (the center of the display is the center of the visual field). Step 2: proceed when the agent looks at the cross cursor. Step 3: objects are displayed. Step 4: the objects are removed from the display. Step 5: objects are displayed again. Step 6: the agent has to judge whether they are the same as before. Step 7: it gets 1 point if correct.
  17. Multiple Object Tracking. Step 1: a red cross cursor is displayed (the center of the display is the center of the visual field). Step 2: proceed when the agent looks at the cross cursor. Step 3: objects are displayed, one of them green. Step 4: the green object turns black. Step 5: the objects move. Step 6: the motion stops and one object turns blue. Step 7: the agent has to judge whether the (formerly) green object and the blue object are identical. Step 8: the agent looks at the black box on the right if they are identical, otherwise at the black box on the left, and gets 1 point if correct.
  18. Tasks Summary. Point to Target: saccade; uses saliency; no working memory; comment: the agent is often lured by large objects. Random Dot: saccade; no saliency; no working memory. Odd One Out: saccade; uses saliency; no working memory. Visual Search: saccade; no saliency; no working memory. Change Detection: saccade; no saliency; uses working memory. Multiple Object Tracking: pursuit; uses saliency; uses working memory; comment: 1 or 2 objects will be tracked.
  19. Evaluation Criteria
  20. GPS Criteria: Functionally General (able to deal with various tasks), Biologically Plausible (implementation constrained by the Cortex, BG, and SC modules), and Computationally Simple (no "Big Switch" AI).
  21. Evaluation Measures (weighting is TBD). Dealing with many tasks: success in more tasks indicates generality. Reward rate: task success rate divided by the average time for decision making (e.g., a correct rate of 0.8 with a 10 s decision time gives R = 0.8 / 10 s = 0.08/s). Learning rate: time until the reward rate and the loss function saturate. Execution speed & accuracy: a module is expected to take ~10 ms and the whole system ~100 ms; e.g., taking 1 minute to compute one frame is not acceptable. (A small reward-rate helper is sketched below.)
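A tiny helper reproducing the slide's reward-rate arithmetic (the function name is an assumption, not part of the evaluation tooling):

```python
def reward_rate(success_rate, mean_decision_time_s):
    """Reward rate as defined on the slide: task success rate per second of decision time."""
    return success_rate / mean_decision_time_s

# Slide example: a correct rate of 0.8 with a 10-second decision time gives 0.08 per second.
assert abs(reward_rate(0.8, 10.0) - 0.08) < 1e-9
```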
  22. Biological Plausibility: the implementation is constrained by the Cortex, BG, and SC modules (it is OK to add or divide modules). Cortex: likelihood calculation (accumulator model). Threshold control: the Cortex - Basal Ganglia (BG) - Superior Colliculus (SC) loop, in which the connection from Cortex to the Striatum (within BG) is modified by learning with dopaminergic neurons. SC: bursts (motor output) when the likelihood exceeds the threshold. (See the accumulator sketch below.)
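A minimal sketch of the accumulate-to-threshold idea, assuming one scalar evidence value per candidate target per step; the function name, the optional leak term, and the random-evidence example are assumptions, and the data format differs from the sample's [[likelihood, ex, ey], ...] lists shown later.

```python
import numpy as np

def accumulate_until_burst(evidence_stream, thresholds, leak=0.0):
    """Cortex-like evidence accumulation per candidate target; an SC-like 'burst'
    (decision) is emitted for the first candidate whose accumulated likelihood
    crosses its threshold. `thresholds` is what the Cortex-BG-SC loop would adapt
    via dopamine-driven learning of the cortex-to-striatum connection
    (lower thresholds mean faster, less cautious decisions)."""
    thresholds = np.asarray(thresholds, dtype=float)
    acc = np.zeros_like(thresholds)
    for t, evidence in enumerate(evidence_stream):    # evidence: one value per candidate
        acc = (1.0 - leak) * acc + np.asarray(evidence, dtype=float)
        crossed = np.flatnonzero(acc >= thresholds)
        if crossed.size:
            winner = crossed[np.argmax(acc[crossed])]
            return winner, t                          # candidate index and decision step
    return None, None                                 # no decision within the stream

# Hypothetical usage: candidate 1 receives the strongest evidence and wins.
rng = np.random.default_rng(0)
stream = rng.normal(loc=[0.1, 0.3, 0.1], scale=0.05, size=(50, 3))
print(accumulate_until_burst(stream, thresholds=[2.0, 2.0, 2.0]))
```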
  23. Computational Simplicity. • Your code on GitHub will be reviewed by the hackathon committee. • Code to be evaluated must be submitted by 24:00 on Oct. 7th. • Late submission may result in a lower evaluation.
  24. Other evaluation points, judged from your presentation, code, etc.: originality and usability.
  25. Samples explained
  26. Sample code with Docker & BriCA. Docker: the issue is that installation is an entry barrier for ML; the provided environment bundles DL libraries such as TensorFlow, Keras, and Chainer, plus OpenCV and BriCA, so that participants can spend more time studying the model and examining the prototype. BriCA: following the WBA Core Hypothesis ("the brain exhibits intelligence by connecting learned ML modules"), the intention is for participants to connect ML modules in an asynchronous and parallel way, mimicking the asynchronous and parallel processing of the brain. (A conceptual sketch of such asynchronous connection follows below.)
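BriCA's actual API is not reproduced here. The sketch below uses only plain Python threads and queues to illustrate the asynchronous, parallel message passing between modules that the sample relies on; the class, port, and module names are all made up for illustration.

```python
import queue
import threading
import time

class AsyncModule(threading.Thread):
    """Hypothetical stand-in for an asynchronously scheduled component (not BriCA's API)."""

    def __init__(self, name, fire):
        super().__init__(name=name, daemon=True)
        self.fire = fire                   # the module-local computation (e.g., an ML model)
        self.in_port = queue.Queue()
        self.out_ports = []

    def connect(self, other):
        """Wire this module's output to another module's input port."""
        self.out_ports.append(other.in_port)

    def run(self):
        while True:
            data = self.in_port.get()      # wait for upstream output
            result = self.fire(data)       # compute independently of the other modules
            for port in self.out_ports:
                port.put(result)           # hand the result downstream asynchronously

# Example: Retina -> Visual Cortex, each running in its own thread.
retina = AsyncModule("Retina", fire=lambda img: img)         # pass-through, as in the sample
cortex = AsyncModule("VisualCortex", fire=lambda img: img)   # stub
retina.connect(cortex)
retina.start(); cortex.start()
retina.in_port.put("frame_0")              # feed one input frame
time.sleep(0.1)                            # give the daemon threads a moment to propagate it
```

Because each module runs on its own schedule and only exchanges data through ports, downstream modules always see slightly stale outputs, which is the step-delay effect discussed on the final slide.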
  27. Brain organs and related modules in the sample. Retina: good vision at the center, blurred in the periphery. Visual Cortex: the what (object) and where (motion) paths; the sample module passes the image through (no operation). LIP (lateral intraparietal cortex): on the where path; creates the saliency map. FEF (frontal eye field): incurs eye movement when stimulated (motion command?). PFC (prefrontal cortex): planning, task switching, working memory; generating allocentric information (in primates). Cerebellum: smoothing movement; rough commands for motion targets. BG (basal ganglia): actor-critic RL (a minimal sketch follows below). SC (superior colliculus): controls the motion-command output from the inputs it receives from BG & FEF.
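Since the slide only names the algorithm for BG, here is a minimal tabular actor-critic sketch to make that row concrete. The class name, state/action indexing, and learning rates are placeholders rather than the sample's code; the TD error `delta` plays the role ascribed to the dopaminergic signal on the Biological Plausibility slide.

```python
import numpy as np

class ActorCriticSketch:
    """Minimal tabular actor-critic; state/action indexing and learning rates are placeholders."""

    def __init__(self, n_states, n_actions, alpha=0.1, beta=0.1, gamma=0.95):
        self.v = np.zeros(n_states)                  # critic: state-value estimates
        self.pref = np.zeros((n_states, n_actions))  # actor: action preferences
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def act(self, s):
        """Softmax over the actor's preferences for state s."""
        p = np.exp(self.pref[s] - self.pref[s].max())
        p /= p.sum()
        return np.random.choice(len(p), p=p)

    def learn(self, s, a, r, s_next):
        # TD error: the quantity the dopaminergic reward-prediction-error signal is likened to.
        delta = r + self.gamma * self.v[s_next] - self.v[s]
        self.v[s] += self.alpha * delta              # critic update
        self.pref[s, a] += self.beta * delta         # actor update (cf. cortex-to-striatum weights)
        return delta
```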
  28. Connection in Saccade. [Diagram: the Environment sends the retinal image and reward to the agent. The Retina blurs the periphery; the Visual Cortex passes the image through in the sample; LIP makes the saliency map; FEF holds a retinal (location-dependent) accumulator and a non-retinal (location-independent) accumulator and outputs lists of [likelihood, ex, ey]; the Hippocampal Formation provides the allocentric location on the panel; PFC switches allocentric location, phases, etc.; BG controls the thresholds for the accumulator likelihoods (a list of likelihood_threshold values); SC outputs the action, [ex, ey] or None. Connections marked "?" are to be created by the participants.]
  29. Connection in Pursuit. [Diagram: the same connections as in the saccade case, with the Cerebellum added; it also outputs an action [ex, ey], and its input connection is among those marked "?" to be created by the participants.]
  30. Test tools: a tool to display the accumulators and visual features. Issues: it is difficult to grasp the learned features in RL experiments and to grasp the accumulator likelihoods in the accumulator model, so the tool visualizes these features in real time.
  31. What the participants should do
  32. Frankly, this hackathon is advanced... • The more you learn about the brain, the more hypotheses you will find and the more things you will want to do, so decide on a hypothesis to be examined in the hackathon as soon as possible. • Grounding BriCA & current DL: backpropagation-based DL propagates errors synchronously, so it is difficult to grasp the effect of step delays (in BriCA) and to diagnose learning problems; you may have to combine multiple modules into a single BriCA module to make them work synchronously. • The tasks are cognitive (e.g., planning, task switching); how do you combine RL with such tasks?
