Imperial College London
Department of Electrical and Electronic Engineering
Final Year Project Report 2007

Project Title: The Synaptic Processing Unit
Student: Anthony Hsiao
Course: 4T
Project Supervisor: Dr. George Constantinides
Second Marker: Professor Alessandro Astolfi
Abstract

A small but growing community of engineers and scientists around the world is breaking new ground in the field of Neuromorphic Engineering, succeeding in designing ever more complex brain-inspired artificial neural systems and implementing them in low power analogue VLSI silicon chips.

A recently proposed synapse model, the binary cascade synapse, has memory properties that are superior to those of other comparable models, and it is suitable for implementation in digital hardware. Recent efforts have succeeded in designing FPGA implementations of these binary cascade synapses, but failed to implement a usefully large number of them on one single chip.

This project develops the FPGA implementation of binary cascade synapses further by radically changing the digital architecture, essentially designing a microprocessor that processes cascade synapses. This processor is called the Synaptic Processing Unit (SPU), and the prototype implementation can currently host up to 8192 cascade synapses.

This report describes the development of the SPU, which necessitated the development of a novel learning rule alongside it, called Spike Timing and Activity Dependent Plasticity (STADP), and presents a characterisation of this learning rule. Both the SPU and the learning rule are implemented on an FPGA and evaluated in-circuit.

Then, to put the SPU to an ultimate test, it was used together with an aVLSI neuron chip to form a neural system with binary cascade synapses, and was given a real classification task, whereby it was taught to classify two greyscale images. And indeed, the system does successfully classify the two images, which is a very encouraging result.

To the best of the author's knowledge, the SPU presented here is the first hardware implementation of its kind with such a large number of synapses in the world.
Acknowledgements

Thank you to all those people who have helped me get this far, both academically and otherwise, and to those who accompanied me along the way.

In particular, I would like to thank Dylan Muir at the Institute of Neuroinformatics for supervising my project, and being there whenever I needed help, especially during the crazy hours before the FPGA decided to take a holiday in the US.

I would also like to thank Dr. George Constantinides at Imperial College London for supervising my project and Prof. Alessandro Astolfi for second marking it. More words of thanks go to Prof. Alessandro Astolfi for coordinating my exchange to ETH Zurich, and for being patient when necessary and laid-back whenever possible.

Thank you Stefano Fusi, one of the most impressive characters I met at the Institute, for giving me initial feedback and coming up with the basis for what later became STADP.

Special thanks to Sungdo Choi and Daniel Fasnacht for all the help and support with the hardware and infrastructure; my computer was not struck by a particle from space, it turned out.

Special thanks to Johanna von Lindeiner for good nights on the bench, and the many inspiring exchanges. I actually mean it!

A very special thank you goes out to Pantha Roy, who is just amazing. Thanks for the good times, and for attempting to save me from becoming a social recluse during the final few weeks of this project.

An equally special thank you goes out to Siddharta Jha, another amazing character. Thank you for all those discussions and creative breaks, which really enriched my time at the institute.

A massive thank you to a fellow brother in work, Christopher Maltby, for enduring all those long days and longer nights of work with me. As you know, without your company, I would not have been able to get any work done, let alone finish.

I would like to thank my parents, Wendy and Tien-Wen, for their unconditional support and for opening so many doors for me. Without your efforts and sacrifices, I would not be where I am today, and would probably not get wherever I will get in five, ten years!

Finally, I would like to thank Dylan Muir again, because I am actually very grateful for all the help! Without your razor-sharp brain lobes and your patience and support, I would not have been able to achieve half of what I managed to do!
Table of contents

1 Introduction
1.1 What is neuromorphic engineering?
1.2 The topic of this project
1.3 Aims
1.4 Further report structure
2 Background
2.1 Of brains, neurons and synapses
2.2 Synaptic plasticity at the heart of learning in neural systems
2.3 The cascade synapse model
2.4 Previous work
2.5 Overview of the hardware environment
2.5.1 Silicon neurons
2.5.2 Silicon synapses
2.5.3 Communication using AER
2.5.4 The FPGA board
2.5.5 Software
3 STADP – a novel Hebbian learning rule
3.1 STADP – yet another learning rule?
3.1.1 From spike time to spike rate
3.2 Characteristics of STADP
4 Design
4.1 Summary of features of the Synaptic Processing Unit
4.2 System level design
4.2.1 The SPU in a neural system
4.2.2 Input and output ports
4.3 Virtualising the cascade synapse
4.4 SPU internal addressing
4.5 Modular design of the SPU
4.6 Module specifications
4.6.1 Forwarding
4.6.2 Learning rule (STADP)
4.6.3 Cascade process
4.6.4 Cascade memory
4.6.5 Global signals
5 Implementation
5.1 Pseudo-random number generators
5.2 Description of generics
5.3 Module level design
5.3.1 Spike forwarding
5.3.2 Learning rule (STADP)
5.3.3 Cascade synapse
5.3.4 Cascade memory
5.3.5 Signal selector
5.4 System integration
5.5 Integration into the FPGA board
5.5.1 On clocks
6 Verification
7 Evaluation & experimentation
7.1 In-hardware characterisation of STADP
7.2 Modifications for the experimental setup
7.3 Circuit calibration
7.4 In-circuit verification
7.4.1 Forwarding
7.4.2 Potentiation
7.4.3 Depression
7.5 A real classification task
7.5.1 From image to pre-synaptic stimuli
7.5.2 Teaching methods
7.5.3 Results – normal teaching
7.5.4 Results – bottom up teaching
7.5.5 Remarks on the classification experiments
8 Discussion
8.1 The hardware
8.2 STADP
8.3 The classification task
8.4 Calibration of the neural system
9 Conclusion
9.1 Refinements
10 References
10.1.1 Web references
10.1.2 Datasheets and reference books
11 Appendix I – Supplementary files
12 Appendix II – Verification checklists
12.1 Module level verification
12.2 System level verification
13 Appendix III – A journey through the SPU
13.1 Pre-synaptic spike
13.2 Post-synaptic spike
14 Appendix IV – Design hierarchy of source files
List of figures

Figure 1: Image output of a silicon retina
Figure 2: Neurons of the world
Figure 3: Action potentials (spikes) are commonly described by three properties
Figure 4: Action potentials of the world
Figure 5: CGI of a synapse with pre- and post-synaptic neurons
Figure 6: Micrograph of a synapse taken at the University of St. Louis
Figure 7: Different forms of synaptic plasticity
Figure 8: Schematic of a cascade model of synaptic plasticity
Figure 9: Initial signal-to-noise ratio as a function of memory lifetime, from [1]
Figure 10: Circuit diagram of an ultra low power Integrate & Fire neuron
Figure 11: Circuit diagram of the so called Diff-Pair Integrator (DPI) synapse
Figure 12: Prototype FPGA board developed by Daniel Fasnacht
Figure 13: Experimental hardware setup
Figure 14: STADP
Figure 15: The STADP mechanism
Figure 16: Simulated behaviour of STADP
Figure 17: System level interaction of SPU and aVLSI neuron chip
Figure 18: Bit representation of cascade synapses
Figure 19: SPU internal addressing format
Figure 20: Conceptual architecture of the SPU
Figure 21: A hybrid cellular automata linear array
Figure 22: Conventions on the arrows used in block diagrams
Figure 23: Spike forwarding module block diagram
Figure 24: STADP learning rule block diagram
Figure 25: Initialisation of delta_t look-up table
Figure 26: Flow diagram of the cascade synapse state update rule
Figure 27: Cascade module block diagram
Figure 28: Cascade memory block diagram
Figure 29: Input source selector block diagram
Figure 30: Pipelined SPU block diagram
Figure 31: Pipelined dataflow through the SPU
Figure 32: Block diagram of the integration of the SPU within the FPGA board
Figure 33: Comparison of delta_t_LUT content for 5kHz and 90MHz
Figure 34: Simulated hardware behaviour of STADP at 5kHz simulation clock frequency
Figure 35: Frequency response of the neural system
Figure 36: Oscilloscope screenshot of post-synaptic membrane potential
Figure 37: Example of a coherent 30Hz Poisson spike train to all 256 synapses
Figure 38: Oscilloscope screenshot of post-synaptic membrane potential
Figure 39: In-circuit verification of potentiation
Figure 40: In-circuit verification of depression
Figure 41: Oscilloscope screenshot of decreasing post-synaptic firing rate
Figure 42: Using pictures as pre-synaptic stimuli
Figure 43: Spike trains derived from 16x16 pixel greyscale images of Anthony and Dylan
Figure 44: Conceptual procedure of a real classification task
Figure 45: Classification task: teach Dylan, show Dylan first, at 22Hz
Figure 46: Classification task: teach Dylan, show Anthony first, at 22Hz
Figure 47: Classification task: teach Dylan, show Dylan first, at 25Hz
Figure 48: Classification task: teach Dylan, show Anthony first, at 25Hz
Figure 49: Classification task: teach Anthony, show Anthony first, at 22Hz
Figure 50: Classification task: teach Anthony, show Dylan first, at 22Hz
Figure 51: Classification task: teach Anthony, show Anthony first, at 25Hz
Figure 52: Classification task: teach Anthony, show Dylan first, at 25Hz
Figure 53: Classification task: bottom-up teaching Dylan, at 50Hz
Figure 54: Classification task: bottom-up teaching Dylan, at 70Hz
Figure 55: Classification task: bottom-up teaching Dylan, for 2s at 50Hz
Figure 56: Classification task: bottom-up teaching Anthony, at 50Hz
Figure 57: Classification task: bottom-up teaching Anthony, at 70Hz
Figure 58: Classification task: bottom-up teaching Anthony, for 2s at 50Hz
Figure 59: Expected effects on a synapse
Figure 60: Pre-synaptic spike arrives at SPU
Figure 61: Valid pre-synaptic spike gets forwarded, after two clock delays
Figure 62: Valid pre-synaptic spike generates a plasticity event
Figure 63: Cascade synapse changes in operation
Figure 64: Plasticity events
Figure 65: Valid post-synaptic spike arrives at SPU
Figure 66: Post-synaptic spike does not get forwarded
Figure 67: Post-synaptic spike sets post-synaptic expiry time
1 Introduction

'The brain – that's my second most favourite organ!' – Woody Allen

Solving the mystery behind how the human brain works and computes will be one of the most significant discoveries in the history of science. A profound understanding of our most important organ (bar Woody Allen…) will have significant implications for healthcare, psychology and ethics, as well as for computing, robotics and artificial intelligence. Visionaries such as Ray Kurzweil go as far as predicting that before the middle of the 21st century, humans and machines will be able to merge in a way never seen before, as brain interfaces enable users to bridge the gap between the real and virtual worlds to a level where the distinction between 'real' and 'not real' might lose its importance. Artificial systems would reach computational powers that matched those of the human brain, only to surpass them a few years later.

Most people find it difficult to imagine such scenarios, especially since even the most powerful computers to date, which can perform billions of operations per second, cannot reproduce some of the computational magic that human brains perform on a day to day basis, such as pattern recognition or visual processing. 'Intelligent' and 'interactive' systems are neither intelligent nor interactive; the most advanced robots in the world are no match for a young child when it comes to performing motor tasks or recognition; the thought of ever meeting a machine with intelligence, humour or an opinion goes far beyond what most people think their computers will ever be able to do.

Such future scenarios have been the topic of several books and films, and are portrayed as horror scenarios more often than not, ignoring many of the potential opportunities that such a future could bear. Without attempting to make any qualifying judgments, it should be noted that change happens, whether it is welcome or not.

This change could well be initiated by a small but growing community of engineers and scientists, driven by impressive advances in neuroscience, who are making significant progress in copying neuronal organisation and function into artificial systems. The secret to the human brain's superior abilities appears to reside in how the brain organises its slow acting electrical and chemical components (namely neurons, the basic computational units in the brain, and synapses, which are the interfaces between neurons and possess rich dynamics, allowing neurons to form interconnected neural circuits). Researchers sometimes speak of 'morphing' these structures of neural connections into silicon circuits, creating neuromorphic microchips. If successful, this work could lead to implantable silicon retinas for the blind or sound processors for the deaf that last for 30 years on a single nine-volt battery, or to low-cost, highly effective visual, audio or olfactory recognition chips for robots and other smart machines. The long term goal is to engineer ever more complex artificial systems with ever richer behaviour, and ultimately, the construction of an artificial brain.

1.1 What is neuromorphic engineering?

The term neuromorphic was coined by Carver Mead in the late 1980s to describe Very Large Scale Integration (VLSI) systems containing analogue electronic circuits that mimic neuro-biological architectures present in the nervous system. Neuromorphic Engineering is a new interdisciplinary field that takes inspiration from biology, physics, mathematics and engineering to design analogue, digital or mixed-mode analogue/digital VLSI artificial neural systems. These include vision systems, head-eye systems, auditory processors and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems. Although the field of neuromorphic engineering is still relatively new, impressive and encouraging results have already been achieved, ranging from 'simple' chips with silicon neurons or synapses [13] to more complex systems such as silicon retinas or cochleas [13], which have been demonstrated in the past.
Figure 1: Image output of a silicon retina, showing the head of a person; from the Brains in Silicon Lab at Stanford University.

1.2 The topic of this project

This project focuses on one aspect of neuromorphic systems which is at the heart of some of the dynamics of neural networks, namely on synapses. Fusi et al. have demonstrated how using ordinary bounded synapse models can have devastating effects on memory in scenarios with ongoing modifications, and proposed a new synapse model, the binary Cascade Synapse [1], which outperforms ordinary (binary) synapse models in several respects [9].

The nature of the Cascade Synapse makes it convenient to implement in digital hardware rather than analogue VLSI, and it would be useful to augment existing neuromorphic neuron chips with Cascade Synapse functionality. Such a neural system could then act as one single entity in a larger multi chip environment.

Previous efforts have successfully designed individual cascade synapses and implemented a small number – eight, to be precise – of them on an FPGA; however, in order to perform useful computation in a reasonably sized neural system, a massive up-scaling of the number of synapses on one chip is necessary. In order to augment a typical aVLSI neuron chip with cascade synapse functionality, any number upwards of 4000 synapses would be desirable, or rather, necessary.

One way of doing this is to fundamentally change the way cascade synapses are implemented on the FPGA, referred to as virtualisation: rather than having a number of fixed hardware cascade synapses, which is logic-real-estate inefficient, an abstraction of each synapse can be stored in memory, and only retrieved, processed and stored back on demand. Since memory, unlike logic, is generally cheap and abundant in digital circuits, this Synaptic Processing Unit (SPU) can potentially allow for a very large scale implementation of cascade synapses on one single FPGA.
1.3 Aims

1. To develop a Synaptic Processing Unit based on an FPGA that implements a large number of cascade synapses
2. To integrate the SPU with an aVLSI neuron chip to form a working neural system
3. To demonstrate the capabilities of the neural system by performing a real classification task

1.4 Further report structure

This report is written for the scientifically and technically minded reader with background knowledge of the concepts of electronic engineering, and is further structured as follows:

2. Background
This chapter attempts to brief the reader on all the necessary interdisciplinary background knowledge required for this project. In particular, it outlines some of the relevant biology and neuroscience, explains the binary cascade model used in more detail, and describes the hardware and infrastructure environment the SPU will be working in.

3. STADP – a novel Hebbian learning rule
This chapter will argue the case for developing a new learning rule called STADP, and describe how it works. It will also present an initial characterisation of the learning rule derived from simulation.

4. Design
This chapter starts by providing a summary of the features of the SPU, to allow the reader to get a first impression. Then, it outlines the high level design and argues for the system architecture used. It finishes by giving a set of specifications for a modular implementation of the design.
5. Implementation
This chapter starts by going off on a tangent, diving into the realm of random number generators. Then, it describes how the specifications given in the previous chapter were implemented in each module, and how the SPU integrates with the FPGA and its environment.

6. Verification
This chapter is a very short one, which only portrays the efforts undertaken in order to verify the design and implementation. It will not reproduce the verification efforts themselves.

7. Evaluation & experimentation
This is one of the key chapters and describes all the in-circuit verification and experimentation that has been carried out. Furthermore, it explains the real classification task given to the neural system, and presents the results.

8. Discussion
This chapter discusses the evaluation and experimentation results, and tries to make general statements about the operation of the SPU, and conclusions about the success of the classification tasks themselves.

9. Conclusion
This chapter wraps up the report, and includes the conclusions derived from the work presented here. It objectively assesses advantages and disadvantages of the SPU, and suggests further improvements or changes to the system that might be worthwhile.

10. References
This chapter lists the sources that have been referred to while writing the report, as well as sources that have been used throughout the design and implementation of the SPU.

11. Appendices
There are four appendices: Appendix I with a list of supplementary Matlab files used throughout the project, Appendix II with a copy of the checklists used for verification, Appendix III with screenshots of waveforms showing the journey of a pre- and a post-synaptic spike through the SPU, and finally Appendix IV, listing the design hierarchy of the VHDL source files used.
2 Background

'If the human brain were so simple that we could understand it, we would be so simple that we couldn't' – Emerson M. Pugh

2.1 Of brains, neurons and synapses

When IBM's Deep Blue supercomputer beat then world chess champion Garry Kasparov during their rematch in 1997, it did so by means of sheer brute force and computational power. The machine evaluated some 200 million potential board moves a second, whereas Kasparov considered only three each second, at most [10.1.1]. But despite Deep Blue's victory (in fact, Kasparov won the first match against Deep Blue the year earlier, and IBM refused to agree to a third 'deciding' match [21]), computers are no real competition for the human brain in areas such as vision, hearing, pattern recognition, and learning, not to mention their inability to display creativity, humour or emotions. And when it comes to operational efficiency, there is no contest at all. A typical room-size supercomputer weighs roughly 1,000 times more, occupies 10,000 times more space and consumes a millionfold more power than does the neural tissue that makes up the brain [22].

Clearly, computers and brains are fundamentally different, both in terms of architecture and performance. Table 1 summarises important key differences between brains and (conventional) computers.

                    Brain                           PC (CPU)
Processing          ~10^11 neurons,                 10^9 transistors
elements            ~10^14 synapses
Element size        10^-6 m                         10^-6 m
Energy use          30 W                            30 W
Speed               100 Hz                          10^9 Hz
Style of            Parallel, distributed,          Serial, centralized,
computation         memory at computation           memory distant to computation
Fault tolerant      Yes                             No

Table 1: A comparison between computers and brains
At the most basic cellular level, brains consist of a vast number of brain cells, an estimated 100 billion of them, called neurons. These are also believed to constitute the basic building blocks of computation within the central nervous system, and are in many ways analogous to logic gates in digital electronics. The brain's network of neurons forms a massively parallel information processing system.

While there are a large number of different types of neurons, each with different functions and morphologies, most neurons are typically composed of a soma, or cell body, a dendritic tree and an axon, as shown in Figure 2.

Figure 2: Neurons of the world. There are many different types of neurons, each with different morphologies and functions, which are found in different parts of brains. Image courtesy of G. Indiveri.

One of the most important properties of a neuron is its membrane potential, the potential difference across the cell membrane, which is used to communicate between neurons. A complicated molecular mechanism that stems from the cell's highly complex membrane can give rise to so called action potentials or spikes, which are a sharp increase followed by an equally sharp drop in the membrane potential within a few ms. A neuron receives inputs, i.e. spikes, from other neurons, typically many thousands, on its dendritic tree, and integrates them (approximately) on its membrane potential. Once the membrane potential exceeds a certain threshold, the neuron generates a spike which travels from the body down the axon, commonly described as the output of a neuron, to the next neuron(s) (or other receptors). This spiking event is also called depolarisation, and is followed by a refractory period, during which the neuron is unable to fire. The membrane potential of a spiking neuron is shown conceptually in Figure 3, while Figure 4 shows some measurements of real action potentials of the world. Typically, neurons fire at rates between 0Hz and about 100Hz, and both the precise timing of individual spikes and the firing rates of neurons are believed to play an important role in neural communication and computation.

Figure 3: Action potentials (spikes) are commonly described by three properties: pulse width, firing rate or inter-spike interval, and refractory period. Courtesy of Giacomo Indiveri.

Figure 4: Action potentials of the world. Courtesy of Giacomo Indiveri, modified by Anthony Hsiao.

The axon endings of neurons almost touch the dendrites or cell body of the next neuron. The gap between two neurons is a specialised structure called a synapse, and it is the point of transmission of spikes from the pre-synaptic neuron to the post-synaptic neuron, as shown in Figure 5 and Figure 6. This transmission is effected by neurotransmitters, chemicals which are released from the pre-synaptic neuron upon depolarisation and bind to receptors in the post-synaptic neuron, thereby advancing its depolarisation. Most synapses are excitatory, i.e. they increase the depolarisation of the post-synaptic neuron, although there are so called inhibitory synapses (with inhibitory neurotransmitters), which render a post-synaptic neuron less excitable. The human brain is estimated to have a vast 10^14 synapses.

The extent to which a spike from one neuron is transmitted on to the next, the synaptic efficacy or weight, depends on many factors, such as the amount of neurotransmitter available or the number and arrangement of receptors, and is not constant, but changes over time. This property is called synaptic plasticity, and it is this variable synaptic strength that is believed to give rise to both memory and learning capabilities, which makes it particularly interesting to study synapses!
Figure 5: CGI of a synapse with pre- and post-synaptic neurons. Excerpt from the 2005 winner of the Science and Engineering Visualisation Challenge. By G. Johnson, Medical Media, Boulder, CO.

Figure 6: Micrograph of a synapse taken at the University of St. Louis. In the center of the image is the synaptic cleft, which separates the pre- (top) and post-synaptic neuron (bottom). The pre-synaptic neuron has clearly visible vesicles which contain neurotransmitters. Upon pre-synaptic depolarisation, these neurotransmitters are released and diffuse across the synaptic cleft, to be received by receptors on the post-synaptic neuron, advancing its depolarisation.

Scientists have developed various models of the underlying molecular mechanisms of synaptic plasticity, describing it to good levels of accuracy; however, it is important to appreciate that there are details of synaptic plasticity which are still the subject of ongoing research.
2.2 Synaptic plasticity at the heart of learning in neural systems

There are several underlying mechanisms that cooperate to achieve synaptic plasticity, including changes in the quantity of neurotransmitter released into a synapse and changes in how effectively cells respond to those neurotransmitters [7]. As memories are believed to be represented by vastly interconnected networks of synapses in the brain, synaptic plasticity is one of the important neuro-chemical foundations of learning and memory. Strengthening of a synapse, Long-Term Potentiation (LTP), and weakening of a synapse, Long-Term Depression (LTD), are widely considered to be the major mechanisms by which learning happens and memories are stored in the brain.

Many models of learning assume some kind of activity based plasticity, whereby an increase in synaptic efficacy arises from the pre-synaptic cell's repeated and persistent stimulation of the post-synaptic cell. These kinds of learning rules are commonly referred to as Hebbian learning rules, popularly summarised as 'what fires together, wires together'.

Another particularly prominent experimentally observed form of long term plasticity is called Spike-Timing Dependent Plasticity (STDP), which depends on the relative timing of pre- and post-synaptic action potentials. If a pre-synaptic spike is quickly succeeded by a post-synaptic spike, then there appears to exist some kind of causality, since the pre-synaptic neuron has contributed to the depolarisation of the post-synaptic neuron, and the two should be connected more strongly, by potentiating the synapse. Conversely, if a pre-synaptic spike is directly preceded by a post-synaptic spike, their connection should be weakened, and the synapse gets depressed. Different forms of observed plasticity that can be described by STDP are shown in Figure 7.
Figure 7: Different forms of synaptic plasticity. The amount (qualitatively) and type of synaptic modification evoked by repeated pairing of pre- and post-synaptic action potentials in different preparations. The horizontal axis is the difference t_pre - t_post of these spike times. Results are shown for slice recordings of different neurons. Without going into unnecessary detail, the important point to note is that different forms of plasticity exist. Figure from Abbott & Nelson 2000.

Several other models of synaptic plasticity exist, ranging over several levels of complexity and biological plausibility. Each has its advantages and disadvantages, proposing different mechanisms of synaptic plasticity and trying to explain different types of experimentally observed plasticity. Other global regulatory processes of learning, such as synaptic scaling or synaptic redistribution, are thought to be necessary alongside activity based learning rules [5].

While learning rules and models of synaptic plasticity attempt to describe the mechanism by which synaptic plasticity is generated, different models of synapses themselves also exist, and these can vary greatly in the way they respond to 'plasticity signals'.
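To make the pair-based STDP rule concrete, the following Matlab fragment sketches the classic exponentially windowed weight update. It is an illustration only, not part of this project's implementation; the window shape and all parameter values (A_plus, A_minus, tau_plus, tau_minus) are common textbook assumptions rather than values used anywhere in this report.

% Minimal pair-based STDP weight update (illustrative only).
% dt = t_post - t_pre; dt > 0 means the pre-synaptic spike preceded
% the post-synaptic one, so the synapse is potentiated.
A_plus  = 0.01;     % maximum potentiation step (assumed)
A_minus = 0.012;    % maximum depression step (assumed)
tau_plus  = 20e-3;  % potentiation time window in seconds (assumed)
tau_minus = 20e-3;  % depression time window in seconds (assumed)

stdp = @(dt) (dt >= 0) .* (A_plus  * exp(-dt / tau_plus)) ...
           - (dt <  0) .* (A_minus * exp( dt / tau_minus));

dw_ltp = stdp(5e-3)   % pre 5 ms before post -> positive weight change
dw_ltd = stdp(-5e-3)  % post 5 ms before pre -> negative weight change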
2.3 The cascade synapse model

Storing memories of ongoing, everyday experiences requires a high degree of synaptic plasticity, while retaining these memories demands protection against changes induced by further activity and experiences. Models in which memories are stored through switch-like transitions in synaptic efficacy are good at storing but bad at retaining memories if these transitions are likely, and they are poor at storage but good at retention if they are unlikely [1]. In order to address this dilemma, Fusi et al. developed the model of binary cascade synapses, which combines high levels of memory storage with long retention times and significantly outperforms conventional models [9].

They consider the case of binary synapses, i.e. synapses with only two efficacies (for example potentiated and depressed, or weak and strong), which is not implausible, since biological synapses have been reported to display binary states of efficacy as well [2]. The structure of a binary cascade model is shown in Figure 8, specifying two independent dimensions for each synapse. Just like ordinary models of binary synapses, a binary cascade synapse can be in one of two states of efficacy, weak or strong, but while ordinary models only allow one fixed value of plasticity, cascade synapses possess a cascade of n states with varying degrees of plasticity, implementing metaplasticity (i.e. the plasticity of plasticity). Ongoing plasticity then corresponds to transitions of a synapse between states characterised by different degrees of plasticity, rather than (only) different synaptic strengths.

Figure 8: Schematic of a cascade model of synaptic plasticity. Courtesy of Stefano Fusi. There are two levels of synaptic strength, weak (yellow) and strong (blue), denoted by + and -. Associated with these strengths is a cascade of n states (n = 5 in this case). Transitions between state i of the cascade of either strength and state 1 of the opposite strength take place with probability q_i, corresponding to conventional synaptic plasticity. Transitions with probabilities p_i± link the states within the respective cascade (downward arrows), corresponding to metaplasticity.

Binary cascade synapses can respond to any learning rule with binary plasticity signals, i.e. signals that are either 'potentiate' or 'depress', and they respond to them stochastically; plasticity signals are only responded to with a given probability, which is determined by the state along the cascade the synapse is in. So it is the varying probability of responding to plasticity signals that implements the different degrees of plasticity described above.

In the highest state (state 1 of the cascade in Figure 8), the probability of responding to a plasticity event is 1, and it decreases for states further down the cascade, where the synapse becomes less plastic. In the model analysed by Fusi, the plasticity actually halves for every state down the cascade, i.e. there is a 50% chance of responding to a plasticity signal in the second cascade state, 25% in the third, and so forth.

A cascade synapse can respond to plasticity events in two ways, depending on whether it already has the 'right' efficacy, referred to as switching and chaining. If it switches, then it is changing efficacy, i.e. from weak to strong, or vice versa. If a synapse switches, it will always make a transition to state 1, i.e. the most plastic state, of the opposite cascade, regardless of what state it was in before. In Figure 8, these transitions are represented by the arrows between the two cascades, with plasticity probabilities given by q_i. If the synapse chains, i.e. it already has the right efficacy, then it moves down one state in the cascade, thereby reducing (halving) its plasticity probability and becoming less plastic. In Figure 8, this is represented by the downward arrows connecting consecutive states within each cascade, with plasticity probabilities given by p_i±.

Thus, cascade synapses can respond to ongoing modifications by reducing their plasticity, thereby 'reassuring' their state of efficacy. Another way of looking at it is that synaptic efficacies and their degrees of plasticity are dependent on the history of the synapses and the plasticity signals they have received.
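Restated procedurally, the switching and chaining rules amount to only a few lines. The Matlab sketch below updates a single cascade synapse in response to one binary plasticity signal, assuming (as in the model analysed by Fusi et al.) that the response probability halves with every state down the cascade; the state encoding and function name are illustrative choices, not the bit representation used inside the SPU.

function [efficacy, pos] = cascade_update(efficacy, pos, signal, n)
    % efficacy: 0 = weak, 1 = strong; pos: cascade state 1..n (1 = most plastic)
    % signal:   +1 = 'potentiate', -1 = 'depress'
    p = 2^(-(pos - 1));               % response probability halves per state
    if rand() < p                     % respond to the signal stochastically
        wants_strong = (signal > 0);
        if efficacy ~= wants_strong
            efficacy = wants_strong;  % switch: flip efficacy, jump to state 1
            pos = 1;                  % of the opposite cascade
        else
            pos = min(pos + 1, n);    % chain: one state less plastic
        end                           % (the deepest state saturates here)
    end
end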
Fusi et al. assess the performance of cascade synapses against that of ordinary binary synapses by comparing the strength of an initial memory trace, the initial signal-to-noise ratio, as well as the average memory lifetime, the point at which this signal-to-noise ratio becomes equal to 1, for both synapse models (it is worthwhile to reiterate that it was this trade-off, the ability to store memories easily vs. retaining them for a long time, that originally led them to develop the cascade synapse model in the first place). They find that cascade models arrive at a better compromise, storing new memories more easily and faithfully, yet retaining them for a longer period of time, as shown in Figure 9. Without going into unnecessary detail (the interested reader is advised to consult [1] for more information), they find that the better performance of cascade synapses stems from the fact that they experience power-law forgetting, unlike ordinary binary synapses, which experience exponentially fast decay of their memories.

Figure 9: Initial signal-to-noise ratio as a function of memory lifetime, from [1]. The initial signal-to-noise ratio of a memory trace stored using 10^5 synapses plotted against the memory lifetime (in units of 1 over the rate of candidate plasticity events). The blue (lower) curve is for a binary model with synaptic modification occurring with probability q that varies along the curve. The red (upper) line applies to the cascade model described by Fusi et al. The two curves have been normalised so that the binary model with q = 1 gives the same result as the n = 1 cascade model to which it is identical. Clearly, the cascade model performs better than the 'normal' binary model both in terms of initial signal-to-noise ratio and memory lifetime.

In summary, binary cascade synapses outperform their 'ordinary counterparts' in terms of memory storage and retention, which derives from the more complex structure allowing the synapse to respond to ongoing modifications along two dimensions – efficacy and metaplasticity. It is desirable to implement these nice properties in real hardware, and previous attempts have already laid good groundwork for that.

2.4 Previous work

This project mainly builds on two previous projects. The first one, titled 'A stochastic synapse for reconfigurable hardware', a short project during the Telluride Workshop for Neuromorphic Engineering by Dylan Muir [15], laid the groundwork for both the following and this project.
In particular, it succeeded in creating a first VHDL implementation of the cascade synapse and verified its operation in simulation. One of the biggest contributions of this project is the design of one particular type of pseudo-random number generator, the Hybrid Cellular Automata array pseudo-random number generator, which also found extensive use in this current project. However, no actual hardware was synthesised from the digital design.

The second project, 'A VHDL implementation of the Cascade Synapse Model', a diploma project by Tobias Kringe [16], succeeded in designing and implementing a small array of cascade synapses on an FPGA. The operation of the digital cascade synapses was verified both in simulation and in hardware, and encouraging results were achieved in confirming the complex behaviour of the cascade synapse (which is why this current project will not focus on reproducing and re-verifying the properties of hardware implemented cascade synapses). However, the VHDL implementation was rather large, and only a small number of synapses could be implemented on the FPGA. It was Tobias Kringe who proposed to virtualise the cascade synapses (which is one of the aims of this current project) in order to realise a useful number of synapses on one FPGA. Due to the radically different architecture of the virtualised synapses compared to the static hardware synapses, next to none of his VHDL implementation was reused.

To the best of the knowledge of the author, there has been no other working hardware implementation of a large number of cascade synapses (in fact, of any number of synapses) to date.

2.5 Overview of the hardware environment

Neuromorphic aVLSI hardware commonly comprises low power analogue CMOS circuits operating in the subthreshold regime that mimic (morph) the properties of real neural systems and elements. In particular, a neuromorphic aVLSI neuron chip was used, which comprises an array of leaky Integrate & Fire (IF) silicon neurons with Diff-Pair Integrator (DPI) synapses. Communication with the outside world is done using the asynchronous Address Event Representation (AER) protocol. The FPGA sits on an FPGA board developed at the Institute of Neuroinformatics in Zurich.
2.5.1 Silicon neurons

There are different types of silicon neurons, such as conductance based models, which aim to map the molecular conductance mechanisms underlying neuron behaviour in detail into analogue electronic circuits, or more qualitative models such as the I&F neuron model, which merely implements the observed characteristics of neuron behaviour in silicon, such as integration, firing or the refractory period.

The aVLSI chip used in this project contained 128 I&F neurons similar to the circuit depicted in Figure 10. Qualitatively, this I&F circuit works by integrating input current from on-chip synapses on its membrane, and elicits a (voltage) spike if the membrane voltage crosses a firing threshold.

Figure 10: Circuit diagram of an ultra low power Integrate & Fire neuron. Labelled functional circuit elements mimic the behaviour of real neurons. Transistors operate in the sub-threshold regime to exploit their desirable exponential characteristics. A capacitor C_mem integrates incoming post-synaptic current into a membrane voltage V_mem. If the membrane potential crosses the spiking threshold, it will 'spike' just like a real neuron. Courtesy of Giacomo Indiveri.
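To give a feel for this qualitative behaviour, the following Matlab fragment simulates a software caricature of a leaky I&F neuron: input current is integrated onto a leaky membrane, and a spike is emitted whenever a threshold is crossed. All values are illustrative assumptions and do not correspond to the bias settings of the actual chip.

% Qualitative leaky integrate & fire simulation (illustrative values).
dt = 1e-4; T = 0.1; t = 0:dt:T;      % simulate 100 ms at 0.1 ms resolution
tau = 20e-3; R = 1e8;                % membrane time constant and resistance
V_th = 0.5; V_reset = 0;             % firing threshold and reset voltage
I_in = 6e-9 * ones(size(t));         % constant input current (assumed)

V = zeros(size(t)); spike_times = [];
for k = 2:numel(t)
    V(k) = V(k-1) + dt * (-V(k-1) + R * I_in(k)) / tau;  % leaky integration
    if V(k) >= V_th                  % threshold crossing: emit a spike
        spike_times(end+1) = t(k);   %#ok<SAGROW>
        V(k) = V_reset;              % reset (refractory period omitted)
    end
end
plot(t, V), xlabel('time (s)'), ylabel('membrane potential (a.u.)')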
2.5.2 Silicon synapses

Each I&F neuron has 32 silicon synapses with different properties and behaviour connected to it, but only one type of synapse was used in this project, namely the static DPI synapse. The circuit of such a synapse is depicted in Figure 11. Qualitatively, the DPI synapse works by receiving a (voltage) spike from a pre-synaptic neuron (or from the outside world), and in response injecting a given amount of current onto the membrane of the post-synaptic neuron it is connected to. The amount of current produced by every incoming spike depends on the static synaptic weight and the time constant of the synapse, which can be adjusted to achieve the desired static synaptic weight.

Figure 11: Circuit diagram of the so called Diff-Pair Integrator (DPI) synapse. For every pre-synaptic spike it receives, it dumps a post-synaptic current onto the membrane of the post-synaptic neuron connected to it. The amount of current, and other dynamics, can be set by parameters such as the synaptic weight, the time constant tau or the threshold voltage.
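Behaviourally, such a static synapse can be approximated as a first-order filter: each incoming spike adds a fixed kick of current that then decays with the synaptic time constant. The Matlab fragment below sketches this input-output behaviour only; it is not a model of the transistor-level DPI circuit, and the weight and time constant values are placeholders.

% Behavioural sketch of a static synapse: exponentially decaying
% post-synaptic current, incremented by a fixed weight per spike.
dt = 1e-4; t = 0:dt:0.2;
tau_syn = 10e-3; w = 2e-9;            % time constant and weight (assumed)
pre_spikes = [0.02 0.05 0.06 0.12];   % example pre-synaptic spike times

I_syn = zeros(size(t));
for k = 2:numel(t)
    I_syn(k) = I_syn(k-1) * (1 - dt / tau_syn);   % exponential decay
    if any(abs(pre_spikes - t(k)) < dt / 2)
        I_syn(k) = I_syn(k) + w;                  % each spike adds a fixed kick
    end
end
plot(t, I_syn), xlabel('time (s)'), ylabel('post-synaptic current (A)')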
2.5.3 Communication using AER

The Address Event Representation (AER) protocol is used to allow for communication in multi-chip environments. It is a serial asynchronous four-phase handshaking protocol (using request-acknowledge signals) which encodes events (i.e. spikes) of individual neurons by assigning each neuron a unique address (up to 16 bits). Every time a neuron fires, it generates an address event, which is then transmitted over the AER bus to the receiving hardware. Unlike conventional electronic systems with arrays of information sources, such as digital cameras, neuromorphic systems using the AER protocol do not scan through every one of their elements to transmit one frame after another; rather, information is transmitted on demand. Only if a neuron spikes will an address event be transmitted. Therein lies one of the most important points about the AER protocol, its asynchrony, whereby the precise timing of the address event implicitly encodes the time of the spike itself – there is no need to communicate timestamps for individual spikes.

Conveniently, since the electronic circuits implementing neuromorphic hardware are very fast, while neural activity is rather slow (<100Hz), a large number of neurons can share the same AER bus without problems. Typically, an AER bus has a bandwidth of about 1 Mevent/second.
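As a toy illustration of the representation (not of the four-phase handshake itself), the Matlab fragment below merges the spike trains of a few neurons into a single time-ordered stream of address events; such a list of bare addresses, ordered implicitly by transmission time, is essentially what a shared AER bus carries.

% Toy AER stream: merge per-neuron spike times into one ordered
% sequence of (time, address) events, as a shared AER bus would carry.
spike_times = {[0.010 0.031], [0.012 0.015 0.040], [0.022]};  % 3 neurons

events = zeros(0, 2);                 % rows: [time, address]
for addr = 1:numel(spike_times)
    ts = spike_times{addr}(:);
    events = [events; ts, repmat(addr, numel(ts), 1)];  %#ok<AGROW>
end
events = sortrows(events, 1);         % the bus carries events in time order
disp(events)                          % column 1: spike time, column 2: address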
2.5.4 The FPGA board

The FPGA used in this project is a Xilinx Spartan 3 (xc3s400pq208) that sits on a prototype FPGA board developed by Daniel Fasnacht during his diploma project at the Institute of Neuroinformatics in Zurich, depicted in Figure 12. The features used in this project are the USB interface and the two AER ports (one input, one output). It has an external clock of 106.125MHz, and is programmed using JTAG.

Apart from developing the board itself, Daniel Fasnacht also developed a Linux driver to allow communication with the USB board. A program developed by Giacomo Indiveri is used to send data to the FPGA board. In particular, pre-synaptic spikes are sent through the USB bus to the SPU by specifying a synapse address and an inter-spike interval to the previous spike, data which is easily generated using the spiking neuron toolbox (developed by Dylan Muir at the Institute of Neuroinformatics) in Matlab. The aVLSI neuron chip is configured using Matlab (to set up the environment variables for the aVLSI chip: chipinit.m; to load the required calibration settings to the chip: bias_050607.m).

It should be noted that this is a prototype board, and with experimental or prototype hardware, extra consideration should be taken, since not all functions necessarily have to work as expected. However, seeing experimental hardware work and come 'alive' is one of the most gratifying moments of hardware development.

In the experimental setup used for the classification task (as described in 7.5, A real classification task), the FPGA board interfaces with an aVLSI 'IFSLTWA' neuron chip, using the AER connections to send address events to, and receive feedback from, the neurons. Figure 13 illustrates this experimental setup.

Figure 12: Prototype FPGA board developed by Daniel Fasnacht. 1. Xilinx Spartan 3 (xc3s400pq208) 2. USB port 3. AER-out port 4. AER-in port
Figure 13: Experimental hardware setup. 1. FPGA SPU 2. Forward AER connection 3. aVLSI chip with array of I&F neurons 4. Oscilloscope measuring the post-synaptic membrane potential 5. Post-synaptic feedback AER connection (with logic analyzer) 6. Pre-synaptic stimuli input USB connection.

2.5.5 Software

Throughout this project, three software packages were used: Xilinx ISE 9.1i WebPack to write the VHDL design, ModelSim PE Student Edition to simulate the VHDL code, and Matlab for various things, including plotting, initialisation file generation, analysis and spike train generation. A project diary was kept on Google Documents.
3 STADP – a novel Hebbian learning rule

'The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn' – Alvin Toffler

In the previous section, the general concept of synaptic plasticity was introduced. While different learning rules have been proposed, for the task at hand, keeping in mind that the Synaptic Processing Unit is to be tested on a real classification task, it is necessary to implement a learning rule that is both suitable for learning in a general environment and easily implemented in digital hardware. There are several learning rules that would be interesting to implement, most prominently STDP, amongst others [18], [3], [20], but none really meet the needs of this project.

From [19] and [20], it was concluded that ordinary STDP would not be sufficient as a general learning rule. Instead, the system would either have to be taught with specifically crafted and highly correlated temporal patterns (not a general environment), or a more elaborate version of STDP would have to be constructed, which is impractical to implement, both in terms of hardware real estate (memory in particular, but also logic) and circuit complexity. Prototype designs for STDP were rejected on the basis of requiring excessive memory and overcomplicating the digital circuit.

Instead, a novel but very simple, easily implemented learning rule was developed together with [20], called Spike Timing and Activity Dependent Plasticity (STADP), which produces simple binary plasticity events, depress and potentiate, as required by the binary cascade synapse model.

3.1 STADP – yet another learning rule?

At the heart of STADP is the same Hebbian learning paradigm, that 'what fires together, wires together'. Unlike STDP, which derives the causality for 'firing together' from the difference in spike times, STADP uses a mixture of firing time and firing rate based measures to determine whether the pre- and post-synaptic neurons 'fire together'.

As the name suggests, STADP produces plasticity signals depending on spike timing as well as activity. In particular, it is dependent on the state of activity of the post-synaptic neuron and the timing of pre-synaptic spikes.

STADP says that the post-synaptic neuron can be in one of two states at any point in time: active and inactive. This state is determined by a threshold function of the post-synaptic firing frequency: if it is above a mean firing rate fm, the neuron is said to be active, otherwise it is inactive. For example, a setup of aVLSI I&F neurons could have a mean firing rate fm = 50Hz, which is biologically plausible, and be said to be active for firing rates above 50Hz, and inactive for firing rates below 50Hz.

Then, two neurons are said to 'fire together' if a pre-synaptic spike arrives while the post-synaptic neuron is active, in which case the synapse should be potentiated (LTP). The reverse is also true, i.e. when a pre-synaptic spike arrives at the synapse while the post-synaptic neuron is inactive, the synapse should be depressed (LTD).

However, this scheme would result in one plasticity signal for every pre-synaptic spike, so in order to condition the number of plasticity signals produced, STADP is stochastic, and only produces potentiation or depression signals with a certain probability, called the probability of plasticity, p(plasticity). Figure 14 below summarises how STADP produces plasticity events.
Figure 14: STADP. Plasticity events are elicited with a probability p(plasticity), and depend on the spike time of the pre-synaptic neuron and the activity of the post-synaptic neuron.

3.1.1 From spike time to spike rate

The two-state abstraction of the post-synaptic neuron's activity essentially requires an integration of its spike times to produce spike rates. However, integrating spikes arriving at irregular intervals into spike rates can be a non-trivial task in real time processing in digital hardware (it would be very easy in analogue electronics, actually!). In STADP, this is elegantly performed using a stochastic process, inspired by quantum physics [20]. The main idea behind this is that the post-synaptic neuron is in an unknown state of activity until it gets 'measured', in this case by an incoming pre-synaptic spike.

Every time the post-synaptic neuron spikes, its state of activity is set to active, independently of the current state. A neuron in the active state can then make a transition to the inactive state with a probability p(deactivate) (this can also be regarded as a two state hidden Markov process), as depicted in Figure 15.

Without specifying what p(deactivate) is at any point in time, it can be appreciated how a post-synaptic neuron firing at the mean firing rate fm should have a probability of being in the active state, p(active), of 0.5, while a more active neuron should have a higher p(active) and a less active neuron should have a lower p(active).
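Stated as code, the decision summarised in Figure 14 is very small. The Matlab sketch below is an illustrative restatement under assumed names: post_active would be maintained by the expiry-time mechanism described next, and p_plasticity is the conditioning probability introduced above.

function signal = stadp_on_pre_spike(post_active, p_plasticity)
    % Called once per incoming pre-synaptic spike (illustrative only).
    signal = 0;                      % default: no plasticity event
    if rand() < p_plasticity         % condition the number of events
        if post_active
            signal = +1;             % post-synaptic neuron active -> potentiate
        else
            signal = -1;             % post-synaptic neuron inactive -> depress
        end
    end
end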
Figure 15: The STADP mechanism. A post-synaptic neuron can be in one of two states: active and inactive. The STADP mechanism determines the state of the post-synaptic neuron by integrating the post-synaptic firing times. A post-synaptic spike sets the neuron to the active state, which then stochastically resets to the inactive state after an amount of time equal, on average, to the mean post-synaptic inter-spike interval. Clearly, the probability that the post-synaptic neuron is in the active state at any given time increases as its firing rate increases, and is 0.5 if it is firing at the mean firing rate.

In order to implement this in real hardware (it would be rather challenging to actually instantiate some kind of quantum process), the STADP mechanism proposed here uses an abstraction of the stochastic deactivation of the post-synaptic neuron. This abstraction is based on the assumption that the neuron fires as a Poisson process with mean firing rate fm, which has an exponentially distributed inter-spike interval (the time interval between two consecutive spikes) ~ exp(1/fm). Then, upon every incoming post-synaptic spike (which sets the neuron's state to active), an exponentially distributed 'expiry time' is drawn, after which the neuron is said to reset to the inactive state.

This way, the desired properties can be achieved: if the post-synaptic neuron is firing at the mean firing rate fm, it will, on average, have an equal chance of being in the active or inactive state at any point in time. Similarly, if it is firing at a higher rate, it has a higher chance of being active, since it is being set to active faster than it is expiring to inactive, while if it is firing at a lower rate, it has a lower chance of being active at any point in time.
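A minimal Matlab sketch of this expiry-time abstraction follows; the names are illustrative, and -log(rand)/fm is used to draw the exponentially distributed interval (mean 1/fm) without requiring any toolbox.

% Expiry-time abstraction of post-synaptic activity (illustrative).
fm = 50;                             % assumed mean firing rate (Hz)
expiry = -Inf;                       % time at which the active state lapses

% On every post-synaptic spike at time t_post: set active, draw new expiry.
draw_expiry = @(t_post) t_post - log(rand()) / fm;   % exponential, mean 1/fm

% On every pre-synaptic spike at time t_pre: 'measure' the activity state.
is_active = @(t_pre, expiry) t_pre < expiry;

expiry = draw_expiry(0.100);         % post-synaptic spike at t = 100 ms
is_active(0.105, expiry)             % pre-synaptic spike 5 ms later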
One question remains: whether a plasticity event is a depression or a potentiation event depends on the post-synaptic neuron's activity, as explained above – but how does STADP behave for different pre-synaptic frequencies? As the name suggests, the plasticity is dependent on spike timing, since the state of activity of the post-synaptic neuron is only ever evaluated on an incoming pre-synaptic spike; but in fact, the pre-synaptic rate plays a role too.

In general, the higher the pre-synaptic frequency, the more plasticity events are produced. However, since potentiation and depression are only elicited with probability p(plasticity), the dependence on the pre-synaptic rate is slightly more complex. While high pre-synaptic frequencies are likely to lead to a high rate of plasticity, low, but non-zero, pre-synaptic frequencies are likely not to result in any plasticity events at all, as only few of the already rare pre-synaptic spikes would ever lead to a plasticity event.

In summary, the pre-synaptic firing rate can be said to determine the rate (probability) of plasticity events, while the post-synaptic frequency is best described as setting the type of the plasticity events. Synapses with high pre-synaptic firing rates are more likely to receive plasticity signals, while synapses with low pre-synaptic firing rates are likely to remain static, as they receive no or only few plasticity events.

3.2 Characteristics of STADP

The previous section explained how STADP works conceptually, and how the actual STADP mechanism, which draws an exponentially distributed expiry time for the post-synaptic neuron to reset to the inactive state, works. The following paragraphs describe some of its characteristics as well as the plasticity signals that STADP is expected to produce.

When characterising the behaviour or the results of STADP, the two important points to note are, firstly, whether the expiry time mechanism works at all, and secondly, what plasticity profile it produces over a range of pre- and post-synaptic frequencies. By observing p(active), the correct operation of the mechanism can be verified; by observing the plasticity rates, i.e. how many potentiation or depression events are elicited per second, insights into the plasticity profile can be gained.

The following plots were obtained from a simple Matlab simulation done by Dylan Muir (p(active) curve: make_prob_active_vs_freq_plot.m; other plots: make_freq_sim_plot.m), and show the rate of potentiation (LTP rate), the rate of depression (LTD rate), the net effect of plasticity (LTP rate – LTD rate) as well as p(active), over pre- and post-synaptic frequency ranges of 0-100Hz.

Figure 16: Simulated behaviour of STADP. Left column: rate of potentiation and depression events per second, over a range of pre- and post-synaptic frequencies [1:100Hz] (ignore the axis labels). Right column: net effect of STADP, and probability of the post-synaptic neuron being in the active state per unit time.

These simulation results suggest that STADP indeed works as a Hebbian learning rule and has the desired characteristics. p(active) is approximately 0.5 at a post-synaptic frequency of 50Hz, increases for higher frequencies, and decreases for lower frequencies. Furthermore, the plasticity rate increases with pre-synaptic frequency for both potentiation and depression, which also behave qualitatively correctly, best summarised by the net effect of LTP and LTD: with increasing pre-synaptic frequencies there are more plasticity events, with potentiation dominating for high post-synaptic frequencies and depression dominating for low post-synaptic frequencies.

One important characteristic to note, however, is that potentiation and depression are not symmetric within the regime of operation: the net effect of plasticity has a bias towards depression, or equivalently, a reluctance towards potentiation. This is due to the p(active) curve, which is neither linear nor symmetric about the (50Hz, 0.5) point. As will be described later in the experimental section, this has an observable effect.

Possible remedies include pre-biasing or distorting the p(active) curve so that it saturates at 100Hz, or setting a minimum expiry time of 10ms (1/100Hz) in order to ensure that p(active) is 1 at 100Hz. The remedy used would have to be matched to the particular implementation of STADP.

While a more detailed and formal analysis of STADP would be desirable, it would go beyond the scope of this report. These initial simulation results are satisficing (i.e. satisfying enough), and confidence in the learning rule further derives from [20].
4 Design

'I am enough of an artist to draw freely upon my imagination. Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world' – Albert Einstein

4.1 Summary of features of the Synaptic Processing Unit

The Synaptic Processing Unit designed here has the following features:
• Speed of operation: clocked at 90MHz internally
• System architecture:
  o Fully pipelined design – the SPU can theoretically process a new address event every clock cycle, although this never happens in practice
  o Modular design – allows for easy plug-in of a new learning rule
• On-chip learning rule: STADP with 11.1ns time resolution
• I/O ports: 1x USB input, 1x AER input, 1x AER output
• Cascade representation: 6 bits, reconfigurable, allowing for synapses with up to 32 cascades
• Cascade memory address width: 13 bits, reconfigurable, allowing for up to 8192 binary cascade synapses
• Addressing: configurable number of neurons (up to 256)
• One teacher synapse per neuron

4.2 System level design

Although this project builds upon previous work as mentioned earlier, most parts of the Synaptic Processing Unit were designed from scratch, since the pipelined and virtualised cascade synapse requires a very different architecture.
4.2.1 The SPU in a neural system

From a high-level point of view, the SPU is supposed to integrate with one aVLSI neuron chip, forming one coherent neural system containing an array of neurons with cascade synapse functionality. This system could, for example, be used as one layer of a larger network of spiking neurons, as depicted in Figure 17.

Figure 17: System level interaction of SPU and aVLSI neuron chip. Together, these form one freely reconfigurable integrated array of N integrate-and-fire neurons with binary cascade synapses.

4.2.2 Input and output ports

In order to act as one coherent system, the SPU has to be able to communicate both with the neuron chip and with the outside world. Here, this is done using the USB port of the FPGA board as pre-synaptic input, and the two AER ports to connect the SPU to the neuron chip.

Clearly, a forward connection, whereby pre-synaptic spikes are routed towards the right post-synaptic neuron, is necessary. However, in order to perform learning using STADP – and indeed most other learning rules – an additional feedback connection from the neuron chip back to the SPU is necessary, to obtain information about the post-synaptic neurons, which in this case means to estimate their state of activity.
4.3 Virtualising the cascade synapse

The binary cascade model lends itself well to implementation in digital hardware. It has essentially only two important properties, namely its binary efficacy and its current state, which at the same time encodes the plasticity, which in turn is represented by a plasticity probability that halves for every higher cascade. This has 'digital' written all over it.

In order to virtualise the cascade synapses, some conceptual 'cascade mechanism' by which to process them has to be devised. The basic idea is to trade hardware real estate on the FPGA for memory, and to process synapses on demand. This has two immediate design deliverables:
• In order to virtualise the cascade synapses, an abstraction or memory representation of them has to be defined
• A mechanism by which they are processed, i.e. how individual synapses respond to plasticity signals, has to be developed

Conveniently, the cascade synapse can be represented by a bit vector very intuitively. One bit encodes the synaptic efficacy, while a number of other bits encode the state of the synapse, i.e. the synaptic plasticity, i.e. the plasticity probability, depending on the number of cascades. Then, halving the plasticity probability is just a matter of a bit-shifting operation. As depicted in Figure 18, an N-bit representation is used, where the MSB represents the efficacy and the remaining N-1 bits represent the plasticity probability as an unsigned binary number.

Figure 18: Bit representation of cascade synapses
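As an illustration of this representation, the Python sketch below unpacks the efficacy and the plasticity probability from such a bit vector and shows that halving the plasticity probability is a single shift. The helper names are hypothetical; the real implementation operates on VHDL bit vectors.

```python
N = 6  # cascade representation width: MSB = efficacy, lower N-1 bits = plasticity probability

def efficacy(state):
    # The MSB encodes the binary efficacy (1 = potentiated, 0 = depressed).
    return (state >> (N - 1)) & 1

def plasticity_prob(state):
    # The lower N-1 bits encode the plasticity probability as the numerator
    # of a rational number with denominator 2^(N-1) - 1.
    return state & ((1 << (N - 1)) - 1)

def halve_plasticity(state):
    # Moving one cascade deeper halves the plasticity probability:
    # a single right shift of the lower bits, efficacy unchanged.
    return (efficacy(state) << (N - 1)) | (plasticity_prob(state) >> 1)
```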
Using this representation, the plasticity probability ranges from 0 to 2^(N-1) - 1 rather than from 0 to 1, but this is not a problem, since it can be regarded as the numerator of a rational number with denominator 2^(N-1) - 1. Such a representation can easily be stored in and retrieved from memory, and provides the functionality required to implement the virtualisation.

Here, N = 6 was fixed as a reasonable maximum cascade representation width, allowing for synapses with up to 32 cascades. This is more than sufficient; in fact, too large a number of cascades can actually decrease the memory performance of the synapses [1].

The processing on the cascade synapse can be expected to be relatively simple, since there is only a small number of things the synapse 'can do': switch or chain, with a probability given by its state. The exact mechanism implemented is described in detail in the Implementation section, but from a high-level description point of view, it has to:
• Obtain the right cascade from memory
• Perform the necessary operations on its state representation (i.e. switch, chain or do nothing)
• Produce a new cascade state representation, and pass it back to the cascade memory

4.4 SPU internal addressing

Since incoming and outgoing events follow the AER protocol, whereby neurons are identified by addresses, the SPU-internal representation also uses addresses as identifiers of synapses.

Figure 19: SPU internal addressing format

At the heart of the addressing scheme are the synapses, which can be identified uniquely by an N-bit synapse address, as shown in Figure 19. For historical reasons (the SPU was originally designed to interact with an aVLSI chip with 256 neurons and 8192 synapses, the largest of its kind at that time), this synapse address is set to 13 bits, allowing it to uniquely identify up to 8192 synapses. The top few bits of the synapse address represent the neuron address, which uniquely identifies the post-synaptic neuron that the cascade synapse connects to. The aspect ratio of the neural system, i.e. how many neurons there are and how many synapses each has, can be changed freely within the SPU by changing this neuron address width, and does not have to correspond to the actual number of neurons (or synapses) on the aVLSI chip.
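A minimal sketch of this address split, assuming the default widths of 13 bits (synapse) and 8 bits (neuron); the function name is hypothetical:

```python
SYNAPSE_ADDRESS_WIDTH = 13  # up to 8192 synapses
NEURON_ADDRESS_WIDTH = 8    # top bits of the synapse address, up to 256 neurons

def split_synapse_address(addr):
    # The neuron address occupies the top bits; the remaining bits index
    # the synapse within that neuron (13 - 8 = 5 bits -> 32 synapses/neuron).
    index_width = SYNAPSE_ADDRESS_WIDTH - NEURON_ADDRESS_WIDTH
    neuron = addr >> index_width
    synapse_index = addr & ((1 << index_width) - 1)
    return neuron, synapse_index
```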
4.5 Modular design of the SPU

Apart from implementing cascade synapse behaviour in a virtualised fashion, the SPU has to perform two other important tasks: spike forwarding and learning.

Overall, the core of the SPU, i.e. ignoring data I/O and FPGA board particulars, has the following four modules:
• Forwarding module
• Learning module
• Cascade module
• Cascade memory

The conceptual architecture that stems from these four modules is depicted in Figure 20.

Figure 20: Conceptual architecture of the SPU

The principle of operation of the SPU is as follows:
1. The signal selector (not one of the core functions of the SPU) performs arbitration between pre- and post-synaptic inputs, and forwards the selected address into the SPU, to the forwarding module, the cascade memory and the learning module.
2. The cascade memory retrieves the cascade synapse representation corresponding to the synapse address and, at the same time, writes new cascade states to (another location in) memory.
3. The learning rule (stochastically) produces plasticity signals as required by STADP and the pre- and post-synaptic spikes the SPU receives.
4. The forwarding module forwards pre-synaptic addresses on to the output of the SPU, if and only if the efficacy of the synapse is high.
5. The cascade module (stochastically) processes the cascade representation according to the plasticity signals it receives from the learning module, and passes on a new cascade state to be written by the cascade memory.

This architecture can be fully pipelined, so that the SPU can process one 'instruction', i.e. one address event, per clock cycle. This is particularly important in order to ensure that the SPU operates fast enough: in a multi-chip environment, it should not be the processing bottleneck, but rather should be able to process whatever is thrown its way by the pre-synaptic input (USB). Since the AER bus can typically transmit about 1Mevent/second, the SPU should be able to process a multiple of that, which a fully pipelined architecture allows.

In order to ensure that only the 'right' signals are processed and that no wrong data is written to memory, the SPU uses an extra level of control signals that indicate the validity of the data, shown in Figure 20.

4.6 Module specifications

The high-level relationship between the individual modules described above translates into precise input/output and functional specifications, described below.
4.6.1 Forwarding

Function:
• To forward valid pre-synaptic spikes to the post-synaptic neuron address over the AER output of the SPU, if the 'target' synapse has high efficacy or a teacher signal was sent.

Input signals:
• neuron_address: address of the synapse the current pre-synaptic spike is addressed to. Up to 13 bits.
• target_synapse_efficacy: MSB of the cascade representation of the addressed synapse. 1 bit.
• address_pre_post: control signal issued by the signal selector which indicates whether current data comes from the pre-synaptic ('0') or the post-synaptic ('1') feedback input. 1 bit.
• address_valid: control signal that indicates whether current data is valid. 1 bit.

Outputs:
• target_neuron_address: address of the post-synaptic neuron that is to be sent out through the AER output. Up to 8 bits.
• target_address_valid: control signal that indicates whether the target neuron address is valid. 1 bit.

4.6.2 Learning rule (STADP)

Function:
• To implement STADP
• To correctly produce plasticity events (depression/potentiation)

Inputs:
• synapse_address: address of the incoming pre- or post-synaptic spike. Up to 13 bits.
• address_pre_post: control signal issued by the signal selector which indicates whether current data comes from the pre-synaptic ('0') or the post-synaptic ('1') feedback input. 1 bit.
• address_valid: control signal that indicates whether current data is valid. 1 bit.

Outputs:
• cascade_synapse_address: address of the cascade synapse that the plasticity signals are valid for. Up to 13 bits.
• plasticity_dep_pot: plasticity signal, indicating whether the cascade synapse should be depressed ('0') or potentiated ('1'). 1 bit.
• plasticity_valid: control signal that indicates whether the plasticity signal and the cascade synapse address are valid. 1 bit.

4.6.3 Cascade process

Function:
• To process cascade states according to plasticity signals from the learning module

Inputs:
• cascade_synapse_state: cascade state representation of the cascade synapse that is to be processed. Up to 6 bits.
• cascade_synapse_address: address of the current cascade synapse that the plasticity signals are valid for. Up to 13 bits.
• plasticity_dep_pot: plasticity signal, indicating whether the cascade synapse should be depressed ('0') or potentiated ('1'). 1 bit.
• plasticity_valid: control signal that indicates whether the plasticity signal and the cascade synapse address are valid. 1 bit.

Outputs:
• cascade_address_out: address of the synapse whose new cascade state representation is valid. Up to 13 bits.
• new_state: new processed cascade state representation ready to be written back to memory. Up to 6 bits.
• new_state_valid: control signal that indicates whether the new state and the cascade out address are valid. 1 bit.

4.6.4 Cascade memory

Function:
• To retrieve cascade representations of synapses addressed at its read port
• To store valid new cascade representations of synapses addressed at its write port

Input signals:
• synapse_address: address of the cascade the current pre-synaptic spike is addressed to. Up to 13 bits.
• new_state_address: address of the new state that has undergone plasticity. Up to 13 bits.
• new_state: new state of the cascade synapse after processing. Up to 6 bits.
• new_state_valid: control signal that indicates whether the new state for the new state address is valid. 1 bit.

Outputs:
• current_state: current cascade state representation of the addressed synapse, read from memory. Up to 6 bits.

4.6.5 Global signals

In addition to the inputs specified above, all modules share clock, clock enable and asynchronous reset inputs to reset all internal registers and FIFOs. Note that the content of memory is not reset to its initial state by this reset signal; only the output registers of the memory are cleared. All signals internal to the SPU are active high.
5 Implementation

'It's not good enough that we do our best; sometimes we have to do what's required' – Winston Churchill

5.1 Pseudo-random number generators

The performance of stochastic learning processes, indeed of any stochastic process, is heavily dependent on the 'quality' of the underlying randomness. Since the SPU has random processes in two of its major functional components, the cascade synapse module and the learning rule, implementing a good pseudo-random number generator (pRNG) is all the more important.

A good pRNG generates highly uncorrelated sequences of pRNs with a very long maximum length before the sequence repeats. A good review of 'classical' pRNGs can be found in [8]; the pRNG used here, however, is more unconventional. Instead of performing mathematical manipulation, including multiplication by prime numbers and modulo division, to generate pRNs – which is what most classical pRNGs do, and which is rather resource intensive in a digital logic implementation – a so-called hybrid cellular automata (HCA) array pRNG is employed, which, on the contrary, is a very efficient choice for FPGA implementation.

Cellular automata consist of grids of 'cells', where each cell can be in one of a finite number of states. Time is discrete, and each cell has a local update rule that determines its state in the next unit of time. One of the most popular cellular automata is Conway's 2D 'Game of Life'.

Here, we consider a one-dimensional binary HCA, i.e. an array of bits, where each cell (bit) has one of two local update rules, namely Rule 90 or Rule 150 as classified by Wolfram [16], shown in Figure 21. Rule 90 takes the XOR of both of a cell's neighbours to determine its next state, while Rule 150 additionally XORs in the current value of the cell itself. Cells beyond the boundaries of the array are considered to be 1 at all times, which ensures that the automaton does not freeze if all cells become 0. These choices, and the right configuration of rules, ensure that the pRNG produces maximum-length sequences of uniform pRNs. In [8] there is a detailed description of which rule to use at which bit position to generate maximum-length sequences for HCA arrays of a given size.

Figure 21: A hybrid cellular automata linear array. The HCA pRNG makes use of two different nearest-neighbour update rules, namely Rule 90 and Rule 150. It is very suitable for implementation on an FPGA, and produces maximal-length sequences of highly uncorrelated patterns. Figure courtesy of Dylan Muir.

If used exactly as described above, an HCA pRNG would introduce high correlation between adjacent cells, which can be avoided by only using a subset of non-neighbouring bits from a larger array to generate random numbers. One possible choice for creating a 32-bit random number is to use a 128-bit HCA and tap off every fourth bit to form the pRN.

By using this method to generate the pRNs required by the different modules, the stochastic processes in the SPU can be trusted to be as random as possible, to the best of the knowledge of the author.
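The following Python sketch illustrates one HCA update step and the tapping scheme. The assignment of Rule 150 cells is left as an arbitrary mask here; the per-bit rule placement actually needed for maximal-length sequences is the one given in [8].

```python
def hca_step(cells, rule150_mask):
    """One update of a 1-D binary hybrid cellular automaton. Where
    rule150_mask[i] is 1, cell i uses Rule 150 (left XOR self XOR right),
    otherwise Rule 90 (left XOR right). Cells beyond the array boundaries
    are treated as constant 1, so the automaton cannot freeze at all-zero."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 1
        right = cells[i + 1] if i < n - 1 else 1
        bit = left ^ right
        if rule150_mask[i]:
            bit ^= cells[i]
        nxt.append(bit)
    return nxt


def tap_random(cells, stride=4):
    """Tap non-adjacent cells (e.g. every fourth bit of a 128-bit array,
    yielding a 32-bit number) to avoid the correlation between neighbours."""
    bits = cells[::stride]
    return sum(b << i for i, b in enumerate(bits))
```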
5.2 Description of generics

Before explaining the architecture of the individual SPU-internal modules, it is helpful to understand the parameterisation of the VHDL code, which was carried out in order to keep the SPU reconfigurable.
The following is a brief description of the generics used within the implementation that allow a customisation of the SPU.

• SYNAPSE_ADDRESS_WIDTH : natural := 13: The synapse address width is the width of most of the addresses within the SPU, and sets the maximum number of synapses that can be addressed. By default, it is set to 13 bits, allowing up to 8192 cascade synapses to be addressed. The fixed depth of the cascade memory (the memory itself is not parameterisable) also limits the maximum number of synapses to 8192, although fewer synapses may be used (manual reconfiguration of the memory would be required to increase the depth of the cascade memory; this is not difficult).
• NEURON_ADDRESS_WIDTH : natural := 8: The neuron address width is the width of the neuron address, and tells the SPU how many of the synapse address's MSBs are attributed to identifying the neuron. By default, it is set to 8 bits, allowing up to 256 neurons to be addressed; a smaller number of neurons can be specified without problems.
• CASCADE_WIDTH : natural := 5: The cascade width is the number of bits that the cascade representation uses. It can be up to 6 bits wide, as limited by the width of the cascade memory, but fewer bits, such as the default value of 5 bits, may be specified. The cascade width includes both the efficacy bit and the plasticity probability width. At the same time, the cascade width specifies the width of the pRN generated in the cascade synapse module, which is always one bit less than the cascade width (since the plasticity probability in the cascade representation, which is compared to the pRN, is one bit smaller than the cascade width).
• PRE_THRESHOLD : natural := 230: The pre threshold sets the p(plasticity) with which STADP elicits plasticity events; the higher the threshold, the smaller the p(plasticity). It may range from 0 to 255, where p(plasticity) would be 1 and 0 respectively.

Using these four parameters, the SPU can be configured, at compile time, to have the desired characteristics.
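For reference, the sketch below mirrors these generics as a Python configuration record. This is illustrative only; the relation between pre_threshold and p(plasticity) is stated approximately, assuming a uniform 8-bit pRN compared against the threshold.

```python
from dataclasses import dataclass

@dataclass
class SPUConfig:
    synapse_address_width: int = 13  # up to 8192 synapses
    neuron_address_width: int = 8    # up to 256 neurons
    cascade_width: int = 5           # efficacy bit + plasticity probability bits
    pre_threshold: int = 230         # p(plasticity) ~ (255 - pre_threshold) / 256

    @property
    def max_synapses(self) -> int:
        return 1 << self.synapse_address_width

    @property
    def max_neurons(self) -> int:
        return 1 << self.neuron_address_width
```

With the defaults, p(plasticity) is roughly 25/256, i.e. about 0.1.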
5.3 Module level design

The following sections describe the implementation of each of the SPU's modules on a functional level. In order to save paper and time, no VHDL code is reproduced here; the interested reader is advised to consult the supplementary CD for the VHDL code.

In all of the diagrams shown in the following sections, the convention shown in Figure 22 is used for arrows. In particular, dotted arrows represent the flow of control signals, dashed arrows represent addresses, and solid arrows represent the flow of data.

Figure 22: Conventions on the arrows used in block diagrams

Furthermore, light blue vertical bars are used to indicate register levels or clocked processes.

5.3.1 Spike forwarding

The forwarding module is the simplest of the four major functional modules. As specified in the previous chapter, it 'only' has to forward valid pre-synaptic spikes if the synapse addressed has high efficacy, or if the spike is being sent to the teacher synapse. The basic structure of the forwarding module is shown in Figure 23.

The outputs are generated in a very simple way. The target neuron address is simply forwarded directly from the incoming neuron address, while the target address valid signal is a simple chain of logic operations. Note that the target address valid signal depends on the negation of the address_pre_post signal, since a pre-synaptic input spike is represented by a '0'.
Figure 23: Spike forwarding module block diagram

The teacher synapse is defined to be the 0th synapse of every neuron, i.e. if the bottom bits of the synapse address (how many depends on the neuron address width) are zero, then the spike is sent to the teacher synapse and should be forwarded regardless of the synaptic efficacy.

Due to its simplicity, the forwarding module only requires one clock cycle to perform its processing.
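The forwarding condition can be summarised by the following illustrative Python sketch (names are hypothetical; recall that address_pre_post = '0' denotes a pre-synaptic spike):

```python
def forward(neuron_address, synapse_index, efficacy, address_pre_post, address_valid):
    """Forward a spike iff it is a valid pre-synaptic spike and either the
    target synapse has high efficacy or it addresses the teacher synapse
    (synapse index 0 of each neuron)."""
    is_pre = (address_pre_post == 0)
    is_teacher = (synapse_index == 0)
    target_address_valid = address_valid and is_pre and (bool(efficacy) or is_teacher)
    return neuron_address, target_address_valid
```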
5.3.2 Learning rule (STADP)

The learning rule module is much more complex, as shown in Figure 24. It contains some logic, several registers, a look-up table implemented as a 256x36-bit single-port ROM, a 256x36-bit single-port block RAM, a 36-bit timer with 11.1ns resolution and an 8-bit pRNG. To understand it, it is best to work backwards from the outputs, considering separately what happens on a pre- and on a post-synaptic synapse address (spike).

There are three output signals – the cascade synapse address, the plasticity signal and the plasticity valid signal – which need to be considered first.

The cascade synapse address is simply a forwarded version of the input synapse address.

The plasticity signal, i.e. whether a synapse should be depressed or potentiated, depends on the activity of the post-synaptic neuron. As mentioned earlier, this is implemented by drawing pseudo-random, exponentially distributed expiry times for the post-synaptic neuron, at which it becomes inactive; comparing this expiry time to the current time is all that is needed to elicit the right plasticity signal. So, if the current time, i.e. the output of the timer, is greater than the post-synaptic neuron's expiry time, which is given by the output of the expiry time memory – i.e. the neuron has already expired – then a depression signal is produced (plasticity_dep_pot is reset to '0'). If the current time is less than or equal to the expiry time, then the neuron has not yet expired but is still active, and a potentiation signal is produced (plasticity_dep_pot is set to '1').

The plasticity valid signal is only valid if the incoming spike is valid and pre-synaptic. Furthermore, since plasticity signals are only elicited with a probability p(plasticity), the plasticity valid signal is additionally only valid if an 8-bit pRN is above the plasticity threshold pre_threshold.

That is really all there is to the generation of plasticity signals, i.e. that is all that happens on arrival of a pre-synaptic spike. The rest of the STADP learning rule module is concerned with handling post-synaptic spikes and setting pseudo-random, exponentially distributed expiry times.
Figure 24: STADP learning rule block diagram
Integral to determining the state of activity of the post-synaptic neuron are the delta_t_LUT ROM and the activity expiry times RAM. The former contains pre-loaded, exponentially distributed time intervals after which the post-synaptic neuron expires, while the latter contains the absolute times at which each post-synaptic neuron expires. The pRNG permanently generates pseudo-random numbers between 0 and 255, which are also the address input to the ROM, thus pseudo-randomly reading the content of the ROM. This has the effect of drawing an exponentially distributed new expiry time, after the output of the ROM is added to the current time output of the timer. Thus, on every clock cycle, there is one exponentially distributed expiry time available at the input to the RAM, which is written to memory upon arrival of a valid post-synaptic spike, into the location specified by the post-synaptic neuron address (the top few bits of the synapse address). All times are represented in units of clock cycles.

The reason behind choosing an 8-bit pRN, a 256-entry-deep delta_t_LUT and a 256-entry-deep activity expiry time memory is again historic, and has to do with the fact that the SPU was initially designed to interact with a neuron chip with 256 I&F neurons.

Figure 25: Initialisation of the delta_t look-up table. This LUT contains exponentially distributed delta(t)s with mean 20ms. The distribution is sampled at 256 points and the data is stored in random positions within the ROM.
The content of the delta_t_LUT is initialised with a coefficient file generated using a Matlab script (coe.m), such that a neuron firing at the mean firing rate fm would, on average, draw expiry times of 1/fm, as required. The content of the coefficient file, and thus the content the LUT is initialised with, is in units of clock cycles. Figure 25 shows an example initialisation of the look-up table.

The processing of plasticity signals takes two clock cycles in total, due to this module's two-stage pipelined architecture.
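One plausible construction of such a coefficient file is sketched below in Python: it samples the inverse CDF of the exponential distribution at 256 quantiles, converts the intervals to clock cycles, and shuffles the entries into random ROM positions. The exact sampling scheme used by coe.m is an assumption here, not reproduced from the script.

```python
import math
import random

CLOCK_HZ = 90e6      # one clock cycle = 11.1ns
MEAN_ISI = 1 / 50.0  # 20ms mean expiry interval (fm = 50Hz)

def make_delta_t_lut(depth=256):
    # Inverse-CDF sampling: quantile k of an exponential distribution with
    # mean MEAN_ISI, converted to units of clock cycles.
    lut = [round(-MEAN_ISI * math.log(1 - (k + 0.5) / depth) * CLOCK_HZ)
           for k in range(depth)]
    random.shuffle(lut)  # store in random positions within the ROM
    return lut
```

Addressing this table with a uniform 8-bit pRN then draws an (approximately) exponentially distributed delta_t with mean 1/fm, as required.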
5.3.3 Cascade synapse

Before examining the architecture of the cascade synapse module, it is helpful to have another look at the process by which the cascade synapse should respond to plasticity signals, i.e. how the cascade should be processed. A conceptual flow diagram is shown in Figure 26, and a code sketch of the rule follows the list below.

Figure 26: Flow diagram of the cascade synapse state update rule

1. Since the cascade synapses are stochastic, some of the incoming plasticity events do not actually require any processing to be done on the synapse at all, i.e. the synapse does not undergo any plasticity. The probability of undergoing plasticity is given by the synapse's current plasticity probability, represented by an unsigned binary number from the cascade representation. So, in order to determine whether a synapse should be modified at all, this plasticity probability is compared to a uniform pRN: if it is greater than or equal to the pRN, the synapse should undergo plasticity, and do nothing otherwise. This decision – whether anything should be done to the synapse – is the first important step in the processing of the cascade.
2. If the synapse does respond to the plasticity signals it receives (i.e. its plasticity probability is greater than or equal to the pRN), then it has two choices: either chain, or switch. This depends on the current efficacy and the 'direction' of the plasticity signal, i.e. whether it is a potentiation or a depression command. If the current efficacy and the direction of plasticity agree – i.e. if a depressed synapse receives a depress signal, or a potentiated synapse receives a potentiate signal – then the synapse should chain; otherwise it should switch.
3. The chaining process simply requires the cascade to reduce its plasticity by shifting it one bit towards the LSB, thereby halving the plasticity probability. The efficacy remains unchanged.
4. The switching process is similarly simple, since all that needs to be done to the cascade representation is to invert the efficacy and reset the plasticity probability to the highest value, i.e. to '1..1'.

As outlined here, the actual processing of the cascade synapses is not very complex, and can be done with simple logic operations. Again, implementing the cascade synapse in digital hardware is nearly ideal.
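A compact Python sketch of this update rule follows; the names and the pRN source are illustrative, but the chain/switch logic follows the four steps above.

```python
import random

def update_cascade(state, potentiate, cascade_width=6):
    """potentiate: True for a potentiation signal, False for depression.
    Returns the new cascade representation (MSB efficacy, rest probability)."""
    prob_width = cascade_width - 1
    max_prob = (1 << prob_width) - 1
    efficacy = (state >> prob_width) & 1
    prob = state & max_prob

    prn = random.randrange(max_prob + 1)  # uniform pRN, cascade_width - 1 bits
    if prob < prn:
        return state  # step 1: no plasticity this time

    if efficacy == int(potentiate):
        # step 3: directions agree -> chain: halve the plasticity probability
        return (efficacy << prob_width) | (prob >> 1)
    else:
        # step 4: directions disagree -> switch: invert efficacy, reset to '1..1'
        return ((1 - efficacy) << prob_width) | max_prob
```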
The architecture of the cascade synapse module is shown in Figure 27. It contains two pipeline register levels, one pRNG of width cascade_width - 1, and the state update logic, which implements the processing steps described above.

The cascade address out and new state valid signals, which feed into the cascade memory, are not modified at all by the cascade synapse process; they are passed straight through the module, crossing two pipeline register stages.

The cascade synapse module acts on the (current) cascade synapse state as well as the plasticity signal, in the fashion described above. During the first stage, a comparator determines whether any changes to the cascade state need to be made at all; during the second stage, the appropriate modifications to the current cascade synapse state are made and output as the new state.

Figure 27: Cascade module block diagram

The processing of the cascade representations takes two clock cycles in total, due to its two-stage processing and pipelined architecture.

5.3.4 Cascade memory

The cascade memory module is more than just a simple block of memory. It contains an 8192x6-bit dual-port RAM block, a multiplexer and a comparator, to perform memory read-write collision avoidance.

Conceptually, the cascade memory needs to read a current cascade state from memory and, at the same time, write a 'new' cascade state back into memory – hence the dual-port functionality of the memory. In particular, one port is used as a dedicated write port, the other as a dedicated read port. However, as is commonly the case with dual-port memory, there exists the danger that both ports attempt to read or
write to the same memory location at the same time, which would lead to unknown or unstable outputs.

Therefore, in order to avoid memory access collisions, the cascade memory contains a comparator which checks whether a collision is about to happen (i.e. whether the read and write addresses are the same). In case of a collision, priority is given to the write port, and the read port is disabled. The output of the memory's write port (which is actually valid) is then selected as the output, as the memory operates in WRITE_FIRST mode [32]. That way, data is still written to memory and the same data is also produced at the output. If there is no collision, the output is by default selected to be the output of the read port.
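This behaviour can be summarised by the following illustrative Python model; in hardware it amounts to one comparator, one multiplexer and the RAM's WRITE_FIRST mode.

```python
def cascade_memory_cycle(mem, read_addr, write_addr, write_data, write_valid):
    """Behavioural model of one dual-port cycle with collision avoidance:
    on an address collision the write port wins, and the freshly written
    word is forwarded to the output (WRITE_FIRST); otherwise a normal read."""
    if write_valid:
        mem[write_addr] = write_data
        if read_addr == write_addr:
            return write_data  # collision: forward the written word
    return mem[read_addr]      # normal read
```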
Figure 28: Cascade memory block diagram

The content of the memory is initialised using a coefficient file generated by another short Matlab script (state_init.m). This initialises the cascade memory to contain pseudo-random states uniformly distributed across all of its cascades.

Due to the collision avoidance mechanism, the cascade memory module also requires two clock cycles to read data and produce it at its output correctly. Nevertheless, this memory is fully pipelined and can process new read or write commands on every clock cycle.

5.3.5 Signal selector

The four modules described above are the core modules internal to the SPU; however, there is one other important module, the signal selector, which sits at the interface between the SPU and the FPGA board's specific hardware (such as USB or AER components). The purpose of the signal selector is primarily to interface the SPU with spikes coming from the pre- and post-synaptic inputs, annotating them as pre- or post-synaptic. In case both inputs have valid data available, the signal selector selects the signals in an alternating fashion. Figure 29 shows the selector, which interfaces with the pre-synaptic USB (fx2) and feedback AER FIFOs.

Figure 29: Input source selector block diagram

The selector requires one clock cycle to produce the data, which it feeds directly into the SPU.

5.4 System integration

The individual modules at the core of the SPU have been described above; this section explains more specifically how they integrate to make up the SPU.
Since all modules are fully pipelined and can process events on every clock cycle, extra care has to be taken to ensure that the right data is at the right place at the right time.

Conveniently, most of the modules take two clock cycles to process data, so little synchronisation has to be done. The forwarding process, however, which receives input both from the source selector and from the cascade memory, has to be conditioned. Specifically, the address and valid signals to the forwarding process have to be delayed such that they arrive at the same time as the target synapse efficacy, namely two clock cycles later. Figure 30 depicts a more detailed block diagram of the SPU, including a two-clock-cycle delay to synchronise the forwarding module (process). The numbers next to the modules indicate the clock cycle at which data arrives at that module.

Figure 30: Pipelined SPU block diagram

The SPU interacts with the outside world through the ports provided by the FPGA board, namely the USB and AER ports. Each of these ports is connected to a FIFO (either at its input or its output) acting as a buffer. If the AER output FIFO is nearly full (this is unlikely to happen in practice), it sets a global busy signal high, which in turn forces the SPU-internal modules' clock enable signals low, thereby practically freezing any processing that is happening within the SPU until the FIFO has freed up some space again by sending data out of the AER output.

The pipeline of the SPU is depicted more explicitly in Figure 31, which shows with what causalities and dependencies data flows through the SPU. Data from the signal selector arrives at the first delay buffer, the cascade memory and the learning rule at the same time. Two clock cycles later, the data, a cascade state and the plasticity signals arrive at the forwarding module and the cascade synapse respectively. Valid pre-synaptic data is forwarded to the AER output FIFO on the next clock edge, and one clock cycle later a new cascade state is ready to be written back into memory.

Figure 31: Pipelined dataflow through the SPU

5.5 Integration into the FPGA board

The FPGA board offers a full set of I/O interfaces that the SPU makes use of; the following provides a more detailed description of the precise integration of the SPU into the FPGA board. Figure 32 illustrates the interfaces the SPU makes use of, and their associated entities.
Figure 32: Block diagram of the integration of the SPU within the FPGA board. Note: all FIFOs need the 'First Word Fall Through' property.

Pre-synaptic data enters the FPGA board through the USB port, is handled by the fx2if (USB interface) and is then buffered into the fx2 FIFO. The pre-synaptic stimulus data sent to the USB port consists of address and inter-spike interval pairs. In order to handle this, the FPGA board features a sequencer which holds back the address (i.e. part of the data) for a duration given by the inter-spike interval, before passing it on to the input selector (blue arrow in Figure 32).

From there on, data (addresses) enters the SPU and leaves it again after a few clock cycles, going through the synapse selector (if it is used), an output FIFO acting as a buffer, the AER out interface module and the AER out port, into one of the static DPI synapses on an aVLSI neuron chip (green arrow in Figure 32). The synapse selector module works much like the signal selector, but the other way round: it forwards spikes to one of the two static DPI synapses on the aVLSI neuron chip in an alternating fashion, so as to avoid overloading one single synapse on a neuron with too many spikes. Whether or not it is used depends on the application; it is appropriate to use it if a lot of synapses connect to one post-synaptic neuron, as was the case in the classification task described later.
When a post-synaptic spike elicits an address event, it is communicated back over the feedback AER bus to the AER in port, through the AER in interface module to a FIFO acting as a buffer, and via the signal selector back into the SPU (where it is processed by the STADP module).

5.5.1 On clocks

As mentioned in the feature summary, the SPU is clocked at 90MHz internally. This clock is derived from one of the FPGA's internal Digital Clock Managers (DCM), which conditions the external 106.125MHz clock to produce the desired 90MHz. Everything within the FPGA board runs at 90MHz, with one exception: the USB port is operated at 45MHz, by halving the 90MHz clock signal. The USB FIFO (fx2fifo) is thus driven by two different clocks: written at 45MHz and read at 90MHz.
6 Verification

'Genius is 1% inspiration and 99% perspiration' – Thomas Edison

Verification is one of the most daunting but crucial tasks in digital hardware design, and failure to do it properly can come at great cost in terms of money, time and reputation. Rather than reproducing the entire verification work carried out, which includes testbenches at module and system level, the interested reader is pointed to the appendices.

Verification plans for module and system level verification, which followed an ad-hoc testing paradigm, can be found in Appendix II – Verification checklists. Appendix III – A journey through the SPU aims to demonstrate the simulation efforts made to verify the correct operation of the SPU. In particular, it shows a set of example waveforms from testbenches, which follow a pre- and a post-synaptic spike on a journey through the SPU.
7 Evaluation & experimentation

'What we see depends mainly on what we look for' – Sir John Lubbock

Previous work, including work by Tobias Kringe, focused on, and verified, the behaviour and performance of digital hardware implementations of the cascade synapse; reproducing this is not an aim of this project. Instead, the focus here lies on the use of the cascade synapses for learning within a general learning environment. The following sections describe the evaluation of the SPU, which was carried out in three steps:
• Firstly, the STADP learning rule was characterised again, this time in hardware, to further confirm its correct operation – especially since software simulations (Matlab) and hardware implementations can be worlds apart.
• Then, a quick in-circuit verification of the SPU, including verification of forwarding and learning, was carried out, to ensure that the SPU was operational.
• Finally, the SPU was tested in a general learning environment and, coupled with an aVLSI neuron chip, was used for a real classification task.

7.1 In-hardware characterisation of STADP

The Matlab simulation of STADP presented earlier verified, qualitatively, that this learning rule has the expected properties. However, it is worthwhile to go one step further and perform another characterisation of STADP, this time in hardware. This in-hardware characterisation was carried out using a behavioural model simulation of the learning module in ModelSim (an in-circuit verification is conceivable, but inconvenient, since there would be no access to internal signals of the FPGA, and it would take a lot of manual labour to actually carry out the large number of measurements required).
In order to reproduce the simulation results of Figure 16, a slightly more elaborate VHDL testbench (Class_tb.vhd, using file I/O to read stimuli from binary files and write outputs to binary files) and additional Matlab functions were required. The testbench simulates the STADP module connected to the sequencer and timestamp modules, so that stimuli can be sent to it in the 'normal' fashion. It uses Matlab-generated stimulus inputs (generatePostCharacterisationStimuliFile.m, generatePlasticityCharacterisationStimuliFile.m) read from several binary files, and logs STADP plasticity outputs to several binary output files, which are then analysed in Matlab (characterisePActivePost.m, characterisePlasticity.m) to obtain the results required to reproduce the characterisation plots.

Any meaningful in-hardware characterisation of STADP necessitates the collection and analysis of a large amount of output data in response to pre- and post-synaptic stimuli, since STADP is stochastic; the stimuli in turn have to cover a large range of pre- and post-synaptic frequency pairs (10:5:100Hz for both pre- and post-synaptic frequencies, i.e. nearly 400 data points). However, this would amount to several minutes' worth of input spike trains (pre- and post-synaptic), which would take months to simulate on a normal desktop PC in ModelSim.

This is mainly because ModelSim is a simulation tool which does not simulate in real time, but is most comfortable simulating in simulation time units in regimes of micro- to picoseconds. The simulation of one clock cycle in real time (e.g. 11.1ns with a 90MHz clock) can take several iterations in simulation time. Simulating one second's worth of a 90MHz clock would thus take at least 90 million iterations in simulation time, and simulating several minutes' worth of input stimuli would become a task of ridiculously high computational complexity (for a standard desktop PC). An important point to note is that for most of this time, the STADP module would not even produce any outputs, since stimuli, i.e. spikes, are being held back by the sequencer most of the time.
In order to get around this problem, the in-hardware simulation of the STADP module was carried out at an imaginary 'internal clock frequency' of 5kHz instead of 90MHz. This means that the delta_t_LUT within the STADP module, which previously (and in the actual hardware running on the SPU) contained exponentially distributed expiry intervals in units of 11.1ns (one clock cycle at 90MHz), now contains the same exponentially distributed expiry intervals, but in units of 0.2ms (one clock cycle at 5kHz). Similarly, the inter-spike intervals of the input stimuli are now in units of 0.2ms, where previously they were in units of 11.1ns.

Figure 33: Comparison of delta_t_LUT content for 5kHz and 90MHz. Using an imaginary clock frequency of 5kHz, the content of the delta_t_LUT is much coarser. This can be thought of as sampling the curve of the exponential distribution at a lower frequency.

The bottom line is that the inter-spike intervals are smaller in terms of clock cycles, which means that the sequencer waits fewer clock cycles before releasing a stimulus. Overall, this reduces the simulation time to a manageable load. The drawback of this approach is the coarser exponential distribution of expiry times loaded into the delta_t_LUT, which can be thought of as being sampled at a lower sampling rate, as depicted in Figure 33. However, this is a necessary evil that still enables a meaningful in-hardware characterisation of the operation of the STADP module.

Lengthy simulation yields the results shown in Figure 34. At first glance, they appear to resemble the simulation results of Figure 16. And indeed, the qualitative behaviour is satisficing: p(active) is approximately 0.5 at a post-synaptic frequency of 50Hz, increases for higher frequencies, and decreases for lower frequencies. The large amount of 'noise' observed is attributed mainly to the coarse sampling of the distribution curve, as mentioned above, and to the stochasticity of the underlying Poisson spike train stimuli. Also, the plasticity rate increases with pre-synaptic frequency for both potentiation and depression, which also show qualitatively correct behaviour. The net effect of LTP and LTD is also within the expected range, and again shows a bias towards depression, possibly more pronounced than in Figure 16. This effect can, in part, be attributed to the coarse sampling of the distribution, which results in the p(active) curve being slightly lower than expected, and therefore 'even less symmetric' than in the previous simulation, making potentiation less likely and pronouncing the reluctance towards potentiation. Another cause is that at this 'slow' clock speed, individual delays within the hardware – it takes an input spike two clock cycles before it gets processed by STADP – have a much greater effect on the absolute perceived timing of the spike by the learning rule.

Initial simulations using an even lower imaginary clock frequency of 1kHz produced an even noisier p(active) curve (results not shown here), supporting the extrapolation that the actual hardware running at 90MHz, which samples the exponential distribution curve at a sufficiently high frequency, can be expected to behave far more closely to the simulation results presented in Figure 16.
Figure 34: Simulated hardware behaviour of STADP at 5kHz simulation clock frequency. Left column: rate of potentiation and depression events per second, over a range of pre- and post-synaptic frequencies [1:100Hz] (ignore the axis labels). Right column: net effect of STADP, and probability of the post-synaptic neuron being in the active state per unit time, together with the expected result, over a range of frequencies [1:100Hz].

In summary, it can be concluded that the hardware implementation of STADP does indeed have the right characteristics, and that there is reason to believe that although the in-hardware simulation result at 5kHz is less than ideal, the actual hardware running at 90MHz behaves more like the expected simulation model.

7.2 Modifications for the experimental setup

This section outlines some important modifications made to the SPU that were specific to the experimental hardware used and the experiments conducted. These modifications to the SPU are not generally applicable.

The experimental hardware used, namely the FPGA board, had one unexpected shortcoming: the AER in port, i.e. the post-synaptic feedback connection, suffered from timing inaccuracies on the data bus. While the control signals, request and acknowledge, worked as desired, indicating address events correctly, the actual data, i.e. the address representation, was being read by the AER module before the data bus could settle, thereby essentially producing random AER data inputs. This is a hardware bug that has to be solved; however, that would go beyond the scope of this project. Instead, the address of the post-synaptic neuron was hardwired into the SPU. This was only possible because the experiments conducted made use of only a single post-synaptic silicon neuron.

Section 5.5, Integration into the FPGA board, briefly mentioned the so-called synapse selector. This module was implemented in order to be able to conduct the experiments (described later), which made use of one single post-synaptic silicon neuron with a large number of pre-synaptic inputs – 256 to be precise. The DPI synapses are designed to operate in biologically plausible regimes, receiving pre-synaptic inputs of up to, say, 100-200Hz, although typical firing rates are around 50Hz. The classification experiment (described in a later section), however, required those 256 pre-synaptic inputs, operating at expected firing rates of up to about 50Hz each, to stimulate the same post-synaptic neuron. This ~13kHz input would have overwhelmed the input bandwidth of the single DPI synapse – through which all pre-synaptic activity is routed to the post-synaptic neuron – which was observed to saturate at a pre-synaptic firing rate of about 12kHz. Luckily, the aVLSI neuron chip features two identical static DPI synapses per I&F neuron, so that the pre-synaptic load could be shared between both.

The task of the synapse selector is to make sure that spikes are sent to both DPI synapses in an alternating fashion, by toggling one address bit in the output address. Measurements of this are given in the Circuit calibration section.

Finally, the SPU's output is a neuron address identifying the post-synaptic neuron from among all the neurons on the neuron chip. However, the neuron chip allows for addressing of individual synapses on the chip (which makes sense; in fact, the SPU's addressing scheme is also based on uniquely identifying synapses). Each neuron has several synapses, including some with rich dynamics and local learning rules, which were not used, while the two excitatory static DPI synapses were used. Therefore, the missing synapse identifier also had to be hardwired into the SPU. In particular, this required hardwiring the bottom 5 bits of the AER out address to always send spikes through the static DPI synapses (in fact, the 2nd bit was not static, but toggled by the synapse selector).

7.3 Circuit calibration

(In the following paragraphs, whenever a stimulus is presented or sent to the SPU, this refers to sending a text file with address and inter-spike interval timestamps, generated by the spiking neuron toolbox for Matlab, using the Linux script aexstim developed by Giacomo Indiveri.)

Before the SPU can be operated together with an aVLSI neuron chip to form a neural system, the two need to be calibrated to obtain the desired behaviour.

For the experiments to be carried out, the system should be calibrated for one single post-synaptic neuron receiving 256 different pre-synaptic inputs. One way of looking at this system is to consider the post-synaptic neuron to be performing a mapping of the total pre-synaptic input frequency, i.e. the firing rate at the input to the DPI synapse, which is equal to the sum of all individual pre-synaptic firing rates, to a post-synaptic output frequency. Since the total input frequency is high, the synaptic weight of the DPI synapse has to be reduced to a level where this mapping is linear, and does not drive the post-synaptic neuron at too high frequencies, or into saturation. In particular, a mapping of approximately [0, 12.8kHz] total pre-synaptic frequency to [0, 100Hz] post-synaptic frequency is required.

In order to achieve this calibration, two DPI synapse parameters, the weight w and the time constant tau, were adjusted. Then, using pre-synaptic inputs of known constant frequency to drive the post-synaptic neuron, an input-output relationship could be established, and w and tau could be tweaked experimentally. Figure 35 shows the final frequency response of the neural system, with parameter values w ~= 0.43V and tau ~= 2.79V. It has a nice linear region for a wide range of pre-synaptic stimulus frequencies before it starts to saturate, thanks to the synapse selector mechanism. Figure 36 shows an oscilloscope screenshot of the system running at the equilibrium point, at which the post-synaptic neuron fires at the mean frequency of 50Hz.

Figure 35: Frequency response of the neural system. Linear post-synaptic frequency response to a wide range of pre-synaptic stimulus frequencies. DPI synaptic weight and time constant parameters are w ~= 0.43V and tau ~= 2.79V.

Figure 36: Oscilloscope screenshot of post-synaptic membrane potential. A regular teacher signal driving the post-synaptic neuron at ~50Hz. The top trace is the membrane potential of the post-synaptic neuron; the bottom trace is the teacher signal driving it, running at ~6.4kHz.
Using this system calibration (stored in the Matlab script bias_050607.m), the desired experiments can be carried out.

7.4 In-circuit verification

(Videos of the in-circuit verification of depression and potentiation are available on YouTube; search terms: SPU, stadp, cascade, synapse, plasticity.)

Before diving into a more elaborate classification experiment using the SPU, it is worthwhile to perform a final set of in-circuit verification tasks. Firstly, some general tests were done to verify basic operation, before more elaborate in-circuit verification tasks were carried out, namely verification of forwarding, potentiation and depression.

In summary, it was concluded that the basic operation of the SPU behaves as expected. The other in-circuit verification experiments are described below.

In the following paragraphs, the term 'teacher signal' refers to a teacher stimulus that drives the post-synaptic neuron at the specified frequency; e.g. a 25Hz teacher signal is a signal that drives the post-synaptic neuron at a frequency of 25Hz. For most experiments, the on-chip current injection function of the aVLSI neuron chip was used as the teacher signal, rather than an actual pre-synaptic input to the teacher synapse, because it is more convenient and does not require the generation of a multitude of different teacher stimulus files.

7.4.1 Forwarding

The correct operation of the forwarding mechanism was already partly verified during calibration, as regular teacher signals were used to obtain the frequency response. However, there is more to forwarding, and the following functionalities were verified:
• Does the SPU forward teacher signals correctly? (already verified during calibration)
  o The output frequency should correspond to the specified input frequency.
  o Verified by forwarding spikes with regular ISIs and observing the output rate.
• Does it stop spikes to depressed synapses?
  o Initialise all synapses to a depressed state.
  o Send spikes to the depressed synapses.
  o There should be no output spikes.
• Does it forward spikes to potentiated synapses?
  o Initialise all synapses to a potentiated state.
  o Send spikes to the potentiated synapses.
  o There should be output spikes at the input spike rate.

The ability to correctly forward spikes was verified using the tests described above. An example Poisson input spike train used to verify forwarding is shown in Figure 37, and an oscilloscope screenshot showing an example post-synaptic response to such a spike train is shown in Figure 38; a sketch of how such coherent spike trains can be generated follows the figures.

Figure 37: Example of a coherent 30 Hz Poisson spike train to all 256 synapses. Black dots represent a spike at the time of its occurrence. All spike trains have the same average spike rate.

Figure 38: Oscilloscope screenshot of the post-synaptic membrane potential: a Poisson stimulus driving the post-synaptic neuron at ~30 Hz, clearly showing the contributions to the membrane potential of individual incoming pre-synaptic spikes.
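As a reference for how stimuli like the one in Figure 37 can be produced, the sketch below draws exponentially distributed inter-spike intervals to obtain one Poisson spike train per synapse, all with the same mean rate. This is only an illustration of the principle; the actual generateCoherent16x16.m script may differ in its details.

    % Sketch: coherent Poisson spike trains, one per synapse, same mean rate.
    rate     = 30;                      % mean firing rate per input [Hz]
    duration = 1;                       % stimulus duration [s]
    nInputs  = 256;

    spikes = cell(nInputs, 1);
    for k = 1:nInputs
        t = 0;
        times = [];
        while true
            t = t - log(rand) / rate;   % exponential ISI -> Poisson process
            if t > duration, break; end
            times(end+1) = t;           %#ok<AGROW>
        end
        spikes{k} = times;              % spike times of input k [s]
    end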
7.4.2 Potentiation

In order to verify the ability of the synapses to become potentiated, a range of different stimuli lasting 1 s were applied to the SPU repeatedly, while a teacher signal was applied at the same time. The teaching time, i.e. the number of 1 s showings required to drive the post-synaptic frequency above the mean firing rate of 50 Hz, was recorded as output, over four sets of trials. Every time a new set of measurements was taken, the SPU was first power cycled to re-initialise the cascade states to all depressed.

Figure 39 shows a plot of the teaching times against several input stimuli, for two different teacher signal strengths. The errorbar plot (like all following errorbar plots in this report) shows the mean and standard deviation of the measured dataset. Clearly, the synapses are able to potentiate so long as the teacher signal is strong enough. As the average pre-synaptic frequency increases, so does the effectiveness of teaching, since the time required to drive the post-synaptic neuron into the active state, i.e. above 50 Hz, decreases. However, it should be noted that the strength of the teacher signal is more critical to the success of potentiation than the pre-synaptic firing rate itself.
Figure 39: In-circuit verification of potentiation. The system is indeed plastic, and synapses potentiate stochastically depending on the strength of the teacher signal and the pre-synaptic firing rate. Note that it was not possible to drive the post-synaptic neuron above 50 Hz with a teacher signal of 11 Hz.

7.4.3 Depression

In order to verify the synapses' ability to become depressed, a range of different stimuli lasting 1 s were applied to the SPU repeatedly, without applying a teacher signal at the same time. The depression time, i.e. the number of 1 s showings required before the post-synaptic frequency decreases to zero, was recorded as output, over three sets of trials. Every time a new set of measurements was taken, the SPU was first power cycled to re-initialise the cascade states to all potentiated.

Figure 40 shows a plot of the depression times against several input stimuli. The errorbar plot shows the mean and standard deviation of the measured dataset. Indeed, the synapses are able to become depressed, and the higher the stimulus frequency, the slower the depression process, i.e. the longer it takes to fully depress the synapses.
Figure 40: In-circuit verification of depression. The system is indeed plastic, and synapses depress stochastically depending on the pre-synaptic firing rate.

Figure 41 shows an oscilloscope screenshot capturing, indirectly, the depression of more and more synapses. This can be seen in the decreasing firing rate of the post-synaptic neuron, which eventually dies off and stops firing completely.

The depression of synapses, and in fact the potentiation of synapses as well, has a kind of positive feedback effect built in that stems from STADP. As more and more synapses get depressed (potentiated), the post-synaptic neuron is driven by ever fewer (ever more) pre-synaptic inputs, thereby further reducing (increasing) post-synaptic activity, leading to more depression (potentiation) events.
Figure 41: Oscilloscope screenshot of decreasing post-synaptic firing rate. The post-synaptic frequency (upper waveform) decreases as more and more of the initially all-potentiated synapses get depressed. This is due to the pre-synaptic stimulus (lower waveform), which is too low to drive the post-synaptic neuron into the active state (>50 Hz), thereby causing mostly depression events.

7.5 A real classification task

Having performed in-circuit verification of the SPU's principal functions, it is ready to be used in a real classification task.

The task at hand is an image classification task, whereby a neuron with its cascade synapses is supposed to learn to classify two images (it is understood that it is the synapses that actually perform the learning, but it is the neural system as a whole which is learning to classify). In particular, by teaching one image and not teaching the other, it is supposed to learn to respond to the taught image by being 'active', and to give no or only a weak response to the other image, after it has been taught for a certain amount of time.

7.5.1 From image to pre-synaptic stimuli

Here, the image is a 16x16 pixel greyscale bitmap image, where each pixel represents a pre-synaptic neuron firing at a rate that is given by its pixel colour. Thus, the output neuron used for the classification task sources from 256 pre-synaptic inputs and produces one post-synaptic output.
In order to stay within biologically plausible regimes of operation, the greyscale pixels, which have values between 0 (black) and 255 (white), encode mean firing rates between 0 and 100 Hz. Two pictures, one of Dylan Muir and one of Anthony Hsiao, were scaled down to a size of 16x16 and converted to greyscale. The two greyscale images were then modified to have the same 'average colour', and therefore encode the same average pre-synaptic frequency, as well as the same 'total colour', or intensity (the sum of all pixel values), to ensure that they encode the same total pre-synaptic frequency at the input of the DPI synapse. In particular, the average colour of the two images is 'grey', a value of about 123 (out of 255), which translates into an average encoded mean firing rate of just under 50 Hz per pre-synaptic input, or a total mean firing rate of just under 12.8 kHz at the input of the DPI synapse. When presented to the (uniformly initialised) SPU, with approximately half of its synapses occupying depressed states and half occupying potentiated states, the DPI synapse sees a total initial mean firing rate of just below 6.4 kHz, which is just below the activity threshold of the post-synaptic neuron (see frequency response, Figure 35). The picture-to-stimulus conversion process is depicted in Figure 42.
Figure 42: Using pictures as pre-synaptic stimuli. From left to right (applies to both rows): original picture; converted 16x16 pixel greyscale image, normalised to have the same total intensity; mapped firing rates ([0:100] Hz) as encoded by greyscale pixel value. Pictures of Anthony Hsiao and Dylan Muir.

The mean firing rates obtained are then converted into 256 Poisson spike trains, as shown in Figure 43. These are the stimuli with which the classification task is carried out. For a human, the original images look very different; even when converted into spike train stimuli, the pictures show distinct patterns that the synapses will hopefully be able to learn, enabling the neuron to classify these two images. A sketch of the conversion procedure is given below.

Figure 43: Spike trains derived from the 16x16 pixel greyscale images of Anthony and Dylan. Firing rates vary between 0 Hz and 100 Hz.
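The pixel-to-rate conversion described in this section can be summarised by the short Matlab sketch below, which normalises the two images to a common total intensity and maps each pixel value onto a mean firing rate in [0, 100] Hz. The file names are hypothetical, and the sketch illustrates the procedure only; it is not the actual createAllFiles.m script.

    % Sketch: map two 16x16 greyscale images onto pre-synaptic firing rates.
    A = double(imread('anthony16.bmp'));   % hypothetical file names
    D = double(imread('dylan16.bmp'));

    % normalise both images to a common total intensity (here: their mean)
    target = (sum(A(:)) + sum(D(:))) / 2;
    A = A * target / sum(A(:));
    D = D * target / sum(D(:));

    % a pixel value of 0..255 encodes a mean firing rate of 0..100 Hz
    ratesA = A(:) * 100/255;               % 256 pre-synaptic rates [Hz]
    ratesD = D(:) * 100/255;

    % each rate is then turned into a Poisson spike train, as sketched earlier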
7.5.2 Teaching methods

During teaching, the two images (read: the two stimuli derived from the images) are presented to the system in an alternating fashion, for a given period of time each, e.g. 1 s, over several trials. The image that is to be learned is presented together with a strong teacher signal, as depicted in the first row of Figure 44, while the other picture is presented without a teacher signal (not shown). During this process, the learning rule provides for changes in synaptic efficacy, creating a synaptic efficacy mask (column 4 in Figure 44). While in the learning phase, the synapses undergo plasticity, and ideally learn to assume, and keep, efficacies that allow for a classification of the two images, i.e. driving the post-synaptic neuron active for the taught image (second row in Figure 44), and keeping the post-synaptic neuron inactive for the other image (third row in Figure 44).

Two different teaching methods are used and compared: 'normal' teaching and 'bottom-up' teaching. Both teaching methods are used in the same way as explained above; however, normal teaching starts off with a uniform initialisation of the states of the cascade synapses, whereas bottom-up teaching starts off with an initialisation of only depressed synapses.

In order to decide whether or not the neuron is able to classify the two images (for both teaching methods used), a right-sided Student's t-test for statistical significance was applied to the difference of the two post-synaptic responses (response to the taught image [Hz] minus response to the other image [Hz]), testing the following hypotheses:

• Null hypothesis (H0): the difference in the post-synaptic responses to the taught and the other image comes from a distribution with mean zero (i.e. there is no difference in the post-synaptic frequencies in response to the taught and the other image).
• Alternative hypothesis (H1): there is indeed a positive difference in the post-synaptic responses (i.e. the post-synaptic frequency in response to the taught image is greater than the post-synaptic frequency in response to the other image).
The t-tests are evaluated at the 5% level, returning a probability p, which represents the probability that the underlying process described by the null hypothesis could have produced the observed data, and a confidence interval CI for the true mean under H1 at the 5% level.
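In Matlab, this test can be run on the vector of response differences roughly as sketched below; the variable names are hypothetical, and the actual performTTest.m script may differ, but the four-argument ttest call shown is the standard right-tailed one-sample test.

    % Sketch: right-sided one-sample t-test on the response differences.
    % d(i) = response to taught image [Hz] - response to other image [Hz]
    d = taughtResponses - otherResponses;      % assumed column vectors

    [h, p, ci] = ttest(d, 0, 0.05, 'right');   % H0: mean(d) = 0, 5% level
    if h == 1
        fprintf('Able to classify (p = %.4f, CI = [%.3f, inf])\n', p, ci(1));
    else
        fprintf('Unable to classify (p = %.4f)\n', p);
    end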
Figure 44: Conceptual procedure of the real classification task. The classification task involves two phases: teaching and classification. The top row depicts one part of the teaching phase, during which both the image that is to be learned and the other image are presented to the system in an alternating fashion (the presentation of the other image is not shown here), in order to 'teach' the correct synaptic weights. During the classification phase, depicted in the second and third rows, presenting the taught picture should result in a high post-synaptic firing rate, while presenting the other image should result in a low (if any) post-synaptic firing rate. The different steps involved in the classification task are represented by the images in the different columns:
1. Original images
2. Converted 16x16 pixel greyscale images
3. Pre-synaptic firing rates ([0:100] Hz, [maroon : dark blue]) as mapped from the greyscale pixel values
4. Synaptic mask comprising the binary synapse efficacies
5. Resulting 'masked' stimulus at the input of the aVLSI neuron's DPI synapse
7.5.3 Results – Normal teaching

Three different experimental parameters were changed during the normal teaching classification experiments: the image taught (Dylan or Anthony), the order in which the images were presented to the neuron during the learning phase (show the taught image first or show the other image first), and the strength of the teacher signal (22 Hz or 25 Hz).

For each of the eight classification trials, each image was presented for a total of 10 s (i.e. showing image A for 1 s, then showing image B for 1 s, and repeating this another 9 times) before the actual classification. So after teaching (or not teaching) the images in an alternating fashion for a total of 10 s each, the images were once again presented to the neuron one after the other, but this time without any teacher signal. After every presentation (regardless of which image and which trial), the post-synaptic frequency was measured with an oscilloscope and recorded. This was repeated N times to get sets of results.

The following pages show commented plots of the results for the different classification trials, using normal teaching.
Figure 45: Classification task: Teach Dylan, show Dylan first, at 22 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is (just) unable to classify the two pictures (p = 0.0556), even though the mean post-synaptic firing rate in response to the taught signal is higher.

Figure 46: Classification task: Teach Dylan, show Anthony first, at 22 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is unable to classify the two pictures (p = 0.2500), even though the mean post-synaptic firing rate in response to the taught signal is higher.
Figure 47: Classification task: Teach Dylan, show Dylan first, at 25 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is unable to classify the two pictures (p = 0.1187), even though the mean post-synaptic firing rate in response to the taught signal is higher.

Figure 48: Classification task: Teach Dylan, show Anthony first, at 25 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is able to classify the two pictures. The difference in post-synaptic frequencies is statistically significant at the 5% level (p = 0.0413, CI = [2.029, inf]).
Figure 49: Classification task: Teach Anthony, show Anthony first, at 22 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is unable to classify the two pictures (p = 0.1872), even though the mean post-synaptic firing rate in response to the taught signal is higher.

Figure 50: Classification task: Teach Anthony, show Dylan first, at 22 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is unable to classify the two pictures.
Figure 51: Classification task: Teach Anthony, show Anthony first, at 25 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is unable to classify the two pictures (p = 0.0964), even though the mean post-synaptic firing rate in response to the taught signal is higher.

Figure 52: Classification task: Teach Anthony, show Dylan first, at 25 Hz. The two pictures are presented to the neural system in an alternating fashion, for 1 s each. Last data point without teacher signal. On the final showing, the neuron is unable to classify the two pictures (p = 0.3264), even though the mean post-synaptic firing rate in response to the taught signal is higher.

Only one out of the eight classification trials was actually able to classify the two images successfully (using the given definition of 'able to classify'), although in all trials bar one, the mean post-synaptic frequency in response to the taught image was higher than the response to the other image. The results for normal teaching are summarised in Table 2.
Teach Dylan:
  22 Hz, show Dylan first:   unable to classify (p = 0.0556)
  22 Hz, show Anthony first: unable to classify (p = 0.2500)
  25 Hz, show Dylan first:   unable to classify (p = 0.1187)
  25 Hz, show Anthony first: able to classify (p = 0.0413)

Teach Anthony:
  22 Hz, show Anthony first: unable to classify (p = 0.1872)
  22 Hz, show Dylan first:   unable to classify (p = NaN)
  25 Hz, show Anthony first: unable to classify (p = 0.0964)
  25 Hz, show Dylan first:   unable to classify (p = 0.3264)

Table 2: Summary of normal teaching results

7.5.4 Results – Bottom-up teaching

The experimental procedures for the bottom-up teaching experiments were the same as for normal teaching (i.e. 10 repetitions of alternating presentations of the images, repeated N times). However, only two experimental parameters were changed during the bottom-up teaching classification experiments, namely the image taught (Dylan or Anthony) and the strength of the teacher signal (50 Hz or 70 Hz) – since all synapses are initially depressed in bottom-up teaching, it would be meaningless to present the other image without a teacher signal first. In addition, one further set of teaching trials at 50 Hz teacher strength, but presenting each image for 2 s rather than 1 s, was conducted.

The following pages show commented plots of the results for the six different classification trials using the bottom-up teaching method.
Figure 53: Classification task: Bottom-up teaching Dylan, at 50 Hz. Here, the two pictures are presented to the neural system in an alternating fashion, for 1 s each. On the final showing, the post-synaptic neuron is able to classify the two pictures. The difference in post-synaptic frequencies is statistically significant at the 5% level, and indeed at the 2.5% level as well (p = 0.0125, CI = [9.350, inf]).

Figure 54: Classification task: Bottom-up teaching Dylan, at 70 Hz. Here, the two pictures are presented to the neural system in an alternating fashion, for 1 s each. On the final showing, the post-synaptic neuron is able to classify the two pictures. The difference in post-synaptic frequencies is statistically significant at the 5% level, and indeed at the 2.5% level as well (p = 0.0151, CI = [8.016, inf]).
Figure 55: Classification task: Bottom-up teaching Dylan, for 2 s at 50 Hz. Here, the two pictures are presented to the neural system in an alternating fashion, for 2 s each. On the final showing, the post-synaptic neuron is able to classify the two pictures. The difference in post-synaptic frequencies is statistically significant at the 5% level, and indeed at the 0.5% level as well (p = 0.0049, CI = [2.360, inf]).

Figure 56: Classification task: Bottom-up teaching Anthony, at 50 Hz. Here, the two pictures are presented to the neural system in an alternating fashion, for 1 s each. On the final showing, the post-synaptic neuron is able to classify the two pictures. The difference in post-synaptic frequencies is statistically significant at the 5% level, and indeed at the 2.5% level as well (p = 0.0123, CI = [6.746, inf]).
Figure 57: Classification task: Bottom-up teaching Anthony, at 70 Hz. Here, the two pictures are presented to the neural system in an alternating fashion, for 1 s each. On the final showing, the post-synaptic neuron is unable to classify the two pictures (p = 0.0601), even though the mean post-synaptic firing rate in response to the taught signal is higher.

Figure 58: Classification task: Bottom-up teaching Anthony, for 2 s at 50 Hz. Here, the two pictures are presented to the neural system in an alternating fashion, for 2 s each. On the final showing, the post-synaptic neuron is able to classify the two pictures. The difference in post-synaptic frequencies is statistically significant at the 5% level, and indeed at the 1% level as well (p = 0.0072, CI = [9.914, inf]).

All bar one of the six bottom-up teaching classification trials were able to successfully classify the two images, and in all trials, the mean post-synaptic frequency in response to the taught image was higher than the response to the other image. Furthermore, it appears that teaching for 2 s each produces better results. The results for bottom-up teaching are summarised in Table 3.
Teach Dylan:
  50 Hz, 1 s: able to classify (p = 0.0125, CI = [9.351, inf])
  70 Hz, 1 s: able to classify (p = 0.0151, CI = [8.016, inf])
  50 Hz, 2 s: able to classify (p = 0.0049, CI = [2.360, inf])

Teach Anthony:
  50 Hz, 1 s: able to classify (p = 0.0123, CI = [6.746, inf])
  70 Hz, 1 s: unable to classify (p = 0.0601, CI n/a)
  50 Hz, 2 s: able to classify (p = 0.0072, CI = [9.914, inf])

Table 3: Summary of bottom-up teaching results

7.5.5 Remarks on the classification experiments

The different classification trials presented above displayed a high degree of variation, and the final post-synaptic frequencies differ considerably from trial to trial without displaying a clear pattern or 'preferred' post-synaptic frequency of recognition and rejection. This is partly due to the stochastic processes that go on inside the SPU, and partly due to the fact that within each classification trial, only a limited amount of data was available. (Unfortunately, due to time constraints during the project, it was not possible to do more extensive experiments and data collection. This was mainly because the prototype hardware was in development at the same time as this project, and had to be shared between two different projects using it. Furthermore, the hardware is being used at a workshop in the USA as this report is being written, imposing a strict deadline for using it for experiments. In addition, a considerable amount of time was spent debugging the feedback AER bus.)

In order to make a more general statement about the ability of the neuron to classify the two images, the Student's t-test is applied to the two grouped datasets of the two different teaching methods, testing whether the neuron can (in general) classify the two images using the normal or the bottom-up teaching method, irrespective of which image was being taught, which one was presented first, etc. The results are shown in Table 4 below.
Bottom-up teaching: able to classify (p = 0.0000, CI = [10.353, inf])
Normal teaching:    able to classify (p = 0.0043, CI = [4.141, inf])

Table 4: Classification results for grouped data. Significant at the 0.5% level.

From the results for the grouped data, it becomes much more obvious that the classification does indeed work, and that the post-synaptic frequency in response to the taught image is (or should be, within the limits the underlying randomness permits) in fact always higher than the response to the other image. This is an encouraging result, which can be built upon through more thorough investigation.

Other general comments and experimental observations:

• While working with the system, jumpy or oscillatory behaviour in response to plasticity was observed – for example, stimulating it with a homogeneous pre-synaptic input would sometimes lead to sudden, jumpy increases or decreases of the post-synaptic frequency.
• Depression happened more quickly and readily than potentiation.
• Synapses were very plastic, if not 'too plastic', i.e. the post-synaptic frequency responded on timescales of seconds rather than tens of seconds or minutes.
8 Discussion

'If I keep an open mind, will my brain fall out?' – Anonymous

The development of the SPU has come a long way in this project: from developing a custom-made learning rule (although it would be nice if STADP could find its way into other applications as well), over the development of the SPU itself, essentially from scratch, to implementing it on a real FPGA board and performing a real classification task with it. Here, the hardware, the learning rule and the classification task will be discussed separately, before finishing off with remarks on calibrating the neural system.

8.1 The hardware

When making practical use of the SPU, it would always have to be integrated with an aVLSI neuron chip (or a PC emulating one). This necessitates a certain amount of manual effort to calibrate the neuron chip and set its biases, and to ensure that the addressing formats used within the SPU and the neuron chip are compatible. Having said that, the SPU is parameterised, and it should be easy to adapt it to the neural environment in place. Since it is written in VHDL, it could even be ported to a different architecture, should this be necessary. In that case, extra care would have to be taken to ensure that platform-specific features such as memory or other constraints are met.

The architecture of the SPU is modular and transparent. The STADP learning rule could easily be replaced by another one if required, as long as it is fully pipelined, with few modifications necessary to the SPU itself. The virtualisation of the cascade synapses is straightforward and allows for the implementation of a very large number of synapses. This greatly adds to the usability of the SPU in a real neural system with high connectivity, and further expanding the number of synapses implemented on the chip beyond the current 8192 requires little effort.

Finally, since it is fully pipelined, the number of synapses implemented is virtually only limited by the throughput of the AER bus and the amount of available memory on the chip or board, which is more likely to be a bottleneck than the inability of the SPU to process all those synapses.
Indeed, in the hardware environment in which the SPU was tested and used, it was the aVLSI neuron chip (the DPI synapse, to be precise) rather than the SPU which proved to be the bottleneck. Having said this, at no point was the AER bus driven to its limits of capacity, so a real challenge for synaptic processing was not encountered.

8.2 STADP

STADP was developed with the aim of providing a simple yet capable general learning rule for the SPU that is easily implemented in digital hardware. Its principal functioning was verified both in simulation and in hardware (and proven in circuit), but on closer inspection, it has one inherent non-ideality. As shown during all these verification efforts, and as observed during experimentation, STADP has a slight bias towards depression, which occurs at a higher rate than potentiation under otherwise equal circumstances.

Further investigation revealed that this bias stems from the asymmetry in the p(active) curve of the post-synaptic neuron, i.e. the probability of it being in the active state at any given frequency. In particular, it is due to the fact that within the regime in which the SPU was operated during experimentation (post-synaptic frequencies of 0-100 Hz), p(active) never actually reaches a value of 1. Some measures to counter this effect were suggested, including setting a minimum value for the expiry time interval, and further investigation into the usefulness of such measures would be desirable.

However, another way of looking at the bias towards depression is to regard it as some form of global inhibitory process, which could, when used in the right way, have highly desirable effects on the bottom-line functionality of the learning rule. Some work, such as [3], actually makes use of such mechanisms, and it could be worthwhile to address this 'non-ideality' of STADP by making good use of it rather than trying to get around it.

However, despite this bias, STADP has proven to be a capable learning rule, and is fully functional inside the SPU.
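The mechanism behind this asymmetry can be illustrated with a small Monte-Carlo sketch: each post-synaptic spike draws a random expiry interval delta_t and overwrites the stored expiry time, and the neuron counts as active whenever the current time lies before that expiry time. The exponential delta_t distribution with a 20 ms mean used below is an assumption made purely for illustration – in the SPU, delta_t is drawn from a look-up table – but it reproduces the qualitative point that p(active) approaches 1 only asymptotically within the 0-100 Hz operating regime.

    % Sketch: Monte-Carlo estimate of p(active) vs post-synaptic firing rate.
    rates   = 5:5:100;           % post-synaptic rates to probe [Hz]
    T       = 1000;              % simulated time per rate [s]
    meanDt  = 0.020;             % assumed mean expiry interval delta_t [s]
    pActive = zeros(size(rates));

    for k = 1:numel(rates)
        t = 0; expiry = -inf; activeTime = 0;
        while t < T
            isi = -log(rand) / rates(k);          % Poisson post-synaptic spikes
            % neuron is active from the last spike until min(expiry, next spike)
            activeTime = activeTime + min(max(expiry - t, 0), isi);
            t = t + isi;                          % advance to the next spike
            expiry = t - log(rand) * meanDt;      % draw delta_t, overwrite expiry
        end
        pActive(k) = activeTime / T;
    end

    plot(rates, pActive);
    xlabel('post-synaptic rate [Hz]'); ylabel('p(active)');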
8.3 The classification task

To put the design of the SPU to an ultimate test, a real classification task was chosen to investigate its learning capabilities and the classification abilities of a neuron augmented with cascade synapses from the SPU.

Constructing pre-synaptic stimuli from two greyscale images of Dylan and Anthony, and teaching the neuron one of them at a time using two different teaching paradigms, it was concluded that the neuron (read: the neural system consisting of a neuron with cascade synapses) can indeed learn to classify them correctly, in general, both using normal teaching and using bottom-up teaching. A hypothesis test using the Student's t-test at the 5% level of significance was used to decide whether the neuron was actually able to correctly classify the two images, by testing the distribution of the differences in the post-synaptic responses to the taught and the other image.

Some of the results of this hypothesis test might appear counterintuitive, since at times the mean post-synaptic frequency in response to a taught image seemed to be much higher than that in response to the other image, while the hypothesis test would conclude that the neuron is unable to classify the two images. The claims from both sides are valid, but it has to be pointed out that unfortunately only a small set of data was available to base the analysis on. Since the SPU has several underlying stochastic processes at its heart, those small data sets could easily have been corrupted by chance events. However, as the data set increases, more trust should be given to the hypothesis test (and indeed, as the amount of data increases, the number of counterintuitive results should decrease), which is why the available data was grouped together. It is from this grouped data set that the encouraging result stems: the neuron is indeed able to learn to classify the two images, and the post-synaptic response to the taught image is always higher than the response to the other image.

The limited amount of data also prevented any conclusions from being made about the way the parameters of the teaching methods, such as the teacher frequency or which image to present first, affect the ability to classify images correctly. This is an unfortunate experimental shortcoming that was partly due to external circumstances.
Considering the variations in the reported post-synaptic frequencies upon classification – there was not one 'preferred' post-synaptic frequency for 'recognition' and 'rejection'; instead, a wide range of post-synaptic frequencies was observed (even though the 'recognition' signal, i.e. the response to the taught image, was practically always at a higher frequency than the 'rejection' signal) – one must ask what underlying mechanisms are responsible for these variations. A closer look at what might be happening during the learning process can give an answer.

From Figure 44 it becomes clear that the formation, or modification, of the synaptic mask by STADP is at the heart of the learning process. STADP has the property that it tends to quickly potentiate those synapses which receive a high pre-synaptic frequency from the taught image and a low pre-synaptic frequency from the other image, and to quickly depress those synapses which receive a low pre-synaptic frequency from the taught image and a high pre-synaptic frequency from the other image. Synapses with low firing rates for both images would tend to oscillate between efficacies, but since they have low firing rates, this would not have a large impact on the neuron, and would not happen frequently either. Conversely, synapses with high pre-synaptic firing rates for both images' inputs would tend to oscillate between efficacies rapidly, which can potentially have significant 'noise-like' effects on the post-synaptic neuron, since strong signals could randomly be forwarded or blocked by the oscillating part of the synaptic mask. These expected effects on a synapse during learning are summarised in Figure 59.
Figure 59: Expected effects on a synapse during the learning phase of the classification task.

This implies that synapses which receive two 'high' pre-synaptic frequency stimuli, from the taught and the other image, cannot learn whether to be potentiated or depressed, which puts a limit on the ability of the neuron to classify the two images. Furthermore, if the post-synaptic neuron is firing just below the mean firing rate (50 Hz), then a sudden and random switch of a number of such undecided synapses to a potentiated state can easily push the post-synaptic frequency above the mean rate, and initiate a positive feedback loop which leads to more potentiation events for all (other) synapses. Similarly, if the post-synaptic neuron is firing just above the mean firing rate, a sudden and random switch of a number of such undecided synapses to a depressed state can easily pull the post-synaptic frequency below the mean firing rate, and initiate a positive feedback loop which leads to more depression events for all (other) synapses.

This can explain both the jumpy nature of the plasticity and post-synaptic frequency observed experimentally, and the variations in the reported post-synaptic frequencies upon classification. Furthermore, there are implications for STADP, too: namely, that for it to be able to learn to classify, the pre-synaptic stimuli have to be sufficiently dissimilar. This analysis also helps considerably to further the understanding of the way the different teaching methods work.
From the results presented, it should be clear that bottom-up learning generally outperforms normal learning in terms of the post-synaptic neuron's ability to classify the two images. This is mainly due to the 'cleaner', more ideal starting condition it has compared to the normal teaching method: a post-synaptic neuron whose synapses are being taught using bottom-up teaching does not receive as much unwanted 'noise' from the other image, since a larger proportion of the synapses that are not readily used by the taught pre-synaptic stimulus is likely to remain depressed for a longer period of time, thereby blocking out the unwanted 'noise'. However, in an online learning system such as the SPU, with ongoing plasticity that immediately and continuously affects the synapses, it is likely that after a long period of time and exposure to stimuli, the cascade synapses will be in states that more closely resemble the random initialisation of the cascade memory used in normal teaching than the mostly-depressed situation that bottom-up teaching requires. Thus, while bottom-up teaching might perform better, it is not a sustainable teaching method, and can only be used on initialised memory, unlike the normal teaching method (which can be used at any point in time).

Finally, an interesting question is whether the results presented could have been achieved with STDP (as pointed out in the beginning of this report, STDP was one of the candidate learning rules for the SPU). STDP learns by considering the absolute time difference between pre- and post-synaptic spikes. In this classification task, STDP would probably have oscillated the efficacies of many of the synapses, and hence the neuron would have been unable to classify the two images. This is because of the nature of the input stimuli: since every pixel encodes a mean firing rate which is then converted into Poisson spike trains, any post-synaptic spike would, on average, have approximately as many pre-synaptic spikes preceding it as following it. This would result in an undecided synapse, with all the undesirable effects mentioned before. The only way in which STDP could have taught a synapse in a meaningful way is if many (most) of the post-synaptic spikes consistently preceded or followed many (most) of the pre-synaptic spikes, which is very unlikely.
If, on the other hand, a slightly modified version of STDP using at least one, preferably two, integrators were used, as suggested by [19], it is entirely possible that similar or even better results could be achieved. However, this modified STDP would then be closer to STADP again.

Despite all the non-idealities of STADP, it has proven its worth, both in terms of reduced complexity and ease of implementation, and in the actual classification task.

8.4 Calibration of the neural system

The calibration between the SPU and the aVLSI neuron chip was done for an arbitrarily chosen operating regime of ~0-100 Hz, and a desired 50 Hz mean output firing rate. Similarly, the choice of teaching methods – using 1 s presentations per image with an arbitrarily chosen teacher signal, for example (the teacher signals were not entirely arbitrary, since they were chosen to be 'strong enough', but there is no reason why they were chosen to have any particular strength), or using a plasticity threshold of 230 – was based on intuition and reason rather than any in-depth knowledge of the characteristics of the neural system, since this was the first system of its kind.

However, there is nothing to suggest that the neural system could or should not be calibrated for a different operating regime. Indeed, experimental observations suggest that the system could benefit from a more formal characterisation of its responses to changes in several parameters (including the DPI parameters w and tau, the plasticity threshold, the mean firing rate fm, and the total intensity of the greyscale images and their mapping into pre-synaptic frequencies, amongst others). This in-depth understanding of the entire neural system (SPU + aVLSI neuron chip) would greatly improve the performance of future classification tasks, simply by allowing the experimenter to match the system more closely to the classification task's requirements.

This knowledge could be gained through lengthy and thorough experimentation, or through a more analytical and formal analysis of a model of the system. Yet another way would be to try to mimic a well-studied biological system and attempt to
replicate its behaviour. This would provide a reasonable basis from which to take further analysis.
9 Conclusion

'If you cannot – in the long run – tell everyone what you have been doing, your doing has been worthless' – Erwin Schrödinger

This project set out to develop a Synaptic Processing Unit (SPU) that implements a large number of cascade synapses. By using a virtualisation strategy whereby a cascade representation is stored in memory and loaded and processed on demand, the SPU designed here can implement a total of 8192 binary cascade synapses. Because of its modular and transparent architecture, the SPU can easily be expanded or modified, in case more synapses are needed or a different learning rule, for example, is desired. It is fully pipelined and able to handle a high throughput of address events; in fact, it is expected to be able to process cascade synapses faster than the currently used communication protocol can supply it with spikes.

A dedicated stochastic Hebbian learning rule called Spike Timing and Activity Dependent Plasticity (STADP) was developed, characterised and implemented in order to equip the SPU with on-chip learning functionality. It depends on the pre-synaptic spike times and the post-synaptic spike rate. This learning rule has several advantages, including its simplicity and ease of implementation, as well as its ability to learn general, sufficiently dissimilar patterns, but it also has drawbacks, such as a bias towards long-term depression (or a reluctance towards long-term potentiation).

The SPU was then integrated with an aVLSI neuron chip to form a working integrated neural system, and put to the test in a real classification task. This task involved classifying two 16x16 pixel images, which were converted into pre-synaptic spike trains and presented to a neuron with 256 cascade synapses. Two different teaching methods were employed, normal and bottom-up teaching, and in both cases the neuron was able to classify the two images. In particular, it was tested whether the difference in the post-synaptic frequencies of the responses to the taught and the other image was positive and statistically significant at the 5% level, which it was. This was used to conclude that the post-synaptic responses are indeed different
for the taught and the other image, which implies a successful classification of the images. Furthermore, it has to be pointed out that the post-synaptic frequency in response to the taught image was always higher than the frequency in response to the other image.

These are two very encouraging results, and there is a lot of scope for further work on the SPU.

9.1 Refinements

To the best of the author's knowledge, the SPU is the first hardware implementation of a large number of cascade synapses of its kind in the world. For that reason, it is important to develop solid working knowledge of experimental procedures and of the calibration of integrated neural systems using the SPU and an aVLSI neuron chip. In particular, devising a methodology by which to determine the best operating regime for the SPU, as well as characterising the dependence of the system's behaviour on some of its most important parameters – including the plasticity threshold, the balance between the pre-synaptic firing rate and the synaptic weight (of the aVLSI synapse on the neuron chip), and the teaching method used for a classification task – would be necessary to allow for efficient and application-matched usage of the SPU within other neural systems in the future.

Also, a more thorough analysis of the behaviour of STADP and the root cause of its bias would be an important contribution. Here, it is proposed that rather than regarding this as a weakness of the learning rule, it could be investigated whether this bias towards depression could be looked at as an emergent behaviour instead, which implements some form of global inhibition.

Another modification to STADP could be the introduction of a third post-synaptic neuron state. Rather than being either active or inactive at any point in time, it would make sense to add a state where the activity is 'neither', 'both' or 'normal'. In that state, no plasticity signals would be elicited. This would be interesting, since currently there is no regime of operation in which the synapses do not categorically undergo plasticity. The SPU is stochastic, yet with a two-state STADP learning rule it
does not allow for any statistical variation in the activity of the post-synaptic neuron, but instead draws a 'sharp line' between its states of activity. In an online learning scenario such as a classification task, this three-state learning rule would have desirable properties, whereby training progress is less likely to be immediately overwritten by plasticity events occurring due to statistical variation, thereby producing better classification results.

Another modification, to the way the SPU interacts with the aVLSI neuron chip, would complement the modification proposed above. Currently, all pre-synaptic spikes are routed to the post-synaptic neuron through excitatory synapses only. For richer learning dynamics, it would be interesting to make use of the inhibitory synapses on the aVLSI neuron chip as well, which would also be expected to improve the learning, and hence the classification, capabilities of the SPU.

Finally, since the FPGA board used was not fully functional – the feedback AER port violated the timing constraints of the data bus – it would be essential to fix this. Besides this, the hardware environment was good enough to last several revisions of the SPU.
10 References

[1] Fusi, Drew, Abbott. Cascade Models of Synaptically Stored Memories. Neuron, 45, 599–611, 2005.
[2] C. Peterson, R. Malenka, R. Nicoll, J. Hopfield. All-or-none potentiation at CA3-CA1 synapses. PNAS, 8, 4732–4737, 1998.
[3] S. Fusi, W. Senn. Learning Only When Necessary: Better Memories of Correlated Patterns in Networks with Bounded Synapses. Neural Computation, 17(10), 2106–2138, 2005.
[4] J.-L. Gaiarsa, O. Caillard, Y. Ben-Ari. Long-term plasticity at GABAergic and glycinergic synapses: mechanisms and functional significance. Trends in Neurosciences, 25(11), 564–570, 2002.
[5] L. Abbott, S. Nelson. Synaptic plasticity: taming the beast. Nature Neuroscience, 3, 1178–1183, 2000.
[6] G. Indiveri, E. Chicca, R. Douglas. A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity. IEEE Trans. on Neural Networks, 17(1), 211–221, 2006.
[7] Gaiarsa et al. Long-term plasticity at GABAergic and glycinergic synapses: mechanisms and functional significance. Trends in Neuroscience, 25(11), 564–570, 2002.
[8] S. Park, K. Miller. Random number generators: good ones are hard to find. Computing Practices, 31(10), 1192–1201, 1988.
[9] S. Zhang, D. M. Miller, J. C. Muzio. Minimal Cost One-Dimensional Linear Hybrid Cellular Automata of Degree Through 500. Journal of Electronic Testing: Theory and Applications, 6, 255–258, 1995.
[10] D. Rubin, S. Fusi. Storing sparse random patterns with cascade synapses. Preprint submitted to Elsevier Science, September 2006.
[11] S. Fusi, L. Abbott. Limits on the memory storage capacity of bounded synapses. Nature Neuroscience, 10(4), 485–493, 2007.
[12] J. Lisman, N. Spruston. Postsynaptic depolarization requirements for LTP and LTD: a critique of spike timing-dependent plasticity. Nature Neuroscience, 8(7), 839–841, 2005.
[13] C. Bartolozzi, G. Indiveri. Synaptic dynamics in analog VLSI. 2006.
[14] V. Chan, S.-C. Liu, A. van Schaik. A matched silicon cochlea pair with address event representation interface. IEEE Transactions on Circuits and Systems I: Regular Papers.
[15] D. Muir. Stochastic synapse for reconfigurable hardware. Telluride Workshop, 2005.
[16] T. Kringe. A VHDL Implementation of the Cascade Synapse Model. Diploma thesis, 2006.
[17] S. Wolfram. Statistical mechanics of cellular automata. Reviews of Modern Physics, 55, 601–644, 1983.
[18] R. Gütig, H. Sompolinsky. The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9, 420–428, 2006.
[19] R. Legenstein, W. Maass. What can a neuron learn with spike-timing-dependent plasticity? Neural Computation, 17, 2337–2382, 2005.
[20] S. Mitra, S. Fusi, G. Indiveri. A VLSI spike-driven dynamic synapse which learns only when necessary. Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS06), 2777–2780, 2006.
[21] S. Fusi, personal communication.
[22] G. Kasparov. How Life Imitates Chess. William Heinemann, 2007.
[23] K. Boahen. Neuromorphic microchips. Scientific American, May 2006.
[24] G. Indiveri, T. Delbruck, S.-C. Liu. Lecture notes: Computation in Neuromorphic aVLSI Systems. 2006.

10.1.1 Web references

[25] IBM Deep Blue website, www.research.ibm.com/deepblue
10.1.2 Datasheets and reference books

[26] Xilinx Spartan and Memory: http://direct.xilinx.com/bvdocs/appnotes/xapp173.pdf
[27] Configuration and readback: http://direct.xilinx.com/bvdocs/appnotes/xapp176.pdf
[28] Spartan-3 Configuration Guide: http://direct.xilinx.com/bvdocs/userguides/ug332.pdf
[29] Spartan-3 Family Data Sheet: http://direct.xilinx.com/bvdocs/publications/ds099.pdf
[30] Quad-Port RAM design: http://direct.xilinx.com/bvdocs/appnotes/xapp228.pdf
[31] FIFO design: http://direct.xilinx.com/bvdocs/appnotes/xapp258.pdf
[32] Spartan-3 Advanced Configuration Note: http://direct.xilinx.com/bvdocs/appnotes/xapp452.pdf
[33] Using Block RAM in Spartan-3: http://direct.xilinx.com/bvdocs/appnotes/xapp463.pdf
[34] Using LUTs as distributed RAM: http://direct.xilinx.com/bvdocs/appnotes/xapp464.pdf
[35] Peter J. Ashenden. The Designer's Guide to VHDL.
11 Appendix I – Supplementary files

High-level Matlab scripts used:

• General
  o coe.m – generate delta_t_lut coefficient file
  o state_init.m – generate cascade memory initialisation coefficient file
  o state_init_dep_pot.m – generate all-depressed or all-potentiated cascade initialisation
• Classification
  o chipinit.m – set up environment variables for the aVLSI chip
  o bias_050607.m – load neuron chip calibration
  o scan(127, 127) – observe the membrane potential of neuron 127 at the pin
  o createAllFiles.m – create stimulus files for classification
  o generateRegTeacher.m – generate regular teacher signal file
  o generateCoherent16x16.m – generate 256 homogeneous Poisson spike trains
  o IOResponse.mat – workspace for frequency response plot (with data)
  o results.mat – workspace for results (with data)
  o performTTest.m – functions that perform the t-test on the data inside results.mat
• Characterisation
  o characterisationWorkspace.mat – workspace for STADP characterisation (with data and parameters)
  o characterisePActivePost.m – read in output files from the class_tb_vhd.fdo testbench to characterise p(active)
  o characterisePlasticity.m – read in output files from the class_tb_vhd.fdo testbench to characterise LTP, LTD and net rate
  o generatePlasticityCharacterisationStimuliFile.m – generate stimulus file for the class_tb_vhd.fdo testbench for LTP, LTD and net rate characterisation
  o generatePostCharacterisationStimuliFile.m – generate stimulus file for the class_tb_vhd.fdo testbench for p(active) characterisation
  o make_freq_sim_plot.m – simulation for STADP characterisation
  o make_prob_active_vs_freq_plot.m – simulation for STADP characterisation of p(active)
12 Appendix II – Verification checklists

12.1 Module-level verification

• Cascade state memory
  o High-level specification:
    - Read from memory correctly, as given by the address.
    - Write to memory correctly, as given by address and data. Written data should also appear on the output to be read instantly (WRITE_THROUGH mode).
    - The crucial point is that the addresses are correctly decoded, i.e. that the decoder and the multiplexers within the memory work correctly.
  o Corner cases:
    - If writing to and reading from memory works in general, then the memory architecture should be correct.
    - Should be verified over all ranges of memory.
    - For the same address, need to check precisely whether the right output is selected or not.
  o To be verified:
    - EN: while EN = 0, nothing should happen at the outputs (whatever was there stays there).
    - rst: only affects the output latches, and not the content of the RAM itself. If rst = 1, the output registers should be zero. Reset under memory collision: the MUX should still select the correct output.
    - WE: unless WE = 1, nothing should be written to memory. If WE = 0, then we should just read from memory.
    - Does it write to the correct address? Does it read from the correct address?
    - What happens when read and write use the same address? (only applicable to dual-port memory, i.e. the cascade state memory) The memory should have a security mechanism which ensures that even if we try to access the same address for both write and read, it allows write and read correctly: if the addresses collide, choose the write-through data iff we write to memory, and the read data otherwise.
• Forwarding module
  o High-level specification: forward a valid input address to the output iff the target synapse has high efficacy or the spike is sent to a teacher.
  o Corner cases: as this module is very simple, there are no critical corner cases; it would be good to test over the entire range (or a representative subrange) of the neuron addresses.
  o To be verified:
    - EN, rst: reset should clear all outputs to zero.
    - Does the valid output work?
    - The output (target_address_valid) should follow the input (address_valid) AND NOT address_pre_post (if EN and not rst) at the next clock edge.
    - The teacher synapse (00000) should be forwarded regardless of efficacy.
    - Is the (correct) address being forwarded? The output address should follow the input address (if EN and not rst) at the next clock edge.
• pRNG
  o High-level specification: generates maximal pRN sequences according to its seed; has three pRN outputs, which are shifted versions of each other.
  o Corner cases: no real corner cases, as it is just generating away.
  o To be verified:
    - EN, rst: should go back to its seed value on reset, and restart the sequence.
    - Does it work? Should produce pRN sequences.
• Learning rule (STADP)
  o High-level specification: to determine whether a synapse should be potentiated or depressed, depending on its pre-synaptic firing time and its post-synaptic activity; to correctly produce plasticity events (dep/pot).
  o Corner cases: no critical corner cases; things should be verified over the full range of addresses.
  o To be verified:
    - EN, rst.
    - plasticity_valid: should be valid iff a valid pre-synaptic address is there, and iff pRN_i > threshold (pre_above_threshold), with one clock delay. Does pre_above_threshold work?
    - cascade_synapse_address: should follow the input with one clock cycle delay.
    - Is delta_t pseudo-random? Does the timer work?
    - Is the new_expiry_time correct, i.e. is the addition correct? (Mind that the result is only valid after the next clock edge.)
    - Is the WE signal for the memory correct? -> address valid AND address post.
    - Is the post_expiry_time correct (do the read and write to the memory work)?
    - Does the comparator work? -> plasticity_dep_pot (if the post expiry time > the current time).
• Cascade synapse
  o High-level specification: perform plasticity operations on an incoming cascade synapse's state representation; switch or chain, depending on plasticity and current efficacy.
  o Corner cases: need to check whether it works for all cases, i.e. whether do_something is correct for pRN >, =, < plasticity probability, for both types of states (depressed or potentiated).
  o To be verified:
    - EN, rst.
    - Is the new_state_valid signal correct? It should follow the input valid signal on the second clock cycle (given by the pipeline).
    - Is the cascade_address_out signal correct? It should follow the input address on the second next clock edge.
    - Is the do_something signal correct? It should be high iff the plasticity probability is greater than or equal to the pRN (on the next clock cycle).
    - Do the new_efficacy and new_state signals behave correctly with respect to the do_something signal, i.e. is the new_state signal correct? The chaining and switching behaviour should be correct.

12.2 System-level verification

• SPU
  o High-level specification: perform spike routing; implement cascade synapse learning through STADP.
  o Corner cases: since the system should work with all the individual units combined, all/most corner cases from the individual modules also apply here. In order to break down the verification, perform it in steps:
    - Forwarding only: observe outputs and related internal signals.
    - Learning only: observe internal signals only; follow the cascading process of a synapse through several times:
      • By repeatedly applying pre- and post-synaptic spikes to the same synapse, the synapse should chain down the potentiated cascade.
      • By only applying pre-synaptic spikes to whichever synapse(s), the synapse should chain down the depressed cascade.
      • Do this with several representative synapses.
  o To be verified:
    - Forwarding only:
      • EN, rst: on NOT EN, all signals should be preserved; on rst, all plasticity is lost.
      • Does the target_address_valid signal work as expected? It should follow address_valid AND NOT address_pre_post AND the synapse efficacy, with delays.
      • Is the address_valid_fwd signal delayed by two clock cycles? Do we get the correct target_synapse_efficacy?
      • Is the target neuron address correct? It should follow the 8 MSB of the synapse_address (the neuron address) with two clock cycles delay.
    - Learning only:
      • EN, rst.
      • Are the internal valid signals being forwarded correctly? plasticity_valid should follow address_valid with two clock delays (if there is a plasticity event to be had, which should be the case about 50% of the time); new_state_valid should follow plasticity_valid with two clock delays.
      • Are the addresses being forwarded correctly, internally? cascade_synapse_address should follow synapse_address with two clock delays; new_state_address should follow cascade_synapse_address with two clock delays.
      • The way it is stimulated, it should first produce pot signals, and towards the second half dep signals.
      • Follow through: a post-synaptic spike produces a plasticity signal (it can have any value, but should not be valid); the cascade should not change the state (it could have a different new_state, but it should not be valid, so it won't be written). A pre-synaptic spike produces a valid plasticity signal (first pot, later dep); the cascade should change state.
• SPU with I/O and FIFO (with the USB interface removed, only the USB FIFO accessible)
  o High-level specification: correctly interface the FIFOs and AER with the SPU (it is difficult to test the USB, since we don't know the communication standard). Two-stage approach:
    - First, just observe the I/O signals, and verify that the AER I/O and the FIFOs work.
    - Secondly, once we are certain that this works, follow several spikes on their journey through the SPU.
  o Corner cases:
    - Check whether the out-post AER works.
    - Check both situations where the FIFOs are not empty, so that the iSelector has to toggle.
    - We probably won't be able to fill up the output FIFO, so it is difficult to check the EN signal.
  o To be verified:
    - First stage:
      • clk_90, clk_45: right frequencies? (90, 45)
      • rst: should reset the FIFOs and the SPU.
      • USB FIFO input: observe data at the USB FIFO input (we can't simulate the USB input, as we don't know the communication standard) – pre_fifo_in, pre_fifo_we; it should take two valid data-in cycles to construct the 16-bit AER data.
      • Observe the data at the output of the FIFO – usb_pre_fifo_empty (low), usb_pre_fifo_dout: it should change to the input data; on usb_pre_fifo_read, the FIFO should become empty again.
    - Trace signals through the system:
      • Apply signals to the USB FIFO, 8 bits at a time: it should take two WE cycles before the FIFO is no longer empty.
      • Read the signal into the selector: the signal should appear at the output of the FIFO when it is not empty and usb_pre_fifo_read is high.
      • Forward onto the SPU: initially, no arbitration is necessary, since the post AER should not contain any data; one clock cycle later this data should appear at the output of the selector; data_valid and pre_post should be correct (high, low) for one clock cycle.
      • Output of the SPU: four clock cycles later, the output should (iff the synapse had high efficacy) be equal to the input (neuron) address (top 8 MSB); address_valid should go high.
      • AER out: should raise a request, wait for acknowledge, and produce valid data.
13 Appendix III – A journey through the SPU

13.1 Pre-synaptic spike

Figure 60: Pre-synaptic spike arrives at the SPU. As pre-synaptic data becomes available (empty -> low), it is loaded into the SPU.

Figure 61: A valid pre-synaptic spike gets forwarded, after two clock delays.

Figure 62: A valid pre-synaptic spike generates a plasticity event. It is a depression event, since the current time is greater than the post-expiry time.
Figure 63: Cascade synapse changes in operation. The cascade synapse responds to a valid depression signal by chaining. This takes two clock cycles.

Figure 64: Plasticity events.
13.2 Post-synaptic spike

Figure 65: A valid post-synaptic spike arrives at the SPU.

Figure 66: A post-synaptic spike does not get forwarded.

Figure 67: A post-synaptic spike sets the post-synaptic expiry time. The post-synaptic spike draws a random delta_t by reading the LUT at a random address, and adds it to the current time to obtain the post-expiry time, which is stored into memory.
14 Appendix IV – Design hierarchy of source files

The SPU project files all sit within spu_i_o_wrapper, and have the hierarchy shown below. Source files marked with * were developed by Daniel Fasnacht; those marked with ^ were developed by Dylan Muir.

spu_i_o_wrapper
• Fx2DCM*
• DCM_FxPhase*
• fx2if*
• fxoutfifo (coregen)
• timestamp*
• sequencer*
• paerInput*
• pinfifo (coregen)
• input_source_selector
• SPU
  o Forwarding_process
  o Cascade_state_memory
    - Cascade_memory_coregen (coregen)
  o Stadp
    - pRNG_stadp
      • ca_flag_150_90^
    - lut_delta_t (coregen)
    - activity_expiry_times_stadp (coregen)
  o cascade_process
    - pRNG
      • ca_flag_150_90^
• poutfifo (coregen)
• paerOutput*