Daniel Stiven Marín Medina
Medical student
• Leaky Integrate-and-Fire model
• Semantic Pointer Architecture (SPA)
• Neural Engineering Framework
• Vector space x → Population A; vector space y → Population B
• Spiking activity of every neuron in A
• Decoding equation
• Connection weights
• General weight equation
• Visual hierarchy
• Auto-encoder
• Restricted Boltzmann machine
• NOCH (Neural Optimal Control Hierarchy)
• Working memory
A large-scale model of the functioning brain


Editor's Notes

  1. Visual input:
V1 = primary visual cortex: the first level of the visual hierarchy, tuned to small oriented patches of different spatial frequencies.
V2 = secondary visual cortex: pools responses from V1, representing larger spatial patterns.
V4 = extrastriate visual cortex: combines input from V2 to recognize simple geometric shapes.
IT = inferior temporal cortex: the highest level of the visual hierarchy, representing complex objects.
Information encoding: AIT = anterior inferior temporal cortex: implicated in representing visual features for classification and conceptualization.
Transform calculation: VLPFC = ventrolateral prefrontal cortex: area involved in rule learning for pattern matching in cognitive tasks.
Reward evaluation: OFC = orbitofrontal cortex: areas involved in the representation of received reward.
Information decoding: PFC = prefrontal cortex: implicated in a wide variety of functions, including executive functions and manipulation of working memory.
Working memory:
PPC = posterior parietal cortex: involved in the temporary storage and manipulation of information, particularly visual data.
DLPFC = dorsolateral prefrontal cortex: temporary storage and manipulation of higher-level data related to cognitive control.
Action selection:
Str (D1) = striatum (D1 dopamine neurons): input to the "direct pathway" of the basal ganglia.
Str (D2) = striatum (D2 dopamine neurons): input to the "indirect pathway" of the basal ganglia.
STN = subthalamic nucleus: input to the "hyperdirect pathway" of the basal ganglia.
VStr = ventral striatum: involved in the representation of expected reward in order to generate reward prediction error.
GPe = globus pallidus externus: part of the "indirect pathway"; projects to other components of the basal ganglia in order to modulate their activity.
GPi/SNr = globus pallidus internus and substantia nigra pars reticulata: the output from the basal ganglia.
SNc/VTA = substantia nigra pars compacta and ventral tegmental area: relay the signal from the ventral striatum as dopamine modulation to control learning in basal ganglia connections.
Routing: Thalamus = receives output from the basal ganglia and sensory input, and coordinates/monitors interactions between cortical areas.
Motor processing: PM = premotor cortex: involved in the planning and guidance of complex movement.
Motor output:
M1 = primary motor cortex: generates muscle-based control signals that realize a given internal movement command.
SMA = supplementary motor area: involved in the generation of complex movements.
  2. (i) Map the visual-hierarchy firing pattern to a conceptual firing pattern as needed (information encoding), (ii) extract relations between input elements (transformation calculation), (iii) evaluate the reward associated with the input (reward evaluation), (iv) decompress firing patterns from memory into conceptual firing patterns (information decoding), and (v) map conceptual firing patterns to motor firing patterns and control motor timing (motor processing). Thick black lines indicate communication between elements of the cortex; thin lines indicate communication between the action-selection mechanism (basal ganglia) and the cortex. The open-square end of the line connecting reward evaluation and action selection denotes that this connection modulates connection weights. Boxes with rounded edges indicate that the action-selection mechanism can use activity changes to manipulate the flow of information into a subsystem. The basal ganglia determine which state the network should be in, switching as appropriate for the current task goals. The elements of Spaun are not task-specific; that is, they are used in a variety of combinations to perform the chosen tasks, so the same circuitry is reused across tasks.
Transformation calculation: this subsystem is a recurrent attractor network similar to the working-memory elements, with an input transformation. Its purpose is to compute the transformation between its inputs and store a running average of the result; this running average is the inferred relation between all of the inputs it has been shown. This subsystem is most critical for the RPM and rapid variable creation tasks. fMRI studies have suggested that rule learning of this kind takes place in VLPFC; in Spaun, the learning itself is done in the basal ganglia.
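A vector-level sketch of this running-average idea (not Spaun's spiking implementation): assuming, as elsewhere in the model, that relations between items are expressed with circular-convolution binding, the transform between each input pair is recovered by unbinding and accumulated into a running average. The names here (cconv, inverse, T_true) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def cconv(a, b):
    """Circular convolution (binding) of two vectors."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    """Approximate inverse used for unbinding: keep the first element, reverse the rest."""
    return np.concatenate(([a[0]], a[1:][::-1]))

D = 64
T_true = rng.normal(size=D) / np.sqrt(D)     # the hidden relation between paired inputs

T_est = np.zeros(D)                          # running average of inferred transforms
n_seen = 0
for _ in range(5):
    x = rng.normal(size=D) / np.sqrt(D)      # an input item
    y = cconv(x, T_true)                     # the paired item, related to x by T_true
    T_sample = cconv(y, inverse(x))          # unbind: noisy estimate of the transform
    n_seen += 1
    T_est += (T_sample - T_est) / n_seen     # running average of the estimates

sim = T_est @ T_true / (np.linalg.norm(T_est) * np.linalg.norm(T_true))
print("similarity of inferred transform to true transform:", round(sim, 3))
```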
  3. Semantic pointers are neurally realized representations (firing patterns) of a vector space generated through a compression method. Semantic pointers are constructed by compressing information from one or more other high-dimensional neural vector representations, which can themselves be semantic pointers. The newly generated semantic pointer is of a similar or lower dimensionality than its constituents. Semantic pointers carry similarity information that is derived from their source (they are systematically related to the information that they are used to reference). The visual hierarchy employs learned compression, whereas the working memory and motor hierarchies employ defined compression. NEF: a group of spiking neurons can represent a vector space over time, and connections between groups of neurons can compute functions on those vectors. The NEF provides a set of methods for determining what the connections need to be to compute a given function on the vector space represented by a group of neurons. Each neuron in A and B has a "preferred direction vector" (the vector, i.e. direction in the vector space, for which that neuron fires most strongly). In the encoding equation A_i(x) = G[α_i (e_i · x) + J_i^bias], 'A_i' is the spike train of the i-th neuron in the population, 'G' is the spiking neural nonlinearity, 'α' is the gain of the neuron, 'e' is the preferred direction (or "encoding") vector, and 'J^bias' is a bias current to account for background activity of the neuron. The elements in the square brackets determine the current flowing into the cell, which then drives the spiking of the chosen single-cell model G. Varying e, α, and J^bias produces all of the different tuning curves seen in Figure 1; this is not the only sort of tuning curve found in the brain.
  4. Encoding involves converting a quantity, x(t), from stimulus space into a spike train, δ(t − t_in) = G_i[J_i(x(t))], where G_i[·] is the nonlinear function describing the spiking response model (e.g., leaky integrate-and-fire, Hodgkin-Huxley, or other conductance-based models), J_i is the current in the soma of the cell, 'i' indexes the neuron, and 'n' indexes the spikes produced by the neuron. The soma current is J_i(x) = α_i (φ_i · x) + J_i^bias + η_i, where J_i(x) is the input current to neuron 'i', x is the vector variable of the stimulus space encoded by the neuron, α_i is a gain factor, φ_i is the preferred direction vector of the neuron in the stimulus space, J_i^bias is a bias current that accounts for background activity, and η_i models neural noise. The dot product, φ_i · x, describes the relation between a high-dimensional physical quantity (e.g., a stimulus) and the resulting scalar signal describing the input current; in other words, it is the dot product between the preferred direction and the stimulus.
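The encoding equations above can be sketched numerically. The following NumPy snippet (a toy illustration, not Spaun's code) builds a small population with random gains, biases, and preferred direction vectors, computes J_i(x) = α_i(φ_i · x) + J_i^bias for the 2-D input described in Figure S1 (x1 = sin 6t, x2 = cos 6t), and passes it through the leaky integrate-and-fire rate nonlinearity G[·]; the noise term η_i is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Steady-state LIF firing rate G[J] for input current J (threshold current = 1)."""
    out = np.zeros_like(J)
    above = J > 1.0
    out[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[above] - 1.0)))
    return out

n_neurons, dims = 4, 2
# preferred direction (encoding) vectors on the unit circle, random gains and biases
angles = rng.uniform(0, 2 * np.pi, n_neurons)
phi = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # shape (n, 2)
alpha = rng.uniform(0.5, 2.0, n_neurons)                   # gains
J_bias = rng.uniform(0.5, 1.5, n_neurons)                  # background currents

# 2-D input trajectory from the Figure S1 description: x1 = sin(6t), x2 = cos(6t)
t = np.arange(0.0, 1.2, 0.001)
x = np.stack([np.sin(6 * t), np.cos(6 * t)], axis=1)       # shape (T, 2)

# J_i(x) = alpha_i * (phi_i . x) + J_i^bias  (noise term eta_i omitted)
J = alpha * (x @ phi.T) + J_bias                           # shape (T, n)
rates = lif_rate(J)                                        # a_i = G[J_i(x)]
print(rates.shape, "peak rate:", round(float(rates.max()), 1), "Hz")
```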
  5. Figure S1: NEF encoding in two dimensions with four neurons. a) Both dimensions of the input plotted on the same graph, over 1.2 s. The input to the two dimensions is x1 = sin(6t) (black) and x2 = cos(6t) (gray). b) The spikes generated by the neurons in the group driven by the input in a). c) The same input shown in the vector space. The path of the input is a unit circle, where the arrowhead indicates the vector at the end of the run and the direction of movement. Older inputs are in progressively lighter gray. The preferred direction vectors of all four neurons are also shown. d) The firing-rate tuning curves of all four neurons. Gains and biases are randomly chosen.
  6. Linear decoders can be found that provide an appropriate estimate of any vector x given the neural activities from the encoding equation: x̂ = Σ_i a_i d_i, where 'N' is the number of neurons in the group, 'd_i' are the linear decoders, and 'x̂' is the estimate of the input driving the neurons. All optimizations of this type use 5000 or fewer sample points to find the decoders. This results in a significant computational saving (several orders of magnitude) over trying to learn the same function in a spiking-network setting. In the NEF we characterize decoding in terms of post-synaptic currents and decoding weights. A plausible means of characterizing this decoding is as a linear transformation of the spike train. We assume that post-synaptic currents are such filters, and set the time constants to reflect the kind of neurotransmitter receptors in the connection (e.g., AMPA receptors have short time constants (~10 ms) and NMDA receptors have longer time constants (~50 ms)). We can estimate the original stimulus vector x(t) by decoding an estimate, x̂(t), using a linear combination of filters, h_i(t), weighted by decoding weights φ_i.
Connection weights: we can substitute the decoding of A into the encoding of B, thereby deriving connection weights ω_ji = α_j (e_j · d_i), where i indexes the neurons in group A and j indexes the neurons in B. These weights will compute the function y = x (where y is the vector space represented in B and x is the vector space represented in A).
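The decoding and weight equations can likewise be sketched in a few lines. This toy example (rate-mode neurons, illustrative parameter choices) solves a regularized least-squares problem over sample points to obtain decoders d_i, checks the estimate x̂ = Σ_i a_i d_i, and then substitutes A's decoders into B's encoders to form the connection weights ω_ji = α_j(e_j · d_i) that compute y = x.

```python
import numpy as np

rng = np.random.default_rng(1)

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    out = np.zeros_like(J)
    ok = J > 1.0
    out[ok] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[ok] - 1.0)))
    return out

def make_population(n, dims, rng):
    """Random unit encoders, gains, and bias currents for a rate-mode population."""
    e = rng.normal(size=(n, dims))
    e /= np.linalg.norm(e, axis=1, keepdims=True)
    return e, rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 1.5, n)

def activities(x, e, alpha, J_bias):
    return lif_rate(alpha * (x @ e.T) + J_bias)              # shape (samples, n)

dims, nA, nB = 2, 50, 60
eA, aA, bA = make_population(nA, dims, rng)
eB, aB, bB = make_population(nB, dims, rng)

# sample points in the represented space (the notes mention <= 5000 samples)
X = rng.uniform(-1, 1, size=(2000, dims))
A = activities(X, eA, aA, bA)

# regularized least-squares decoders d, so that x_hat = sum_i a_i(x) d_i
reg = 0.1 * A.max()
G = A.T @ A + reg**2 * len(X) * np.eye(nA)
d = np.linalg.solve(G, A.T @ X)                              # shape (nA, dims)
x_hat = A @ d
print("decoding RMSE:", round(float(np.sqrt(np.mean((x_hat - X) ** 2))), 3))

# connection weights from A to B computing y = x: omega_ji = alpha_j * (e_j . d_i)
W = (aB[:, None] * eB) @ d.T                                 # shape (nB, nA)
# B's input current for a stimulus x is then approximately W @ a_A(x) + bB
```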
  7. Elementary building blocks of neural microcircuits. The scheme shows the minimal essential building blocks required to reconstruct a neural microcircuit. Microcircuits are composed of neurons and synaptic connections. To model neurons, the three-dimensional morphology, ion channel composition, and distributions and electrical properties of the different types of neuron are required, as well as the total numbers of neurons in the microcircuit and the relative proportions of the different types of neuron. To model synaptic connections, the physiological and pharmacological properties of the different types of synapse that connect any two types of neuron are required, in addition to statistics on which part of the axonal arborization is used (presynaptic innervation pattern) to contact which regions of the target neuron (postsynaptic innervation pattern), how many synapses are involved in forming connections, and the connectivity statistics between any two types of neuron.
  8. The visual hierarchy is a model of the ventral visual stream, including areas V1, V2, V4, and IT. This network learns the compression needed to reduce the original image to a 50-dimensional semantic pointer. These layers define vector spaces that can be embedded into spiking neurons using the NEF methods; Spaun only implements IT in spiking neurons. The other layers compute the transformations between vector spaces in the original RBM, although neural dynamics are included in this computation. Figure: a sample of tuning curves of neurons in the model. a) These learned model representations have a variety of orientations, spatial frequencies, and positions, like those found in V1. b) A more direct comparison of data and model tuning curves from V1. The analytical methods used to generate these images are identical for the model and monkey data. Motor hierarchy: the hierarchy consists of an optimal controller in the workspace that determines control commands that are then projected to joint-angle space.
  9. Auto-encoders are usually implemented as three-layer neural networks (only one hidden layer). An auto-encoder learns to produce at its output exactly the same information it receives at its input; for this reason the input and output layers must always have the same number of neurons. For example, if the input layer receives the pixels of an image, we expect the network to learn to reproduce that same image at its output layer. At first glance this seems like a rather useless device, since it apparently does nothing; the key is the hidden layer. Imagine for a moment an auto-encoder that has fewer neurons in its hidden layer than in its input and output layers. Since we require this network to produce at its output the same result it receives at its input, and the information has to pass through the hidden layer, the network is forced to find an intermediate representation of the information in its hidden layer using fewer numbers. Therefore, when input values are applied, the hidden layer contains a compressed version of the information, and moreover a compressed version that can be decompressed again to recover the original at the output. In fact, once trained, the network can be split in two: a first network that uses the hidden layer as its output layer, and a second network that uses that hidden layer as its input layer. The first network is a compressor, and the second a decompressor. Precisely for this reason these networks are called auto-encoders: they are able to discover by themselves an alternative way of encoding the information in their hidden layer. Best of all, they do not need a supervisor to show them examples of how to encode the information; they work it out on their own. That is why this is usually described as unsupervised learning.
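As a concrete toy version of the above, here is a minimal one-hidden-layer auto-encoder in NumPy (illustrative only, not the network from the slides): the hidden layer is smaller than the input, so after training its activity is a compressed code and the output weights act as the decompressor.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy data: 200 examples that actually live near a 2-D structure inside 8-D space
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 8))
X = np.tanh(latent @ mix)                      # inputs, shape (200, 8)

n_in, n_hidden = 8, 2                          # hidden layer smaller than the input
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)
lr = 0.05

for epoch in range(2000):
    # forward pass: encode to the hidden code, then decode back to the input
    H = np.tanh(X @ W1 + b1)                   # compressed code
    Y = H @ W2 + b2                            # reconstruction
    err = Y - X                                # the target output is the input itself

    # backpropagation of the mean squared reconstruction error
    dW2 = H.T @ err / len(X);      db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H**2)   # tanh derivative
    dW1 = X.T @ dH / len(X);       db1 = dH.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# after training, (W1, b1) is the "compressor" and (W2, b2) the "decompressor"
print("reconstruction MSE:", round(float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - X) ** 2)), 4))
```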
  10. Unsupervised learning: only uses the inputs x^(t) [vectors] for learning. It can automatically extract meaningful features from the data, leverage the availability of unlabeled data, and add a data-dependent regularizer to training (−log p(x^(t))).
RESTRICTED BOLTZMANN MACHINE. Boltzmann machines are symmetric recurrent networks (the weight on the connection between unit i and unit j equals the weight on the connection between unit j and unit i, w_ij = w_ji) consisting of binary units (+1 for 'on' and −1 for 'off'). The visible neurons interact with the environment, whereas the hidden ones do not. Each neuron is a stochastic unit that generates an output (or state) according to the Boltzmann distribution of statistical mechanics. Units operating in the clamped (restricted) condition are those whose visible neurons are clamped to specific states determined by the environment. The restricted architecture does not allow connections between units within the hidden layer. After training an RBM, the activities of its hidden units can be treated as data for training a higher-level RBM; this method of stacking RBMs makes it possible to train many layers of hidden units efficiently, with each new layer added to improve the overall generative model. In restricted Boltzmann machines there are only connections (dependencies) between hidden and visible units, and none between units of the same type (no hidden-hidden, nor visible-visible connections); there is only one layer of stochastic binary hidden units. The goal of Boltzmann learning is to adjust the connection weights so that the visible units satisfy a particular desired probability distribution.
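A compact sketch of a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), a standard training rule; it is not claimed to be the exact procedure used in the cited work, and it uses 0/1 binary units rather than the ±1 convention mentioned above (the two are interchangeable).

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 16, 8
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))   # symmetric weights (w_ij = w_ji)
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)      # visible and hidden biases

# toy binary data: two repeating prototype patterns plus 5% bit-flip noise
protos = rng.integers(0, 2, size=(2, n_visible))
data = protos[rng.integers(0, 2, size=500)]
flip = rng.random(data.shape) < 0.05
data = np.where(flip, 1 - data, data).astype(float)

lr = 0.05
for epoch in range(200):
    v0 = data
    # positive phase: hidden units given the clamped visible data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # negative phase (one Gibbs step): reconstruct visibles, then hiddens again
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 update: <v h>_data - <v h>_model
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print("reconstruction error:", round(float(np.mean((data - recon) ** 2)), 4))
```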
  11. Pretraining consists of learning a stack of restricted Boltzmann machines (RBMs), each having only one layer of feature detectors. The learned feature activations of one RBM are used as the ‘‘data’’ for training the next RBM in the stack. After the pretraining, the RBMs are ‘‘unrolled’’ to create a deep autoencoder, which is then fine-tuned using backpropagation of error derivatives.
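A sketch of that greedy, layer-wise procedure under the same simplifications as the previous snippet: each RBM is trained with CD-1, its hidden activations become the "data" for the next RBM, and the stack is then "unrolled" so the transposed weights initialize a decoder (decoder-side biases are omitted here; in practice the unrolled network is fine-tuned with backpropagation). The layer sizes and toy data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=100, lr=0.05):
    """Train one RBM with CD-1; return weights, hidden biases, and hidden activations."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        p_h0 = sigmoid(data @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        W += lr * (data.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (data - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_h, sigmoid(data @ W + b_h)

# toy binary data (structureless here; real inputs would have structure to compress)
X = (rng.random((300, 32)) < 0.3).astype(float)

# greedy layer-wise pretraining: hidden activations feed the next RBM (32 -> 16 -> 8)
W1, b1, H1 = train_rbm(X, 16)
W2, b2, H2 = train_rbm(H1, 8)

# "unrolling": the stacked weights form an encoder; their transposes initialize the decoder
def encode(x): return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)
def decode(h): return sigmoid(sigmoid(h @ W2.T) @ W1.T)

code = encode(X)
print("code shape:", code.shape,
      "reconstruction MSE:", round(float(np.mean((decode(code) - X) ** 2)), 4))
```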
  12. Figure S2: NEF decoding in two dimensions with 20 neurons. The inputs in the vector space are the same as in Figure S1. a) The original input and the neural estimate over 1.2 s, with both dimensions plotted on the same graph over time (black is x1, gray is x2). b) The same simulation shown in the vector space. Older states are lighter gray. For both a and b, smooth lines represent the ideal x values, while noisy lines represent the estimate x̂. c) The spikes generated by the 20 neurons during the simulation, used to generate the decodings shown in a) and b). Encoding vectors are randomly chosen from a uniform distribution around the unit circle, and gains and biases are also randomly chosen, as in Spaun.
  13. The NOCH framework. This diagram embodies a high-level description of the central hypotheses that comprise the neural optimal control hierarchy (NOCH). The numbering on this figure is used to aid description, and does not indicate sequential information flow.
  14. (Top left, panels 1-4) Comparison of human and model reaching trajectories and velocity profiles. The human data are in the left column (1 and 3), and the model data in the right (2 and 4). A comparison of panels 1 and 2 demonstrates that the model reproduces the typical, smooth reaching movements of healthy subjects. The distance between points indicates the reaching velocity. A comparison of panels 3 and 4 demonstrates that the model also reproduces the velocity profiles observed in the human data. Both axes, time and velocity respectively, have been normalized.
(Bottom left) Huntington's patient and model trajectories (the Huntington's model was developed by impairing the performance of the basal ganglia component-selection process of the working arm model described earlier). Patient trajectories are on the left, and model trajectories on the right. Difficulties with end-point accuracy and additional unwanted movements at low velocities are shared between the patients and the model. The model effectively reproduces the movement-termination error observed in Huntington's. Two trials are shown in each graph.
(Bottom right) Cerebellar-damaged patient and model trajectories. Patient trajectories are on the left, and model trajectories on the right. Patients with cerebellar lesions tend to overshoot target locations, resulting in significant amounts of backtracking in the movement trajectory. The model displays a similar pattern of reaching error with perturbed cerebellar function.
  15. The networks that employ compression use circular convolution to perform it. This is an example of a defined compression operator for generating semantic pointers (the operator can also be learned in a spiking network). It can be thought of as binding two vectors together. Consequently, serial memories are constructed by binding the semantic pointer of the current input with its appropriate position (memory trace ≈ Σ_k Item_k ⊗ Position_k), where Item semantic pointers are the semantic pointers to be stored (numbers in Spaun), and Position semantic pointers are internally generated position-index semantic pointers. Position indices are generated from a random unitary base semantic pointer, Base, where the next position index is generated by successive convolution. This allows the generation of as many positions as needed to encode a given list. A unitary vector is one that does not change length when it is convolved with itself. In the figures in the main text, Position1 is written as P1, and so on. The overall memory trace is a sum of this encoding through two memory pathways, which have different decay dynamics.
Information encoding: this subsystem maps semantic pointers generated from images to conceptual semantic pointers that encode progression relations. It is used for most tasks except the copy-drawing task, where it is the visual features of the input, not its conceptual properties, that matter for successful completion of the task.
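A plain-vector sketch of the binding operator and the position scheme (illustrative dimensionality and helper names): a unitary vector has Fourier coefficients of magnitude 1, so convolving it with itself preserves its length, and successive self-convolution of a random unitary Base yields as many position indices as the list needs.

```python
import numpy as np

rng = np.random.default_rng(7)
D = 128

def cconv(a, b):
    """Circular convolution: the binding operator written ⊗ in the text."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def random_unitary(d=D):
    """A vector whose Fourier coefficients all have magnitude 1, so convolving
    it with itself does not change its length."""
    phases = rng.uniform(0, 2 * np.pi, d // 2 + 1)
    phases[0] = 0.0
    if d % 2 == 0:
        phases[-1] = 0.0          # Nyquist coefficient must be real
    return np.fft.irfft(np.exp(1j * phases), n=d)

Base = random_unitary()

# position indices: P1 = Base, P(n+1) = P(n) ⊗ Base, as many as the list needs
positions = [Base]
for _ in range(4):
    positions.append(cconv(positions[-1], Base))

for k, P in enumerate(positions, start=1):
    print(f"|P{k}| = {np.linalg.norm(P):.3f}")   # lengths stay ~1 because Base is unitary
```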
  16. The storage and recall states of the network are common to many tasks. For the WM task, these states occur immediately one after the other, although the delay is task-dependent. Initially, seeing the task identifier (A3) switches Spaun into the storage state. In the storage state, the network compresses the incoming image into a visually based firing pattern (FP in the figure) that encodes visual features, maps that firing pattern to another firing pattern that represents the related concept (e.g., "TWO"), and then compresses that firing pattern into a memory trace that is stored in WM. The compression operator (i.e., "⊗") binds the concept firing pattern (e.g., TWO) to a position representation (e.g., P3) and adds the result (i.e., TWO ⊗ P3, as in Fig. 2C) to WM. As shown in Fig. 2C, this process is repeated as long as items are shown to the model. Figure 2B shows a screen capture from a movie of the WM simulation. When the model sees the "?" input (as in Fig. 2B), the basal ganglia reroute cortical connectivity to allow Spaun to recall the input stored in the dorsolateral prefrontal cortex (DLPFC). Recall consists of decompressing an item from the stored representation of the full list, mapping the resulting concept vector to a known high-level motor command, and then decompressing that motor command into specific joint torques to move the arm. This process is repeated for each position in WM to generate Spaun's full written response. Figure 2C shows the entire process unfolding over time, including spike rasters, conceptual decodings of the contents of DLPFC, and the input and output.
a) Information flow through Spaun during the WM task. Line style and color indicate the element of the functional architecture in Fig. 1B responsible for that function. FP, firing pattern. b) A screen capture from the simulation movie of this task, taken at the 2.5-s mark of the time-course plot in (C). The input image is on the right; the output is drawn on the surface below the arm. Spatially organized (neurons with similar tuning are near one another), low-pass-filtered neuron activity is approximately mapped to relevant cortical areas and shown in color (red is high activity, blue is low). Thought bubbles show spike trains, and the results of decoding those spikes are in the overlaid text. For Str, the thought bubble shows decoded utilities of possible actions, and in GPi the selected action is darkest.
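The storage and recall loop can be sketched with plain vectors as well (repeating the helper functions so the snippet stands alone; the vocabulary, dimensionality, and list contents are illustrative). Storage sums Item ⊗ Position bindings into a memory trace; recall unbinds each position with the approximate inverse and "cleans up" the noisy result against the vocabulary.

```python
import numpy as np

rng = np.random.default_rng(8)
D = 256

def cconv(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):
    """Approximate inverse used for unbinding (decompression)."""
    return np.concatenate(([a[0]], a[1:][::-1]))

def random_unitary(d=D):
    phases = rng.uniform(0, 2 * np.pi, d // 2 + 1)
    phases[0] = 0.0
    if d % 2 == 0:
        phases[-1] = 0.0
    return np.fft.irfft(np.exp(1j * phases), n=d)

def unit(v):
    return v / np.linalg.norm(v)

# a small vocabulary of concept semantic pointers (digits "0".."9")
vocab = {str(d): unit(rng.normal(size=D)) for d in range(10)}

# position pointers generated from a unitary Base vector
Base = random_unitary()
P = [Base]
for _ in range(5):
    P.append(cconv(P[-1], Base))

# storage: memory trace = sum over items of Item ⊗ Position (e.g. TWO ⊗ P3)
shown = ["4", "7", "2", "9"]
memory = np.zeros(D)
for k, item in enumerate(shown):
    memory += cconv(vocab[item], P[k])

# recall: unbind each position, then clean up against the vocabulary
recalled = []
for k in range(len(shown)):
    noisy = cconv(memory, inv(P[k]))
    recalled.append(max(vocab, key=lambda name: vocab[name] @ noisy))

print("shown:   ", shown)
print("recalled:", recalled)
```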
  17. Time course of a single run of the serial WM task. The stimulus row shows input images. The arm row shows digits drawn by Spaun. Other rows are labeled by their anatomical area. Similarity plots (solid colored lines) show the dot product (i.e., similarity) between the decoded representation from the spike raster plot and concepts in Spaun’s vocabulary. These plots provide a conceptual decoding of the spiking activity, but this decoding is not used by the model. Raster plots in this figure are generated by randomly selecting 2000 neurons from the relevant population and discarding any neurons with a variance of less than 10% over the run. ⊗ denotes the convolution compression operator.
  18. Fig. 3A demonstrates that the low-level perceptual features in the input are available to Spaun to drive its motor behavior. Figure 3B demonstrates the RPM task for one sample pattern. In this task, Spaun is presented with two groups of three related items and must learn the relation between items in the groups. Spaun then uses its inferred relation to complete the pattern of a third set of items. Similarity plots for the DLPFC show conceptual decodings of neural activities. The model learns the relation between subsequent strings of numbers by comparing patterns in DLPFC1 and DLPFC2. Human participants average 89% correct (chance is 13%) on the matrices that include only an induction rule (5 of 36 matrices); Spaun performs similarly, achieving a match-adjusted success rate of 88%.
  19. To demonstrate that Spaun captures general psychological features of behavior, it is critical to be able to simulate populations of participants. Every time a specific instance of Spaun is generated, the parameters of the neurons are picked from random distributions. Consequently, generating many instances allows for comparison with population-wide behavioral data. Figure 4 compares the recall accuracy of the model as a function of list length and position in a serial recall task to human population data. As with human data, Spaun produces distinct recency (items at the end are recalled with greater accuracy) and primacy (items at the beginning are recalled with greater accuracy) effects. A good match to human data from a rapid serial-memory task using digits and short presentation times is also evident, with 17 of 22 human mean values within the 95% confidence interval of 40 instances of the model.
Fig. 4. Population-level behavioral data for the WM task. Accuracy is shown as a function of position and list length for the serial WM task. Error bars are 95% confidence intervals.
  20. Reward evaluation: this subsystem determines whether the current input in the current context has an associated reward. In Spaun this means that, during the reinforcement learning task, if a '1' is shown after a guess, a positive reward signal is generated and sent to the ventral striatum and subsequently, if the reward is unpredicted, to the dopamine system in the basal ganglia. All further reward processing is done in the basal ganglia.
Figure S5a shows a single run on a reinforcement learning task. This demonstrates the behavioral flexibility of the model, as it solves a three-armed bandit task by adjusting its behavior given changing contingent rewards. In other work, we have shown that the detailed spike patterns found in the striatum of this basal ganglia model match those found in rats performing this same task (14). As well, figure S5b shows many more trials than figure S5a, which helps to demonstrate that changing contingencies are indeed learned by the model. Over 60 trials, each of the arms becomes the most highly rewarded for a period of time, and the model's choice probability tracks those changes.
Choice probability in a three-armed bandit task in Spaun over 60 trials: every 20 trials, the probability of reward for the three choices changes, as indicated at the top of the graph (e.g., 0:0.12 indicates that choice 0 has a 12% chance of being rewarded). The probability of choosing each action is shown by the continuous lines; these probabilities are generated over a 5-trial window. Reward delivery during the run is indicated by the 'x' marks along the top of the graph. Note that Spaun learns to vary its selected choice as appropriate for the changing environmental reward contingencies.
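A toy version of this three-armed bandit setup (not Spaun's neural circuitry): a simple delta-rule value learner with softmax action selection faces reward probabilities that switch every 20 trials, and choice probabilities are estimated over a 5-trial window as in the figure. The reward schedule below is illustrative; only the 12% figure comes from the text.

```python
import numpy as np

rng = np.random.default_rng(9)

n_arms, n_trials = 3, 60
# reward probabilities for the three choices, changing every 20 trials (illustrative values)
schedules = [(0.12, 0.72, 0.12), (0.72, 0.12, 0.12), (0.12, 0.12, 0.72)]

Q = np.zeros(n_arms)           # estimated value of each choice
alpha, temp = 0.3, 0.2         # learning rate and softmax temperature
choices = []

for trial in range(n_trials):
    p_reward = schedules[trial // 20]
    # softmax action selection over the current value estimates
    prefs = np.exp(Q / temp)
    action = rng.choice(n_arms, p=prefs / prefs.sum())
    reward = float(rng.random() < p_reward[action])
    # prediction-error (delta-rule) update of the chosen arm's value
    Q[action] += alpha * (reward - Q[action])
    choices.append(int(action))

# choice probability over a 5-trial sliding window, as in the figure
window = 5
probs = []
for t in range(n_trials):
    recent = choices[max(0, t - window + 1): t + 1]
    probs.append([recent.count(a) / len(recent) for a in range(n_arms)])
probs = np.array(probs)

for start in range(0, n_trials, 20):
    block = np.round(probs[start:start + 20].mean(axis=0), 2)
    print(f"trials {start}-{start + 19}: mean windowed choice probabilities {block}")
```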
  21. Figure S8a shows results from the counting task. Specifically, it shows the length of time required by the model to produce a response, as a function of the number of positions counted. The model reproduces the expected linear relationship between subvocal counting and response times. Spaun's count time per item (419 ± 10 ms) lies within the human measured range of 344 ± 135 ms for subvocal counting (47), although the variance is lower. This is likely a result of the relative simplicity of the model. Interestingly, the model reproduces the well-known psychophysical regularity called Weber's law (i.e., that the variance in response time increases with the mean response time (48)), typically evident in such tasks. We suspect that this feature of the model is present because, despite not adding noise to the simulation, it is highly stochastic because of the many nonlinearities present. A stochastic system will tend to perform a random walk, which diffuses over time, generating a higher variance after longer temporal delays. Figure S8b shows the accuracy rates of the model on the question answering task for lists of length 7. Cognitive modelers have long used the paradigm of question answering to evaluate knowledge representation – a model's ability to flexibly represent and access structured information (e.g., (49)). Spaun is able to perform this task, but human data for this specific task is not available. Consequently, Spaun produces the behavioral prediction that, while primacy and recency effects will be evident, the type of question asked will not affect accuracy.
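The random-walk explanation of Weber's law can be illustrated directly: if each counted item contributes one noisy step (step mean taken from the 419-ms figure above, step spread an arbitrary illustrative value), then both the mean and the spread of the simulated response times grow with the number of items. Purely illustrative; this is not Spaun's mechanism.

```python
import numpy as np

rng = np.random.default_rng(10)

mean_step, step_sd, n_runs = 0.419, 0.05, 2000   # ~419 ms per counted item (from the text)

for n_items in (2, 4, 6, 8):
    # each counted item adds one noisy step; the totals diffuse like a random walk
    steps = rng.normal(mean_step, step_sd, size=(n_runs, n_items))
    totals = steps.sum(axis=1)
    print(f"{n_items} items: mean RT {totals.mean():.2f} s, sd {totals.std():.3f} s")
```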
  22. Figure S7: time-course plots for rapid variable creation. All graphs are plotted as described in Figure 2. Color is used to distinguish conceptual decodings (also labelled). Both graphs provide examples of Spaun learning how to complete a syntactic pattern given input/output examples. The examples are stored across DLPFC1 and DLPFC2, and the pattern is learned by comparing these in VLPFC. a) A simple syntactic pattern where the last of two items is the variable item. b) A more complex pattern, where the variable item is second last in a string of four items. Figure S7 shows the rapid variable creation task for two example runs; this task was identified as one which no contemporary neural model could perform as quickly as humans (i.e., within 2 seconds) (20). Spaun provides an answer after 150 ms of simulated time, for a variety of patterns.
  23. An additional example of fluid reasoning in Spaun. The time course of Spaun’s activity while inferring the pattern provided in the input images. This figure is plotted using the same methods as in Figure 2 in the main text. Color is used to distinguish conceptual decodings (also labelled). Spaun is able to complete the final set of three inputs by having learned the appropriate transformation from the first two sets. It learns this transformation by comparing the appropriate elements of DLPFC1 and DLPFC2.