Habilitation lecture: perception and concepts

Perception and the concept of color

  • Thus, neurons are simulated in a “clock-driven” fashion whereas synapses are simulated in an “event-driven” fashion.

    As a first step toward cognitive computation, an interesting question is whether one can simulate a mammalian-scale cortical model in near real-time on an existing computer system. What are the memory, computation, and communication costs of such a simulation?

    Memory: To achieve near real-time simulation, the state of all neurons and synapses must fit in the random-access memory of the system. Since synapses far outnumber neurons, the total available memory divided by the number of bytes per synapse limits the number of synapses that can be modeled. We need to store state for 448 billion synapses and 55 million neurons, the latter being negligible in comparison to the former.

    Communication: Let us assume that, on average, each neuron fires once a second. Each neuron connects to 8,000 other neurons, and hence each neuron would generate 8,000 spikes (“messages”) per second. This amounts to a total of 448 billion messages per second.

    Computation: Under the same firing-rate assumption, each synapse would on average be activated twice: once when its pre-synaptic neuron fires and once when its post-synaptic neuron fires. This amounts to 896 billion synaptic updates per second. Let us assume that the state of each neuron is updated every millisecond; this amounts to 55 billion neuronal updates per second. Once again, synapses dominate the computational cost.

    The key observation is that synapses dominate all three costs!

    Let us now take a state-of-the-art supercomputer, BlueGene/L, with 32,768 processors, 256 megabytes of memory per processor (a total of 8 terabytes), and 1.05 gigabytes per second of in/out communication bandwidth per node. To meet the above three constraints, if one can design data structures and algorithms that require no more than 16 bytes of storage per synapse, 175 floating-point operations per synapse per second, and 66 bytes per spike message, then one can hope for a rat-scale, near real-time simulation. Can such a software infrastructure be put together? This is exactly the challenge that our paper addresses.

    Specifically, we have designed and implemented a massively parallel cortical simulator, C2, designed to run on distributed-memory multiprocessors, which incorporates several algorithmic enhancements: (a) a computationally efficient way to simulate neurons in a clock-driven ("synchronous") and synapses in an event-driven ("asynchronous") fashion; (b) a memory-efficient representation to compactly represent the state of the simulation; (c) a communication-efficient way to minimize the number of messages sent, by aggregating them in several ways and by mapping message exchanges between processors onto judiciously chosen MPI primitives for synchronization.

    Furthermore, the simulator incorporates (a) carefully selected, computationally efficient models of phenomenological spiking neurons from the literature; (b) carefully selected models of spike-timing-dependent synaptic plasticity for synaptic updates; (c) axonal delays; (d) 80% excitatory and 20% inhibitory neurons; and (e) a particular random graph of neuronal interconnectivity.
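The back-of-envelope budgets above can be reproduced with a few lines of arithmetic. The sketch below (plain Python; all numbers taken from the text) computes the three costs and checks the memory budget against the 8 TB BlueGene/L configuration.

```python
# Back-of-envelope resource estimates for a rat-scale cortical simulation,
# using the figures quoted in the text: 448 billion synapses, 55 million
# neurons, a 1 Hz mean firing rate, and a 1 ms neuronal update step.

NEURONS = 55e6
SYNAPSES = 448e9
FIRING_RATE_HZ = 1.0
BYTES_PER_SYNAPSE = 16    # target storage budget per synapse
TOTAL_RAM_BYTES = 8e12    # 32,768 processors x 256 MB = 8 TB

# Memory: dominated by synaptic state (neuronal state is negligible).
memory_bytes = SYNAPSES * BYTES_PER_SYNAPSE            # ~7.2 TB
assert memory_bytes <= TOTAL_RAM_BYTES                  # fits in 8 TB of RAM

# Communication: every presynaptic spike delivers one message per synapse,
# so total messages/s = (number of synapses) x (mean firing rate).
messages_per_s = SYNAPSES * FIRING_RATE_HZ             # 448 billion/s

# Computation: each synapse is touched twice per second (once for the
# pre-synaptic and once for the post-synaptic spike); each neuron is
# updated every millisecond.
synaptic_updates_per_s = 2 * SYNAPSES * FIRING_RATE_HZ  # 896 billion/s
neuronal_updates_per_s = NEURONS * 1000                 # 55 billion/s

print(f"memory           : {memory_bytes / 1e12:.2f} TB")
print(f"spike messages   : {messages_per_s / 1e9:.0f} billion/s")
print(f"synaptic updates : {synaptic_updates_per_s / 1e9:.0f} billion/s")
print(f"neuronal updates : {neuronal_updates_per_s / 1e9:.0f} billion/s")
```

Note how the 16-byte-per-synapse budget leaves the 7.2 TB of synaptic state just inside the machine's 8 TB of memory, which is why synapses, not neurons, drive all three constraints.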
  • Izhikevich (2004) model:

    v' = 0.04 v^2 + 5v + 140 - u + I
    u' = a(bv - u)

    if (v > 30 mV): v <- c, u <- u + d

    STDP model:

    Causal: if a pre-synaptic neuron fires and then the post-synaptic neuron fires, the synaptic weight is increased (LTP).
    Anti-causal: if a post-synaptic neuron fires and then the pre-synaptic neuron fires, the synaptic weight is decreased (LTD).

    This is a LOCAL RULE that implements Hebbian learning.

    Specific stimulus: 10% of the neurons are stimulated with an "edge" every half second. Spontaneous aperiodic bursty patterns emerge in the firing rates, and neuronal groups form chains of activation.

    What aspects of the brain does the model include?
    The model reproduces a number of physiological and anatomical features of the mammalian brain. The key functional elements of the brain, the neurons and the connections between them (synapses), are simulated using biologically derived models. The neuron models include such key functional features as input integration, spike generation, and firing-rate adaptation, while the simulated synapses reproduce the time- and voltage-dependent dynamics of four major synaptic channel types found in cortex. Furthermore, the synapses are plastic, meaning that the strength of connections between neurons can change according to certain rules, which many neuroscientists believe is crucial to learning and memory formation.
    At an anatomical level, the model includes sections of cortex, a dense body of connected neurons where much of the brain's high-level processing occurs, as well as the thalamus, an important relay center that mediates communication to and from cortex.
    Much of the connectivity within the model follows a statistical map derived from the most detailed study to date of the circuitry within the cat cerebral cortex.

    What do the simulations demonstrate?
    We are able to observe activity in our model at many scales, ranging from global electrical activity levels, to activity levels in specific populations, to topographic activity dynamics, to individual neuronal membrane potentials. In these measurements, we have observed the model reproduce activity in cortex measured by neuroscientists using the corresponding techniques: electroencephalography, local field potential recordings, optical imaging with voltage-sensitive dyes, and intracellular recordings. Specifically, we were able to deliver a stimulus to the model and then watch as it propagated within and between different populations of neurons. We found that this propagation showed a spatiotemporal pattern remarkably similar to what has been observed in experiments with real brains. In other simulations, we also observed oscillations between active and quiet periods, as is often observed in the brain during sleep or quiet waking. In all our simulations, we are able to record simultaneously from billions of individual model components, compared to cutting-edge neuroscience techniques that might allow simultaneous recording from a few hundred brain regions, thus providing us with an unprecedented picture of circuit dynamics.

    What will it take to achieve human-scale cortical simulations?
    Before discussing this question, we must agree on the complexity of the neurons and synapses to be simulated; let us fix these two as described in our SC07 paper. The human cortex has about 22 billion neurons, which is roughly a factor of 400 larger than our rat-scale model with its 55 million neurons. We used a BlueGene/L with 92 TF and 8 TB to carry out rat-scale simulations in near real-time.
    So, by naïve extrapolation, one would require a machine with at least a computational capacity of 36.8 PF and a memory capacity of 3.2 PB. Furthermore, assuming that there are 8,000 synapses per neuron, that neurons fire at an average rate of 1 Hz, and that each spike message can be communicated in, say, 66 bytes, one would need an aggregate communication bandwidth of roughly 2 PB/s. Thus, even at the fixed complexity of synapses and neurons that we have used, scaling cortical simulations to these levels will require tremendous advances along all three metrics: memory, communication, and computation. Furthermore, power consumption and space requirements will become major technological obstacles that must be overcome. Finally, as the complexity of synapses and neurons is increased many-fold, even more resources will be required. Inevitably, along with advances in hardware, significant further innovation in software infrastructure will be required to use the available hardware resources effectively.
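For concreteness, the Izhikevich dynamics quoted above can be integrated in a few lines. The sketch below is a single-neuron illustration, not C2 simulator code: the forward-Euler step of 1 ms, the "regular spiking" parameter set (a=0.02, b=0.2, c=-65, d=8), and the constant input current are my assumptions for demonstration.

```python
# Minimal single-neuron integration of the Izhikevich model quoted above:
#   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
#   with reset v <- c, u <- u + d whenever v crosses 30 mV.
# The parameter values are the commonly used "regular spiking" set; the
# constant drive I is an illustrative choice, not a value from the paper.

def simulate_izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0,
                        I=10.0, t_ms=1000, dt=1.0):
    """Return spike times (ms) for one neuron driven by constant current I."""
    v, u = c, b * c              # start at the reset point
    spikes = []
    for step in range(int(t_ms / dt)):
        # Forward-Euler update of membrane potential and recovery variable.
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: record the time, then reset
            spikes.append(step * dt)
            v = c
            u += d
    return spikes

if __name__ == "__main__":
    spikes = simulate_izhikevich()
    print(f"{len(spikes)} spikes in 1 s of simulated time")
```

With a sufficiently strong constant drive the neuron fires tonically, while with I = 0 it settles to rest and stays silent; this cheap two-variable model is what makes simulating tens of millions of neurons at a 1 ms clock tick affordable.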
  • What is the goal of the DARPA SyNAPSE project?
    The goal of the DARPA SyNAPSE program is to create new electronics hardware and architecture that can understand, adapt, and respond to an informative environment in ways that extend traditional computation to include fundamentally different capabilities found in biological brains.

    Who is on your SyNAPSE team?
    Stanford University: Brian A. Wandell, H.-S. Philip Wong
    Cornell University: Rajit Manohar
    Columbia University Medical Center: Stefano Fusi
    University of Wisconsin-Madison: Giulio Tononi
    University of California-Merced: Christopher Kello
    IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Stuart Parkin, Bipin Rajendran, Raghavendra Singh
  • Habilitation lecture: perception and concepts

    1. Dariusz Plewczynski, ICM, Uniwersytet Warszawski, D.Plewczynski@icm.edu.pl
    2. How do we see colors? From perception to the building of concepts. Dariusz Plewczynski, ICM, Uniwersytet Warszawski, D.Plewczynski@icm.edu.pl
    3. What we will talk about...
       • Perception
       • Concept: a basic cognitive structure representing a class of objects (things, events, actions, features, relations) that are similar to one another in some respect. Concept formation is accompanied by processes of abstraction and generalization.
       • Concepts are expressed by socially established linguistic expressions. We form them from early childhood, often unwittingly.
       • Prof. Chlewiński: we are born with a genetically determined conceptual competence.
    4. NEUROSCIENCE
    5. How does the brain work? P. Latham, P. Dayan
    6. Neurocybernetics. Polish definition: a branch of biocybernetics concerned with the analysis and modeling of information processing and control in the nervous systems of animals and humans. Its main lines of work include: establishing and mathematically describing the properties of the neuron, the analysis of perception, the study and modeling of learning processes, the study of neural networks and the hierarchical organization of the nervous system, and the analysis of motor control systems. http://en.wikipedia.org/wiki/Neurocybernetics
    7. HUMAN PERCEPTION
    8. The perception of colors
       • Color as such does not physically exist.
       • There are many theories of color perception, including the trichromatic theory (Young-Helmholtz: red, green, blue) and the opponent-process theory (Hering: red/green and yellow/blue).
       • Intersubjectivity of the process.
       • Disorders of color vision.
       • Does language determine perception? Determinism vs. universalism
    9. Receptors: cones and rods. Cones are the light-sensitive receptors of the retina that enable color vision in good lighting; this is photopic vision. Rods are light-sensitive receptors of the retina responsible for the perception of shape and motion; they enable black-and-white vision in dim light, which is scotopic vision. [Figure: relative light absorption of the cones (S, M, L) and rods in the human eye; the wavelength scale is not linear.] http://pl.wikipedia.org/wiki/Czopki
    10. Color receptors. The human eye contains three types of cones, each with a different spectral characteristic, i.e., each responds to light from a different range of colors. The first type responds mainly to red light (ca. 700 nm), the second to green light (ca. 530 nm), and the third to blue light (ca. 420 nm). The impulses generated by light in the rods and cones are sent to the brain via bipolar cells and ganglion cells, and also directly through their own axons. http://pl.wikipedia.org/wiki/Czopki
    11. A PERCEPTUAL TASK
    12. Visual task: which of the two images has the higher contrast (Gabor patterns)? Models of decision-making, information aggregation, and information sharing in pairs, for a decision task based on stimulus discrimination. [Figure: two-interval contrast-discrimination paradigm, panels A and B; after each joint decision, both participants were informed of the correct choice.] Bahrami, Olsen, Latham, Roepstorff, Rees, Frith. Science, 2010
    13. The psychometric function: the probability of choosing the 2nd interval, plotted against the contrast difference and fitted with a cumulative Gaussian of width σ. Its maximum slope (the derivative at zero), s = 1/(σ√(2π)), is the measure of efficiency; the error rate grows with σ. Bahrami, Olsen, Latham, Roepstorff, Rees, Frith. Science, 2010
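The cumulative-Gaussian psychometric function and its slope-based sensitivity measure can be sketched directly. A minimal illustration, assuming the standard cumulative-Gaussian form described on the slide (function names are mine):

```python
import math

def psychometric(dc, sigma):
    """P(choose 2nd interval) for contrast difference dc: a cumulative Gaussian."""
    return 0.5 * (1.0 + math.erf(dc / (sigma * math.sqrt(2.0))))

def sensitivity(sigma):
    """Maximum slope of the curve, s = 1/(sigma*sqrt(2*pi)): the efficiency measure."""
    return 1.0 / (sigma * math.sqrt(2.0 * math.pi))

# At zero contrast difference the observer is at chance; a smaller sigma
# (lower error rate) gives a steeper curve, i.e., higher sensitivity.
assert abs(psychometric(0.0, 1.0) - 0.5) < 1e-12
assert sensitivity(0.5) > sensitivity(1.0)
```

The slope at the midpoint is what the dyad models on the next slides compare between individuals and pairs.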
    14. Decisions in pairs. Can any measurable advantage be observed when a visual stimulus is viewed by more than one person? Bahrami, Olsen, Latham, Roepstorff, Rees, Frith. Science, 2010
    15. Bahrami's results. Predicted dyad sensitivities (Eqs. 1 to 4 of the paper): Best Decides (behavior-and-feedback, BF): s_dyad = max(s1, s2); Weighted Confidence Sharing (WCS): s_dyad = (s1 + s2)/√2; Direct Signal Sharing (DSS): s_dyad = √(s1² + s2²). With no communication between the participants, dyad sensitivity matched the BF model [t(13) = 0.18, p = 0.85, paired t test] and was significantly lower than the bound predicted by the WCS model. With communication but no feedback (the participants were never told the correct answer), dyads nevertheless achieved a significant collaboration benefit [t(10) = 2.68, p = 0.022, paired t test], and dyad sensitivity was statistically indistinguishable from the WCS prediction [t(10) = 1.16, p = 0.27, paired t test]. These findings indicate that objective feedback was not necessary, and that communication of confidence alone was sufficient for achieving a collective benefit. [Figure: experimental paradigm and example psychometric functions; s_min and s_max denote the less and more sensitive observer, s_dyad the dyad; N = 15 dyads.] Bahrami, Olsen, Latham, Roepstorff, Rees, Frith. Science, 2010
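The three model predictions on the slide above are simple closed forms, so they can be compared directly. A minimal sketch (function names are mine; the formulas are those quoted on the slide):

```python
import math

def best_decides(s1, s2):
    """BF model: the more sensitive member effectively decides alone."""
    return max(s1, s2)

def weighted_confidence_sharing(s1, s2):
    """WCS model: members share confidence; the dyad gains over the better member."""
    return (s1 + s2) / math.sqrt(2.0)

def direct_signal_sharing(s1, s2):
    """DSS model: ideal fusion of the raw internal signals."""
    return math.sqrt(s1 ** 2 + s2 ** 2)

# For equally sensitive members, WCS predicts a sqrt(2) benefit over either alone,
# and the three models are strictly ordered BF < WCS <= DSS.
s1 = s2 = 1.0
assert abs(weighted_confidence_sharing(s1, s2) - math.sqrt(2.0)) < 1e-12
assert best_decides(s1, s2) < weighted_confidence_sharing(s1, s2) <= direct_signal_sharing(s1, s2)
```

Note that when the two sensitivities differ greatly, WCS can fall below BF, which is why dyads with very unequal members gain little from collaboration.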
    16. Why must we model in populations? The discrimination of stimuli and the aggregation of information (even the simplest, perceptual kind) change when perception and the assignment, the categorization, of a stimulus are carried out by more than one person. So we need populations: that is, multi-agent modeling, in which the integration of information and the description of reality take place within a group, in a distributed way.
    17. THE CASE OF COLORS
    18. Color perception. The process of semantic categorization of concepts in color space: analysis of worldwide data and population modeling
    19. Three conceptions of language. Nativism (Chomsky & Fodor): the structure of language, its general construct, is innate. The set of linguistic categories, both concepts and grammars, is shared by all people from birth. Learning a language consists in filling pre-existing structures with actual linguistic forms. We take over the structure of language from the people around us.
    20. Three conceptions of language. Empiricism: the mechanism of language acquisition is shared across communities, but the shaping of its structures itself reflects the reality that surrounds us. Functionalism. Culturalism: beyond the language-forming influence of the environment, cultural consensus strongly shapes the structure of language and the set of concepts. We take over the structure of language from the people around us.
    21. World Color Survey (WCS) measurements. The stimulus set of Lenneberg and Roberts (1956), consisting of 320 Munsell chips of 40 equally spaced hues at eight lightness levels (Value), at maximum saturation (Chroma) for each (Hue, Value) pair, was supplemented by nine achromatic Munsell chips (black through gray to white). First, without the stimulus array present, the basic color terms of the collaborator's native language were elicited: the smallest number of simple words with which the speaker could name any color. Then two tasks were performed: in the naming task, for each color term t the collaborator marked on a clear acetate overlay all the chips that he or she could call t; in the focus task, the collaborator indicated the best example(s) of each basic color term t. Category boundaries showed great variability, perhaps because of the vagueness of the naming instruction; the focal choices were much more clustered, leading to the conclusion that "[1] the referents for the basic color terms of all languages appear to be drawn from a set of eleven universal perceptual categories, and [2] these categories become encoded in the history of a given language in a partially fixed order" (Berlin and Kay 1969: 4f). The assignment of names to fields of the Munsell palette was studied; the CIE L*a*b color space was used; 110 non-literate cultures from around the world were surveyed. [Figure 1a: the WCS stimulus array.] Berlin & Kay. Basic Color Terms: Their Universality and Evolution. Berkeley and Los Angeles: University of California Press, 1969.
    22. Analysis of the data collected by the Berkeley researchers. [Figure: three bar charts showing the modal share and the standard deviation for green, yellow, red, and blue.] Berlin & Kay, 1969
    23. Yaminahua (WCS language 107): 1. fiso (FI), 2. oxo (OX), 3. oshin (OS), 4. chaxta (CH), 5. dada (DA). Berlin & Kay, 1969
    24. World Color Survey (WCS): a hierarchy was shown across the vocabularies of the individual languages, an argument for the universalist position.
       All languages contain terms for black and white.
       If a language contains three terms, then it also contains a term for red.
       If a language contains four terms, then it also contains a term for either green or yellow (but not both).
       If a language contains five terms, then it contains terms for both green and yellow.
       If a language contains six terms, then it also contains a term for blue.
       If a language contains seven terms, then it also contains a term for brown.
       If a language contains eight or more terms, then it contains a term for purple, pink, orange, and/or grey.
       Berlin & Kay, 1969
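The implicational hierarchy above can be encoded as a lookup: given the size of a language's basic-term inventory, which terms are guaranteed to be present. A hypothetical helper (the stage table simply transcribes the statements on the slide):

```python
def required_terms(n_terms):
    """Terms guaranteed by an inventory of n_terms basic color terms.

    At four terms the hierarchy guarantees green OR yellow but does not
    determine which, so neither is listed as required until five terms.
    """
    required = {"black", "white"}          # every language
    if n_terms >= 3:
        required |= {"red"}
    if n_terms >= 5:
        required |= {"green", "yellow"}
    if n_terms >= 6:
        required |= {"blue"}
    if n_terms >= 7:
        required |= {"brown"}
    return required

assert required_terms(2) == {"black", "white"}
assert "blue" in required_terms(6) and "blue" not in required_terms(5)
```

The eight-or-more stage (purple, pink, orange, grey, in any combination) is deliberately omitted, since the hierarchy does not guarantee any single one of them.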
    25. Three conceptions of language. Does the external environment influence the categorization process? Generally not, but... subtle effects are possible. Is there a relationship between the naming of colors in a given language and the intensity of colors in photographs depicting the corresponding environment*? (* generalized to one of 13 biomes)
    26. Universalism. Motifs: the surveyed informants were clustered (k-means). [Figure: (A) the WCS color chart, arranged by Munsell hue (horizontal) and value (vertical), with 10 neutral samples in the leftmost column; (B) concordance maps of the 11 color terms (RED, GREEN, YELLOW/ORANGE, BLUE, WHITE, GRAY, BLACK, PINK, PURPLE, BROWN, GRUE); (C) concordance maps of the color-naming systems (motifs) for K = 1 to 9 clusters, with Roman numerals indicating the corresponding stages from Berlin and Kay (parentheses) or Kay and Maffi (brackets). At K = 4, 614 informants used the Green/Blue motif, 1,063 the Grue motif, 313 the Gray motif, and 377 the Dark motif.] Lindsey, Brown. PNAS, 2009
    27. Universalism. The color-naming motifs group informants from different, unrelated language families. In most languages the basic motifs GBP, Grue, Gray, and Dark occur. [Figure: motif composition per WCS language, e.g., 43. Gunu, Cameroon; 86. Shipibo, Peru; 88. Slavé, Canada; 103. Walpiri, Australia.] Lindsey, Brown. PNAS, 2009
    28. COLOR NAMING
    29. Modeling the emergence of categories in color space and of the assignment of a name to a given category. The spectrum and intensity of colors are presented as 1269 (or 330) Munsell chips; the stimulus is transformed S(λ) → {L, a, b}. Reactive (centering) units:
       z_j(x) = exp( -1/2 Σ_{i=1}^{N} ((x_i - m_ij)/σ)² )
       Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
    30. Categorization of the perceptual stimulus. A color category is defined by an adaptive network:
       y_k(x) = Σ_{j=1}^{J} w_j z_j(x)
       i.e., cognitive perceptrons. Each agent holds a set of categories and, during the discrimination game, selects the category that responds most strongly to the stimulus, argmax_k y_k(x). Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
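The two formulas on slides 29 and 30 (Gaussian reactive units and the weighted-sum category response with argmax selection) fit in a few lines. A minimal sketch; the σ value and the toy category centers are illustrative assumptions, not parameters from the model:

```python
import math

def reactive_unit(x, m, sigma=0.1):
    """z_j(x) = exp(-1/2 * sum_i ((x_i - m_ij)/sigma)^2): a Gaussian centred at m."""
    return math.exp(-0.5 * sum(((xi - mi) / sigma) ** 2 for xi, mi in zip(x, m)))

def category_response(x, centers, weights, sigma=0.1):
    """y_k(x) = sum_j w_j z_j(x): weighted sum over the category's reactive units."""
    return sum(w * reactive_unit(x, m, sigma) for w, m in zip(weights, centers))

def categorize(x, categories, sigma=0.1):
    """Pick the category with the strongest response: argmax_k y_k(x)."""
    return max(categories, key=lambda k: category_response(x, *categories[k], sigma))

# Hypothetical toy categories in a normalized {L, a, b}-like space.
categories = {
    "reddish":  ([(0.5, 0.8, 0.6)], [1.0]),   # (unit centers, weights)
    "greenish": ([(0.6, 0.2, 0.7)], [1.0]),
}
assert categorize((0.5, 0.78, 0.61), categories) == "reddish"
```

In the full model the centers m_ij and weights w_j are adapted whenever the agent fails a discrimination, which is what makes the categories population-dependent.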
    31. The discrimination game: captures environmental functionalism. Algorithm:
       - a context O = {o1, ..., oN} is presented, together with a distinguished sample;
       - the most strongly responding category is selected, C_s = argmax_c(y_c);
       - discriminative success if: there exists a category returning the highest value for the distinguished sample (if not, the categories are modified), and no other sample in the context scores equally high.
       Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
    32. The guessing game: the process of negotiating a shared name space within the population; captures cultural and environmental influence. There is a speaker and a hearer. Algorithm:
       - a context O = {o1, ..., oN} is presented to the speaker and the hearer;
       - the distinguished sample is presented to the speaker;
       - the speaker plays the discrimination game; if it succeeds, the game continues, otherwise the guessing game is aborted;
       - on success, the winning category C_s is selected.
       Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
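A round of the guessing game can likewise be sketched end to end. This is a simplified assumption-laden sketch: the lexicons are flat category-to-word maps (the full model uses weighted associations and adapts them after each game), categories are 1-D Gaussian bumps, and all names are invented for illustration:

```python
import math

def score(x, center):
    """Hypothetical category response: a 1-D Gaussian bump around 'center'."""
    return math.exp(-((x - center) ** 2) / 0.02)

def guessing_game(context, topic_idx, speaker_lex, hearer_lex):
    """One round: the speaker names its winning category; the hearer points.

    speaker_lex: category center -> word; hearer_lex: word -> category center.
    Returns True iff the hearer points at the speaker's topic.
    """
    topic = context[topic_idx]
    # Speaker first plays the discrimination game on the topic.
    s_center = max(speaker_lex, key=lambda c: score(topic, c))
    word = speaker_lex[s_center]
    if word not in hearer_lex:    # unknown word: the game fails and the hearer
        return False              # would adopt the word in the full model
    h_center = hearer_lex[word]
    guess = max(range(len(context)), key=lambda i: score(context[i], h_center))
    return guess == topic_idx     # on success the winning category C_s is reinforced

# Roughly aligned lexicons succeed on a discriminable context.
speaker = {0.2: "wira", 0.8: "bolo"}
hearer = {"wira": 0.25, "bolo": 0.75}
assert guessing_game([0.22, 0.78], 0, speaker, hearer) is True
```

Repeated over many random pairs, this reinforcement loop is what drives the population toward a shared color vocabulary.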
    33. Statistics describing the semiotics: the degree to which concepts are shared across the population, i.e., communicative success [green]; the ability to classify each of the presented colors, i.e., discriminative success [blue]; and the number of categories that emerge in the population. Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
    34. COGNITIVE PSYCHOLOGY
    35. The semiotic triad: concept, method, object, symbol. The semiotic triad links a symbol, an object, and the concept that can be assigned to the object. The method, in turn, is a procedure that makes it possible to decide whether the concept fits the object or not. Sometimes the method is a constraint on the use of the symbol, restricting it to the objects with which it is associated. L. Steels
    36. The semiotic triad: concept, method, object, symbol. The method constrains the use of the symbol to the objects with which it is associated: e.g., as a classifier, a percept, a template, or a recognition process that operates on sensorimotor data and decides whether the object fits the concept. If we can define such a method, we say that the symbol is embodied, grounded through the process of perception. L. Steels
    37. Semantic relations: concept, method, object, symbol (three linked triads). Semantic relations make it possible to move and navigate between concepts, objects, and symbols: objects occur in contexts (spatial or temporal relations); symbols co-occur with other symbols; concepts can stand in semantic relations to one another; methods can also be linked. L. Steels
    38. Semiotic networks. A semiotic network is a set of connections between objects, symbols, concepts, and their implementations, i.e., methods. Each person builds and maintains such a network, which is modified, extended, and reorganized every time the person thinks, perceives, or interacts with the external world and with other people. L. Steels
    39. Communication and adaptation. People navigate through their semiotic networks in order to communicate successfully: they travel through symbols in order to conceptualize the situation the subject finds itself in. Speaker and hearer must adapt, must align their communication systems at every level, during each act of communication between them: a continuous, progressive adaptation of the semiotic networks. The semiotic landscape: the set of all semiotic networks across an entire population of interacting persons, or agents. "When a speaker wants to draw the attention of an addressee to an object, he can use a concept whose method applies to the object, then choose the symbol associated with this concept and render it in speech or some other medium. The listener gets the symbol, uses his own vocabulary to retrieve the concept and hence the method, and applies the method to decide which object might be intended." L. Steels
    40. The grounded symbol. Searle (1980): will a robot be able to handle embodied concepts? Having a body that enters into relations with the external world, has a physical structure, senses and actuators, processes incoming signals, and recognizes patterns: all this connects reality with the world of symbols. "Is it possible to build an artificial system that has a body, sensors and actuators, signal and image processing and pattern recognition process, and information structures to store and use semiotic networks, and uses all that for communicating about the world or representing information about the world?" Searle, 1980
    41. Representations and meanings. The human world of representations is rich and serves many purposes at once. How are we to interpret and map the meanings that representations point to? "Meaning and representation are different things. We need a task, an environment, and an interaction between the agent and the environment which works towards an achievement of the task in order to see the emergence of meaning." L. Steels
    42. Symbol.
       • The intelligence of a chess player: the mind as a symbol system; the level of explanation: the abstract mental structures of the individual; the (pressing) need for symbol grounding.
       • The intelligence of a cockroach: dynamic attunement to the external world; non-representationalism; a science of coordination, based on synergetics; coordinated structures within/between individuals.
       J. Rączaszek-Leonardi
    43. Symbol and dynamics.
       • Many theories of the 1950s and 1960s: information theories in biology: von Neumann, Polanyi, Turing (?), Howard Pattee.
       • The necessity of the symbol: a transmissible structure that has a controlling function with respect to the dynamics.
         - Von Neumann '66: adaptive growth of complexity is impossible without self-description (self-reconstruction).
         - Pattee: processes of control and measurement require "something other" than a description in terms of physical laws.
    44. What next? Niels Bohr: "Predicting is very difficult, especially about the future..."
