Wyklad habilitacyjny: obliczenia poznawcze (Habilitation lecture: cognitive computing)

Slide notes
  • Cognitive psychology focuses on the study of higher mental functions, with particular emphasis on the ways in which people acquire knowledge and use it to shape and understand their experience in the world. This figure indicates the key foci of cognitive psychology.

Cognitive psychology is the school of psychology that examines internal mental processes such as problem solving, memory, and language. It had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, and in the work of Jean Piaget, who studied intellectual development in children.

Cognitive psychologists are interested in how people understand, diagnose, and solve problems, concerning themselves with the mental processes that mediate between stimulus and response. Cognitive theory contends that solutions to problems take the form of algorithms (rules that are not necessarily understood but promise a solution) or heuristics (rules that are understood but that do not always guarantee solutions). In other instances, solutions may be found through insight, a sudden awareness of relationships.
  • Thus, neurons are simulated in a "clock-driven" fashion, whereas synapses are simulated in an "event-driven" fashion.

As a first step toward cognitive computation, an interesting question is whether one can simulate a mammalian-scale cortical model in near real time on an existing computer system. What are the memory, computation, and communication costs of achieving such a simulation?

Memory: To achieve near real-time simulation, the state of all neurons and synapses must fit in the random-access memory of the system. Since synapses far outnumber neurons, the total available memory divided by the number of bytes per synapse limits the number of synapses that can be modeled. We need to store state for 448 billion synapses and 55 million neurons, with the latter being negligible in comparison to the former.

Communication: Let us assume that, on average, each neuron fires once a second. Each neuron connects to 8,000 other neurons, and hence each neuron would generate 8,000 spikes ("messages") per second. This amounts to a total of 448 billion messages per second.

Computation: Let us assume that, on average, each neuron fires once a second. In this case, on average, each synapse would be activated twice: once when its pre-synaptic neuron fires and once when its post-synaptic neuron fires. This amounts to 896 billion synaptic updates per second. Let us assume that the state of each neuron is updated every millisecond. This amounts to 55 billion neuronal updates per second. Once again, synapses seem to dominate the computational cost.

The key observation is that synapses dominate all three costs!

Let us now take a state-of-the-art supercomputer, BlueGene/L, with 32,768 processors, 256 megabytes of memory per processor (a total of 8 terabytes), and 1.05 gigabytes per second of in/out communication bandwidth per node. To meet the above three constraints, if one can design data structures and algorithms that require no more than 16 bytes of storage per synapse, 175 Flops per synapse per second, and 66 bytes per spike message, then one can hope for a rat-scale, near real-time simulation. Can such a software infrastructure be put together?

This is exactly the challenge that our paper addresses.

Specifically, we have designed and implemented a massively parallel cortical simulator, C2, designed to run on distributed-memory multiprocessors, which incorporates several algorithmic enhancements: (a) a computationally efficient way to simulate neurons in a clock-driven ("synchronous") and synapses in an event-driven ("asynchronous") fashion; (b) a memory-efficient representation to compactly represent the state of the simulation; (c) a communication-efficient way to minimize the number of messages sent, by aggregating them in several ways and by mapping message exchanges between processors onto judiciously chosen MPI primitives for synchronization.

Furthermore, the simulator incorporates (a) carefully selected, computationally efficient models of phenomenological spiking neurons from the literature; (b) carefully selected models of spike-timing-dependent synaptic plasticity for synaptic updates; (c) axonal delays; (d) 80% excitatory neurons and 20% inhibitory neurons; and (e) a certain random graph of neuronal interconnectivity.
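To make the arithmetic in this note easy to check, here is a back-of-envelope sketch in Python. The constants are the ones quoted above (55 million neurons, 8,000 synapses per neuron, a 1 Hz mean firing rate, 1 ms neuron updates, and the 16 bytes/synapse, 175 Flops/synapse/s, and 66 bytes/message budgets); the script is only an illustration of the estimate, not part of the C2 simulator.

```python
# Back-of-envelope resource estimate for a rat-scale, near real-time
# cortical simulation, using only the numbers quoted in the note above.

NEURONS = 55e6                 # rat-scale cortical model
SYNAPSES_PER_NEURON = 8_000
FIRING_RATE_HZ = 1.0           # assumed mean firing rate
BYTES_PER_SYNAPSE = 16
FLOPS_PER_SYNAPSE_PER_S = 175
BYTES_PER_SPIKE_MSG = 66
NEURON_UPDATES_PER_S = 1_000   # clock-driven neuron update every 1 ms

# ~440 billion synapses (the note quotes 448 billion, which corresponds to
# the 56 million-neuron figure used elsewhere in these slides).
synapses = NEURONS * SYNAPSES_PER_NEURON

# Memory: synaptic state dominates; neuronal state is negligible.
memory_bytes = synapses * BYTES_PER_SYNAPSE                       # ~7 TB, fits in 8 TB

# Communication: each spike is delivered to ~8,000 targets.
messages_per_s = NEURONS * FIRING_RATE_HZ * SYNAPSES_PER_NEURON   # ~440e9 msgs/s
comm_bytes_per_s = messages_per_s * BYTES_PER_SPIKE_MSG

# Computation: each synapse is touched twice per second on average
# (pre- and post-synaptic firing), plus 1 kHz neuron updates.
synaptic_updates_per_s = 2 * synapses * FIRING_RATE_HZ            # ~880e9
neuronal_updates_per_s = NEURONS * NEURON_UPDATES_PER_S           # ~55e9
flops_per_s = synapses * FLOPS_PER_SYNAPSE_PER_S                  # ~77 TFlop/s

print(f"synapses:         {synapses:.3g}")
print(f"memory:           {memory_bytes / 1e12:.1f} TB")
print(f"spike messages:   {messages_per_s:.3g} /s "
      f"({comm_bytes_per_s / 1e12:.1f} TB/s aggregate)")
print(f"synaptic updates: {synaptic_updates_per_s:.3g} /s")
print(f"neuronal updates: {neuronal_updates_per_s:.3g} /s")
print(f"synaptic compute: {flops_per_s / 1e12:.0f} TFlop/s")
```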
  • The term "neuron" was coined by Heinrich Wilhelm Gottfried von Waldeyer-Hartz in 1891 to capture the discrete information-processing units of the brain.

The junctions between two neurons were termed "synapses" by Sir Charles Sherrington in 1897.

Information flows in only one direction through a synapse; thus we speak of a "pre-synaptic" and a "post-synaptic" neuron. Neurons, when activated by sufficient input received via synapses, emit "spikes" that are delivered to the synapses for which the neuron is pre-synaptic.

Neurons can be either "excitatory" or "inhibitory."
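The one-way pre-/post-synaptic flow described above maps naturally onto a simple adjacency structure. The following is a minimal, hypothetical Python sketch (not the C2 data layout): each neuron keeps the list of neurons it is pre-synaptic to, and a spike from an excitatory neuron raises, while one from an inhibitory neuron lowers, the potential of its post-synaptic targets. The threshold, weight, and reset values are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Neuron:
    excitatory: bool = True                      # 80/20 excitatory/inhibitory in the model
    potential: float = 0.0
    threshold: float = 1.0
    targets: list = field(default_factory=list)  # neurons this neuron is pre-synaptic to

def deliver_spikes(neurons, weight=0.1):
    """One delivery pass: every neuron at or above threshold emits a spike,
    which is delivered along each synapse it is pre-synaptic to. Excitatory
    spikes raise, inhibitory spikes lower, the post-synaptic potential."""
    fired = [i for i, n in enumerate(neurons) if n.potential >= n.threshold]
    for i in fired:
        pre = neurons[i]
        pre.potential = 0.0                      # reset the spiking neuron
        sign = 1.0 if pre.excitatory else -1.0
        for j in pre.targets:                    # post-synaptic targets
            neurons[j].potential += sign * weight
    return fired

# Tiny example: neuron 0 (excitatory) drives neuron 1; neuron 2 inhibits neuron 1.
net = [Neuron(targets=[1]), Neuron(), Neuron(excitatory=False, targets=[1])]
net[0].potential = 1.0
net[2].potential = 1.0
print(deliver_spikes(net), net[1].potential)     # [0, 2] 0.0
```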
  • On a historical note, in 1956 a team of IBM researchers simulated 512 neurons (N. Rochester, J. H. Holland, L. H. Haibt, and W. L. Duda, "Tests on a Cell Assembly Theory of the Action of the Brain, Using a Large Digital Computer," IRE Transactions on Information Theory, IT-2, pp. 80-93, September 1956).

Our results represent a judicious intersection between computer science, which defines the region of feasibility in terms of the computing resources available today, and neuroscience, which defines the region of desirability in terms of the biological details that one would like to add. At any given point in time, to get a particular scale of simulation at a particular simulation speed, one must balance feasibility against desirability. Thus, our results demonstrate that a non-empty intersection between these two regions exists today at rat scale, at near real time, and at a certain complexity of simulation. This intersection will continue to expand over time. As more biological richness is added, correspondingly more resources will be required to accommodate the model in memory and to maintain reasonable simulation times.

The value of the current simulator lies in the fact that it permits almost interactive, large-scale simulation and hence allows us to explore a wide space of parameters in trying to uncover ("guess") the function of the cerebral cortex. Furthermore, understanding and harnessing the dynamics of such large-scale networks is a tremendously exciting frontier. We hope that C2 will become the linear accelerator of cognitive computing.
  • Izhikevich 2004 neuron model:

v' = 0.04 v^2 + 5v + 140 - u + I
u' = a(bv - u)

if v >= 30 mV, then: v <- c and u <- u + d

STDP model:

Causal: if a pre-synaptic neuron fires and then the post-synaptic neuron fires, the synaptic weight is increased (LTP).
Anti-causal: if a post-synaptic neuron fires and then the pre-synaptic neuron fires, the synaptic weight is decreased (LTD).

This is a LOCAL RULE to implement Hebbian learning.

Specific stimulus: 10% of neurons are stimulated with an "edge" every half second. Spontaneous, aperiodic, bursty patterns emerge in the firing rates, and neuronal groups form chains of activation.

What aspects of the brain does the model include?
The model reproduces a number of physiological and anatomical features of the mammalian brain. The key functional elements of the brain, neurons, and the connections between them, called synapses, are simulated using biologically derived models. The neuron models include such key functional features as input integration, spike generation, and firing-rate adaptation, while the simulated synapses reproduce the time- and voltage-dependent dynamics of four major synaptic channel types found in cortex. Furthermore, the synapses are plastic, meaning that the strength of connections between neurons can change according to certain rules, which many neuroscientists believe is crucial to learning and memory formation.
At an anatomical level, the model includes sections of cortex, a dense body of connected neurons where much of the brain's high-level processing occurs, as well as the thalamus, an important relay center that mediates communication to and from cortex. Much of the connectivity within the model follows a statistical map derived from the most detailed study to date of the circuitry within the cat cerebral cortex.

What do the simulations demonstrate?
We are able to observe activity in our model at many scales, ranging from global electrical activity levels, to activity levels in specific populations, to topographic activity dynamics, to individual neuronal membrane potentials. In these measurements, we have observed the model reproduce activity in cortex measured by neuroscientists using corresponding techniques: electroencephalography, local field potential recordings, optical imaging with voltage-sensitive dyes, and intracellular recordings. Specifically, we were able to deliver a stimulus to the model and then watch as it propagated within and between different populations of neurons. We found that this propagation showed a spatiotemporal pattern remarkably similar to what has been observed in experiments with real brains. In other simulations, we also observed oscillations between active and quiet periods, as is often observed in the brain during sleep or quiet waking. In all our simulations, we are able to simultaneously record from billions of individual model components, compared to cutting-edge neuroscience techniques that might allow simultaneous recording of a few hundred brain regions, thus providing us with an unprecedented picture of circuit dynamics.

What will it take to achieve human-scale cortical simulations?
Before discussing this question, we must agree upon the complexity of the neurons and synapses to be simulated. Let us fix these two as described in our SC07 paper. The human cortex has about 22 billion neurons, which is roughly a factor of 400 larger than our rat-scale model with 55 million neurons. We used a BlueGene/L with 92 TF and 8 TB to carry out rat-scale simulations in near real time. So, by naive extrapolation, one would require at least a machine with a computation capacity of 36.8 PF and a memory capacity of 3.2 PB. Furthermore, assuming that there are 8,000 synapses per neuron, that neurons fire at an average rate of 1 Hz, and that each spike message can be communicated in, say, 66 bytes, one would need an aggregate communication bandwidth of roughly 2 PB/s. Thus, even at the given complexity of synapses and neurons that we have used, scaling cortical simulations to these levels will require tremendous advances along all three metrics: memory, communication, and computation. Furthermore, power consumption and space requirements will become major technological obstacles that must be overcome. Finally, as the complexity of synapses and neurons is increased many-fold, even more resources will be required. Inevitably, along with advances in hardware, significant further innovation in software infrastructure will be required to use the available hardware resources effectively.
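As a concrete reading of the equations and the STDP sign convention above, here is a minimal, self-contained sketch in Python using Euler integration at 1 ms. It is an illustrative toy, not the C2 implementation; the parameter values a, b, c, d are the standard regular-spiking defaults from Izhikevich's papers, and the STDP constants (A_plus, A_minus, tau_ms) are placeholder assumptions.

```python
import numpy as np

def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step (dt in ms) of the Izhikevich model:
       v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
       with reset v <- c, u <- u + d once v reaches 30 mV.
       v and u may be scalars or NumPy arrays of equal shape."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    fired = v >= 30.0
    v = np.where(fired, c, v)
    u = np.where(fired, u + d, u)
    return v, u, fired

def stdp_delta(dt_ms, A_plus=0.01, A_minus=0.012, tau_ms=20.0):
    """Weight change for a pre/post spike pair separated by dt_ms = t_post - t_pre:
       causal pairs (dt > 0) potentiate (LTP), anti-causal pairs (dt < 0) depress (LTD)."""
    if dt_ms > 0:
        return A_plus * np.exp(-dt_ms / tau_ms)
    return -A_minus * np.exp(dt_ms / tau_ms)

# Example: one second of a single neuron driven by a constant current.
v, u = -65.0, -13.0                 # typical initial state (u = b * v)
spikes = 0
for t in range(1000):               # 1,000 steps of 1 ms
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += int(fired)
print("spikes in 1 s:", spikes)
```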
  • CoSyNe 2007, CNS 2007, Cognitive Computing 2007.

This work combines extremely large cortical simulations with extremely short turn-around times.

Approximate cortical scales (multiplied out in the sketch below):
mouse = 16 million neurons (16 x 10^6)
rat = 3.5 x mouse
cat = 10 x rat
monkey = 10 x cat
human = 10 x monkey

The rat cerebral cortex itself is a remarkable wonder of nature: it has a surface area of only about 6 square cm and a thickness of roughly 1.5-2 mm, and it consumes minimal power, yet it hides untold secrets, not to mention a richness of neurons and synapses that certainly dwarfs the relatively simple phenomenological models we can simulate today. Philosophically, any simulation is always an approximation (a kind of "cartoon") based on certain assumptions. A biophysically realistic simulation is NOT the focus of our work.

1956: a large simulation of 512 neurons at IBM, led by N. Rochester ("Tests on a cell assembly theory of the action of the brain, using a large digital computer").

"A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence," by J. McCarthy, M. Minsky, N. Rochester, and C. Shannon, August 31, 1955.

1990, Moshe Abeles, Corticonics: "For a large network of excitatory and inhibitory neurons with small EPSPs it is very difficult, if not impossible, to attain steady ongoing activity at low firing rates."
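The scale chain above multiplies out as follows. This is a purely illustrative, order-of-magnitude sketch; note that other slides in this deck use about 22 billion neurons for the human cortex, so the ladder should be read as rough rather than exact.

```python
# Approximate cortical neuron counts implied by the scale factors in the note.
mouse  = 16e6            # ~16 million neurons
rat    = 3.5 * mouse     # ~5.6e7
cat    = 10 * rat        # ~5.6e8
monkey = 10 * cat        # ~5.6e9
human  = 10 * monkey     # ~5.6e10 (other slides quote ~2.2e10 for human cortex)

for name, n in [("mouse", mouse), ("rat", rat), ("cat", cat),
                ("monkey", monkey), ("human", human)]:
    print(f"{name:7s} ~ {n:.2g} neurons")
```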
  • Each of the four charts above details recent achievements in the simulation of networks of single-compartment, phenomenological neurons with connectivity based on statistics derived from mammalian cortex. Simulations were run on Blue Gene supercomputers with progressively larger amounts of main memory. The number of synapses in the models varied from 5,485 to 10,000 synapses per neuron, reflecting construction from different sets of biological measurements.

First: simulations on a Blue Gene/L supercomputer of a 40% mouse-scale cortical model with 8 million neurons and 52 billion synapses, employing 4,096 processors and 1 TB of main memory.
Second: simulations on a Blue Gene/L supercomputer culminating in a rat-scale cortical model with 58 million neurons and 461 billion synapses, using 32,768 processors and 8 TB of main memory.
Third: simulations on a Blue Gene/P supercomputer culminating in a one-percent human-scale cortical model with 200 million neurons and 1.97 trillion synapses, employing 32,768 processors and 32 TB of main memory.
Fourth: simulations on a Blue Gene/P supercomputer culminating in a cat-scale cortical model with 1.62 billion neurons and 8.61 trillion synapses, using 147,456 processors and 144 TB of main memory. The largest simulations performed on this machine correspond to approximately 4.5% of the human cerebral cortex.

When will human-scale simulations become possible?
The figure shows the progress that has been made in supercomputing since the early 1990s. At each time point, the green line shows the 500th-fastest supercomputer, the dark blue line the fastest supercomputer, and the light blue line the summed power of the top 500 machines. These lines show a clear trend, which we have extrapolated out 10 years.
The IBM team's latest simulation results represent a model about 4.5% the scale of the human cerebral cortex, which was run at 1/83 of real time. The machine used provided 144 TB of memory and 0.5 PFlop/s.
Turning to the future, running human-scale cortical simulations will probably require 4 PB of memory, and running them in real time will require over 1 EFlop/s. If the current trends in supercomputing continue, it seems that human-scale simulations will be possible in the not-too-distant future. (A rough extrapolation is sketched below.)

Can you place the cat-scale simulation in the context of your past work?

December 2006: Blue Gene/L at IBM Research - Almaden, with 4,096 CPUs and 1 TB of memory; 40% mouse scale, with 8 million neurons and 50 billion synapses; 10 times slower than real time at 1 ms simulation resolution.

April 2007: Blue Gene/L at IBM Research - Watson, with 32,768 CPUs and 8 TB of memory; rat scale, with 56 million neurons and 448 billion synapses; 10 times slower than real time at 1 ms simulation resolution.

March 2009: Blue Gene/P on the KAUST-IBM Watson "Shaheen" machine, with 32,768 CPUs and 32 TB of memory; 1% of human scale, with 200 million neurons and 2 trillion synapses; 100-1000 times slower than real time at 0.1 ms simulation resolution.

SC09 (this announcement): Blue Gene/P DAWN at LLNL, with 147,456 CPUs and 144 TB of memory; cat scale, with 1 billion neurons and 10 trillion synapses; 100-1000 times slower than real time at 0.1 ms simulation resolution. Neuroscience details: neuron dynamics, synapse dynamics, individual learning synapses, biologically realistic thalamocortical connectivity, and axonal delays.

Prediction: in 2019, using a supercomputer with 1 Exaflop/s and 4 PB of main memory, a near real-time human-scale simulation may become possible.
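The closing requirements can be sanity-checked from the figures quoted in this note alone. Here is a small sketch using only those numbers (a 4.5% human-scale run, 144 TB of memory, a 0.5 PFlop/s machine, 1/83 of real time); it is rough arithmetic, not a forecast.

```python
# Rough extrapolation from the cat-scale run to human scale, using only the
# figures quoted in the note above.
cat_fraction_of_human = 0.045   # largest run ~4.5% of human cortex
cat_memory_tb         = 144.0   # TB of main memory used
cat_compute_pflops    = 0.5     # PFlop/s of the machine used
cat_slowdown          = 83.0    # ran at 1/83 of real time

scale = 1.0 / cat_fraction_of_human                               # ~22x more cortex
human_memory_pb  = cat_memory_tb * scale / 1000.0                 # ~3.2 PB
human_compute_ef = cat_compute_pflops * scale * cat_slowdown / 1000.0  # ~0.9 EFlop/s

print(f"memory  ~ {human_memory_pb:.1f} PB (note quotes ~4 PB)")
print(f"compute ~ {human_compute_ef:.1f} EFlop/s for real time (note quotes >1 EFlop/s)")
```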
  • Can I see the simulator in action?
Yes, if you can download a 150 MB movie.
The following is a frame from the movie; an earlier frame showing the input and a later frame are also available. To understand the figure and the movie, it is helpful to study Figure 1 in the paper.

Caption: Like the surface of a still lake reacting to the impact of a pebble, the neurons in IBM's cortical simulator C2 respond to stimuli. Resembling a travelling wave, the activity propagates through different cortical layers and cortical regions. The simulator is an indispensable tool that enables researchers to bring static structural brain networks to life, to probe the mystery of cognition, and to pave the path to cool, compact cognitive computing systems.
Please note that the simulator is demonstrating how information percolates and propagates. It is NOT learning the IBM logo.

How close is the model to producing high-level cognitive function?
Please note that the rat(-scale simulation) does not sniff cheese, and the cat(-scale simulation) does not chase the rat. Up to this point, our efforts have primarily focused on developing the simulator as a tool of scientific discovery that incorporates many neuroscientific details to produce large-scale thalamocortical simulations as a means of studying behavior and dynamics within the brain. While diligent researchers have made tremendous strides in improving our understanding of the brain over the past 100 years, neuroscience has not yet reached the point where it can provide us with a recipe for how to wire up a cognitive system. Our hope is that by incorporating many of the ingredients that neuroscientists think may be important to cognition in the brain, such as a general statistical connectivity pattern and plastic synapses, we may be able to use the model as a tool to help understand how the brain produces cognition.
What do you see on the horizon for this work in thalamocortical simulations?
We are interested in expanding our model both in scale and in the details that it incorporates. In terms of scale, as the amount of memory available in cutting-edge supercomputers continues to increase, we foresee that simulations at the scale of monkey cerebral cortex, and eventually the human cerebral cortex, will soon be within reach. As supercomputing speed increases, we also see the speed of our simulations increasing to approach real time.
In terms of details, we are currently working on differentiating our cortical region into specific areas (such as primary visual cortex or motor cortex) and providing the long-range connections that form the circuitry between these areas in the mammalian brain. For this work, we are drawing on many studies describing the structure and input/output patterns of these areas, as well as a study recently performed within IBM that collates a very large number of individual measurements of white matter, the substrate of long-range connectivity within the brain.
  • Allen Newell, 1989: "I mean a single set of mechanisms for all of cognitive behavior. ... Our ultimate goal is a unified theory of human cognition..."
A single algorithm for the mind, rather than separate algorithms for each of its parts (such as the visual system).

John R. Anderson, 1983: "The most deeply rooted preconception guiding my theorizing is a belief in the unity of human cognition, that is, that all the higher cognitive processes, such as memory, language, problem solving, imagery, deduction, and induction are different manifestations of the same underlying system."

Patricia Churchland, Terrence Sejnowski: "It would be convenient if we could understand the nature of cognition without understanding the nature of the brain itself. Unfortunately, it is difficult if not impossible to theorize effectively on these matters in the absence of neurobiological constraints."

"The mind is what the brain does." Shreeve, Wolinsky, National Geographic, March 2005.

John Searle: "The dirty secret of contemporary neuroscience is ... we do not have a unifying theoretical principle of neuroscience. ... we do not in that sense have a theory of how the brain works. We know a lot of facts about what actually goes on in the brain, but we do not yet have a unifying theoretical account of how what goes on at the level of the neurobiology enables the brain to do what it does by way of causing, structuring, and organizing our mental life."
  • Hae-Jeong Park, 2003: gray matter, short-distance connections; white matter, long-distance connections.

The figure displays results from BlueMatter, a parallel algorithm for white matter projection measurement. Recent advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have allowed the unprecedented ability to non-invasively measure the human white matter network across the entire brain. DW-MRI acquires an aggregate description of the diffusion of water molecules, which act as microscopic probes of the dense packing of axon bundles within the white matter. Understanding the architecture of all white matter projections (the projectome) may be crucial for understanding brain function, and has already led to fundamental discoveries in normal and pathological brains. The figure displays a view from the top of the brain (top) and a view from the left hemisphere (bottom). The cortical surface is shown (gray), as well as the brain stem (pink), in context with a subset of BlueMatter's projectome estimate coursing through the core of the white matter in the left hemisphere. Leveraging the Blue Gene/L supercomputing architecture, BlueMatter creates a massive database of 180 billion candidate pathways using multiple DW-MRI tracing algorithms, and then employs a global optimization algorithm to select a subset of these candidates as the projectome. The estimated projectome accounts for 72 million projections per square centimeter of cortex and is the highest-resolution projectome of the human brain.
What role will BlueMatter play in the SyNAPSE project?
Long term, we hope that our work will lead to insights into how to wire together a system of cognitive computing chips. Short term, we are incorporating data from BlueMatter into our cortical simulations.
  • Is it a Rat Brain?
No.
The rat cerebral cortex itself is a remarkable wonder of nature, with a surface area of only 6 square cm and a thickness of roughly 1.5-2 mm; it consumes minimal power, yet hides untold secrets, not to mention a richness of neurons and synapses that certainly dwarfs the relatively simple phenomenological models we can simulate today. Philosophically, any simulation is always an approximation (a kind of "cartoon") based on certain assumptions.
A biophysically realistic simulation is NOT the focus of our work.
Our focus is on simulating only those details that lead us towards insights into the brain's high-level computational principles. Elucidation of such high-level principles will lead, we hope, to novel cognitive systems, computing architectures, programming paradigms, and numerous practical applications.
So, no, it is not a rat brain, and it most certainly does not sniff cheese yet! But it is rat-scale, and it does consume a lot of processing cycles and power!

What can the brain teach us about new computing architectures?
The cortex is an analog, asynchronous, parallel, biophysical, fault-tolerant, distributed-memory machine. C2 represents one logical abstraction of the cortex that is suitable for simulation on modern distributed-memory multiprocessors. Computation and memory are fully distributed in the cortex, whereas in C2 each processor houses and processes several neurons and synapses. Communication is implemented in the cortex via targeted physical wiring, whereas in C2 it is implemented in software by message passing on top of an underlying general-purpose communication infrastructure. Unlike the cortex, C2 uses discrete simulation time steps and synchronizes all processors at every step.
  • Thus, neurons are simulated in a "clock-driven" fashion whereas synapses are simulated in an "event-driven" fashion.

As a first step toward cognitive computation, an interesting question is whether one can simulate a mammalian-scale cortical model in near real time on an existing computer system. What are the memory, computation, and communication costs of achieving such a simulation?
Memory: to achieve near real-time simulation, the state of all neurons and synapses must fit in the random access memory of the system. Since synapses far outnumber neurons, the total available memory divided by the number of bytes per synapse limits the number of synapses that can be modeled. We need to store state for 448 billion synapses and 55 million neurons, the latter being negligible in comparison with the former.
Communication: let us assume that, on average, each neuron fires once a second. Each neuron connects to 8,000 other neurons, and hence each neuron would generate 8,000 spikes ("messages") per second. This amounts to a total of 448 billion messages per second.
Computation: let us assume that, on average, each neuron fires once a second. In this case, on average, each synapse would be activated twice: once when its pre-synaptic neuron fires and once when its post-synaptic neuron fires. This amounts to 896 billion synaptic updates per second. Let us assume that the state of each neuron is updated every millisecond. This amounts to 55 billion neuronal updates per second. Once again, synapses dominate the computational cost.
The key observation is that synapses dominate all three costs!
Let us now take a state-of-the-art supercomputer, a BlueGene/L with 32,768 processors, 256 megabytes of memory per processor (a total of 8 terabytes), and 1.05 gigabytes per second of in/out communication bandwidth per node. To meet the above three constraints, if one can design data structures and algorithms that require no more than 16 bytes of storage per synapse, 175 Flops per synapse per second, and 66 bytes per spike message, then one can hope for a rat-scale, near real-time simulation. Can such a software infrastructure be put together?
This is exactly the challenge that our paper addresses.
Specifically, we have designed and implemented a massively parallel cortical simulator, C2, designed to run on distributed-memory multiprocessors, which incorporates several algorithmic enhancements: (a) a computationally efficient way to simulate neurons in a clock-driven ("synchronous") fashion and synapses in an event-driven ("asynchronous") fashion; (b) a memory-efficient representation to compactly represent the state of the simulation; (c) a communication-efficient way to minimize the number of messages sent by aggregating them in several ways and by mapping message exchanges between processors onto judiciously chosen MPI primitives for synchronization.
Furthermore, the simulator incorporates (a) carefully selected, computationally efficient models of phenomenological spiking neurons from the literature; (b) carefully selected models of spike-timing-dependent synaptic plasticity for synaptic updates; (c) axonal delays; (d) 80% excitatory neurons and 20% inhibitory neurons; and (e) a certain random graph of neuronal interconnectivity.
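A minimal sketch of the three cost estimates above (the rat-scale network size, the 1 Hz average firing rate, and the 1 ms neuron-update interval are the assumptions stated in this note):

```python
# Memory, communication and computation estimates for the rat-scale model,
# using the assumptions stated above: ~55 million neurons, ~8,000 synapses
# per neuron, 1 Hz average firing rate, one neuron update per millisecond.
neurons = 55e6
synapses_per_neuron = 8_000
firing_rate_hz = 1.0

synapses = neurons * synapses_per_neuron                           # ~4.4e11 synapses
messages_per_s = neurons * firing_rate_hz * synapses_per_neuron    # ~4.4e11 spike messages/s
synapse_updates_per_s = 2 * synapses * firing_rate_hz              # pre- and post-synaptic events
neuron_updates_per_s = neurons * 1000                              # one update per ms

# Memory budget per synapse on a 32,768-processor Blue Gene/L with 8 TB of RAM:
bytes_per_synapse = 8e12 / synapses                                # ~18 bytes -> the "16 bytes" target

print(f"{synapses:.2e} synapses, {messages_per_s:.2e} msgs/s, "
      f"{synapse_updates_per_s:.2e} synapse updates/s, "
      f"{neuron_updates_per_s:.2e} neuron updates/s, "
      f"{bytes_per_synapse:.0f} B/synapse budget")
```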
  • The brain is fundamentally different from, and complementary to, today's computers. The brain can exhibit awe-inspiring functions of sensation, perception, action, interaction, and cognition. It can deal with ambiguity and interact with real-world, complex environments in a context-dependent fashion. And yet it consumes less power than a light bulb and occupies less space than a 2-liter bottle of soda.
Our long-term mission is to discover and demonstrate the algorithms of the brain and deliver cool, compact cognitive computers that complement today's von Neumann computers and approach mammalian-scale intelligence. We are pursuing a combination of computational neuroscience, supercomputing, and nanotechnology to achieve this vision.
Towards this end, we are announcing two major milestones.
First, using the DAWN Blue Gene/P supercomputer at Lawrence Livermore National Laboratory, with 147,456 processors and 144 TB of main memory, we achieved a simulation with 1 billion spiking neurons and 10 trillion individual learning synapses. This is equivalent to 1,000 cognitive computing chips, each with 1 million neurons and 10 billion synapses, and exceeds the scale of cat cerebral cortex. The simulation ran 100 to 1,000 times slower than real time.
Second, we have developed a new algorithm, BlueMatter, that exploits the Blue Gene supercomputing architecture to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion-weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.
These milestones provide a unique workbench for exploring a vast number of hypotheses about the structure and computational dynamics of the brain, and further our quest to build a cool, compact cognitive computing chip.
  • What role will BlueMatter play in the SyNAPSE project?
Long term, we hope that our work will lead to insights into how to wire together a system of cognitive computing chips. Short term, we are incorporating data from BlueMatter into our cortical simulations.
What makes all the computational power necessary?
Because of the relatively low resolution of the data compared with the white matter tissue, there are many possible sets of curves one may draw in order to estimate the projectome and compare it with a global error metric, as we have done. Searching this space leads to a combinatorial explosion of possibilities. This has led many researchers to focus on individual tract estimation at the cost of ignoring global constraints, such as the volume consumption of the tracts. Rather than simplify our model, we have addressed the computational challenge with an algorithm designed specifically to leverage the supercomputing architecture of Blue Gene.
What are the next steps?
We are also interested in using our technique to make measurements of the projectome and of the communication between brain areas, which can generate hypotheses about brain function that may be validated with behavioral results or perhaps functional imaging and can be integrated with large-scale simulations.
Future: How will your current project to design a computer similar to the human brain change the everyday computing experience?
While we have algorithms and computers to deal with structured data (for example, age, salary, etc.) and semi-structured data (for example, text and web pages), no mechanisms exist that parallel the brain's uncanny ability to act in a context-dependent fashion while integrating ambiguous information across different senses (for example, sight, hearing, touch, taste, and smell) and coordinating multiple motor modalities. Success in cognitive computing will allow us to mine the boundary between the digital and physical worlds, where raw sensory information abounds. Imagine, for example, instrumenting the world's oceans with temperature, pressure, wave-height, humidity and turbidity sensors, and imagine streaming this information in real time to a cognitive computer that may be able to detect spatiotemporal correlations, much as we can pick out a face in a crowd. We think that cognitive computing has the ability to profoundly transform the world and bring about entirely new computing architectures and, possibly, even industries.
What is the ultimate goal?
Cognitive computing seeks to engineer the mind by reverse-engineering the brain. The mind arises from the brain, which is made up of billions of neurons that are linked by an Internet-like network. An emerging discipline, cognitive computing is about building the mind by understanding the brain. It synthesizes neuroscience, computer science, psychology, philosophy, and mathematics to understand and mechanize mental processes. Cognitive computing will lead to a universal computing platform that can handle a wide variety of spatio-temporally varying sensor streams.
  • What is the goal of the DARPA SyNAPSE project?
The goal of the DARPA SyNAPSE program is to create new electronics hardware and architecture that can understand, adapt and respond to an informative environment in ways that extend traditional computation to include fundamentally different capabilities found in biological brains.
Who is on your SyNAPSE team?
Stanford University: Brian A. Wandell, H.-S. Philip Wong
Cornell University: Rajit Manohar
Columbia University Medical Center: Stefano Fusi
University of Wisconsin-Madison: Giulio Tononi
University of California-Merced: Christopher Kello
IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Stuart Parkin, Bipin Rajendran, Raghavendra Singh
  • Transcript

    • 1. Dariusz Plewczynski, PhD ICM, Uniwersytet Warszawski D.Plewczynski@icm.edu.pl
    • 2. Cognitive computing: large-scale modeling of the mammalian cerebral cortex. Dariusz Plewczynski, PhD, ICM, Uniwersytet Warszawski, D.Plewczynski@icm.edu.pl
    • 3. What is cognition?
    • 4. What is cognition? 1. Cognoscere, Latin: "to know" or "to recognize". 2. Cognition is a general term describing all known forms of knowing (e.g. attention, memory, reasoning, and the comprehension of concepts, facts, conclusions, or rules). 3. A cognitive process refers to the processing of information, the use of knowledge, and the revision of conclusions/preferences. 4. Cognitive psychology is the study of cognition. 5. Cognitive science is an interdisciplinary field that takes the rules and mechanisms discovered by cognitive psychology and applies them to other information-processing systems. 6. Cognitive informatics studies natural intelligence and how information is processed in the brain, as well as the processes underlying perception and cognition. http://en.wikiversity.org/wiki/Cognition
    • 5. Cognitive science
    • 6. Cognitive science: an interdisciplinary science devoted to the study of the mind and thought. It spans many diverse fields of knowledge, such as psychology, artificial intelligence, philosophy, neuroscience, linguistics, anthropology, sociology, and biology. It relies on a varied research toolkit (e.g. behavioral experiments, computer simulations, neuroimaging, statistical analysis) that describes many levels of analysis of the mind (from learning and decision processes at the micro scale up to higher-level logic and planning, and from micro- and meso-circuits in the brain up to the brain's large-scale, modular architecture). http://en.wikiversity.org/wiki/Cognitive_science
    • 7. Neurocognitive informatics
    • 8. Neurocognitive informatics. Cognitive functions are supported by memory of several types:
• recognition memory, which allows the identification of known objects or the detection of deviations from expectations;
• associative memory, which leads automatically to simple conclusions and underlies classical conditioning;
• procedural memory, i.e. the memory of manual skills and sequences of actions;
• semantic memory, which allows the interpretation of meaning and access to complex knowledge structures;
• working memory, which allows different pieces of information to be combined combinatorially into larger wholes.
W. Duch, in "Neurocybernetyka teoretyczna", edited by Prof. Ryszard Tadeusiewicz
    • 9. How does the brain work? P. Latham, P. Dayan
    • 10. Simulating the brain: the neuron, described by Heinrich von Waldeyer-Hartz, 1891. http://en.wikipedia.org/wiki/Neuron
    • 11. Simulating the brain: the synapse, proposed by Charles Sherrington, 1897. http://en.wikipedia.org/wiki/Synapse
    • 12. How does the brain work? The cerebral cortex, in numbers. 1 mm^3 of cortex: 50,000 neurons; 10,000 connections per neuron (=> 500 million connections); 4 km of axons. The whole brain (~2 kg): 10^11 neurons; 10^15 connections; 8 million km of axons. P. Latham, P. Dayan
    • 13. How does the brain learn? Time & learning: we typically have about 10^15 synapses. If 1 bit of information is enough to specify a synapse, then we need 10^15 bits to set the state of all of them. 30 years ≈ 10^9 seconds. To set the state of 1/10 of your synapses within 30 years, you would have to absorb as much as 100,000 bits/second. Learning in the brain is almost entirely unsupervised. P. Latham, P. Dayan
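A quick check of the 100,000 bits/second figure on this slide (the one-bit-per-synapse and 30-year assumptions come from the slide itself):

```python
# Required information intake if 1/10 of ~1e15 one-bit synapses must be set in ~30 years.
synapses = 1e15
fraction = 0.1
seconds = 1e9                     # 30 years is roughly 1e9 seconds
bits_per_second = synapses * fraction / seconds
print(f"{bits_per_second:.0f} bits/s")   # -> 100000
```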
    • 14. Artificial neural networks (from "Neurocybernetyka teoretyczna", ed. R. Tadeusiewicz). Artificial neural networks are easiest to experiment with when they exist as virtual computer models rather than as physical hardware: one can change the network structure, add or remove neurons, and modify connections and their parameters. Such computer-simulated networks are the ones most often used both in research and in practice. During learning, the knowledge needed to solve a class of tasks is built into the network (specifically, into the values of the synaptic weights) from example observations showing how, in the problem at hand, an output vector should be formed from an input vector (Figure 6.1: DATA representing the task -> KNOWLEDGE acquired during the learning process -> RESULT representing the solution). Despite many years of intensive use, these systems still lack a unified, constructive theory that would prescribe which network architecture and size to apply to a given problem, or that would explain theoretically the observed phenomena of network behavior.
Table 6.1 (advantages / disadvantages of common network types):
- Linear networks: simple, fast learning, predictable behavior / cannot build nonlinear models.
- MLP networks (Multi-Layer Perceptron, sigmoidal units; the type most often used in practice): compact structure, ready-made software widely available / very slow learning, frequent failures.
- RBF networks (Radial Basis Functions, Gaussian hidden units): learn quickly, interpolate well / large size, do not extrapolate.
- Kohonen networks (self-organizing, require no teacher): order and self-organize multidimensional data / the self-organization process is poorly controlled, interpretation is difficult.
- Hopfield networks (recurrent, with feedback connections): allow association of data and optimization / complicated dynamics due to the recurrent feedback.
- Bayesian (probabilistic) networks: very fast learning, good theoretical foundations / very large, slow in operation, do not extrapolate.
(Tadeusiewicz et al. 2007) R. Tadeusiewicz
    • 15. Neuron models (E. Izhikevich, IEEE Transactions on Neural Networks, vol. 15, no. 5, September 2004):
- The integrate-and-fire (I&F) model is one-dimensional, hence it cannot burst or reproduce other properties of cortical neurons. Adding a second linear equation describing the activation dynamics of a high-threshold K current endows the model with spike-frequency adaptation (each firing increases the K activation gate and produces an outward current that slows tonic spiking), at about 10 floating-point operations per 1 ms, yet it still lacks many important properties of cortical spiking neurons.
- The quadratic integrate-and-fire model is canonical in the sense that any Class 1 excitable system described by smooth ODEs can be transformed into this form by a continuous change of variables. It takes only seven operations to simulate 1 ms, and it should be the model of choice when one simulates large-scale networks of integrators. Unlike its linear analogue, it has spike latencies, an activity-dependent threshold, and bistability of resting and tonic spiking modes.
- The integrate-and-fire-or-burst (I&FB) model (Smith and coauthors) adds a variable describing the inactivation of the calcium T-current, which creates the possibility of bursting and other interesting regimes, at a price of 9-13 operations per 1 ms.
- The resonate-and-fire neuron is a two-dimensional analogue of the I&F neuron, in which the membrane potential is the real part of a complex variable.
- The simple spiking model proposed by Izhikevich (2003):
  v' = 0.04 v^2 + 5 v + 140 - u + I
  u' = a (b v - u)
  with the auxiliary after-spike resetting: if v >= 30 mV, then v <- c and u <- u + d.
  Here v represents the membrane potential and u a membrane recovery variable (accounting for the activation of K+ and the inactivation of Na+ ionic currents), which provides negative feedback to v. The expression 0.04 v^2 + 5 v + 140 is chosen so that v has mV scale and time has ms scale. With appropriate choices of the parameters a, b, c, d, the model can exhibit firing patterns of all known types of cortical neurons, at a cost of only 13 floating-point operations per 1 ms, so it is quite efficient in large-scale simulations of cortical networks. For some parameter values the model has chaotic spiking activity, in which case the integration time step should be small to achieve adequate numerical precision.
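A minimal sketch of the Izhikevich (2003) model quoted above, integrated with a fixed-step first-order Euler method as mentioned on the next slide; the parameter values (a "regular spiking" cell: a=0.02, b=0.2, c=-65, d=8) and the constant input current I are illustrative assumptions of this sketch, not values taken from the lecture:

```python
# Single Izhikevich neuron, explicit (first-order Euler) integration.
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.5):
    """Simulate T ms of a single Izhikevich neuron; return the spike times in ms."""
    v, u = c, b * c                  # membrane potential (mV) and recovery variable
    spikes = []
    for step in range(int(T / dt)):
        # dv/dt = 0.04 v^2 + 5 v + 140 - u + I ;  du/dt = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike apex reached: reset v, bump u
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich()), "spikes in 1 s of simulated time")
```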
    • 16. Level of description. Figure (E. Izhikevich, Fig. 2): comparison of the neuro-computational properties of spiking and bursting models. "# of FLOPS" is an approximate number of floating-point operations (addition, multiplication, etc.) needed to simulate the model during a 1 ms time span. Each empty square indicates a property that the model should exhibit in principle (in theory) if the parameters are chosen appropriately, but for which the author failed to find the parameters within a reasonable period of time. The spiking models were implemented using a fixed-step first-order Euler method.
    • 17. Whole brain emulation (A. Sandberg, N. Bostrom). Feasibility hinges on finding a level of scale separation: synaptic function could be replaced by a simplified qualitative model of its effects on signals and synaptic strengths, and another possible scale-separation level might occur between individual molecules and molecular concentration scales, where molecular dynamics could be replaced with mass-action interactions of concentrations. A perhaps less likely separation could also occur at higher levels, if what matters is the activity of cortical minicolumns rather than individual neurons; an equally unlikely but computationally demanding scale of separation would be the atomic scale, treating the brain emulation as an N-body system of atoms.
Conversely, if it could be demonstrated that there is no such scale, it would demonstrate the infeasibility of whole brain emulation. Due to causally important influence from smaller scales, in that case a simulation at a particular scale could not become an emulation: the causal dynamics of the simulation would not be internally constrained, so it would not be a 1-to-1 model of the relevant dynamics. Biologically interesting simulations might still be possible, but they would be local to particular scales and phenomena, and they would not fully reproduce the internal causal structure of the whole brain. (Figure 3: size scales of the nervous system.)
    • 18. WBE resolution. WBE (whole brain emulation): during the 2008 WBE workshop, participants were asked in a questionnaire what resolution is necessary to emulate the whole human brain. The consensus answer was levels 4-6. Two participants rather optimistically indicated higher scales, while two others suggested levels 8-9 for formulating research hypotheses (after which, in their view, levels 4-5 would suffice once the general principles are understood). Thus, to understand how the brain works, a resolution of 5×5×50 nm should be reached. It was therefore postulated to focus on levels 4-6, while remaining open to a more detailed description of the phenomena should such a need arise. A. Sandberg, N. Bostrom
    • 19. Brain simulators. Software packages:
Neuron http://www.neuron.yale.edu/neuron/
NEST http://www.nest-initiative.org/
Brian http://www.briansimulator.org/
Genesis http://genesis-sim.org/
...
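For illustration, a minimal spiking-population sketch in the style of one of the packages listed above; it is written against the Brian 2 API (which postdates the original Brian release linked on the slide), and the leaky-integrator equation, thresholds, and sizes are arbitrary choices for this example rather than anything from the lecture:

```python
# A toy leaky-integrator population in Brian 2 (pip install brian2).
from brian2 import NeuronGroup, SpikeMonitor, run, ms

eqs = 'dv/dt = (1.1 - v) / (10*ms) : 1'            # dimensionless membrane variable
group = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')
spikes = SpikeMonitor(group)

run(100 * ms)                                      # simulate 100 ms of model time
print(f"{spikes.num_spikes} spikes from {len(group)} neurons")
```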
    • 20. Something smaller than the whole brain... The thalamocortical system. E. Izhikevich: simulations within this model are comparable in size to the human brain; the accuracy of the thalamocortical model rests on experimental results from many mammalian species. "The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. It describes spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales." E. Izhikevich
    • 21. ...or something less complex? Materials and methods (E. Izhikevich).
Anatomy: compared with real cortices, the model is obviously greatly reduced in the number of its neurons and synapses as well as in its anatomical complexity (see Fig. 1). Nevertheless, efforts were made to preserve important ratios and relative distances found in the mammalian cortex (Braitenberg and Schuz, 1991). 80,000 excitatory neurons (red) and 20,000 inhibitory neurons (blue) are randomly distributed on the surface of a sphere of radius 8 mm; the total number of excitatory synaptic connections is 8,000,000 and of inhibitory synaptic connections 500,000. The ratio of excitatory to inhibitory neurons is 4/1. The span of the local non-myelinated axonal collaterals of an excitatory neuron is 1.5 mm, and of an inhibitory neuron 0.5 mm. The probability of a synaptic connection between two nearby excitatory neurons is 0.09. Each excitatory neuron sends a straight 12 mm long myelinated axon to a randomly chosen distant part of the sphere, with collaterals spanning an area of radius 0.5 mm. Each excitatory neuron innervates 75 local postsynaptic targets chosen randomly within a circle of radius 1.5 mm, and 25 distant targets chosen within a circle of radius 0.5 mm; each inhibitory neuron innervates 25 randomly chosen neurons within a circle of radius 0.5 mm.
Spike propagation velocity: 1 m/s for myelinated axons, in accord with the value experimentally measured for myelinated cortico-cortical fibers in awake adult rabbits (Swadlow, 1994); only 0.15 m/s for non-myelinated local axonal collaterals, in accord with the values reviewed by Waxman and Bennett (1972); the corresponding delays can be as long as 10 ms.
Neuronal dynamics: each model neuron is described by the equations (Izhikevich, 2003)
  v' = 0.04 v^2 + 5 v + 140 - u - I_syn
  u' = a (b v - u)
  if v(t) = 30 mV, then v <- c and u <- u + d,
where v denotes the membrane potential of the neuron and u the recovery variable.
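A minimal sketch of the local/distant wiring rule described above (the sphere radius, the 75/25 split, and the target radii come from the text; the uniform random placement of points on the sphere and the choice of the distant patch are implementation assumptions of this sketch):

```python
# Sample the innervation targets of one excitatory neuron in the spherical model:
# 75 local targets within 1.5 mm of the neuron, and 25 distant targets within
# 0.5 mm of a randomly chosen distant patch (the model uses a 12 mm myelinated axon).
import numpy as np

rng = np.random.default_rng(0)
R = 8.0  # sphere radius, mm

def random_points_on_sphere(n):
    p = rng.normal(size=(n, 3))
    return R * p / np.linalg.norm(p, axis=1, keepdims=True)

def geodesic_dist(points, ref):
    # great-circle distance (mm) between each point and a reference point on the sphere
    cosang = np.clip((points @ ref) / R**2, -1.0, 1.0)
    return R * np.arccos(cosang)

neurons = random_points_on_sphere(100_000)
source = neurons[0]
distant_patch = random_points_on_sphere(1)[0]

d_local = geodesic_dist(neurons, source)
d_dist = geodesic_dist(neurons, distant_patch)

local = rng.choice(np.where(d_local < 1.5)[0][1:], size=75, replace=False)   # [1:] skips the source itself
distant = rng.choice(np.where(d_dist < 0.5)[0], size=25, replace=False)
print(len(local), "local and", len(distant), "distant postsynaptic targets")
```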
    • 22. A smaller model? A large-scale model of the mammalian thalamocortical system. The model consists of 10^11 neurons and about 10^15 synapses. It represents about 300 x 300 mm^2 of mammalian cortical surface and the midbrain regions; both the spiking neuron models and their physico-chemical properties correspond to those observed in the human brain. The model simulates one million multi-compartment spiking neuron models calibrated to exhibit the response types known from recordings of the rat brain. It includes about half a billion synapses, together with their kinetics/dynamics and short- and long-term plasticity. E. Izhikevich
    • 23. Time... <<Why did I do it?>> "Question: When can we simulate the human brain in real time? Answer: The computational power to handle such a simulation will be available sooner than you think." His test: 1 second of model time = 50 days on 27 3-GHz processors. "However, many essential details of the anatomy and dynamics of the mammalian nervous system would probably be still unknown." Size is not what matters: what matters is what goes into the model and how the model is embedded in an environment (closing the loop). E. Izhikevich
    • 24. ...and space. A model of the spiking activity of the human brain. It combines three quite different scales: it is built on cortical anatomy (white matter) obtained from various diffusion tensor imaging (DTI) experiments on the human brain; it includes the midbrain and six layers of microcircuits, whose structure and connectivity were studied in numerous in vitro experiments and in three-dimensional reconstructions of single neurons of cat visual cortex; and it has as many as 22 basic types of neurons, with their appropriate laminar structure and assignment and with distributions of dendritic trees consistent with experimental measurements. E. Izhikevich. "Single neurons with branching dendritic morphology (pyramidal, stellate, basket, non-basket, ...)"
    • 25. Is the model really simpler? E. Izhikevich
    • 26. ...and what do the experiments show? E. Izhikevich
    • 27. Blue Brain Project: a large-scale model of a single cortical column. Since 2005, a group in Lausanne (École Polytechnique Fédérale) led by Prof. Markram has been building an accurate and realistic model of a representative fragment of the human brain. A single column contains about 10,000 neurons. The model is fully consistent with experiment, although it runs two orders of magnitude slower than real nervous tissue, despite the use of the most powerful supercomputers (IBM Blue Gene).
    • 28. Research methodology. The architecture of the Blue Brain Facility takes the form of a network of workflows, where each step in every workflow is supported by a set of dedicated software tools. The steps are:
- Neuroscience: systematic, industrial-scale collection of experimental data to describe all possible levels of structural and functional brain organization, from the subcellular, through the cellular, to the micro-circuit, meso-circuit and macro-circuit levels;
- Neuroinformatics: automated curation and databasing of data, use of Predictive Engineering to predict unknown data from a smaller sample of known data describing other levels of brain organization;
- Mathematical abstraction: definition of parameters, variables, equations and constraints representing the structure and functionality of the brain at different levels of organization;
- Modeling: building geometric and computational models representing different levels of structural and functional brain organization;
- Virtual experiments: use of models for virtual experiments and exploration; experiment configuration: configuration of the experiment to exactly define stimulation and recording protocols, initial conditions, and protocols of analysis;
- Simulation: simulation of the evolution of model states (firing dynamics, synaptic strengths, etc.); replication of previous in vivo experiments (application of stimulation, administration of a drug, etc.); design and implementation of new experiments;
- Visualization: use of advanced techniques to display the structure and dynamics of simulations and (in the medium-long term) to interactively "steer" and "navigate" them;
- Analysis: analysis of simulation results, initially for model validation, subsequently for simulation-based investigations of brain function and dysfunction, diagnostics and treatments.
Data acquisition is the first step in the Blue Brain workflow and involves different levels of effort and standardization, from exploratory experiments, collecting preliminary data and testing techniques, to industrial-scale efforts to collect large volumes of standardized data. The goal is to collect multiomics data describing every different level in the functional and structural organization of the brain. The project will collect structural information on the genome, the transcriptome, the proteome, the biochemicalome, the metabolome, the organellome, the cellome, the synaptome, extracellular space, microcircuits, mesocircuits, macrocircuits, vasculature, blood, the blood-brain barrier, ventricles, cerebrospinal fluid, and the whole brain. The information collected will be used to define parameters and geometric models describing the structural organization of the brain. Required functional information includes information on gene transcription, protein translation, cell biology processes, signaling, receptor functions, biochemical, biophysical and electrochemical processes and properties, and neuronal and synaptic information processing. Blue Brain Project
    • 29. Modeling tools:
• Cell Builder (building individual nerve cells)
• Microcircuit Builder (building individual microcircuits of any part of the brain)
• Mesocircuit Builder (building at the meso scale, e.g. neural circuits spanning several cortical columns, modules, or microcircuits)
• Experiment Builder (a tool for predicting or replicating experimental results)
Blue Brain Project
    • 30. A different approach: the cortical simulator C2 (D. Modha, IBM Research). Better and faster supercomputers will certainly reduce the simulation times. Nearly perfect weak scaling of the simulation has been demonstrated, implying that, with further progress in supercomputing, real-time human-scale simulations are not only within reach, but indeed appear inevitable (Figure 8). Figure 8: growth of Top500 supercomputers overlaid with the C2 result (performance: 4.5% of human scale at 1/83 of real time; resources: 144 TB of memory and 0.5 PFlop/s) and a projection for real-time, 100% human-scale cortical simulation (predicted resources: about 4 PB of memory and more than 1 EFlop/s).
    • 31. Cortical simulator: technical challenges. Memory: to reach real-time simulation speed, the state of all neurons and synapses must fit in the system's RAM. Since the number of synapses greatly exceeds the number of neurons, the total available memory divided by the number of bytes per synapse is the natural limit on the number of synapses that can be modeled at once. A rat-scale model (~55 million neurons, ~450 billion synapses) requires a corresponding amount of memory to store the state of all neurons and synapses, with the former being negligible compared with the number of connections. D. Modha
    • 32. Cortical simulator: technical challenges. Communication: assume each neuron fires at least one spike per second. Each neuron connects to about 8,000 other neurons, so each neuron can generate about 8,000 spikes ("messages", or more precisely neurobits) per second. That amounts to about 448 billion signals/neurobits per second. D. Modha
    • 33. Cortical simulator: technical challenges. Computation: assume, on average, one spike per neuron per second. Then, again on average, each synapse must be activated twice: once when a spike arrives from its pre-synaptic neuron, and once when its post-synaptic neuron fires. This gives about 896 billion synapse-state updates per second. Assume further that the state of each neuron is updated once per millisecond; this means 55 billion neuron-state updates per second. Once again, the synaptic cost dominates the total computational cost. D. Modha
    • 34. Cortical simulator: hardware. The hardware solution: an IBM BlueGene/L supercomputer with 32,768 CPUs, 256 MB of memory per processor (8 TB in total), and 1.05 GB/s of in/out communication bandwidth per node. To meet the three constraints above, one must design data structures and algorithms that use no more than 16 bytes of memory per synapse, 175 Flops per synapse per second, and 66 bytes per message, i.e. per neurobit (a single spike). The result is a simulation close to the scale of the rat brain, running in near real time! D. Modha
• 35. Cortex simulator: software. The software: C2, a massively parallel simulator of the mammalian cortex that uses distributed memory and many processors. Algorithmic improvements (a communication sketch follows the list):
1. a computationally efficient way of simulating neurons, combining synchronous (clock-driven) updates of all neurons with asynchronous, event-driven updates of synapses;
2. an efficient in-memory representation of the network, with a compact representation of the simulation state;
3. efficient communication that minimizes the amount of exchanged information by aggregating and packing messages and by using MPI protocols to keep the whole system synchronized.
D. Modha
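The sketch below illustrates point 3 only: instead of sending one message per spike, spikes produced during a time step are packed into one buffer per destination node and exchanged in a single step. This is a plain-Python illustration of the idea, not C2 code; owner_node, pack_spikes and the round-robin placement are assumptions, and the actual MPI exchange that C2 uses is not shown.

# Illustration of spike-message aggregation per destination node.
from collections import defaultdict

N_NODES = 4

def owner_node(neuron_id: int) -> int:
    # assumed round-robin placement of neurons on nodes
    return neuron_id % N_NODES

def pack_spikes(spikes, targets):
    """Group outgoing spike events by the node that owns the target neuron."""
    outboxes = defaultdict(list)             # node id -> list of (src, dst) events
    for src in spikes:
        for dst in targets[src]:
            outboxes[owner_node(dst)].append((src, dst))
    return outboxes                          # one aggregated buffer per node

# Tiny example: neurons 0 and 5 spiked; their post-synaptic targets:
targets = {0: [1, 2, 7], 5: [2, 6]}
for node, buf in sorted(pack_spikes([0, 5], targets).items()):
    print(f"node {node}: {len(buf)} packed events -> {buf}")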
• 36. Cortex simulator: the C2 simulator. The C2 simulator contains:
1. phenomenological Izhikevich neurons (Izhikevich, 2004);
2. phenomenological STDP synapses (spike-timing-dependent plasticity) (Song, Miller, Abbott, 2000);
3. axonal transmission delays of 1–20 ms;
4. 80% excitatory and 20% inhibitory neurons;
5. a random connectivity graph chosen at the scale of the mouse brain (Braitenberg & Schuz, 1998): 16x10^6 neurons, 8x10^3 synapses per neuron, 0.09 probability of a local connection.
The first neuro-anatomical graph of this size; emergent phenomena, their dynamics, small-world properties. (A minimal Izhikevich-neuron sketch follows this slide.) D. Modha
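To show what a "phenomenological Izhikevich neuron" amounts to, here is a minimal Python implementation of the published model equations with the standard regular-spiking parameters and a constant input current. It illustrates the model itself, not code from C2; the function name, the 1 ms time step and the input value are choices made for this sketch.

# Minimal sketch of the Izhikevich spiking-neuron model (point 1 above).
def simulate_izhikevich(I=10.0, t_ms=200, a=0.02, b=0.2, c=-65.0, d=8.0):
    v = -65.0                    # membrane potential (mV)
    u = b * v                    # membrane recovery variable
    spikes = []
    for t in range(t_ms):        # one iteration = 1 ms of model time
        # two 0.5 ms half-steps for v, as in Izhikevich's reference code
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:            # spike threshold
            spikes.append(t)
            v, u = c, u + d      # reset after the spike
    return spikes

print("spike times (ms):", simulate_izhikevich())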
• 37. Cortex simulator. The cortex model connects neurons with synaptic connections; the goal is to understand the information-processing properties of such networks. The main loop (sketched in code below):
1. For each neuron, in every time step (~1 ms): (i) update the neuron's state; (ii) if the neuron fires, update the state of the synapses connecting it to its post-synaptic and pre-synaptic neighbours.
2. For each synapse: when it receives a pre- or post-synaptic event, update its state and, if necessary, the state of its post-synaptic neuron.
D. Modha
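The Python sketch below follows the loop described above: clock-driven neuron updates every millisecond and event-driven synapse updates only when a spike occurs. The Neuron and Synapse classes use placeholder dynamics and toy plasticity rules; they are hypothetical stand-ins for the Izhikevich and STDP equations of the previous slides, not C2 code.

# Sketch of a C2-style main loop: clock-driven neurons, event-driven synapses.
import random

class Neuron:
    def __init__(self):
        self.v = random.uniform(-70.0, -60.0)        # toy membrane potential
    def step(self, dt_ms: float) -> bool:
        self.v += random.uniform(0.0, 3.0) * dt_ms   # placeholder dynamics
        if self.v >= -50.0:                          # toy threshold
            self.v = -65.0
            return True                              # the neuron fired
        return False

class Synapse:
    def __init__(self, pre, post):
        self.pre, self.post, self.w = pre, post, 0.5
    def on_pre_spike(self):  self.w *= 0.99          # placeholder depression
    def on_post_spike(self): self.w *= 1.01          # placeholder potentiation

neurons = [Neuron() for _ in range(100)]
synapses = [Synapse(random.randrange(100), random.randrange(100)) for _ in range(1000)]
out_syn = {i: [s for s in synapses if s.pre == i] for i in range(100)}
in_syn  = {i: [s for s in synapses if s.post == i] for i in range(100)}

for t in range(50):                                  # 50 ms of model time
    for i, n in enumerate(neurons):                  # clock-driven neuron update
        if n.step(1.0):                              # event-driven synapse updates
            for s in out_syn[i]: s.on_pre_spike()
            for s in in_syn[i]:  s.on_post_spike()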
• 38. Computational complexity of the problem (D. Modha, IBM)
                                  MOUSE              HUMAN             BlueGene/L
Neurons                           2x8x10^6           2x50x10^9         3x10^4 CPUs
Synapses                          128x10^9           10^15             10^9 CPU pairs
Communication (66 B/spike)        128x10^9 spikes/s  10^15 spikes/s    1.05 GB/s in/out per node
Computation (350 F/synapse/s)     45 TF              350 PF            45 TF (8,192 CPUs)
Memory (32 B/synapse)             4 TB               32 PB             4 TB
BlueGene/L can simulate 1 second of model time in about 10 seconds, assuming a 1 Hz firing rate and 1 ms simulation resolution with a random stimulus. D. Modha
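A quick Python check that the computation and memory rows of the table follow from the synapse counts and the per-synapse budgets shown in the row labels (350 flops/synapse/s and 32 bytes/synapse).

# Deriving the computation and memory rows from the synapse counts.
FLOPS_PER_SYNAPSE_PER_S = 350
BYTES_PER_SYNAPSE = 32

for name, synapses in [("mouse", 128e9), ("human", 1e15)]:
    flops = synapses * FLOPS_PER_SYNAPSE_PER_S
    mem = synapses * BYTES_PER_SYNAPSE
    print(f"{name}: {flops / 1e12:.0f} Tflop/s, {mem / 1e12:.0f} TB")
# mouse: ~45 Tflop/s and ~4 TB
# human: ~350,000 Tflop/s (350 Pflop/s) and ~32,000 TB (32 PB)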
• 39. The future of whole-cortex simulation (D. Modha)
• 40. The future of whole-cortex simulation (D. Modha)
• 41. The future of whole-cortex simulation (D. Modha). The slide reproduces a page fragment from Modha's paper, again showing Figure 8 (growth of Top500 supercomputer performance, 1994–2018, overlaid with the C2 result at 4.5% of human scale, 1/83 of real time and 144 TB of memory, and a projection for real-time, human-scale cortical simulation at roughly 4 PB of memory and over 1 EFlop/s). The surrounding text notes nearly perfect weak scaling of the simulation, implying that with further progress in supercomputing, real-time human-scale simulations are not only within reach but indeed appear inevitable.
• 42. The cortex simulator in action. Excerpt from Modha's group: the C2 simulator [1] incorporates relatively simple single-compartment spiking neurons [2], spike-timing dependent plasticity (STDP) [3], and axonal delays. A mouse-scale network was created from 32,768 "groups" (80% excitatory) of 500 neurons each, such that each group connects to 100 randomly selected groups and each neuron of the projecting group makes a total of c = 80 synapses with the neurons of the receptive group; excitatory groups had axonal delays distributed uniformly over a range. Using a super-threshold stimulus delivered to every neuron at 4 Hz, 5 s of model time were simulated in 168 s of real time at a mean firing rate of 4.95 Hz (in stable mode). To push the boundaries of scaling further, taking c = 160 gave a network of 16,384,000 neurons with 16,000 synapses per neuron; using 16,384 processors and 8 TB of memory with 5 Hz stimulation, 5 s of model time were achieved in 265 s of real time at a mean firing rate of 5 Hz (in stable mode). [Figure 1: Damped, stable and avalanche modes in network simulations.] D. Modha
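The arithmetic behind those network sizes and slowdown factors, checked in Python using only the figures quoted above (group count, group size, target groups and the two values of c).

# Checking the mouse-scale network sizes and the reported slowdown factors.
GROUPS, NEURONS_PER_GROUP, TARGET_GROUPS = 32_768, 500, 100

for c, model_s, wall_s in [(80, 5, 168), (160, 5, 265)]:
    neurons = GROUPS * NEURONS_PER_GROUP          # 16,384,000 neurons
    syn_per_neuron = TARGET_GROUPS * c            # 8,000 or 16,000 synapses
    total_synapses = neurons * syn_per_neuron
    print(f"c={c}: {neurons:,} neurons, {syn_per_neuron:,} synapses/neuron, "
          f"{total_synapses:.2e} synapses total, "
          f"{wall_s / model_s:.0f}x slower than real time")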
• 43. The cortex simulator in action. The C2 simulator shows how information propagates and percolates through the system. Note: this is NOT learning! D. Modha
• 44. A working definition of the brain :) A diagram relating MIND to COGNITION, PERCEPTION and ACTION. D. Modha, IBM Almaden Research Center
• 45. Problem: the architecture of connections in the brain. Towards visualization and measurement of the long-range circuitry (interior white matter) that allows geographically separated regions of the brain to communicate. The labels or colors of the fibers represent divisions of the fibrous networks being measured:
Red - interhemispheric fibers projecting between the corpus callosum and frontal cortex.
Green - interhemispheric fibers projecting between primary visual cortex and the corpus callosum.
Yellow - interhemispheric fibers projecting from the corpus callosum that are not Red or Green.
Brown - fibers of the superior longitudinal fasciculus, connecting regions critical for language processing.
Orange - fibers of the inferior longitudinal fasciculus and uncinate fasciculus, connecting regions to cortex responsible for memory.
Purple - projections between the parietal lobe and lateral cortex.
Blue - fibers connecting local regions of the frontal cortex.
A highly parallelized algorithm for identifying white-matter projectomes, written to take advantage of the Blue Gene supercomputing architecture. Neuroanatomy: gray matter, short distance. D. Modha
• 46. The cortex simulator is not a brain! The cortex is an analogue, asynchronous, parallel, biophysical, fault-tolerant system with a distributed architecture. The goal of C2 is NOT a biophysically realistic model of the cortex! The goal is to simulate only those details of brain structure that bring us closer to understanding what high-level information processing and cognitive computation look like. C2 is only a logical representation of the cortex, sufficient for simulation on contemporary multiprocessor computers. The hope is that simulating these higher levels will let us build new cognitive systems, develop new computing architectures, create new programming paradigms, and put the resulting model to use. D. Modha
• 47. The brain: a physical structure Φ (P. Latham, P. Dayan)
• 48. ...but also internal dynamics! Ψ (P. Latham, P. Dayan)
• 49. A long history: artificial intelligence
• 50. A long history: artificial intelligence
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
Q. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?
A. Not yet.
John McCarthy, http://en.wikiversity.org/wiki/Artificial_Intelligence
• 51. The goal of cognitive computing (e.g. BlueGene/P and the SyNAPSE project): building a mind by reverse-engineering the brain
• 52. Cognitive Computers: the future. Today's algorithms deal with information that is structured (data) or quasi-structured (web pages). For now we cannot handle contextual fusion of data, e.g. sight, hearing, touch and taste combined with the simultaneous activity of many motor mechanisms. Cognitive computing focuses on the boundary between the digital and the physical world, exactly where sensory and motor information plays the key role. "For example, while instrumenting the outside world with sensors, and streaming this information in real time to a cognitive computer that may be able to detect spatio-temporal correlations."
• 53. Niels Bohr: "Predicting is very difficult, especially about the future…"
