2. How do we see colors?
From perception to the construction of concepts
Dariusz Plewczynski
ICM, Uniwersytet Warszawski
D.Plewczynski@icm.edu.pl
3. What we will talk about...
• Perception
• Concept - the basic cognitive structure, representing a class of objects (things, events, actions, features, relations) that are similar to one another in some respect. Concept formation is accompanied by processes of abstraction and generalization.
• Concepts are expressed through socially established linguistic expressions. We form them from early childhood, often without being aware of it.
• Prof. Chlewiński: we are born with a genetically conditioned conceptual competence.
6. Neurocybernetics
The Polish definition:
A branch of biocybernetics concerned with the analysis and modeling of information processing and control in the nervous systems of animals and humans. Its main lines of work include:
establishing and mathematically describing the properties of the neuron,
analyzing perception,
studying and modeling learning processes,
studying neural networks and the hierarchical organization of the nervous system,
analyzing the control systems of the motor apparatus.
http://en.wikipedia.org/wiki/Neurocybernetics
8. Perceiving colors
• Color as such does not physically exist,
• Many theories of perception - among others the trichromatic theory (Young-Helmholtz: red, green, blue) and the opponent-process theory (Hering: red/green and yellow/blue),
• Intersubjectivity of the process,
• Disorders of color vision,
• Does language determine perception? Determinism vs. universalism
9. Receptors: cones and rods
Cones are light-sensitive receptors of the retina. Cones enable color vision in good lighting; this is photopic vision.
Rods are light-sensitive receptors of the retina. They are responsible for perceiving shape and motion. Rods enable black-and-white vision in dim lighting; this is scotopic vision.
[Figure: relative light absorption of the cones (S, M, L) and the rods (R) in the human eye; the wavelength scale is not linear.]
http://pl.wikipedia.org/wiki/Czopki
10. Color receptors
The human eye contains three types of cones, each with a different spectral characteristic, i.e. each responds to light from a different range of the spectrum.
The first type responds mainly to red light (ca. 700 nm), the second to green light (ca. 530 nm), and the last to blue light (ca. 420 nm).
The impulses generated by light in the rods and cones are sent to the brain via bipolar cells and ganglion cells, and also directly along their own axons.
http://pl.wikipedia.org/wiki/Czopki
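The trichromacy described above can be sketched numerically. The block below is an illustrative toy only: real cone fundamentals are asymmetric, tabulated curves, while here each cone class is approximated by a Gaussian peaked at the wavelengths quoted on the slide, with an assumed common 60 nm width; all names are ours.

```python
import math

# Illustrative sketch only (not real cone fundamentals): each cone class
# is a Gaussian peaked at the slide's wavelengths, with an assumed width.
PEAKS_NM = {"S": 420.0, "M": 530.0, "L": 700.0}
WIDTH_NM = 60.0

def cone_responses(wavelength_nm):
    """Relative response of each cone class to monochromatic light."""
    return {name: math.exp(-(((wavelength_nm - peak) / WIDTH_NM) ** 2))
            for name, peak in PEAKS_NM.items()}
```

A 530 nm stimulus then drives the "M" class most strongly, which is the sense in which the three response profiles jointly encode hue.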
12. A visual task
Which of the two images has the higher contrast (Gabor patterns)?
Models of decision-making, aggregation, and information sharing in pairs, for a decision task based on stimulus discrimination.
[Figure: panels A and B, each an array of Gabor patches; one contains an oddball target of slightly higher contrast.]
Bahrami, Olsen, Latham, Roepstorff, Rees, Frith. Science, 2010
13. The psychometric function
[Figure: the probability of choosing the 2nd interval, plotted against the contrast difference (roughly -3 to 3), rises as a cumulative-Gaussian curve from 0 to 1.]
The slope (derivative) at the curve's midpoint,
s = 1 / (σ √(2π)),
is taken as the measure of efficiency; the error rate grows with σ.
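The curve above can be sketched as follows, assuming the cumulative-Gaussian form used by Bahrami et al. (2010); the function names are ours.

```python
import math

# Cumulative-Gaussian psychometric function and its maximum slope
# s = 1 / (sigma * sqrt(2*pi)), the sensitivity measure on the slide.

def p_choose_second(delta_c, sigma, bias=0.0):
    """P(choose 2nd interval) for a contrast difference delta_c."""
    return 0.5 * (1.0 + math.erf((delta_c - bias) / (sigma * math.sqrt(2.0))))

def sensitivity(sigma):
    """Maximum slope of the psychometric curve (the slide's s)."""
    return 1.0 / (sigma * math.sqrt(2.0 * math.pi))
```

A smaller σ gives a steeper curve, hence a larger s and a more sensitive observer.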
14. Decisions in pairs
Can any measurable advantage be observed when a visual stimulus is viewed by more than one person?
Bahrami, Olsen, Latham, Roepstorff, Rees, Frith Science, 2010
15. Bahrami's results
Without communication, dyad sensitivity does not exceed that of the better observer ("Best Decides"): s_dyad = max(s1, s2).
With communication, dyad sensitivity follows the weighted confidence sharing (WCS) model, s_dyad = (s1 + s2)/√2, and stays below the upper bound set by direct signal sharing, s_dyad = √(s1² + s2²).
Without feedback (experiment 4: after the joint decision, the participants were not told the correct answer, all other aspects being identical to experiment 1), dyads nevertheless achieved a significant collaboration benefit, and dyad sensitivity remained statistically indistinguishable from the WCS prediction: accurate communication of confidence alone was sufficient, and objective feedback was not necessary. => (still) WCS
Bahrami, Olsen, Latham, Roepstorff, Rees, Frith. Science, 2010
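The three model predictions named on this slide can be written out directly (a sketch; the function names are ours, the formulas follow Bahrami et al., Science 2010):

```python
import math

# Dyad-sensitivity predictions: "Best Decides" (no communication),
# weighted confidence sharing (WCS), and the direct-signal-sharing bound.

def best_decides(s1, s2):
    return max(s1, s2)

def wcs(s1, s2):
    return (s1 + s2) / math.sqrt(2.0)

def dss_bound(s1, s2):
    return math.sqrt(s1 ** 2 + s2 ** 2)
```

For two equally sensitive partners, WCS predicts a √2 (about 41%) collaboration benefit over the better individual; DSS always dominates WCS.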
16. Why do we need to model populations?
Stimulus discrimination and information aggregation - even of the simplest, perceptual kind - become interdependent when the perception and the assignment, the categorization, of a stimulus is carried out by more than one person.
So we need: a population!
That is: multi-agent modeling, in which the integration of information and the description of reality take place within a group, in a distributed fashion.
18. Color perception
The process of semantic categorization of concepts in color space - an analysis of worldwide data and population modeling
19. Three conceptions of language
Nativism (Chomsky & Fodor)
The structure - the general construct - of language is innate. The set of linguistic categories, both concepts and grammars, is shared by all humans from birth. Learning a language consists in filling the pre-existing structures with actual linguistic forms.
We take over the structure of language from the people around us.
20. Three conceptions of language
Empiricism
The mechanism of language acquisition is shared across communities, but the shaping of its structures reflects the surrounding reality. Functionalism.
Culturalism
Beyond the language-shaping influence of the environment, cultural consensus exerts a strong influence on the structure of language and its set of concepts.
We take over the structure of language from the people around us.
21. World Color Survey (WCS) measurements
World Color Survey; Berlin & Kay: Basic Color Terms: Their Universality and Evolution. Berkeley and Los Angeles: University of California Press, 1969.
The assignment of names to the fields of the Munsell palette was studied: 320 Munsell chips of 40 equally spaced hues at eight levels of lightness (Value), at maximum saturation (Chroma) for each (Hue, Value) pair, supplemented by nine achromatic chips (black through gray to white).
The CIE L*a*b color space was used; 110 non-literate cultures from around the world were surveyed.
First, the basic color terms of the informant's native language were elicited; then, in the naming task, for each basic color term t the informant marked on an acetate sheet all the chips he or she could call t, and in the focus task indicated the best example(s) of t.
Conclusion: "[1] the referents for the basic color terms of all languages appear to be drawn from a set of eleven universal perceptual categories, and [2] these categories become encoded in the history of a given language in a partially fixed order" (Berlin and Kay 1969: 4f).
Figure 1a. The WCS stimulus array.
Berlin & Kay, 1969
23. Yaminahua
107 1 fiso FI
107 2 oxo OX
107 3 oshin OS
107 4 chaxta CH
107 5 dada DA
Berlin & Kay, 1969
24. World Color Survey (WCS)
A hierarchy was demonstrated across the vocabularies of the individual languages,
an argument in favor of the universalist view:
All languages contain terms for black and white.
If a language contains three terms, then it also contains a term for red.
If a language contains four terms, then it also contains a term for either
green or yellow (but not both).
If a language contains five terms, then it contains terms for both green and
yellow.
If a language contains six terms, then it also contains a term for blue.
If a language contains seven terms, then it also contains a term for brown.
If a language contains eight or more terms, then it contains a term for
purple, pink, orange, and/or grey.
Berlin & Kay, 1969
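The implicational hierarchy above can be encoded as ordered stages; `guaranteed_terms` is a hypothetical helper of our own devising that lists the terms the hierarchy obliges a language with n basic color terms to contain.

```python
# Berlin & Kay (1969) stages, in acquisition order; a language fills
# earlier stages completely before entering later ones.
STAGES = [
    {"black", "white"},
    {"red"},
    {"green", "yellow"},
    {"blue"},
    {"brown"},
    {"purple", "pink", "orange", "grey"},
]

def guaranteed_terms(n):
    """Terms every language with n basic color terms must contain."""
    have, count = set(), 0
    for stage in STAGES:
        if count + len(stage) > n:
            break  # this stage is only partially filled; nothing guaranteed
        have |= stage
        count += len(stage)
    return have
```

Note that a 4-term language is guaranteed only black, white, and red: the hierarchy says it has green or yellow, but not which one.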
25. Three conceptions of language
Does the external environment influence the process of categorization?
Generally no, but ...
subtle effects are possible.
Is there a relation between the color vocabulary of a given language and the intensity of the colors in photographs of its environment*?
* generalized to one of 13 biomes
26. Universalism
Motifs: the informants studied were clustered (k-means).
Fig. 1. Glossary and motifs in the WCS. (A) The WCS color chart, arranged according to Munsell hue (horizontal) and value (vertical), with 10 neutral samples (leftmost column). (B) Concordance maps of the 11 color terms, in false color (chromatic: RED, GREEN, YELLOW/ORANGE, BLUE, PINK, PURPLE, BROWN, GRUE; achromatic: WHITE, GRAY, BLACK). (C) Concordance maps of the color-naming systems (motifs): columns indicate solutions for K = 1 to 9 clusters (K = 1 is the whole dataset); titles are motif names; Roman numerals indicate corresponding stages from Berlin and Kay (parentheses) or Kay and Maffi (brackets). At K = 4 (concordance maps enlarged for clarity), 614 informants used the Green/Blue motif, 1,063 used the Grue motif, 313 used the Gray motif, and 377 informants used the Dark motif.
Lindsey, Brown. PNAS, 2009
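The motif analysis above rests on k-means clustering of informants. Below is a toy sketch with a minimal, stdlib-only k-means and invented two-dimensional "informants"; the real analysis (Lindsey & Brown, 2009) clustered full WCS naming patterns.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over tuples of floats; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # recompute centers; keep the old center if a cluster empties
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                   else centers[j] for j, cl in enumerate(clusters)]
    return centers, clusters

# two well-separated "motifs" should be recovered:
informants = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
centers, clusters = kmeans(informants, 2)
```

In the WCS setting, each point would be an informant's naming vector over the 330 chips, and each recovered cluster a motif.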
27. Universalism
Motifs in color naming group together informants from different, unrelated language families.
The basic color-naming motifs GBP, Grue, Gray, and Dark are present in most languages.
[Figure: motif membership (Dark, Gray, Grue, GBP, Excluded) for each of the WCS languages, numbered 1-110; examples: 43. Gunu, Cameroon; 86. Shipibo, Peru; 88. Slavé, Canada; 103. Walpiri, Australia.]
Lindsey, Brown. PNAS, 2009
29. Modeling
the formation of categories in color space and the assignment of a name to a given category.
The spectrum and intensity of colors are presented as 1269 (or 330) Munsell chips.
Stimulus transformation: S(λ) ➡ {L, a, b}.
Centering units (reactive units):
z_j(x) = exp( -(1/2) Σ_{i=1..N} ((x_i − m_ij)/σ)² )
Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
30. Categorizing a perceptual stimulus
A color category is defined by an adaptive network:
y_k(x) = Σ_{j=1..J} w_j z_j(x)
(cognitive perceptrons)
Each agent holds a set of categories and, during the discrimination game, selects the category that responds most strongly to the stimulus: argmax(y(x)).
Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
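The category network above can be sketched as follows, assuming Lab-coordinate stimuli and the Gaussian reactive units of slide 29; the names and the width σ are our assumptions.

```python
import math

def z(x, m, sigma=10.0):
    """Gaussian reactive unit centered at m (sigma is an assumed width)."""
    return math.exp(-0.5 * sum(((a - b) / sigma) ** 2 for a, b in zip(x, m)))

def y(x, category):
    """Adaptive-network response: category is a list of (weight, center)."""
    return sum(w * z(x, m) for w, m in category)

def choose_category(x, categories):
    """Pick the name of the category responding most strongly to x."""
    return max(categories, key=lambda name: y(x, categories[name]))
```

An agent's repertoire is then just a dict of named (weight, center) lists, and perception is one `choose_category` call per stimulus.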
31. The discrimination game
It captures environmental functionalism.
Algorithm:
present a context O = {o1, ..., oN} together with a distinguished topic sample;
select the most strongly responding category C_S = argmax(y_c);
the discrimination succeeds if:
a category exists that returns its highest value for the distinguished sample (if not, the categories are modified), and
no other sample in the context scores equally high.
Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
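One round of the algorithm above, sketched with toy Lab-like data; the category-repair step on failure is omitted for brevity, and all names are ours.

```python
import math

def z(x, m, sigma=10.0):
    """Gaussian reactive unit (one-unit categories, for simplicity)."""
    return math.exp(-0.5 * sum(((a - b) / sigma) ** 2 for a, b in zip(x, m)))

def discrimination_game(categories, context, topic):
    """categories: dict name -> prototype. Returns (success, category name)."""
    best = max(categories, key=lambda name: z(topic, categories[name]))
    proto = categories[best]
    # success: the winning category singles out the topic in the context
    success = all(z(topic, proto) > z(other, proto)
                  for other in context if other != topic)
    return success, best
```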
32. The guessing game
The process of negotiating a shared name space within a population; it captures cultural and environmental influence. Two roles occur: speaker and hearer.
Algorithm:
present a context O = {o1, ..., oN} to the speaker and the hearer;
present the distinguished topic sample to the speaker;
the speaker plays the discrimination game: if it succeeds, the game continues; otherwise the guessing game is aborted;
on success, the winning category C_s is selected.
Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
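The speaker/hearer flow above can be sketched like this, with a toy lexicon of word-to-prototype mappings; all names and data are illustrative.

```python
import math

def z(x, m, sigma=10.0):
    return math.exp(-0.5 * sum(((a - b) / sigma) ** 2 for a, b in zip(x, m)))

def speak(lexicon, context, topic):
    """Speaker: run discrimination on own lexicon; utter the winning word."""
    word = max(lexicon, key=lambda w: z(topic, lexicon[w]))
    discriminated = all(z(topic, lexicon[word]) > z(o, lexicon[word])
                        for o in context if o != topic)
    return word if discriminated else None  # None: game aborted

def guess(lexicon, context, word):
    """Hearer: point at the sample best matching its meaning of `word`."""
    return max(context, key=lambda o: z(o, lexicon[word]))
```

The game succeeds when the hearer's guess equals the speaker's topic; in the full model, lexicon weights are then adjusted on both sides.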
33. Statistics describing the semiology
the degree to which concepts are shared across the population:
communicative success [green]
the ability to classify each of the presented colors:
discriminative success [blue]
the number of categories formed in the population
Luc Steels, Tony Belpaeme. Behavioral and Brain Sciences, 2005
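These statistics can be tracked as running success rates; the sliding-window bookkeeping below is our own sketch, with an arbitrary window length.

```python
from collections import deque

class GameStats:
    """Fractions of successful games over a sliding window of rounds."""
    def __init__(self, window=50):
        self.communicative = deque(maxlen=window)   # guessing-game outcomes
        self.discriminative = deque(maxlen=window)  # discrimination outcomes

    def record(self, communicated, discriminated):
        self.communicative.append(bool(communicated))
        self.discriminative.append(bool(discriminated))

    def communicative_success(self):
        c = self.communicative
        return sum(c) / len(c) if c else 0.0

    def discriminative_success(self):
        d = self.discriminative
        return sum(d) / len(d) if d else 0.0
```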
35. The semiotic triad
[Diagram: the semiotic triad - object, symbol, and concept, linked by a method.]
The semiotic triad binds together a symbol, an object, and a meaning that can be assigned to the object. The method, in turn, is a procedure that makes it possible to decide whether the meaning fits the object or not.
Sometimes the method is a restriction of the symbol's use to the objects with which it is associated. L. Steels
36. The semiotic triad
[Diagram: the semiotic triad - object, symbol, and concept, linked by a method.]
The method restricts the use of a symbol to the objects with which it is associated: for example, as a classifier, a percept, a template, or a recognition process that operates on sensorimotor data and decides whether an object fits the concept.
If we can define such a method, we say that the symbol is grounded through the process of perception. L. Steels
37. Semantic relations
[Diagram: three object-symbol-concept triads linked to one another.]
Semantic relations make it possible to move about and navigate between meanings, objects, and symbols:
objects occur in a context (in spatial or temporal relations),
symbols co-occur with other symbols,
meanings can stand in semantic relations to one another,
methods, too, can be linked. L. Steels
38. Semiotic networks
[Diagram: three object-symbol-concept triads linked to one another.]
A semiotic network is a set of connections between objects, symbols, meanings, and their implementations, i.e. methods. Every person builds and maintains such a network, which is modified, extended, and reorganized every time that person thinks, perceives, or interacts with the external world and with other people. L. Steels
39. Communication and adaptation
People navigate their semiotic networks in order to communicate successfully.
They travel through symbols in order to conceptualize the situation in which they find themselves.
Speaker and hearer must adapt, must align their communication systems at every level, during each act of communication between them.
Semiotic networks undergo continuous, progressive adaptation.
The semiotic landscape: the set of all the semiotic networks of an entire population of interacting persons, or agents.
“When a speaker wants to draw the attention of an addressee to an object, he can
use a concept whose method applies to the object, then choose the symbol
associated with this concept and render it in speech or some other medium. The
listener gets the symbol, uses his own vocabulary to retrieve the concept and hence
the method, and applies the method to decide which object might be intended.” L. Steels
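The quoted loop can be sketched directly: a vocabulary maps each symbol to a (concept, method) pair, where a method is a predicate deciding whether the concept fits an object. All names and data below are illustrative.

```python
def speak(vocabulary, target):
    """Speaker: return a symbol whose method applies to the target object."""
    for symbol, (concept, method) in vocabulary.items():
        if method(target):
            return symbol
    return None  # no grounded symbol available

def understand(vocabulary, symbol, candidates):
    """Hearer: retrieve the method for the symbol and apply it to decide
    which objects might be intended."""
    concept, method = vocabulary[symbol]
    return [obj for obj in candidates if method(obj)]

# illustrative shared vocabulary; methods are simple predicates on objects
vocab = {
    "red-word": ("RED", lambda o: o["hue"] < 30),
    "blue-word": ("BLUE", lambda o: 200 <= o["hue"] <= 260),
}
```

Communication succeeds when the hearer's surviving candidate set contains the speaker's intended object; in Steels' models, the two vocabularies need not be identical, only aligned well enough.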
40. The grounded symbol
Searle (1980): will a robot be able to cope with embodied concepts?
Having a body that enters into relations with the external world, that has a physical structure, senses and actuators, that processes incoming signals and recognizes patterns - all of this links reality to the world of symbols.
”Is it possible to build an artificial system that has a body, sensors and actuators, signal and image
processing and pattern recognition process, and information structures to store and use semiotic
networks, and uses all that for communicating about the world or representing information about the
world?” Searle, 1980
41. Representations and meanings
The human world of representations is rich; it serves many purposes at once.
How should we interpret, how should we map, the meanings that representations point to?
”Meaning and representation are different things. We need a task, an environment, and an
interaction between the agent and the environment which works towards an achievement
of the task in order to see the emergence of meaning.”
L. Steels
42. Symbol
• The chess player's intelligence:
- the mind as a system of symbols,
- level of explanation: the individual's abstract mental structures,
- the (pressing) need for symbol grounding.
• The cockroach's intelligence:
- dynamic attunement to the external world,
- non-representationalism,
- a science of coordination, building on synergetics,
- coordinated structures: within and between individuals.
J. Rączaszek-Leonardi
43. Symbol and dynamics
• Many theories of the 1950s and 1960s: information theories in biology:
- Von Neumann, Polanyi, Turing (?)
- Howard Pattee
• The necessity of the symbol: a transmissible structure that exerts a controlling function over the dynamics:
- Von Neumann '66: adaptive growth of complexity is impossible without self-description (self-reconstruction),
- Pattee: processes of control and measurement require "something other" than a description in terms of physical laws.
44. What next? Niels Bohr
Predicting is very difficult, especially about the future...
Editor's Notes
Thus, neurons are simulated in a "clock-driven" fashion whereas synapses are simulated in an "event-driven" fashion.
As a first step toward cognitive computation, an interesting question is whether one can simulate a mammalian-scale cortical model in near real-time on an existing computer system. What are the memory, computation, and communication costs of achieving such a simulation?
Memory: to achieve near real-time simulation, the state of all neurons and synapses must fit in the random-access memory of the system. Since synapses far outnumber neurons, the total available memory divided by the number of bytes per synapse limits the number of synapses that can be modeled. We need to store state for 448 billion synapses and 55 million neurons, the latter being negligible in comparison with the former.
Communication: let us assume that, on average, each neuron fires once a second. Each neuron connects to 8,000 other neurons, and hence each neuron would generate 8,000 spikes ("messages") per second. This amounts to a total of 448 billion messages per second.
Computation: let us assume that, on average, each neuron fires once a second. In this case each synapse would, on average, be activated twice: once when its pre-synaptic neuron fires and once when its post-synaptic neuron fires. This amounts to 896 billion synaptic updates per second. Let us assume that the state of each neuron is updated every millisecond. This amounts to 55 billion neuronal updates per second. Once again, synapses seem to dominate the computational cost.
The key observation is that synapses dominate all three costs!
Let us now take a state-of-the-art supercomputer, BlueGene/L, with 32,768 processors, 256 megabytes of memory per processor (a total of 8 terabytes), and 1.05 gigabytes per second of in/out communication bandwidth per node. To meet the above three constraints, if one can design data structures and algorithms that require no more than 16 bytes of storage per synapse, 175 Flops per synapse per second, and 66 bytes per spike message, then one can hope for a rat-scale, near real-time simulation. Can such a software infrastructure be put together? This is exactly the challenge that our paper addresses.
Specifically, we have designed and implemented a massively parallel cortical simulator, C2, designed to run on distributed-memory multiprocessors, that incorporates several algorithmic enhancements: (a) a computationally efficient way to simulate neurons in a clock-driven ("synchronous") and synapses in an event-driven ("asynchronous") fashion; (b) a memory-efficient representation to compactly represent the state of the simulation; (c) a communication-efficient way to minimize the number of messages sent, by aggregating them in several ways and by mapping message exchanges between processors onto judiciously chosen MPI primitives for synchronization. Furthermore, the simulator incorporated (a) carefully selected, computationally efficient models of phenomenological spiking neurons from the literature; (b) carefully selected models of spike-timing-dependent synaptic plasticity for synaptic updates; (c) axonal delays; (d) 80% excitatory neurons and 20% inhibitory neurons; and (e) a certain random graph of neuronal interconnectivity.
Izhikevich 2004:
v' = 0.04v^2 + 5v + 140 - u + I
u' = a(bv - u)
if v > 30 mV, then v <- c and u <- u + d
STDP model:
Causal: if a pre-synaptic neuron fires and then the post-synaptic neuron fires, the synaptic weight is increased (LTP).
Anti-causal: if a post-synaptic neuron fires and then the pre-synaptic neuron fires, the synaptic weight is decreased (LTD).
A local rule implementing Hebbian learning.
Specific stimulus: 10% of neurons stimulated with an "edge" every 1/2 second: spontaneous aperiodic bursty patterns emerge in the firing rates, and neuronal groups form chains of activation.
What aspects of the brain does the model include?
The model reproduces a number of physiological and anatomical features of the mammalian brain. The key functional elements of the brain, neurons and the connections between them, called synapses, are simulated using biologically derived models. The neuron models include such key functional features as input integration, spike generation, and firing-rate adaptation, while the simulated synapses reproduce the time- and voltage-dependent dynamics of four major synaptic channel types found in cortex. Furthermore, the synapses are plastic, meaning that the strength of connections between neurons can change according to certain rules, which many neuroscientists believe is crucial to learning and memory formation.
At an anatomical level, the model includes sections of cortex, a dense body of connected neurons where much of the brain's high-level processing occurs, as well as the thalamus, an important relay center that mediates communication to and from cortex. Much of the connectivity within the model follows a statistical map derived from the most detailed study to date of the circuitry of the cat cerebral cortex.
What do the simulations demonstrate?
We are able to observe activity in our model at many scales, ranging from global electrical activity levels, to activity levels in specific populations, to topographic activity dynamics, to individual neuronal membrane potentials. In these measurements, we have observed the model reproduce activity in cortex measured by neuroscientists using corresponding techniques: electroencephalography, local field potential recordings, optical imaging with voltage-sensitive dyes, and intracellular recordings. Specifically, we were able to deliver a stimulus to the model and then watch as it propagated within and between different populations of neurons. We found that this propagation showed a spatiotemporal pattern remarkably similar to what has been observed in experiments with real brains. In other simulations, we also observed oscillations between active and quiet periods, as is often observed in the brain during sleep or quiet waking. In all our simulations, we are able to simultaneously record from billions of individual model components, compared to cutting-edge neuroscience techniques that might allow simultaneous recording of a few hundred brain regions, thus providing us with an unprecedented picture of circuit dynamics.
What will it take to achieve human-scale cortical simulations?
Before discussing this question, we must agree upon the complexity of the neurons and synapses to be simulated. Let us fix these two as described in our SC07 paper. The human cortex has about 22 billion neurons, which is roughly a factor of 400 larger than our rat-scale model with its 55 million neurons. We used a BlueGene/L with 92 TF and 8 TB to carry out rat-scale simulations in near real-time. So, by naive extrapolation, one would require at least a machine with a computation capacity of 36.8 PF and a memory capacity of 3.2 PB. Furthermore, assuming that there are 8,000 synapses per neuron, that neurons fire at an average rate of 1 Hz, and that each spike message can be communicated in, say, 66 bytes, one would need an aggregate communication bandwidth of ~2 PBps. Thus, even at the given complexity of synapses and neurons that we have used, scaling cortical simulations to these levels will require tremendous advances along all three metrics: memory, communication, and computation. Furthermore, power consumption and space requirements will become a major technological obstacle that must be overcome. Finally, as the complexity of the synapses and neurons is increased many-fold, even more resources would be required. Inevitably, along with the advances in hardware, significant further innovation in software infrastructure would be required to effectively use the available hardware resources.
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
Thus, neurons are simulated in a &#x201C;clock-driven&#x201D; fashion whereas synapses are simulated in an &#x201C;event-driven&#x201D; fashion.\n&#xA0;\n As a first step toward cognitive computation, an interesting question is whether one can simulate a mammalian-scale cortical model in near real-time on an existing computer system? What are the memory, computation, and communication costs for achieving such a simulation? \n Memory: To achieve near real-time simulation times, the state of all neurons and synapses must fit in the random access memory of the system. Since synapses far outnumber the neurons, the total available memory divided by the number of bytes per synapse limits the number of synapses that can be modeled. We need to store state for 448 billion synapses and 55 million neurons where later being negligible in comparison to the former. &#xA0;Communication: Let us assume that, on an average, each neuron fires once a second. Each neuron connects to 8,000 other neurons, and, hence, each neuron would generate 8,000 spikes (&#x201C;messages&#x2019;) per second. This amounts to a total of 448 billion messages per second. \n Computation: Let us assume that, on an average, each neuron fires once a second. In this case, on an average, each synapse would be activated twice&#x2014;once when its pre-synaptic neuron fires and once when its post-synaptic neuron fires. This amounts to 896 billion synaptic updates per second. Let us assume that the state of each neuron is updated every millisecond. This amounts to 55 billion neuronal updates per second. Once again, synapses seem to dominate the computational cost. \n The key observation is that synapses dominate all the three costs!\n Let us now take a state-of-the-art supercomputer BlueGene/L with 32,768 processors, 256 megabytes of memory per processor (a total of 8 terabytes), and 1.05 gigabytes per second of in/out communication bandwidth per node. 
To meet the above three constraints, if one can design data structure and algorithms that require no more than 16 byes of storage per synapse, 175 Flops per synapse per second, and 66 bytes per spike message, then one can hope for a rat-scale, near real-time simulation. Can such a software infrastructure be put together? \n This is exactly the challenge that our paper addresses. \n Specifically, we have designed and implemented a massively parallel cortical simulator, C2, designed to run on distributed memory multiprocessors that incorporates several algorithmic enhancements: (a) a computationally efficient way to simulate neurons in a clock-driven ("synchronous") and synapses in an event-driven("asynchronous") fashion; (b) a memory efficient representation to compactly represent the state of the simulation; (c) a communication efficient way to minimize the number of messages sent by aggregating them in several ways and by mapping message exchanges between processors onto judiciously chosen MPI primitives for synchronization.\n Furthermore, the simulator incorporated (a) carefully selected computationally efficient models of phenomenological spiking neurons from the literature; (b) carefully selected models of spike-timing dependent synaptic plasticity for synaptic updates; (c) axonal delays; (d) 80% excitatory neurons and 20% inhibitory neurons; and (e) a certain random graph of neuronal interconnectivity. \n&#xA0;\n
Thus, neurons are simulated in a &#x201C;clock-driven&#x201D; fashion whereas synapses are simulated in an &#x201C;event-driven&#x201D; fashion.\n&#xA0;\n As a first step toward cognitive computation, an interesting question is whether one can simulate a mammalian-scale cortical model in near real-time on an existing computer system? What are the memory, computation, and communication costs for achieving such a simulation? \n Memory: To achieve near real-time simulation times, the state of all neurons and synapses must fit in the random access memory of the system. Since synapses far outnumber the neurons, the total available memory divided by the number of bytes per synapse limits the number of synapses that can be modeled. We need to store state for 448 billion synapses and 55 million neurons where later being negligible in comparison to the former. &#xA0;Communication: Let us assume that, on an average, each neuron fires once a second. Each neuron connects to 8,000 other neurons, and, hence, each neuron would generate 8,000 spikes (&#x201C;messages&#x2019;) per second. This amounts to a total of 448 billion messages per second. \n Computation: Let us assume that, on an average, each neuron fires once a second. In this case, on an average, each synapse would be activated twice&#x2014;once when its pre-synaptic neuron fires and once when its post-synaptic neuron fires. This amounts to 896 billion synaptic updates per second. Let us assume that the state of each neuron is updated every millisecond. This amounts to 55 billion neuronal updates per second. Once again, synapses seem to dominate the computational cost. \n The key observation is that synapses dominate all the three costs!\n Let us now take a state-of-the-art supercomputer BlueGene/L with 32,768 processors, 256 megabytes of memory per processor (a total of 8 terabytes), and 1.05 gigabytes per second of in/out communication bandwidth per node. 
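The clock-driven/event-driven split described above can be illustrated in a few lines. This is a toy sketch, not the authors' C2 code: the leaky-integrate-and-fire neuron, the constant drive applied to neuron 0, and all numeric parameters are assumptions made only for illustration.

```python
import heapq

class Neuron:
    """Toy leaky-integrate-and-fire unit (illustrative, not C2's model)."""
    def __init__(self, threshold=1.0):
        self.v = 0.0              # membrane state
        self.threshold = threshold

    def step(self, dt, current):
        """Clock-driven update: called for every neuron on every tick."""
        self.v += dt * (current - self.v)   # leaky integration toward input
        if self.v >= self.threshold:
            self.v = 0.0
            return True           # spiked
        return False

def simulate(neurons, targets, delays, weights, ticks, drive=1.5, dt=1.0):
    """targets[i], delays[i], weights[i] describe neuron i's outgoing synapses."""
    events = []                   # min-heap of (delivery_tick, target, weight)
    inputs = [0.0] * len(neurons)
    spikes = []                   # recorded (tick, neuron) firings
    for t in range(ticks):
        # Event-driven side: a synapse is touched only when a spike arrives.
        while events and events[0][0] <= t:
            _, tgt, w = heapq.heappop(events)
            inputs[tgt] += w
        # Clock-driven side: every neuron advances state on every tick.
        for i, n in enumerate(neurons):
            ext = drive if i == 0 else 0.0   # assumption: drive neuron 0 only
            if n.step(dt, inputs[i] + ext):
                spikes.append((t, i))
                for tgt, d, w in zip(targets[i], delays[i], weights[i]):
                    heapq.heappush(events, (t + d, tgt, w))  # axonal delay d
        inputs = [0.0] * len(neurons)
    return spikes
```

With two neurons where neuron 0 projects to neuron 1 over a 2-tick axonal delay, neuron 0 fires every tick from the constant drive, while neuron 1 stays silent until the delayed spike is delivered at tick 2; the event heap is the only place the synapse is ever visited.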
What is the goal of the DARPA SyNAPSE project?

The goal of the DARPA SyNAPSE program is to create new electronics hardware and architecture that can understand, adapt, and respond to an informative environment in ways that extend traditional computation to include fundamentally different capabilities found in biological brains.

Who is on your SyNAPSE team?

Stanford University: Brian A. Wandell, H.-S. Philip Wong
Cornell University: Rajit Manohar
Columbia University Medical Center: Stefano Fusi
University of Wisconsin-Madison: Giulio Tononi
University of California-Merced: Christopher Kello
IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Stuart Parkin, Bipin Rajendran, Raghavendra Singh