Visual Population Codes: Toward a Common Multivariate Framework for Cell Recording and Functional Imaging
Edited by Nikolaus Kriegeskorte and Gabriel Kreiman
Contents
Series Foreword ix
Preface xi
Introduction: A Guided Tour through the Book 1
THEORY AND EXPERIMENT 21
1 Grandmother Cells and Distributed Representations 23
Simon J. Thorpe
2 Strategies for Finding Neural Codes 53
Sheila Nirenberg
3 Multineuron Representations of Visual Attention 71
Jasper Poort, Arezoo Pooresmaeili, and Pieter R. Roelfsema
4 Decoding Early Visual Representations from fMRI Ensemble
Responses 101
Yukiyasu Kamitani
5 Understanding Visual Representation by Developing Receptive-Field
Models 133
Kendrick N. Kay
6 System Identification, Encoding Models, and Decoding Models: A Powerful
New Approach to fMRI Research 163
Jack L. Gallant, Shinji Nishimoto, Thomas Naselaris, and Michael C. K. Wu
7 Population Coding of Object Contour Shape in V4 and Posterior
Inferotemporal Cortex 189
Anitha Pasupathy and Scott L. Brincat
8 Measuring Representational Distances: The Spike-Train Metrics
Approach 213
Conor Houghton and Jonathan D. Victor
9 The Role of Categories, Features, and Learning for the Representation of
Visual Object Similarity in the Human Brain 245
Hans P. Op de Beeck
10 Ultrafast Decoding from Cells in the Macaque Monkey 275
Chou P. Hung and James J. DiCarlo
11 Representational Similarity Analysis of Object Population Codes in
Humans, Monkeys, and Models 307
Nikolaus Kriegeskorte and Marieke Mur
12 Three Virtues of Similarity-Based Multivariate Pattern Analysis: An
Example from the Human Object Vision Pathway 335
Andrew C. Connolly, M. Ida Gobbini, and James V. Haxby
13 Investigating High-Level Visual Representations: Objects, Bodies, and
Scenes 357
Dwight J. Kravitz, Annie W.-Y. Chan, and Chris I. Baker
14 To Err Is Human: Correlating fMRI Decoding and Behavioral
Errors to Probe the Neural Representation of Natural Scene
Categories 391
Dirk B. Walther, Diane M. Beck, and Li Fei-Fei
15 Decoding Visual Consciousness from Human Brain Signals 417
John-Dylan Haynes
16 Probabilistic Codes and Hierarchical Inference in the Brain 441
Karl Friston
II BACKGROUND AND METHODS 475
17 Introduction to the Anatomy and Function of Visual Cortex 477
Kendra S. Burbank and Gabriel Kreiman
18 Introduction to Statistical Learning and Pattern Classification 497
Jed Singer and Gabriel Kreiman
19 Tutorial on Pattern Classification in Cell Recording 517
Ethan Meyers and Gabriel Kreiman
20 Tutorial on Pattern Classification in Functional Imaging 539
Marieke Mur and Nikolaus Kriegeskorte
21 Information-Theoretic Approaches to Pattern Analysis 565
Stefano Panzeri and Robin A. A. Ince
22 Local Field Potentials, BOLD, and Spiking Activity: Relationships and
Physiological Mechanisms 599
Philipp Berens, Nikos K. Logothetis, and Andreas S. Tolias
Contributors 625
Index 629
Series Foreword
Computational neuroscience is an approach to understanding the development and
function of nervous systems at many different structural scales, including the bio-
physical, the circuit, and the systems levels. Methods include theoretical analysis and
modeling of neurons, networks, and brain systems and are complementary to empiri-
cal techniques in neuroscience. Areas and topics of particular interest to this book
series include computational mechanisms in neurons, analysis of signal processing
in neural circuits, representation of sensory information, systems models of senso-
rimotor integration, computational approaches to biological motor control, and
models of learning and memory. Further topics of interest include the intersection
of computational neuroscience with engineering, from representation and dynamics
to observation and control.
Terrence J. Sejnowski
Tomaso Poggio
Preface
This is a book about visual information processing in primate brains. As in other
biological networks, the function of the visual system emerges from the interaction
of the system’s components. Such inherently interactive phenomena cannot be
understood by analyzing the components in isolation. In neuronal coding, the idea
that the whole is more than the sum of its parts is exemplified by the concept of
“population code,” the idea that visual content is represented, at each stage of the
visual hierarchy, by the pattern of activity across the local population of neurons.
Although this concept appeared decades ago in neurophysiological studies of brain
function, the dominant approach to measurement and analysis has been to focus on
one cell at a time and to characterize its selectivity, receptive field, and other proper-
ties. A similar approach has been followed in the context of functional imaging,
albeit at a much coarser spatial scale. Although functional imaging measures complex
spatiotemporal activity patterns, most studies have focused on regional-average
activation.
The theoretical concept of the population code motivates multichannel measure-
ment and multivariate analysis of activity patterns. Population analyses have a long
history in vision and other fields. Notable examples include the decoding of motor
commands from population activity in motor cortex and parietal cortex, the decod-
ing of a rodent’s position from the population activity of hippocampal neurons, and
analyses of the population coding of olfactory information. Despite these examples,
the dominant approach to understanding neuronal representations has been uni-
variate analysis.
Over the past decade, the multivariate approach has gained significant momen-
tum, especially in the field of vision. Many researchers now analyze the information
in complex activity patterns across many measurement channels. Functional imaging
and cell recording measure brain activity in fundamentally different ways, but they
now use similar theoretical concepts and mathematical tools in their modeling and
analysis. Results indicate that the interactions between sites do matter to neuronal
coding.
At the micro-scale, the study of single-neuron responses continues to produce
valuable insights. And at the macro-scale, classical brain mapping with its focus on
regional-average activation continues to define the big picture of brain function. But
cell recording and functional imaging are beginning to close the gap of spatial scales
between them and invade the meso-scale, where regional population codes reside.
In terms of measurement, high-resolution imaging is invading this intermediate
scale by providing sub-millimeter resolution, and multi-electrode neuronal record-
ings promise to give us a richer picture of regional single-cell activity. In terms of
analysis, a common multivariate framework for analyzing population codes is
emerging. This framework promises to help bridge the divide between cell recording
and functional imaging and between animal and human studies. Moreover, it prom-
ises to allow us to test computational network models by integrating them in the
analysis of brain-activity data.
The purpose of this book is to present recent advances in understanding of visual
population codes afforded by the multivariate framework, to describe the current
state of the art in multivariate pattern-information analysis, and to take a step
toward a unified perspective for cell recording and functional imaging. The book
should serve as an introduction, overview, and reference for scientists and graduate
students across disciplines who are interested in human and primate vision and,
more generally, in understanding how the brain represents information.
The first part of the book, "Theory and Experiment," is coarsely organized accord-
ing to the flow of visual information from the retina to the highest stages of ventral-
stream processing. Most of the chapters combine a review of theory and published
empirical findings with methodological considerations. The second part, “Back-
ground and Methods,” is intended to provide readers from different disciplines with
essential background on vision, different techniques for measuring brain activity
(and their relationships), and mathematical analysis methods. This preface is fol-
lowed by an introduction (“A Guided Tour through the Book”), which explains
some basic concepts, summarizes each chapter, and clarifies the chapters’ relation-
ships. Chapter abstracts provide a further level of detail to allow a quick grasp of
the information.
We have roughly organized the book according to the stages of visual processing,
interspersing animal cell recording and human imaging studies, so as to emphasize
the commonality of subject matter between these still somewhat separated fields.
Readers may discover that the perspective of functional imaging has a lot to con-
tribute to that of cell recording, and vice versa. They may also find that the way
questions are framed for early visual areas may help rethink the challenges of
understanding higher-level representations, and vice versa. As mentioned above,
multivariate analyses have provided important insights in other domains beyond
vision. We encourage the reader to examine this rich literature, and we hope that
the multivariate framework for analyzing population codes will benefit from an
exchange between vision and other fields.
In order to understand brain function, we need to develop theory, experiment,
and analysis conjointly, in a way that embraces the parallel and interactive nature
of cortical computation. The emerging multivariate framework is an important step
in that direction, helping us make sense of ever richer spatiotemporal brain-activity
data and enabling us to see the forest, too, not just the trees.
Several people have helped throughout this effort. We would like to thank
Christina Chung and Jane Tingley, who helped with several aspects of the book. We
also acknowledge the patience and wisdom of the people at MIT Press as we took
our initial steps in making this work a reality. The work toward this book
was made possible through funding from the National Science Foundation (0954570;
BCS-1010109), the National Institutes of Health (R21 EY019710; X02 OD005393;
R21 NS070250), the Whitehall Foundation, the Klingenstein Fund, the Massachusetts
Lions Eye Research Foundation, and the UK Medical Research Council.
Introduction: A Guided Tour through the Book
This chapter gives an overview of the content of the book. We follow the chapters
in the sequence in which they appear, summarize key findings and theoretical argu-
ments, and clarify the relationships between the chapters. Along the way, we explain
some basic issues of overarching importance.
The book is divided into two parts: “Theory and Experiment” and “Background
and Methods.” The first part describes recent primary research findings about the
visual system, along with cutting-edge theory and methodological considerations.
The second part provides some of the more general neuroscientific and mathemati-
cal background needed for understanding the first part.
Although each chapter is independent, the first part, “Theory and Experiment,”
is designed to be read in sequence. The sequence roughly follows the stages of
ventral-stream visual processing, which forms the focus of the book. Within this
rough order, we placed closely related chapters together. We purposely interspersed
theoretical and experimental chapters, and, within the latter, animal electrode
recording and human fMRI studies. An overview of the chapters is given in figure
I.1 and table I.1.
Localist and Distributed Codes
In chapter 1, Simon J. Thorpe reviews the debate about localist versus distributed
neuronal coding in the context of recent experimental evidence. Early findings of
neuronal selectivity to simple features at low levels of the visual hierarchy and to
more complex features at higher levels suggested, by extrapolation, that there might
be neurons that respond selectively to particular objects, such as one’s grandmother.
On a continuum of possible coding schemes from localist to distributed, this “grand-
mother cell” theory forms the localist pole. A code of grandmother cells could still
have multiple neurons devoted to each object; the key feature is the high selectivity
of the neurons. A grandmother-cell code is explicit in that no further processing is
required to read out the code and conclude that a particular object is present. At
the other end of the continuum is a distributed code, in which each neuron will
respond to many different objects; thus, there is no single neuron that unequivocally
indicates the presence of a particular object. In a distributed code, the information
is in the combination of active neurons.
For a population of n neurons, a localist single-neuron code can represent no
more than n distinct objects, one for each neuron, and fewer if multiple neurons
redundantly code for the same object, as is commonly assumed.
Figure I.1
Chapter overview. Along the vertical axis (arrow on the left), the chapters have been arranged roughly
according to the stage of processing they focus on. Horizontally, chapters with a stronger focus on a
particular stage of processing are closer to the axis on the left. Where possible, chapters related by other
criteria are grouped together. For example, chapters 5 and 6 use the method of voxel-receptive-field
modeling, while chapters 9 and 11–14 use the method of representational similarity analysis. Neuron and
voxel icons label chapters using neuronal recordings and fMRI, respectively. Chapters focusing on theory,
experiment, or methods have been visually indicated (see legend), with methods chapters marked by a
gray underlay and experimental chapters with a strong methodological component marked by a partial
gray underlay.
Table I.1
Chapter content overview

Ch. | First Author, Last Author | Content Type | Regions | Brain-Activity Measurement | Content
1 | Thorpe | Theory, model, exp. | Retina-IT | Electrode | Localist vs. distributed coding; spike-timing-dependent coding; plasticity
2 | Nirenberg | Theory, exp., methods | Retina | In vitro recording | Ruling out retinal codes by comparing information between code and behavior
3 | Poort, Roelfsema | Exp. | V1 | Electrode | Decoding stimulus features and attentional states from V1 neurons
4 | Kamitani | Exp., methods | V1-3, MT | fMRI | Decoding human early visual population codes and stimulus reconstruction
5 | Kay | Methods, model, exp. | V1-4 | fMRI | Voxel-receptive-field modeling for identification of natural images
6 | Gallant, Wu | Methods, model, exp. | V1 | fMRI | Methodological framework for voxel-receptive-field modeling
7 | Pasupathy, Brincat | Exp. | V4, pIT | Electrode | Shape-contour representation by convex/concave curvature-feature combinations
8 | Houghton, Victor | Theory, methods, exp. | — | Electrode | Measuring representational dissimilarity by spike-train edit distances
9 | Op de Beeck | Exp., theory | IT | fMRI | Category modules vs. feature map; influences of task and learning
10 | Hung, DiCarlo | Exp., theory | IT | Electrode | Decoding object category and identity at small latencies after stimulus onset; invariances
11 | Kriegeskorte, Mur | Exp., theory, model, methods | IT | fMRI, electrode | Categoricality of object representation, comparing human and monkey; methods
12 | Connolly, Haxby | Exp., theory, methods | IT | fMRI | Transformation of similarity across stages; advantages of pattern similarity analyses
13 | Kravitz, Baker | Exp., theory | IT | fMRI | Object, body, and scene representations; position dependence
14 | Walther, Fei-Fei | Exp., theory, methods | IT | fMRI | Distributed scene representations; decoding confusions predict behavioral confusions
A distributed code
can use combinations of neurons and code for a vast number of different objects
(for binary responses, for example, there are 2^n distinct activity patterns). If the
patterns used for representing objects are randomly chosen among the 2^n combinations,
about half of the neurons will respond to any given object. A distributed code can
also represent the stimuli with some redundancy, making it robust to damage to
particular neurons. Moreover, it can represent the objects in terms of sensory or
semantic properties, thus placing the objects in a multidimensional abstract space
that reflects their relationships. Such an abstract space might emphasize behavior-
ally relevant similarities and differences in a graded or categorical manner. Although
the signals indicating the presence of a particular object are distributed, the code
may still be considered “explicit” if readout takes just a single step—for example, a
downstream neuron that computes a linear combination of the neuronal population.
(Such a downstream neuron would be a localist neuron.)
Table I.1 (continued)

Ch. | First Author, Last Author | Content Type | Regions | Brain-Activity Measurement | Content
15 | Haynes | Theory, methods | LGN-IT | fMRI | Decoding consciousness; uni- vs. multivariate neural correlates of consciousness
16 | Friston | Theory, model, exp. | Retina-IT | fMRI | Visual system as hierarchical model for recurrent Bayesian inference and learning
17 | Burbank, Kreiman | Theory tutorial | Retina-IT | — | Essentials of visual processing across stages of the visual hierarchy; dorsal/ventral stream
18 | Singer, Kreiman | Methods tutorial | — | — | Introduction to statistical learning theory and pattern classification
19 | Meyers, Kreiman | Methods tutorial | — | Electrode | Step-by-step tutorial on pattern classification for neural data
20 | Mur, Kriegeskorte | Methods tutorial | — | fMRI | Step-by-step tutorial on pattern classification for fMRI data
21 | Panzeri, Ince | Theory, methods | — | Electrode, fMRI | Information-theoretic analysis of neuronal population codes
22 | Berens, Tolias | Exp., methods | — | Electrode, fMRI | Relationship between spikes, local field potentials, and fMRI
Note that what is called localist and distributed is fundamentally in the eye of the
beholder, as it depends on the way the researcher thinks of the information to be
represented. For example, consider the case of two neurons that encode the
two-dimensional space of different jets of water. One neuron codes the amount of
water per unit of time; the other the temperature of the water. A researcher
who thinks of the space in terms of amount per unit of time and temperature will
conclude that the code is localist. But a researcher who thinks of jets of water in
terms of the amounts of cold and hot water per unit of time will conclude that the
code is distributed. In practice, we tend to think of a code as localist if we can
characterize each neuron’s preferences in very simple terms; we think of the
code as distributed if the description of the preference of a single neuron is complex
and doesn’t correspond to any concepts for describing the content that appear
natural to us.
The “grandmother cell” theory did not initially have any direct empirical support.
Findings of “grandmother” (or similarly highly selective) neurons were elusive. The
failure to find such neurons, of course, doesn’t prove that they don’t exist. The idea
of grandmother cells has also been criticized on theoretical grounds for failing to
exploit the combinatorics. This led to a preference for more distributed coding
schemes among many theorists. Indeed, distributed codes and multivariate analysis
of the information they carry is a central theme of this book.
Sparse Distributed Codes
Despite the advantages of distributed codes, the appeal of highly selective single
cells is not merely in the eye of the electrophysiologist who happens to record one
cell at a time with a single electrode. The reason why more of the page you are
reading is white than black may be the cost of ink. Similarly, the metabolic cost of
neuronal activity creates an incentive for a code that is sparser (i.e., fewer cells
responding to a particular object due to each cell’s greater selectivity) than one that
fully exploits the combinatorics. On the continuum between localist and distributed,
the concept of a sparse code has emerged as a compromise that may best combine
the advantages of both schemes. In a sparse code, few neurons respond to any given
stimulus. And, conversely, few stimuli drive any given neuron.
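To put rough numbers on the capacity argument (an illustrative calculation of our own, not from the book), the following snippet counts the distinct activity patterns available to a localist code, a fully distributed binary code, and a sparse code in which exactly k of n binary neurons fire; n and k are arbitrary example values.

```python
from math import comb

n = 100        # number of binary neurons (illustrative)
k = 5          # number of neurons active per stimulus in the sparse code

localist = n                 # one dedicated neuron per object: at most n objects
distributed = 2 ** n         # any subset of active neurons can stand for an object
sparse = comb(n, k)          # k-out-of-n patterns: far more than n, far fewer than 2^n

print(f"localist capacity:          {localist}")
print(f"fully distributed capacity: {float(distributed):.2e}")
print(f"sparse capacity (k = {k}):    {sparse:,}")
```

Even a very sparse code thus retains combinatorial capacity while keeping metabolic cost and interference low, which is the compromise described above.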
It seems likely that neurophysiological recordings have been biased toward
describing neurons that fire more rapidly and less selectively, making them easier
to find while looking for responses. Consistent with this notion, unbiased neuro-
physiological recordings using electrode arrays tend to report high selectivities,
suggesting sparse representations, in a variety of systems including the songbird
vocal center, the mouse auditory cortex, and the human hippocampus.
Thorpe discusses additional arguments in favor of sparse coding. More recent
evidence from neurophysiological recordings in the human medial temporal lobe
suggests that there are neurons responding selectively to particular complex objects,
for example, to Jennifer Aniston. Interestingly, the "Jennifer-Aniston cell" responded
not just to one image, but to several images of the actress and even to the visual
presentation of her name in writing. The cell did not respond to any other stimuli
that the researchers tried. However, the relatively small number of stimuli and
neurons that can be examined in such experiments (on the order of hundreds) sug-
gests that neurons of this type might well respond to multiple particular objects.
The “Jennifer-Aniston cell,” then, might be more promiscuous than its exclusive
preference for the actress among the sampled set of stimuli would suggest. Thorpe
(citing Rafi Malach) refers to this as the “totem-pole cell” theory, where a cell has
multiple distinct preferences like the faces on a totem pole.
It is important to note that descriptions like “Jennifer-Aniston cell” or “totem-
pole cell” are likely to be caricatures that oversimplify the nature of these neurons.
The underlying computations are more complex and much less well understood
than those of early visual neurons.
In a distributed but sparse code, different objects are represented by largely dis-
joint sets of cells. This may render the code robust to interference between objects.
Interference of multiple simultaneously present objects (i.e., the superposition of
their representations) could create ambiguity in a maximally distributed code. Inter-
ference could also erase memories: If each neuron is activated by many different
objects, then spike-timing-dependent plasticity might wash away a memory that is
not reactivated over a long time. Highly selective neurons, Thorpe argues, could
maintain a memory over decades without the need for reactivation. Their high selec-
tivity would protect them from interference. He suggests that the brain might
contain neuronal “dark matter,” that is, neurons so selective that they may not fire
for years and are virtually impossible to elicit a response from in a neurophysiologi-
cal experiment.
Sampling Limitations: Few Stimuli, Few Response Channels
With current techniques, our chances are slim to activate neuronal “dark matter”
or to ever find the other loves of the “Jennifer-Aniston cell.” This reminds us of a
basic challenge for our field: our limited ability to sample brain responses to visual
stimuli. High-resolution imaging and advances in multi-electrode array recording
have greatly increased the amount of information we can acquire about brain-
activity patterns. However, our measurements will not fully capture the information
present in neuronal activity patterns in the foreseeable future. The subsample we
take always consists in a tiny proportion of the information that would be required
to fully describe the spatiotemporal activity pattern in a given brain region. Elec-
trode recording and fMRI tap into population activity in fundamentally different
ways (which we discuss further at the end of this overview). fMRI gives us a tem-
porally smoothed and spatially strongly blurred (and locally distorted) depiction of
activity (i.e., the hemodynamic response), with a single voxel reflecting the average
activity across hundreds of thousands of neurons (and possibly other cell types).
Neuronal recording gives us spatiotemporally precise information, but only for a
vanishingly small subset of the neurons in the region of interest (and possibly biased
toward certain neuronal types over others). In terms of information rates, fMRI and
electrode recording are similarly coarse: an fMRI acquisition might provide us with,
say, 100,000 channels sampled once per second, and an electrode array can record
from, say, 100 channels sampled 1,000 times per second.
We subsample not only the response space but also the stimulus space. Typical
studies only present hundreds of stimuli (give or take an order of magnitude). In
fMRI, the stimuli are often grouped into just a handful of categories; and only
category-average response patterns are analyzed. However, to characterize the
high-dimensional continuous space of images, a much larger number of stimuli is
needed. Consider a digital grayscale image defined by 64 × 64 pixels (4,096 pixels)
with intensities ranging from 0 to 255 (a pretty small image by today's standards).
The number of possible such images is huge: 256^4096 (on the order of 10^10,000). The more relevant
subset of "natural" images is much smaller, but this subset is still huge and ill defined.
To complicate matters, the concept of “visual object” is inherently vague and implies
the prior theoretical assumption that scenes are somehow parsed into constituent
objects.
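As a quick check of the image count mentioned above (our own back-of-the-envelope arithmetic, not the book's), the number of 64 × 64 eight-bit grayscale images can be expressed as a power of ten:

```python
import math

pixels = 64 * 64     # 4,096 pixels
levels = 256         # 8-bit intensities, 0-255

# number of images = levels ** pixels; report its order of magnitude
log10_count = pixels * math.log10(levels)
print(f"256^{pixels} is about 10^{log10_count:.0f}")
# about 10^9,864, i.e., on the order of 10^10,000 as stated in the text
```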
Repeated presentations of the same stimulus sample help distinguish signal from
noise in the responses. Noise inevitably corrupts our data to some degree. The
number of responses sampled limits the complexity of the models we can fit to the
data. A model that is realistically complex, given what we know about the brain, is
often unrealistic to fit, given the amount of data we have. To fit such a model would
be to pretend that the data provide more information than they do, and generaliza-
tion of our predictions to new data sets would suffer (see discussion in chapters 18
and 19 about bias versus variance). Both subsampling of the response pattern and
limited model complexity cause us to underestimate the stimulus information
present in a brain region’s activity patterns. Our estimates are therefore usually
lower bounds on the information actually present.
Retina: Rate Code Ruled Out
Sheila Nirenberg describes an interesting exception to the rule of lower bounds on
activity-pattern information (chapter 2). She describes a study in which an upper
bound could be estimated. Neuronal recordings performed in vitro captured the
continuous activity of the entire retinal population representing the stimulus. Niren-
berg and colleagues then tested different hypothetical codes, each of which was
based on a different set of features of the spike trains (thus retaining a different
subset of the total information). Because the recordings arguably captured the full
population information, any code that retained less information than present in the
animal’s behavior (as assessed in vivo) could be ruled out. Spike-rate and spike-
timing codes did not have all the information reflected in behavior, whereas a
temporal-correlation code did the trick.
Unfortunately, studies of cortical visual population codes are faced with a more
complicated situation, where our limited ability to measure the activity pattern (a
small sample of neurons measured or voxels that blur the pattern) is compounded
by multiple parallel pathways. For example, current technology does not allow us
to record from all the neurons in V1 that respond to a particular stimulus. Moreover,
if a given hypothetical code (e.g., a rate code) suggested the absence in V1 of stimu-
lus information reflected in behavior, the code could still not be ruled out, because
the information might enter the cortex by another route, bypassing V1. The other
studies reviewed in this book, therefore, cannot rule out codes by Nirenberg’s rigor-
ous method. When population activity is subsampled, absence of evidence for par-
ticular information is not evidence of absence of this information. The focus, then,
is on the positive results, that is, the information that can be shown to be present.
Early Visual Cortex: Stimulus Decoding and Reconstruction
In chapter 3, Jasper Poort, Arezoo Pooresmaeili, and Pieter R. Roelfsema describe
a study showing that physical stimulus features as well as attentional states can be
successfully decoded from multiple neurons in monkey V1. They find that stimulus
features and attentional states are reflected in separate sets of neurons, demonstrat-
ing that V1 is not just a low-level stimulus-driven representation. The results of Poort
and colleagues illustrate a simple synergistic effect of multiple neurons that even
linear decoders can benefit from: noise cancelation. Neuron A may not respond to
a particular stimulus feature and carry no information about that feature by itself.
However, if its noise fluctuations are correlated with the noise of another neuron
B which does respond to the feature, then subtracting the activity of A from B (with
a suitable weight) can reduce the noise in B and allow better decoding. Such noise
cancelation is automatically achieved with linear decoders, such as the Fisher linear
discriminant. Although the decoding is based on a linear combination of the neurons,
the information in the ensemble of neurons does not simply add up across neurons
and cannot be fully appreciated by considering the neurons one by one.
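The noise-cancelation effect is easy to reproduce in simulation. The sketch below is our own illustration, not taken from the chapter: it creates a tuned neuron B and an untuned neuron A whose noise is correlated with B's, and a Fisher-style linear discriminant decodes the stimulus better from both neurons than from B alone. All numbers are arbitrary.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 400
stim = rng.integers(0, 2, n_trials)              # two stimulus conditions

shared = rng.normal(0, 1.0, n_trials)            # noise common to both neurons
neuron_b = 0.8 * stim + shared + rng.normal(0, 0.3, n_trials)   # tuned to the stimulus
neuron_a = shared + rng.normal(0, 0.3, n_trials)                # carries no signal alone

lda = LinearDiscriminantAnalysis()
acc_b = cross_val_score(lda, neuron_b.reshape(-1, 1), stim, cv=5).mean()
acc_ab = cross_val_score(lda, np.column_stack([neuron_a, neuron_b]), stim, cv=5).mean()

print(f"decoding from B alone: {acc_b:.2f}")
print(f"decoding from A and B: {acc_ab:.2f}   # the decoder subtracts the shared noise")
```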
Like Poort and colleagues, Yukiyasu Kamitani (chapter 4) describes studies
decoding physical stimulus properties and attentional states from early visual cortex.
However, Kamitani’s studies use fMRI in humans to analyze the information in
visual areas V1–4 and MT+. All these areas allowed significant decoding of motion
direction. Grating orientation information, by contrast, was strongest in V1 and then
gradually diminished in V2–4; it was not significant in MT+. Beyond stimulus fea-
tures, Kamitani was able to decode which of two superimposed gratings a subject
was paying attention to.
These findings are roughly consistent with results from monkey electrode record-
ings. Their generalization to human fMRI is significant because it was not previously
thought that fMRI might be sensitive to fine-grained neuronal patterns, such as V1
orientation columns. The decodability of grating orientation from V1 voxel patterns
is all the more surprising because Kamitani did not use high-resolution fMRI, but
more standard (3 mm)^3 voxels. The chapter discusses a possible explanation for the
apparent “hyperacuity” of fMRI: Each voxel may average across neurons preferring
all orientations, but that does not mean that all orientations are exactly equally
represented in the sample. If a slight bias in each voxel carries some information,
then pattern analysis can recover it by combining the evidence across multiple
voxels.
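A small simulation (again our own sketch, with made-up numbers) illustrates this "hyperacuity" argument: each simulated voxel has only a slight orientation bias buried in noise, so any single voxel decodes near chance, but a classifier pooling all voxels performs clearly above chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 240, 100
orientation = rng.integers(0, 2, n_trials)             # two grating orientations

bias = rng.normal(0, 0.15, n_voxels)                    # tiny per-voxel orientation bias
responses = (orientation[:, None] * bias[None, :]
             + rng.normal(0, 1.0, (n_trials, n_voxels)))  # noisy voxel patterns

clf = LogisticRegression(max_iter=1000)
one_voxel = cross_val_score(clf, responses[:, [0]], orientation, cv=5).mean()
all_voxels = cross_val_score(clf, responses, orientation, cv=5).mean()

print(f"single voxel: {one_voxel:.2f}   (near chance)")
print(f"all voxels:   {all_voxels:.2f}   (weak biases combined across the pattern)")
```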
From decoding orientation and motion direction, Kamitani moves on to recon-
struction of arbitrary small pixel shapes from early visual brain activity. This is a
much harder feat, because of the need to generalize to novel instances from a large
set of possible stimuli. In retinotopic mapping, we attempt to predict the response
of each voxel separately as a function of the stimulus pattern. Conversely, we could
attempt to reconstruct a pixel image by predicting each pixel from the response
pattern. However, Kamitani predicts the presence of a stimulus feature extended
over multiple stimulus pixels from multiple local response voxels. The decoded
stimulus features are then combined to form the stimulus reconstruction. This
multivariate-to-multivariate approach is key to the success of the reconstruction,
suggesting that dependencies on both sides, among stimulus pixels and among
response voxels, matter to the representation.
Early Visual Cortex: Encoding and Decoding Models
While Kamitani focuses on fMRI decoding models, the following two chapters
describe how fMRI encoding models can be used to study visual representations.
Kendrick N. Kay (chapter 5) gives an introduction to fMRI voxel-receptive-field
modeling (also known as “population-receptive-field modeling”). In this technique,
a separate computational model is fitted to predict the response of each voxel to
novel stimuli. Similar techniques have been applied to neuronal recording data to
characterize each neuron’s response behavior as a function of the visual stimulus.
Kay argues in favor of voxel-receptive-field modeling by contrasting it against two
more traditional methods of fMRI analysis: the investigation of response profiles
across different stimuli (e.g., tuning curves or category-average activations) and
pattern-classification decoding of population activity. He reviews a recent study, in
which voxel-receptive-field modeling was used to predict early visual responses to
natural images. The study confirms what is known about V1, namely that the repre-
sentation can be modeled as a set of detectors of Gabor-like small visual features
varying in location, orientation, and spatial frequency.
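The logic of voxel-receptive-field (encoding) modeling can be sketched in a few lines. The code below is a minimal illustration assuming a precomputed stimulus-feature matrix (for example, Gabor wavelet energies); ridge regression stands in for whatever regularized fitting procedure a particular study uses, and the function names are ours.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_voxel_encoding_models(features_train, voxels_train, alpha=10.0):
    """Fit one regularized linear model per voxel.

    features_train: (n_stimuli, n_features) stimulus features, e.g., Gabor energies
    voxels_train:   (n_stimuli, n_voxels) measured responses to the same stimuli
    """
    return [Ridge(alpha=alpha).fit(features_train, voxels_train[:, v])
            for v in range(voxels_train.shape[1])]

def prediction_accuracy(models, features_test, voxels_test):
    """Correlation between predicted and measured responses to held-out stimuli,
    one value per voxel."""
    return np.array([np.corrcoef(m.predict(features_test), voxels_test[:, v])[0, 1]
                     for v, m in enumerate(models)])
```

Voxels whose held-out prediction accuracy exceeds what noise alone would produce are the ones the feature model can be said to explain.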
Kay’s study is an example of a general fMRI methodology developed in the lab
of Jack Gallant (the senior author of the study). Jack L. Gallant, Shinji Nishimoto,
Thomas Naselaris, and Michael C. K. Wu (chapter 6) present this general methodol-
ogy, which combines encoding (i.e., voxel-receptive-field) and decoding models.
First, each of a number of computational models is fitted to each voxel on the basis
of measured responses to as many natural stimuli as possible. Then the performance
of each model (how much of the non-noise response variance it explains) is assessed
by comparing measured to predicted responses for novel stimuli not used in fitting
the model. The direction in which a model operates (encoding or decoding) is irrel-
evant to the goal of detecting a dependency between stimulus and response pattern
(a point elaborated upon by Marieke Mur and Nikolaus Kriegeskorte in chapter
20). However, Gallant’s discussion suggests that the direction of the model predic-
tions should match the direction of the information flow in the system: If we are
modeling the relationship between stimulus and brain response, an encoding
approach allows us to use computational models of brain information processing
(rather than generic statistical models as are typically used for decoding, which are
not meant to mimic brain function). The computational models can be evaluated by
the amount of response variance they explain. Decoding models, on the other hand,
are well suited for investigating readout of a representation by other brain regions
and relating population activity to behavioral responses. For example, if the noise
component of a region’s brain activity predicts the noise component of a behavioral
response (e.g., categorization errors; see chapter 14), this suggests that the region
may be part of the pathway that computes the behavioral responses.
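Once per-voxel encoding models are fitted, they can also be used in the decoding direction, for example to identify which of a set of candidate stimuli evoked a measured pattern. The sketch below assumes the models from the previous example and is only meant to convey the logic; the original studies use more elaborate models and noise handling.

```python
import numpy as np

def identify_stimulus(measured_pattern, candidate_features, models):
    """Pick the candidate stimulus whose model-predicted response pattern best
    matches a measured pattern.

    measured_pattern:   (n_voxels,) one measured response pattern
    candidate_features: (n_candidates, n_features) features of the candidate stimuli
    models:             one fitted encoding model per voxel
    """
    predicted = np.column_stack([m.predict(candidate_features) for m in models])
    scores = [np.corrcoef(measured_pattern, predicted[i])[0, 1]
              for i in range(predicted.shape[0])]
    return int(np.argmax(scores))    # index of the best-matching candidate
```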
Midlevel Vision: Curvature Representation in V4 and Posterior IT
Moving up the visual hierarchy, Anitha Pasupathy and Scott L. Brincat (chapter 7)
explore the representation of visual shapes between the initial cortical stage of V1
and V2 and higher-level object representations in inferior temporal (IT) cortex. At
this intermediate level, we expect the representational features to be more complex
than Gabor filters or moving edges, but less complex than the types of features often
found to drive IT cells. Pasupathy and Brincat review a study that explores the
representation of object shape by electrode recordings of single-neuron responses
to sample stimuli from a continuous parameterized space of binary closed shapes.
Results suggest that a V4 neuron represents the presence of a particular curvature
at a particular angular position of a closed shape’s contour. A posterior IT neuron
appears to combine multiple V4 responses and represent the presence of a combina-
tion of convex and concave curvatures at particular angular positions. The pattern
of responses of either region allowed the decoding of the stimulus (as a position
within the parameterized stimulus space). This study nicely illustrates how we can
begin to quantitatively and mechanistically understand the transformations that
take place along the ventral visual stream.
What Aspect of Brain Activity Serves to “Represent” Mental Content?
When we analyze information represented in patterns of activity, we usually make
assumptions about what aspect of the activity patterns serves to represent the infor-
mation in the context of the brain’s information processing. A popular assumption
is that spiking rates of neurons carry the information represented by the pattern.
While there is a lot of evidence that spike rates are an important part of the picture,
experiments like those Nirenberg describes in chapter 2 show that we miss function-
ally relevant information if we consider only spike rates.
Conor Houghton and Jonathan Victor (chapter 8) consider the general question
of how we should measure the “representational distance” between two spatiotem-
poral neuronal activity patterns. In a theoretical chapter at the interface between
mathematics and neuroscience, they consider metrics of dissimilarity comparing
activity patterns that consist in multiple neurons' spike trains. The aim is to find out
which metric captures the functionally relevant differences between activity pat-
terns. Houghton and Victor focus on “edit distances” (including the “earth mover’s
distance”), which measure the distance between two patterns in terms of the “work”
(i.e., the total amount of changes) required to transform one pattern into another.
Jonathan Victor had previously proposed metrics to characterize the distance
between single-neuron spike trains. Here this work is extended to populations of
neurons, suggesting a rigorous and systematic approach to understanding neuronal
coding.
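For readers who want a concrete starting point, here is a compact implementation of the classic single-train spike-time edit distance (insertions and deletions cost 1; shifting a spike costs q per unit time). The chapter's contribution is extending such metrics to multi-neuron population responses, which this sketch does not attempt.

```python
def spike_train_edit_distance(spikes_a, spikes_b, q):
    """Edit distance between two spike trains (sorted sequences of spike times).
    Allowed edits: delete a spike (cost 1), insert a spike (cost 1), or shift a
    spike by dt (cost q * |dt|)."""
    n, m = len(spikes_a), len(spikes_b)
    # dp[i][j]: distance between the first i spikes of a and the first j of b
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = float(i)
    for j in range(1, m + 1):
        dp[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1.0,                                            # delete
                dp[i][j - 1] + 1.0,                                            # insert
                dp[i - 1][j - 1] + q * abs(spikes_a[i - 1] - spikes_b[j - 1]), # shift
            )
    return dp[n][m]

# With q near 0 only spike counts matter; with large q, precise timing dominates.
print(spike_train_edit_distance([0.01, 0.12, 0.30], [0.02, 0.29], q=10.0))  # 1.2
```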
Inferior Temporal Cortex: A Map of Complex Object Features
Moving farther down the ventral stream, Hans P. Op de Beeck discusses high-level
object representations in inferior temporal (IT) cortex in the monkey and in the
human (chapter 9). This is the first chapter to review the findings of macroscopic
regions selective for object categories (including faces and places). Face-selective
neurons had been found in monkey-IT electrode recordings decades earlier.
However, the clustering of such responses in macroscopic regions found in consis-
tent anatomical locations along the ventral stream was discovered by fMRI, first in
humans and later in monkeys. It has been suggested that these regions are “areas”
or “modules,” terms that imply well-defined anatomical and functional boundaries,
which have yet to be demonstrated.
The proposition that the higher-level ventral stream might be composed of cate-
gory-selective (i.e., semantic) modules sparked a new debate about localist versus
distributed coding within the fMRI community. The new debate in fMRI concerned
a larger spatial scale (the overall activation of entire brain regions, not single
neurons) and also a larger representational scale (the regions represented catego-
ries, not particular objects). Nonetheless, the theoretical arguments are analogous
at both scales. Just as the functional role of highly selective single neurons remains
contentious, it has yet to be resolved whether the higher ventral stream consists of
a set of distinct category modules or a continuous map of visual and/or semantic
object features.
Op de Beeck argues that the finding of category-selective regions might be
accommodated under a continuous-feature-map model. He reviews evidence sug-
gesting that the feature map reflects the perceptual similarity space and subjective
interpretations of the visual stimuli, and that it can be altered by visual
experience.
Chou Hung and James DiCarlo (chapter 10) describe a study in which they
repeatedly presented seventy-seven grayscale object images in rapid succession (a
different image every 200 ms) while sequentially recording from more than three
hundred locations in monkey anterior IT. The images were from eight categories,
including monkey and human faces, bodies, and inanimate objects.
Single-cell responses to object images have been studied intensely for decades,
showing that single neurons exhibit only weak object-category selectivity and
limited tolerance to accidental properties. From a computational perspective,
however, the more relevant question is what information can be read out from the
neuronal population activity by downstream neurons. Single-neuron analyses can
only hint at the answer. Hung and DiCarlo therefore analyzed the response patterns
across object scales and locations by linear decoding. This approach provides a
lower-bound estimate (as explained above) on the information available for imme-
diate biologically plausible readout.
The category (among 8) and identity (among 77) of an image could be decoded
with high accuracy (94 percent and 70 percent correct, respectively), far above
chance level. Once fitted, a linear decoder generalized reasonably well across sub-
stantial scale (2 octaves) and small position changes (4 deg visual angle). The
decoder also generalized to novel category exemplars (i.e., exemplars not used in
fitting), and worked well even when based on a 12.5-ms temporal window (capturing
just 0–2 spikes per neuron) at 125-ms latency. Category and identity information
appeared to be concentrated in the same set of neurons, and both types of informa-
tion appeared at about the same latency (around 100 ms after stimulus onset, as
revealed by a sliding temporal-window decoding analysis). Hung and DiCarlo found
only minimal task and training effects at the level of the population. This is in con-
trast to some earlier studies, which focused on changes in particular neurons during
more attention-demanding tasks. From a methodological perspective, Hung and
DiCarlo’s study is exemplary for addressing a wide range of basic questions, by
applying a large number of well-motivated pattern-information analyses to popula-
tion response patterns elicited by a set of object stimuli.
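The sliding temporal-window decoding analysis mentioned above follows a simple recipe, sketched below with hypothetical array shapes (this is our own schematic, not the authors' code): sum spike counts within a short window, train a cross-validated linear classifier, step the window forward, and repeat.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sliding_window_decoding(spike_counts, labels, window=5, step=1, cv=5):
    """Decode stimulus labels from population activity as a function of time.

    spike_counts: (n_trials, n_neurons, n_time_bins) binned spike counts
    labels:       (n_trials,) stimulus category for each trial
    Returns window start bins and the cross-validated accuracy at each step.
    """
    n_trials, n_neurons, n_bins = spike_counts.shape
    starts, accuracies = [], []
    for t0 in range(0, n_bins - window + 1, step):
        x = spike_counts[:, :, t0:t0 + window].sum(axis=2)   # one feature per neuron
        clf = LogisticRegression(max_iter=1000)
        accuracies.append(cross_val_score(clf, x, labels, cv=cv).mean())
        starts.append(t0)
    return np.array(starts), np.array(accuracies)
```

Plotting accuracy against window start time shows when category or identity information first becomes linearly readable from the population.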
Representational Similarity Structure of IT Object Representations
Classifier decoding can address how well a set of predefined categories can be read
out, but not whether the representation is inherently organized by those categories.
Nikolaus Kriegeskorte and Marieke Mur (chapter 11) review a study of the similar-
ity structure of the IT representations of 92 object images in humans, monkeys, and
computational models. Kriegeskorte and Mur show that the response patterns elic-
ited by the ninety-two objects form clusters corresponding to conventional catego-
ries. The two main clusters correspond to animate and inanimate objects; the
animates are further subdivided into faces and bodies. The response-pattern dis-
similarity matrices reveal a striking match of the structure of the representation
between human and monkey. In both species, IT appears to emphasize the same
basic categorical divisions. Moreover, even within categories the dissimilarity struc-
ture is correlated between human and monkey. IT object similarity was not well
accounted for by several computational models designed to mimic either low-level
features (e.g., pixel images, processed versions of the images, features modeling V1
simple and complex cells) or more complex (e.g., natural image patch) features
thought to reside in IT. This suggests that the IT features might be optimized to
emphasize particular behaviorally important category distinctions.
In terms of methods, the chapter shows that studying the similarity structure of
response patterns to a sizable set of visual stimuli (“representational similarity
analysis”) can allow us to discover the organization of the representational space
and to compare it between species, even when different measurement techniques
are used (here, fMRI in humans and cell recordings in monkeys). Like voxel-
receptive-field modeling (see chapters 5 and 6, discussed earlier), this technique
allows us to incorporate computational models of brain information processing into
the analysis of population response patterns, so as to directly test the models.
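The two core computations of representational similarity analysis are easy to state in code. The sketch below is a minimal illustration with 1 minus Pearson correlation as the dissimilarity measure: it builds a representational dissimilarity matrix (RDM) from a stimulus-by-channel response matrix and compares two RDMs by rank-correlating their upper triangles. Because the RDM abstracts away from the measurement channels, the two RDMs may come from different species or techniques.

```python
import numpy as np
from scipy.stats import spearmanr

def compute_rdm(patterns):
    """patterns: (n_stimuli, n_channels) response patterns (voxels or neurons).
    Returns an (n_stimuli, n_stimuli) representational dissimilarity matrix,
    defined here as 1 minus the Pearson correlation between pattern pairs."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman rank correlation between the upper triangles of two RDMs,
    e.g., a human fMRI RDM and a monkey cell-recording RDM for the same stimuli."""
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho
```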
Andrew C. Connolly, M. Ida Gobbini, and James V. Haxby (chapter 12) discuss
three virtues of studying object similarity structure: it provides an abstract character-
ization of representational content, can be estimated on the basis of different data
sources, and can help us understand the transformation of the representational space
across stages of processing. They describe a human fMRI study of the similarity struc-
ture of category-average response patterns and how it is transformed across stages of
processing from early visual to ventral temporal cortex. The similarity structure in
early visual cortex can be accounted for by low-level features. It is then gradually
transformed from early visual cortex, through the lateral occipital region, to ventral
temporal cortex. Ventral temporal cortex emphasizes categorical distinctions.
Connolly and colleagues also report that the replicability of the similarity struc-
ture of the category-average response patterns increases gradually from early visual
cortex to ventral temporal cortex. This may reflect the fact that category-average
patterns are less distinct in early visual cortex. Similarity structure was found to be
replicable in all three brain regions, within as well as across subjects. Replicability
did not strongly depend on the number of voxels included in the region of interest
(100–1,000 voxels, selected by visual responsiveness).
The theme of representational similarity analysis continues in the chapter by
Dwight J. Kravitz, Annie W.-Y. Chan, and Chris I. Baker (chapter 13), who review
three related human fMRI studies of ventral-stream object representations. The first
study shows that the object representations in ventral-stream regions are highly
dependent on the retinal position of the object. Despite the larger receptive fields
found in inferior temporal cortex (compared to early visual regions), these high-
level object representations are not entirely position invariant. The second study
shows that particular images of body parts are most distinctly represented in body-
selective regions when they are presented in a “natural” retinal position—assuming
central fixation of a body as a whole (e.g., right torso front view in the left visual
field). This suggests a role for visual experience in shaping position-dependent high-
level object representations. The third study addresses the representation of scenes
and suggests that the major categorical distinction emphasized by scene-selective
cortex is that between open (e.g., outdoor) and closed (e.g., indoor) scenes. In terms
of methods, Kravitz and colleagues emphasize the usefulness of ungrouped-events
designs (i.e., designs that do not assume a grouping of the stimuli a priori) and they
describe a straightforward and very powerful split-half approach to representa-
tional similarity analysis.
The representation of scenes in the human brain is explored further in the chapter
by Dirk B. Walther, Diane M. Beck, and Li Fei-Fei (chapter 14). These authors
investigate the pattern representations of subcategories of scenes (including moun-
tains, forests, highways, and buildings) with fMRI in humans. They relate the confus-
ability of the brain response patterns (when linearly decoded) to behavioral
confusions among the subcategories. This shows that early visual representations,
though they distinguish scene subcategories, do not reflect behavioral confusions,
while representations in higher-level object- and scene-selective regions do. In terms
of methods, this chapter introduces the attractive method of relating confusions (a
particular type of error) between behavioral classification tasks and response-
pattern classification analyses, so as to assess to what extent a given region might
contribute to a perceptual decision process.
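A bare-bones version of this confusion-correlation idea (our own sketch, with hypothetical function names) builds a confusion matrix from cross-validated decoding of a region's response patterns and correlates its off-diagonal error cells with a behavioral confusion matrix over the same categories.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

def decoding_confusions(patterns, labels, cv=5):
    """Row-normalized confusion matrix from cross-validated linear decoding.
    patterns: (n_trials, n_channels), labels: (n_trials,) scene categories."""
    pred = cross_val_predict(LogisticRegression(max_iter=1000), patterns, labels, cv=cv)
    cm = confusion_matrix(labels, pred).astype(float)
    return cm / cm.sum(axis=1, keepdims=True)

def confusion_correlation(decoder_cm, behavioral_cm):
    """Correlate only the off-diagonal (error) cells of the two confusion matrices."""
    off = ~np.eye(decoder_cm.shape[0], dtype=bool)
    return pearsonr(decoder_cm[off], behavioral_cm[off])[0]
```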
In chapter 15, John-Dylan Haynes discusses how fMRI studies of consciousness
can benefit from pattern-information analyses. A central theme in empirical con-
sciousness research is the search for neural correlates of consciousness (NCCs).
Classical fMRI studies on NCCs have focused on univariate correlations between
regional-average activation and some aspect of consciousness. For example, regional-
average activation in area hMT+/V5 has been shown to be related to conscious
percepts of visual motion. However, finding a regional-average-activation NCC
does not address whether the specific content of the conscious percept (e.g., the
direction of the motion) is encoded in the brain region in question. Combining the
idea of an NCC with multivariate population decoding can allow us to relate specific
conscious percepts (e.g., upward visual motion flow) to specific patterns of brain
activity (e.g., a particular population pattern in hMT+/V5) in human fMRI. Beyond
the realm of consciousness, we return to this point at a more general level in chapter
20, where we consider how classical fMRI studies use regional-average activation
to infer the “involvement” of a brain region in some task component, whereas
pattern-information fMRI studies promise to reveal a region’s representational
content, whether the organism is conscious of that content or not.
Vision as a Hierarchical Model for Inferring Causes by Recurrent Bayesian
Inference
In chapter 16, the final chapter of the “Theory and Experiment” section, Karl Friston
outlines a comprehensive mathematical theory of perceptual processing. The chapter
starts by reviewing the theory of probabilistic population codes. A population code
is probabilistic if the activity pattern represents not just one particular state of the
external world, but an entire probability distribution of possible states. On one hand,
bistable perceptual phenomena (e.g., binocular rivalry) suggest that the visual
system, when faced with ambiguous input, chooses one possible interpretation (and
explores alternatives only sequentially in time). On the other hand, there is evidence
for a probabilistic representation of confidence. These findings suggest a code that
is probabilistic but unimodal. Friston argues that the purpose of vision is to infer
the causes of the visual input (e.g., the objects in the world that cause the light
patterns falling on the retina), and that different regions represent causes at differ-
ent levels of abstraction. He interprets the hierarchy of visual regions as a hierarchi-
cal statistical model of the causes of visual input. The model combines top-down
and bottom-up processing to arrive at an interpretation of the input. The top-down
component consists in prediction of the sensory input from hypotheses about its
causes (or prediction of lower-level causes from higher-level causes). The predicted
information is “explained away” by subtracting its representation out at each stage,
so that the remaining bottom-up signals convey the prediction errors, that is, the
component of the input that requires further processing to be accommodated in the
final interpretation of the input. Friston suggests that perceptual inference and
learning can proceed by an empirical Bayesian mechanism. The chapter closes by
reviewing some initial evidence in support of the model.
In the second part of the book, “Background and Methods,” we collect chapters
that provide essential background knowledge for understanding the first part. These
chapters describe the neuroscientific background, the mathematical methods, and
the different ways of measuring brain-activity patterns.
A Primer on Vision
In chapter 17, Kendra Burbank and Gabriel Kreiman give a general introduction
to the primate visual system, which will be a useful entry point for researchers from
other fields. They describe the cortical visual hierarchy, in which simple local image
features are detected first, before signals converge for analysis of more complex and
more global features. In low-level (or “early”) representations, neurons respond to
simple generic local stimulus features such as edges and the cortical map is retino-
topically organized, with each neuron responsive to inputs from a small patch of the
retina (known as the neuron’s “receptive field”). In higher-level regions, neurons
respond to more complex, larger stimulus features that occur in natural images and
are less sensitive to the precise retinal position of the features (i.e., larger receptive
fields). The system can be globally divided into a ventral stream and dorsal stream,
where the ventral “what” stream (the focus of this book) appears to represent what
the object is (object recognition) and the dorsal “where” stream appears to repre-
sent spatial relationships and motion.
Tools for Analyzing Population Codes: Statistical Learning and Information
Theory
Jed Singer and Gabriel Kreiman (chapter 18) give a general introduction to statisti-
cal learning and pattern classification. This chapter should provide a useful entry
point for neuroscientists. Statistical learning is a field at the interface between sta-
tistics, computer science, artificial intelligence, and computational neuroscience,
which provides important tools for analysis of brain-activity patterns. Moreover,
some of its algorithms can serve as models of brain information processing (e.g.,
artificial neural networks) or are inspired by the brain at some level of abstraction.
A key technique is pattern classification, where a set of training patterns is used to
define a model that divides a multivariate space of possible input patterns into
regions corresponding to different classes. The simplest case is linear classification,
where a hyperplane is used to divide the space. In pattern classification as in other
statistical pursuits, more complex models (i.e., models with more parameters to be
fitted to the data) can overfit the data. A model is overfitted if it represents noise-
dominated fine-scale features of the data.
Overfitting has a depressing and important consequence: a complex model can
perform worse at prediction than a simple model, even when the complex model is
correct and the simple model is incorrect. The complex correct model will be more
easily “confused” by the noise (i.e., overfitted to the data), while the simple model
may gain more from its stability than it loses from being somewhat incorrect. This
can happen even if the complex model subsumes the simple model as a special case.
The phenomenon is also known as the bias-variance tradeoff: The simple model in
our example has an incorrect bias, but it performs better because of its lower vari-
ance (i.e., noise dependence). As scientists, we like our models “as simple as possible,
but no simpler,” as Albert Einstein said. Real-life prediction from limited data,
however, favors a healthy dose of oversimplification.
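The bias-variance tradeoff is easy to demonstrate in simulation. In the following hypothetical sketch (parameters chosen only for illustration), the data are generated by a weakly cubic function, so the cubic model is “correct”; yet with few noisy training points, the simpler, incorrect linear model predicts better on average:

import numpy as np

rng = np.random.default_rng(1)

def true_f(x):                                  # weakly cubic ground truth
    return 0.1 * x**3 + 0.5 * x

n_train, sigma, n_reps = 8, 2.0, 500
x_test = np.linspace(-2, 2, 200)
mse = {1: [], 3: []}                            # polynomial degree -> errors
for _ in range(n_reps):
    x_tr = rng.uniform(-2, 2, n_train)
    y_tr = true_f(x_tr) + rng.normal(0, sigma, n_train)
    for degree in (1, 3):
        coeffs = np.polyfit(x_tr, y_tr, degree)
        pred = np.polyval(coeffs, x_test)
        mse[degree].append(np.mean((pred - true_f(x_test)) ** 2))

print("mean test error, simple (incorrect) linear model:", np.mean(mse[1]))
print("mean test error, complex (correct) cubic model  :", np.mean(mse[3]))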
In brain science, pattern classification is used to “decode” population activity
patterns, that is, to predict stimuli from response patterns. This is the most widely
used approach to multivariate analysis of population codes. Tutorial introductions
to this method are given by Ethan Meyers and Gabriel Kreiman for neural data
(chapter 19) and by Marieke Mur and Nikolaus Kriegeskorte for fMRI data
(chapter 20). These chapters provide step-by-step guides and discuss the neuro-
scientific motivation of particular analysis choices.
Pattern analyses are needed to detect information interactively encoded by mul-
tiple responses. In addition, they combine the evidence across multiple responses,
thus boosting statistical power and providing useful summary measures. The
combination of evidence would be useful even if interactive information were
absent. These advantages apply to both neuronal and fMRI data, but in different
ways. Single-neuron studies miss interactively encoded information, and perhaps
also effects that are weak and widely distributed. However, they can still contribute
to our understanding of population codes within a brain region. Arguably, most of
what we know about population codes today has been learned from single-neuron
studies.
The single-voxel scenario is quite different, as discussed by Mur and Kriegeskorte.
In addition to the hemodynamic nature of the fMRI signal and its low spatial reso-
lution, single-voxel fMRI analyses have very little power because of the physiologi-
cal and instrumental noise and because of the need to account for multiple testing
carried out across many voxels. As we make the voxels smaller to pick up more
fine-grained activity patterns within a region, we get (1) more and (2) noisier voxels.
The combination of weaker effects and stronger correction for multiple tests leaves
single-voxel analysis severely underpowered. Pattern-information analysis recovers
power by combining the evidence across voxels. Classical fMRI studies have used
regional averaging (or smoothing) to boost power. This approach enables us to
detect overall regional activations at the cost of missing fine-grained pattern infor-
mation. Regional-average activation is taken to indicate the “involvement” of a
region in a task component (or in the processing of a stimulus category). However,
the region remains a black box with respect to its internal processes and representa-
tions. The pattern-information approach promises to enable us to look into each
region and reveal its representational content, even with fMRI.
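The contrast between single-voxel tests, regional averaging, and pattern-information analysis can be illustrated with a toy simulation (entirely hypothetical; NumPy, SciPy, and scikit-learn are assumed). Each voxel carries a weak effect whose sign varies from voxel to voxel, mimicking a fine-grained pattern: few if any voxels survive correction for multiple tests, the regional average carries little signal, but a linear classifier that combines the voxels decodes the condition well above chance:

import numpy as np
from scipy import stats
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels, effect = 40, 200, 0.3
signs = rng.choice([-1.0, 1.0], n_voxels)        # fine-grained sign pattern
cond_a = rng.normal(0.0, 1.0, (n_trials, n_voxels))
cond_b = rng.normal(effect * signs, 1.0, (n_trials, n_voxels))

# (1) Single-voxel t-tests with Bonferroni correction across voxels.
_, p = stats.ttest_ind(cond_a, cond_b, axis=0)
print("voxels surviving Bonferroni:", int(np.sum(p < 0.05 / n_voxels)))

# (2) Regional-average activation difference.
_, p_avg = stats.ttest_ind(cond_a.mean(axis=1), cond_b.mean(axis=1))
print("regional-average p value   :", p_avg)

# (3) Cross-validated linear decoding from the full voxel pattern.
X = np.vstack([cond_a, cond_b])
y = np.array([0] * n_trials + [1] * n_trials)
print("pattern decoding accuracy  :",
      cross_val_score(LinearSVC(), X, y, cv=5).mean())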
Whether we use neuronal recordings or fMRI, we wish to reveal the information
the code carries. If pattern classification provides above-chance decoding of the
stimuli, then we know that there is mutual information between the stimulus and
the response pattern. However, pattern classification is limited by the assumptions
of the classification model. Moreover, the categorical nature of the output (i.e.,
predefined classes) leads to a loss of probabilistic information about class member-
ship and does not address the representation of continuous stimulus properties. It
would be desirable to detect stimulus information in a less biased fashion and to
quantify its amount in bits.
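One simple way to connect decoding to information is to estimate the mutual information, in bits, between the presented stimulus and the decoder’s output from the confusion matrix; this gives a lower bound on the stimulus information in the response pattern. The following toy sketch (the confusion counts are invented, and the plug-in estimator used here is upward-biased for limited data) illustrates the calculation:

import numpy as np

def mutual_information_bits(counts):
    """Plug-in mutual information (bits) from a joint count table."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)            # stimulus marginal
    py = p.sum(axis=0, keepdims=True)            # prediction marginal
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Hypothetical confusion matrix: rows are presented stimuli, columns are
# decoded stimuli, for a four-way classification well above chance (25%).
confusion = np.array([[30,  5,  3,  2],
                      [ 4, 28,  4,  4],
                      [ 3,  6, 27,  4],
                      [ 2,  4,  5, 29]])
print("decoding accuracy  :", np.trace(confusion) / confusion.sum())
print("information (bits) :", mutual_information_bits(confusion))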
Stefano Panzeri and Robin A. A. Ince (chapter 21) describe a framework for
information theoretic analysis of population codes. Information theory can help us
understand the relationships between neurons and how they jointly represent
behaviorally relevant stimulus properties. If the neurons carry independent infor-
mation, the population information is the sum of the information values for single
neurons. To the extent that different neurons carry redundant information, the
population information will be less than that sum. To the extent that the neurons
synergistically encode information, the population information can be greater than
the sum. The case of synergistic information was described earlier in the context of
chapter 3: If neurons A and B share noise, but not signal, A can be used to cancel
B’s noise. Subtracting out the noise improves the signal-to-noise ratio and increases
the information. Panzeri and Ince place these effects in a general mathematical
framework, in which the mutual information between the stimulus and the popula-
tion response pattern is decomposed into additive components, which correspond
to the sum of the information values for single neurons and the synergistic offset
(which can be positive or negative and is further decomposed into signal- and noise-
related subcomponents).
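These effects can be illustrated with two toy binary “neurons” (a purely illustrative construction, not the formal decomposition described in chapter 21): an XOR-like pair carries synergistic information, whereas a pair of noisy copies of the stimulus carries redundant information. The plug-in estimates below assume only NumPy:

import numpy as np

def mi_bits(x, y):
    """Plug-in mutual information (bits) between two discrete sample arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(3)
n = 100_000
s = rng.integers(0, 2, n)                        # binary stimulus

# Synergistic pair: neither neuron alone is informative, but r1 XOR r2 = s.
r1 = rng.integers(0, 2, n)
r2 = r1 ^ s
# Redundant pair: both neurons are independent noisy copies of the stimulus.
q1 = s ^ (rng.random(n) < 0.1)
q2 = s ^ (rng.random(n) < 0.1)

for name, (a, b) in [("synergistic", (r1, r2)), ("redundant", (q1, q2))]:
    joint = a * 2 + b                            # encode the pair as one symbol
    print(name, "| sum of single-neuron info:",
          round(mi_bits(s, a) + mi_bits(s, b), 3),
          "| joint info:", round(mi_bits(s, joint), 3))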
The abstract beauty of the mathematical concept of information lies in its general-
ity. In empirical neuroscience, the necessarily finite amount of data requires us to
sacrifice some of the generality in favor of stable estimates (i.e., to reduce the error
variance of our estimates by accepting some bias). However, information theory is
key to the investigation of population coding not only at the level of data analysis,
but also at the level of neuroscientific theory.
What We Measure with Electrode Recordings and fMRI
The experimental studies described in this book relied on brain-activity data from
electrode recordings and fMRI. We can analyze the response patterns from these
measurement techniques with the same mathematical methods, and there is evi-
dence that they suggest a broadly consistent view of brain function (e.g., chapter
11). However, fMRI and electrode recordings measure fundamentally different
aspects of brain activity. Moreover, the two kinds of signal have been shown to be
dissociated in certain situations. The final chapter by Philipp Berens, Nikos K.
Logothetis, and Andreas S. Tolias (chapter 22) reviews the relationship between
neuronal spiking, local field potentials, and the blood-oxygen-level-dependent
(BOLD) fMRI signal, which reflects the local hemodynamic response thought to
serve the function of adjusting the energy supply for neuronal activity.
Neuronal spikes represent the output signal of neurons. They are sharp and short
events, and thus reflected mainly in the high temporal-frequency band of the electri-
cal signal recorded with an invasive extracellular electrode in the brain. The high
band (e.g., >600 Hz) of electrode recordings reflects spikes of multiple neurons very
close to the electrode’s tip (<200 micrometers away) and is known as the multi-unit
activity (MUA).
The low temporal-frequency band (e.g., <200 Hz) of electrode recordings is
known as the local field potential (LFP). Compared to the MUA, the LFP is a more
complex composite of multiple processes. It appears to reflect the summed excit-
atory and inhibitory synaptic activity in a more extended region around the tip of
the electrode (approaching the spatial scale of high-resolution-fMRI voxels). The
LFP is therefore thought to reflect the input and local processing of a region,
whereas the MUA is thought to reflect the spiking output. The LFP is also more
strongly correlated with the BOLD fMRI signal than the MUA. Berens and col-
leagues describe what is currently known about the highly complex relationships
among these three very different kinds of brain-activity measurement.
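To make the frequency bands concrete, the following sketch (a toy signal with arbitrary parameters; SciPy is assumed, and a reasonably recent version is needed for the fs argument) splits a simulated broadband extracellular trace into an LFP band below 200 Hz and an MUA band above 600 Hz with zero-phase Butterworth filters:

import numpy as np
from scipy.signal import butter, filtfilt

fs = 30_000.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(4)
# Toy broadband trace: a slow oscillation, background noise, and brief
# spike-like transients.
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
trace[rng.choice(t.size, 50, replace=False)] += 5.0

b_lfp, a_lfp = butter(4, 200.0, btype="lowpass", fs=fs)   # LFP band: < 200 Hz
b_mua, a_mua = butter(4, 600.0, btype="highpass", fs=fs)  # MUA band: > 600 Hz
lfp = filtfilt(b_lfp, a_lfp, trace)
mua = filtfilt(b_mua, a_mua, trace)
print("LFP rms:", lfp.std(), " MUA rms:", mua.std())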
I THEORY AND EXPERIMENT
Grandmother Cells and Distributed Representations
Simon J. Thorpe
Summary
It is generally accepted that a typical visual stimulus will be represented by the
activity of many millions of neurons distributed across many regions of the visual
cortex. However, there is still a long-running debate about the extent to which
information about individual objects and events can be read out from the responses
of individual neurons. Is it conceivable that neurons could respond selectively and
in an invariant way to specific stimuli—the idea of “grandmother cells”? Recent
single-unit recording studies in the human medial temporal lobe seem to suggest that such
neurons do indeed exist, but there is a problem, because the hit rate for finding such
cells seems too high. In this chapter, I will look at some of the implications of this
work and raise the possibility that the cortical structures that provide the input to
these hippocampal neurons could well contain both highly distributed and highly
localist coding. I will discuss how a combination of STDP and temporal coding can
allow highly selective responses to develop to frequently encountered stimuli.
Finally, I will argue that “grandmother cell” coding has some specific advantages
not shared by conventional distributed codes. Specifically, I will suggest that when
a neuron becomes very selective, its spontaneous firing rate may drop to virtually
zero, thus allowing visual memories to be maintained for decades without the need
for reactivation.
Introduction
The Distributed vs. Localist Representation Debate
One of the longest-running and thorniest debates in the history of research on the
brain concerns the nature of the representations that the brain uses to represent
objects, and specifically the question of whether individual neurons may encode
specific objects and events (Barlow, 1972; Gross, 2002). The debate has recently
received new impetus, with the publication of a significant review paper by Jeff
Bowers (Bowers, 2009), which has been followed by a series of commentaries (Plaut
and McClelland, 2010; Quiroga and Kreiman, 2010). In addition, there have been a
series of fascinating studies on single-unit responses from the human medial tem-
poral lobe that have raised numerous questions about the link between single-unit
activity and perception (Quian Quiroga et al., 2005). There is clearly something
special that happens when we recognize a familiar visual stimulus. Virtually any such
visual stimulus will doubtless activate millions, maybe hundreds of millions of
neurons in our visual system. Many of these are presumably involved in generic
processing tasks that will take place irrespective of whether the image is recognized
or not. For example, simple cells in V1 will presumably signal the presence of an
edge with a particular orientation at a particular point in the visual field, irrespective
of whether the corresponding object can be recognized or not. But most scientists
would probably accept that at some level in the brain, quite possibly relatively high
levels in the visual hierarchy, there are neurons that are directly involved in encod-
ing the presence of the object. The debate concerns the way in which those neurons
do the encoding. One view, currently quite popular among scientists, is that the
representation at the neuronal level is distributed across large numbers of neurons,
none of which needs to be specifically tuned to a particular object. At the other
extreme, some researchers have proposed that for some highly familiar objects,
there may be neurons that respond very selectively to that object—a view often
jokingly referred to as “grandmother cell” coding.
This difference between “local” and “distributed” coding models has become a
very hot topic in recent years, partly because there have been a number of reports
of single neurons that have been recorded from the medial temporal lobe of human
patients undergoing presurgical investigations for the treatment of intractable epi-
lepsy. At first glance, some of these cells seem to have many of the properties that
one might expect to find if grandmother cell coding was true. But, as some of the
authors of the studies have pointed out, there are a number of puzzling features
about these cells that do not seem to fit with the simple grandmother cell view. In
this chapter, my plan is to look in more detail at some of the issues. I will argue that
while the cells reported in the human medial temporal lobe may not be what one
would predict for a true localist coding scheme, there may be other explanations of
the results. Furthermore, I will argue that there are some good computational argu-
ments for using grandmother cell–like coding in at least some cases. However, I will
also argue strongly that there is no requirement to choose in favor of only one type
of coding. Rather, as I have argued previously, it is likely that the brain simultane-
ously uses both highly distributed and localist encoding, quite possibly within the
same bit of neocortex.
A Test Case: Recognition in RSVP Streams
To make the nature of the debate clear, consider the following experiment that you
can try on yourself by downloading a set of movie files from the following site: <ftp://
www.cerco.ups-tlse.fr/Simon/Movies>. Each movie sequence is a string of highly
varied photographs of animals drawn from a range of sources that are presented in
a Rapid Serial Visual Presentation (RSVP) sequence at 10 frames per second. Ever
since the classic studies of Molly Potter in the 1970s, we have known that our visual
systems can process images at this rate and that we can spot a particular target image
(such as a “boat” or a “baby”) very effectively under such conditions, even when
only a verbal description of the stimulus is provided (Potter, 1975, 1976). In the case
of the demo sequences, all the images are of animals except one, and the task is to
report whatever image in the sequence does not fit. Whenever I have shown the
first example sequence during lectures, almost everyone will immediately notice that
the sequence contains an image of Mona Lisa. In a sense, the fact that Mona Lisa
is easy to spot may not be that surprising, since it is effectively one of the most
familiar images in Western civilization. Each of us has almost certainly seen it hun-
dreds if not thousands of times. And, given that it is a 2D painting, the visual pattern
that it produces on our retina is relatively stable with changes of viewing angle. Size
is certainly not prespecified, since we can recognize the Mona Lisa at essentially
any scale, but relative to most 3D objects that we encounter, its appearance is
nevertheless relatively standardized, making it possible to imagine that recognition
could be achieved with a relatively simple “pattern-matching” approach. However,
by looking at the other movies, you will be convinced, I hope, that this sort of effort-
less recognition occurs for many other types of object. For example, one of the
sequences contains a photograph of the Statue of Liberty. Again, almost every one
will notice its presence in the sequence of images, despite the fact that (unlike Mona
Lisa), there are effectively an infinite number of different viewing angles that would
work. This certainly makes life harder for the recognition mechanism that is being
used by the visual system. In another of the sequences, there is a scene from a res-
taurant with someone dressed up as Mickey Mouse. Again, a remarkably high
number of people who see the sequence will immediately report that they saw
Mickey Mouse, even though they had no reason to expect such an image. Indeed,
the other “distractor” images in the sequence are all animals, and in a sense, so is
Mickey Mouse (albeit a somewhat special one). Why then, do people almost invari-
ably notice the intruder in the sequence? My personal view is that they notice such
intruders because some sort of high-level representation is activated in the brain,
and that this representation gets noticed because it does not fit with the rest of the
context.
There are a number of points that we can make on the basis of this sort of dem-
onstration. One point concerns the question of whether all the images in the
sequence need to be processed fully by the visual system, or whether it might be
sufficient to only process the one (Mona Lisa, Statue of Liberty, or Mickey Mouse)
that we actually notice. This seems to me to be very implausible. The fact that
the other twenty or thirty images in the sequence, presented at ten images per second,
cannot be reliably reported does not mean that they were not fully processed. Indeed, it seems
difficult to imagine how the brain could determine whether any particular image in
the sequence was worth noticing without processing them all fully. Rather, it seems
more likely that all the images are being processed and that the intelligence of our
visual system is demonstrated by the fact that we are automatically able to deter-
mine whether or not a particular image is sufficiently important to merit being made
the center of attention.
Some Distinctions
A key issue that I want to address here concerns the nature of the neural represen-
tations that are activated in such a situation. A first point to make is that there are
good reasons to believe that the brain could potentially use a complete range of
representational schemes. Consider a classic 3-layer neural network architecture
composed of a layer of input units, a layer of output units, and between them a layer
of so-called hidden units. To make the problem clear, imagine that the input layer
corresponds to a simple 8 × 8 retina, and the output layer corresponds to a set of
128 responses, with one response for each of the 128 members of the ASCII char-
acter set (see figure 1.1). The desired input–output function is to generate the
appropriate output when a particular character is presented on the “retina.” Theo-
retical studies have shown that with sufficient units in the hidden layer, such a system
can implement any arbitrary input–output function, but there are many different ways in
which the function could be implemented at the level of the hidden units. One would
be to use just 7 hidden units and the 7-bit ASCII code, illustrated in figure 1.1. This
is a perfect illustration of a completely distributed coding strategy, since to know
what is being represented, it is necessary to know the state of all 7 hidden units.
None of the units actually “means” anything on its own, and an experimenter who
was recording the response of any of the neurons to changing inputs would be
unable to make sense of why the neuron was active for any given stimulus, because
effectively, the assignment of each neuron to the set of stimuli is arbitrary. Each unit
will be active for 50 percent of the input patterns, which means that the coding is
in fact very efficient. Indeed, that is precisely why the ASCII code was chosen as a
way to encode text based information within computers. Note also, that there are a
very large number of different ways in which the 7 units could be used to represent
all 128 characters—each is effectively as good as any other.
At the other end of the spectrum, one could imagine a hidden layer with 128
units, one for each of the characters in the set. For any one input pattern, only one
of the units need be active. This would be an example of extreme localist coding—
effectively corresponding to grandmother cell coding. Clearly, both extremes of
representation could potentially be used for representation at the neuronal level. Is
there any way of deciding which if any of these different schemes is actually used
in the brain?
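The two extremes are easy to write down explicitly. The following toy sketch contrasts the 7-unit distributed code (the ASCII bit pattern of each character) with a 128-unit localist code in which one dedicated unit stands for each character:

import numpy as np

def distributed_code(char):
    """7-bit ASCII pattern: each unit is active for half of the characters."""
    return np.array([(ord(char) >> bit) & 1 for bit in range(7)])

def localist_code(char):
    """One-hot code: a single dedicated unit is active for each character."""
    units = np.zeros(128, dtype=int)
    units[ord(char)] = 1
    return units

for c in ["A", "B", "a"]:
    print(c, "distributed:", distributed_code(c),
          " localist unit:", int(np.argmax(localist_code(c))))

# In the distributed scheme no single unit identifies the character: the first
# bit, for example, is shared by 64 of the 128 characters.
print("characters activating the first distributed unit:",
      sum(code & 1 for code in range(128)))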
Reading out from Distributed Representations
The distinction between distributed and grandmother cell–based representations has
become an increasingly hot issue in recent years for a number of reasons. One is that
recent advances in multivoxel-based analysis of fMRI data mean that it has been
possible to show how, using the distributed pattern of activity over a large number of
voxels, one can make inferences about the stimulus identity and category. Such data
provide a very clear demonstration that information can indeed be extracted from
the distributed pattern of activity within a cortical region. This has been demon-
strated using fMRI voxel activation levels (Kriegeskorte et al., 2007) but more
recently, the approach has been extended to recordings from intracerebral electrodes
in epileptic patients (Liu et al., 2009), as well as to single-unit recording studies from
both monkey inferotemporal cortex (Kiani et al., 2007) and the medial temporal lobe
in humans (Quian Quiroga et al., 2007). One particularly significant aspect of this sort
Figure 1.1
Distributed coding in the ASCII code. Each of the seven units in the hidden layer participates in the
coding of the 128 characters in the ASCII set. However, none of these units has activity that is
specifically related to any particular stimulus.
of work is that the decoding strategies used, although sophisticated, do not require
the existence of mechanisms that could not be implemented with relatively simple
neural circuits. Suppose that it can be demonstrated that by applying a classification
algorithm to the activation levels of a few hundred voxels in some part of the ventral
processing pathway it is possible to make some judgment about the stimulus—for
example, whether or not a face is present, and maybe even whether the face is famil-
iar or not. If the classification procedure could be implemented using a neural circuit,
then it follows that the brain could also derive the same information simply by for-
warding the same pattern of activation to another brain area where neurons could
potentially learn to make the same distinction. It might also be that individual
neurons within the region would be in a position to learn to make the same
categorical distinctions. In this case, the “readout” neurons would show a more local
representation, as is the case for the third layer in figure 1.1.
Does the fact that it is possible to extract information from the activity patterns
of large numbers of neurons (Quian Quiroga and Panzeri, 2009) mean that there is
no need for the brain to make information explicit at the level of individual cells?
In other words, does this recent work actually allow us to say anything about
whether or not grandmother cells really exist or not?
A Highly Specialized Neuron in the Orbitofrontal Cortex
One reason why I think one should be careful before assuming that it will be pos-
sible to derive all the information needed to interpret the function of a particular
brain region from imaging studies comes from my own experience as a doctoral
student in Edmund Rolls's lab at Oxford in the late 1970s. We were recording single-
unit activity in the monkey orbitofrontal cortex (OFC) using behavioral tasks that
we thought were likely to be interesting given the known effects of OFC lesions.
Specifically, lesioned animals (and humans) were known to have severe problems
in task shifting, for example, when performing visual discrimination tasks with
reversals. We therefore explicitly tested this by training monkeys to perform a go/
no-go visual discrimination task for fruit juice reward, and then periodically revers-
ing the rule. Thus, initially the monkey might be responding to a green “go” stimulus
and withholding responses to a red “no-go” stimulus. However, we then reversed
the contingency with the result that the monkey made one “mistake” and received
a drop of saline after responding to the green stimulus. After weeks or months of
training, he would immediately reverse his strategy and start responding systemati-
cally to the other stimulus, until the next reversal occurred. More than three hundred
neurons were recorded during the performance of this reversal task, most of which
failed to show any significant activity changes related to the task. However, a
handful of cells showed some quite remarkable responses during reversal, and one
particular cell showed a quite amazing response, illustrated in figure 1.2 (Thorpe
Figure 1.2
Activity of a single neuron in monkey orbitofrontal cortex with activity very specifically related to
reversals in a go/no-go visual discrimination task. On each trial, the monkey is shown one of two different
stimuli through a shutter that opens for 1 second. One of the stimuli means that he can lick a tube for
fruit juice reward, whereas the other stimulus means that he should not lick in order to avoid receiving
saline. Licks are indicated by the inverted triangles. Without warning the meanings of the two stimuli
are inverted (“Reversal”), at which point the monkey makes one mistake. This was followed by a strong
burst of activity from the neuron that lasted several seconds. A second smaller burst occurred on the
next correct trial. Adapted from Thorpe et al. 1983.
et al., 1983). Following each reversal of the rule, the neuron showed a very strong
increase in firing rate that lasted for several seconds.There was even a second burst
of firing following the first correctly performed trial with the new rule. It is therefore
difficult to imagine that the neuron is simply responding to the punishment, or to
the making of an error. Rather, the neuron appeared to form part of a highly specific
circuit that was specifically related to the performance of the reversal task. Given
that the monkey was a true expert at performing the task, having spent months
performing such reversals, I cannot help thinking that maybe the existence of such
a neuron is a direct reflection of the automaticity of the behavior following training.
In the current context, it is particularly important to realize that only one such cell
was found out of hundreds recorded. It is therefore relatively unlikely that the activ-
ity of the neuron could be seen at the level of more global measures of brain activa-
tion, such as event-related potential recording or fMRI. Many other examples of
highly specific but rare responses in individual neurons can be found in the
literature.
The Problem of Inferring Neuronal Selectivity from Global Measures
The existence of this sort of highly specific, yet rare, neuronal response within a
cortical area raises an important issue. Global activity measures certainly provide
evidence to support the idea that a particular brain region could be involved in
performing a certain cognitive task. However, it is probably impossible to make
inferences about the degree of specialization of individual neurons on the basis of
these global measures. In principle, one can even imagine a situation where the
global activation measures provide no evidence for selectivity whatsoever, and yet
where there might still be strong selectivity at the single-unit level. Indeed, the
reverse can also be true, since there are cases when there is “global” activity in the
absence of spiking activity (Sirotin and Das, 2009).
Consider some of the very interesting fMRI-based studies that have shown that
it is possible to read out the orientation of a grating from the pattern of activation
seen across voxels in V1 (Haynes and Rees, 2005; Kamitani and Tong, 2005). Such
techniques rely on the existence of the local variations in preferred orientation
within V1. While the size of the orientation columns is small relative to the resolu-
tion of the fMRI technique, there are nevertheless sufficient variations in local
selectivity to allow the technique to be used. However, it is important to realize that
it did not have to be that way. If the neurons selective to different orientations were
really mixed up completely at the local level, nothing would be visible at the level
of the voxels because each voxel would contain neurons coding all the different
orientations. Thus, if information can be extracted from looking at the relatively
coarse pattern of activity seen in imaging studies, this is certainly compatible with
the hypothesis that the structure is involved in the processing of the stimulus attri-
butes. However, the opposite is not true. The absence of differential fMRI activation
does not imply that the structure has no role to play.
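This point can be captured in a toy simulation (entirely hypothetical; scikit-learn is assumed). Each simulated voxel pools two opposed orientation-tuned populations; if the pooling is slightly imbalanced from voxel to voxel, a linear classifier can read out the grating orientation from the voxel pattern, whereas with perfectly balanced pooling the decoding falls to chance:

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_voxels = 100, 100                    # trials per orientation

def simulate(bias_scale):
    bias = rng.normal(0.0, bias_scale, n_voxels)  # per-voxel orientation bias
    X, y = [], []
    for orientation in (-1, +1):                  # two grating orientations
        X.append(orientation * bias + rng.normal(0, 1, (n_trials, n_voxels)))
        y += [orientation] * n_trials
    return np.vstack(X), np.array(y)

for label, scale in [("imbalanced voxels ", 0.3), ("perfectly balanced", 0.0)]:
    X, y = simulate(scale)
    acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    print(label, "decoding accuracy:", round(acc, 2))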
Grandmother Cells in the Human Medial Temporal Lobe?
Having made a few general points about the problems of relating results from
imaging studies with responses seen at the single-unit level, I would now like to
move on to the interpretation of the fascinating series of papers that have described
the responses of single units in the human medial temporal lobe. The studies dem-
onstrate that such neurons can have remarkably invariant responses to a wide range
of different stimuli that effectively correspond to the same object or concept. One
of the earliest such studies was a paper from 2000 describing neurons that would
respond to a wide range of different photographs of animals (Kreiman et al., 2000),
but over the past few years we have seen reports of neurons that would respond to
many different photographs of a particular actress (the famous “Jennifer-Aniston
cell”), or even to the name of the person written in text (Quian Quiroga et al., 2005).
And it is now clear that the same individual neuron can fire selectively to the
same stimulus presented via a number of different sensory modalities—vision,
text, voice (Quian Quiroga et al., 2009), implying a truly remarkable degree of
invariance. Another fascinating result is the fact that when the stimuli are masked,
and the duration of the presentation is so short that the subject can only report the
nature of the stimulus on some limited percentage of trials, there is a remarkably
high correlation on individual trials between whether the neuron responds and
whether the subject can report the nature of the stimulus (Quian Quiroga et al.,
2008b).
At first sight, such results might appear to provide strong support for the notion
of grandmother cell coding. While it is true that there have not actually been any
reports of neurons that fire exclusively to the patient’s grandmother, the neurons
do tend to respond best to stimuli that are personally relevant to the patient,
responding in particular to members of the patient’s family or members of the
experimental team (Viskontas et al., 2009). However, there is a problem with such
a view, which stems from the fact that the hit rate for finding such cells appears to
be much higher than one would expect if that part of the brain was really using such
an explicitly localist coding scheme.
The critical issue is the number of different objects that the system needs to be
able to encode. One widespread source of confusion concerning localist coding is
the belief that it would require having one neuron to code every possible stimulus
that can be identified. As I have argued previously (Thorpe, 1995; Thorpe and
Imbert, 1989b; Thorpe, 2002), this can be easily demonstrated to be erroneous. Con-
sider the output of the retina via the optic nerve, which contains roughly 1 million
axons. Even if we only consider the situation where each axon can either be “on”
or “off,” this means that there are 2^1,000,000 possible patterns that can be presented. If
we assume that we need one neuron to encode each one of these patterns, this would
need roughly 10^300,000 neurons. Assuming a reasonable size for each neuron, this
would require that the brain be larger than the known universe—clearly, not the
strategy used by natural selection! The error in the argument comes from assuming
that we need to know the total number of possible stimuli. In fact, it is the number
of visual categories that is the real number of interest. There are no hard and fast
numbers for this, but Irving Biederman suggested about 30,000 distinct visual cat-
egories (Biederman, 1987), and I myself have suggested a somewhat higher number
based on the number of entries in a large encyclopedia (Thorpe and Imbert, 1989a).
If we suppose that the real number is something like 100,000, this would mean that
the probability that any given cell could be activated by a given familiar stimulus
should on average be 1 in 100,000, assuming all stimuli to be equally represented.
However, it is clear that in the human MTL recording studies, the chances of finding
a cell that responds appear to be much higher than this. Indeed, the probability of
activation of a given cell to the sorts of stimuli used in these experiments has been
estimated to be 0.54 percent (Waydo et al., 2006).
This point is well made in the paper entitled “Sparse but not ‘grandmother cell’
coding in the medial temporal lobe” (Quian Quiroga et al., 2008a). In a typical
experiment, the researchers are able to record from a few dozen cells simultane-
ously. During a morning session, they show a set of roughly 100 photographs about
five times each to the patient in a random order. Often, they will find one or two
cells that respond well to one of the images. Let us suppose that one of the effective
images was a photograph of Bill Clinton. During the lunch break, the researchers
then constitute a new set of test images, including several other images of Bill
Clinton that are then used to analyze the neuronal responses during an afternoon
session.This is how they are then able to confirm that a single cell is able to respond
to a wide range of different images (and even text strings or speech) that correspond
to the same object.
The critical point is that the hit rate during the morning session is much
higher than one would expect if each neuron in the medial temporal lobe were a
grandmother cell in a sort of library containing hundreds of thousands of possible
objects.
Distributed Coding and the Totem Pole Cell Hypothesis
In their paper, they conclude that even if individual cells can respond in an invariant
way to a highly diverse set of different stimuli that correspond to the same object,
it is highly probable that the same cell might also respond to other completely
unrelated objects. Rafi Malach has called this idea the totem pole cell hypothesis
(Malach, personal communication). According to this idea, each cell has a number
of different “faces,” and might simultaneously be able to respond invariantly to say
“Bill Clinton” but also to some other completely unrelated stimuli—such as the “Taj
Mahal” or an episode of The Simpsons, for example. Clearly, the probability that
the experimenters might hit on two or more totally unrelated stimuli just by chance
would be very low. Nevertheless, cells responding to two separate stimuli have been
seen occasionally, so the idea is nevertheless a real possibility that deserves to be
tested more explicitly. Note that Rafi Malach’s totem pole cell hypothesis is a clear
case where object identity could only be deduced if one has access to the responses
of multiple neurons. Thus, if one such “totem-pole cell” responded to Bill Clinton,
the Taj Mahal, and the Simpsons, and another cell responded to another set of
stimuli including Bill Clinton, the fact that both cells responded on a given trial
could be used to determine that Bill Clinton was present.
An Alternative Hypothesis for the Significance of MTL Responses: Temporal
Tagging
While the high-hit-rate issue appears to argue against the idea that the neurons in
the human hippocampus are truly instances of grandmother cell coding, the totem
pole hypothesis is perhaps not the only option for explaining the phenomenon.
Given the well-known implication of medial temporal lobe structures in memory, it
may be interesting to think of how the responses of such neurons might fit within
an alternative memory related hypothesis. Suppose that one of the key roles of the
hippocampus is to keep track of a subset of all the possible objects and events that
we are able to recognize, namely, those that have been experienced in the relatively
recent past. According to this view, the neurons in the hippocampus are not a dic-
tionary of all the objects that can be recognized, but rather a dictionary of recently
experienced events. Although speculative, this hypothesis fits a number of interest-
ing features of the medial temporal lobe.
First, it has been known for decades that synapses in the hippocampus are very
plastic and show long-term potentiation (LTP) following strong activation (Bliss
and Collingridge, 1993; Bliss and Lømo, 1973). This potentiation can last for days
and even weeks, meaning that a sensory input that is repeated is likely to produce
a stronger response to the second presentation, even when the interval between
presentations is a matter of weeks. Second, in a study of neuronal responses in a
region close to the anterior thalamus that could potentially receive information
from the medial temporal lobe, Rolls and colleagues described neurons that had the
remarkable property of responding strongly to effectively any visual stimulus that
had been seen recently (Rolls et al., 1982). A particularly remarkable finding is the
fact that such neurons could have visual responses that have latencies as short as
130 ms. This form of invariant response to familiar stimuli is a major challenge for
computational models, because it implies that there must be massive convergence
from higher-order visual areas to allow such a general response. How might this be
achieved? It is just conceivable that somehow all possible visual stimuli converge
in one processing stage to produce a generic “familiarity” response, but this seems
unlikely. Alternatively, it might be that the brain determines familiarity individually
for recently encountered objects before putting them all together. Could this be
what is seen at the level of the single-unit responses seen in the human medial
temporal lobe?
One of the strategies used by the team performing the human MTL recordings
when choosing the initial set of images for testing is to specifically ask the patients
for information about their favorite TV programs and movies, together with their
preferred actors. This strategy could well be one of the reasons for the high success
rates seen in the experiments but leaves open the issue of whether it is the fact that
the patients are highly familiar with the stimuli or whether it is the fact that they
may have seen them relatively recently that is critical. It appears that the neurons
can sometimes respond strongly even on the very first presentation of a particular
photograph during the morning recording session (Pedreira et al., 2010), and this
might be taken as evidence that recency is not critical for obtaining a response.
However, as far as I am aware, it would be difficult to rule out the possibility that
the patient has seen the stimulus elsewhere in the relatively recent past.
The hypothesis is therefore that these hippocampal neurons may effectively be
keeping track of a relatively limited subset of all possible objects that can be rec-
ognized, namely, those that have been experienced within say the past few weeks.
The precise duration of this temporal tagging period may not be strictly fixed, but
could potentially be related to the maximal duration of LTP mechanisms, that is to
say, periods that could extend to several weeks. The critical question would now be
to estimate the total number of different objects, scenes, and events that are typically
tracked during this period. It may well be thousands or maybe more, but it certainly
will be a lot smaller than the total number of objects that we are capable of recog-
nizing, which will probably be orders of magnitude higher. If the numbers really are
in the range of thousands, then this might well be able to account for the anoma-
lously high hit rate seen in the hippocampus. Clearly, if during any recording session,
it is possible to record from several dozens of neurons, and each is tested with 100
different images, many of which are likely to correspond to stimuli that have been
experienced recently, it would be enough to have a system that was only tracking a
few thousand stimuli to be relatively confident about finding at least one neuron
that could be activated during any given experiment.
One critical implication of this view of the hippocampus is that a neuron that
currently shows highly selective and invariant responses to a particular input (for
example, “Jennifer Aniston”) is not required to remain selective to that particular
stimulus indefinitely. Specifically, if one were able to record again from the same
cell two years later (clearly a technical impossibility), it might well be found to
respond to something completely different. The idea is that a neuron will remain
selective for a particular stimulus as long as that stimulus is reexperienced reason-
ably frequently, that is to say every few weeks or so.
We therefore have at least two quite different views about the highly selective
responses reported in human medial temporal lobe, and the puzzling fact that the
hit rate for finding these cells seems to be excessively high—too high to be compat-
ible with the idea that the hippocampus contains a complete set of grandmother-
type cells. The first is the suggestion by Rafi Malach that the cells may be multifaceted,
like totem poles, and each capable of responding to many totally different objects.
The second option is that the cells may only be responding to a subset of the large
number of recognizable objects, namely the subset corresponding to objects that
have been seen or experienced in the relatively recent past. And, of course, these
options are not mutually exclusive.
The Origin of Highly Selective Responses
It is important to realize that for both models, we still need to explain how the
neurons are able to achieve this high level of selectivity. More specifically, the fun-
damental question concerns the nature of the coding scheme being used by the
neurons that provide the input to such neurons. We know that the neurons in the
hippocampus will get their visual inputs from structures in the ventral stream,
including areas such as perirhinal and entorhinal cortex. And before that, these
structures in turn will be receiving from the hierarchy of processing areas that make
up the ventral stream. What sort of coding is being used in these structures?
Here, there are clearly a number of theoretical possibilities. To make things clear,
consider a hypothetical neuron that responds selectively to any visual input that
corresponds to a particular person—whether it is the patient’s brother or a well-
known celebrity. What sorts of neurons are providing the inputs to such a cell? One
possibility is that the hippocampal cell can somehow generate invariant responses
to a wide range of physically different visual inputs directly from a true distributed
code at the previous layer. If this is the case, then it is clear that this would involve
mechanisms that we currently cannot understand.
In contrast, one way of obtaining an invariant response that we can understand
involves pooling together outputs from a population of cells that is each selective
to a particular instance of the stimulus. This sort of pooling mechanism has already
been suggested for generating selectivity to views of the head that are invariant to
the angle of view, based on pooling together different responses, each of which is
selective to a particular viewing angle (Perrett et al., 1987). Indeed, this form of
view-specific recognition mechanism now seems to be increasingly seen as the most
plausible way of generating invariance (Wallis and Rolls, 1997). But if true, it is clear
that the neurons providing the input would themselves have to be quite selective,
and indeed they might look quite a lot like the hypothetical grandmother cell,
although without the remarkable invariance seen in the medial temporal lobe
neurons.
In the end, an answer to this sort of question will have to wait until we have
single-unit recordings from the cortical regions that provide the inputs to the medial
temporal lobe. For the time being, such recordings are simply not available, largely
for technical reasons. Few researchers working in humans have been able to record
from individual neurons in the ventral stream structures that provide the highly
processed information. Obviously, there are far more data available from work on
monkeys, but it is difficult to extrapolate from one species to the other. While we
can be confident that the human patients can indeed recognize “Jennifer Aniston,”
we have no idea whether monkeys would make the same sort of judgment of indi-
vidual identity. Furthermore, there is even some recent evidence demonstrating that
the frequency selectivity of neurons in the human auditory cortex is considerably
sharper than has ever been reported in previous studies in mammals with the excep-
tion of bats (Bitterman et al., 2008). This raises the intriguing possibility that selec-
tivity in human cortical areas may be higher than has generally been seen in animal
studies.
The Latency Problem—and a Hypothesis
There is another feature of the medial temporal lobe neurons that poses a major
puzzle, namely, the latency at which they respond. Typically, their latency is around
300 ms (Mormann et al., 2008), although sometimes onset latencies can be even
longer. Such values are substantially higher than the 100 ms latencies typically seen
for neuronal responses in monkey inferotemporal cortex. It is even substantially
longer than the 120–130 ms latency for ultrafast saccades to animals that have been
reported recently in humans (Kirchner and Thorpe, 2006) and the 100–110 ms
latency seen for selective saccades toward human faces (Crouzet et al., 2010).
Intriguingly, one of the human single-unit recording studies compared onset latency
in four different structures and found that while units in the hippocampus, amygdala,
and entorhinal cortex all tended to start firing at similar times, those in the parahip-
pocampal cortex started firing substantially earlier, from as little as 150–200 ms
(Mormann et al., 2008). This suggests that there may be a significant delay before
activation within the hippocampus can start, leaving the way open for more complex
processing mechanisms than would be predicted by a simple feedforward pass
through the relatively small number of intervening stages.
What might explain these latency differences? It certainly does not fit the simple
idea that the hippocampus is simply pooling the output of several “view-tuned” IT
neurons. There are two synapses between the later stages of IT and the hippo-
campus, perhaps three if the circuits are more complicated than currently believed.
Conduction delays and postsynaptic integration times simply cannot explain this
additional delay. Remember that in the primate ventral stream, the latency differ-
ence between V1 and IT is only about 40 ms, despite the fact that this includes the
time for processing in intermediate structures such as V2 and V4. Indeed, if the
estimates of conduction velocities for cortico-cortical connections are correct
(namely, 1–2 m/s), the physical distance between V1 and IT would account for
a substantial proportion of that 40 ms difference, implying that the integration
time at each processing stage must be remarkably short, probably only a few
milliseconds.
I would like to propose what I believe to be a novel hypothesis, which appears to
make considerable sense. Could it be that the hippocampus contains a mechanism
that will respond when a particular pattern of activation in the cortex is maintained
for some minimal duration? Suppose that this minimal duration was 150–200 ms.
This would mean that neurons in the neocortex that are only activated briefly will
not result in activation in the hippocampus, even though processing in the ventral
stream may have been complete. The fact that processing in the ventral stream can
be very rapid has been beautifully demonstrated by neurophysiological studies of
responses to RSVP sequences (Keysers et al., 2001). These authors showed that even
at 72 frames per second, neurons in IT can show a transient “blip” of activation
around 100 ms after their preferred visual stimulus has been shown, demonstrating
that processing with just a single feedforward wave of processing can be enough.
However, these “blips” of activation are generally too weak to reach consciousness.
Suppose, though, that if the activity in inferotemporal neurons can be maintained
long enough for the hippocampal circuits to kick in, even such briefly presented
stimuli can potentially be stored in episodic memory by leaving a trace in the hippo-
campus. If the same stimulus is presented again later, it would be recognized as
familiar if and only if there is also a response in the hippocampus, which effectively
would mean that the cortical activation on a previous presentation had been main-
tained for the critical period.
In fact, this suggestion can also be related to the highly controversial question of
the neural correlates of consciousness (Dehaene and Naccache, 2001; Lamme, 2006;
Tononi and Koch, 2008). Many authors have suggested that conscious perception
may be related to synchronous activation across multiple cortical areas, or maybe
specific types of oscillatory activity (Uhlhaas et al., 2009). The suggestion here is
considerably simpler. According to this view, conscious perception and the storing
of episodic memory traces could simply be gated by the duration of activation within
the cortex, with the hippocampus acting as a form of gatekeeper, measuring the
duration of activation within the cortex and selecting those patterns that are main-
tained for longer than the minimum time. It could be that hippocampal circuits are
specifically designed for performing this function.
Mechanisms for Generating Highly Selective Responses
Let us now return to the nature of the cortical inputs that provide the inputs to the
medial temporal lobe, and the sort of selectivity that might be present. Given that
the simplest hypothesis for generating invariant responses would involve combining
the outputs of neuronal mechanisms that are themselves quite selective, let us now
consider the question of whether we can propose any neurophysiologically plausible
mechanism for producing selectivity.
I suspect that one reason why “grandmother cell” encoding is often considered
to be implausible lies in the lack of generally accepted mechanisms that could lead
to development of such strong selectivity. It is probably fair to say that few if any
such mechanisms are currently known. However, in this section, we will have a look
at one possible mechanism that could potentially explain why neurons might become
selective to frequently experienced stimuli. Two specific mechanisms will be
introduced—one depending on spike-time dependent plasticity (STDP), the other
using temporal coding to control the proportion of active neurons.
STDP-Based Learning
Spike-time dependent plasticity is a phenomenon that first became prominent in
the late 1990s (Song et al., 2000) and has since generated a great deal of interest
among the modeling community (Dan and Poo, 2006). One common type of STDP
rule changes synaptic weight as a consequence of the relative timing of pre- and
postsynaptic spikes—inputs that fire before the postsynaptic spike are strengthened,
whereas those that fire afterward are depressed. This sort of mechanism is known
to reinforce synapses that tend to fire in synchrony, and this effect has been exten-
sively studied. But another significant finding is that STDP systematically concen-
trates high synaptic strengths on early firing inputs (Guyonneau et al., 2005). Thus,
if a neuron repeatedly receives waves of spikes via its afferents and these waves of
spikes occur in a pattern that tends to repeat, over time the neuron will end up with
all the highest synaptic weights located on the afferents that fired first. We have
found that this simple mechanism can allow neurons to learn to respond selectively
to repeating stimuli, and this may provide a simple mechanism for generating
selectivity.
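The rule itself is simple to state. The following minimal sketch (parameter values are illustrative and not taken from the cited studies) implements one pair-based version: presynaptic spikes arriving before the postsynaptic spike are potentiated, and those arriving after it are depressed, with exponential time windows:

import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Update weights w given one presynaptic spike time per afferent and a
    single postsynaptic spike time t_post (all times in milliseconds)."""
    dt = t_post - t_pre                          # > 0: pre fired before post
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),    # potentiation
                  -a_minus * np.exp(dt / tau))   # depression
    return np.clip(w + dw, 0.0, 1.0)

rng = np.random.default_rng(6)
weights = np.full(10, 0.5)
t_pre = rng.uniform(0.0, 40.0, 10)               # afferent spike times (ms)
weights = stdp_update(weights, t_pre, t_post=20.0)
# Afferents that fired before the postsynaptic spike have gained weight;
# those that fired after it have lost weight.
print(np.round(weights, 3))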
To realize why this occurs, it is necessary to introduce another natural feature of
any integrate-and-fire neuron, namely, the fact that they tend to fire earlier when
activated strongly. With a weak input, the neuron can take a long time to reach
threshold, whereas with strong inputs, the threshold is reached rapidly. As I first
pointed out two decades ago, this means that when one considers a population of
neurons, information can be contained in the order in which cells fire (Thorpe, 1990).
However, only relatively recently has the idea been directly tested experimentally
(see, for example, Gollisch and Meister, 2008). When an image is flashed on the
retina, each cell will charge up and reach its threshold for spike initiation, but the
time at which the neuron spikes will vary, with the most strongly activated cells
(corresponding to the points in the image where the local contrast is highest) firing
first. As a consequence, the order of firing will encode information about the image,
even under conditions where each cell only fires one spike. Theoretical studies have
demonstrated how this sort of order based information can be a very efficient way
to transmit information (VanRullen and Thorpe, 2001), considerably more efficient
than conventional rate based coding, especially if the underlying rate is encoded by
a random Poisson-like process (Gautrais and Thorpe, 1998).
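The link between input strength and firing time is also easy to illustrate. In the sketch below (arbitrary constants; a simple leaky integrator charging toward its input), a more strongly driven cell reaches threshold sooner, so the order in which the population fires reflects the pattern of input strengths even though each cell emits only a single spike:

import numpy as np

rng = np.random.default_rng(7)
drive = rng.uniform(0.6, 1.0, 8)          # input strength (e.g., local contrast)
threshold, tau = 0.5, 10.0                # firing threshold and membrane tau (ms)

# Leaky integrator charging toward `drive`: V(t) = drive * (1 - exp(-t / tau)).
# Solving V(t) = threshold gives the time of the cell's first (and only) spike.
latency = -tau * np.log(1.0 - threshold / drive)

for rank, cell in enumerate(np.argsort(latency)):
    print(f"rank {rank}: cell {cell} fires at {latency[cell]:5.1f} ms "
          f"(drive {drive[cell]:.2f})")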
Now, let us consider what would happen if a neuron equipped with STDP “listens”
to the output of a retina-like structure using this sort of coding. If there was only
one neuron connected to the retina, and the same image was repeatedly flashed on
the retina, the STDP rule will end up concentrating high weights on the earliest
firing cells in the retina. Since these will correspond to the highest contrast parts of
the image, the neuron will end up with a set of synaptic connections that will make
it selective to the specific pattern that has been shown. Of course, this is not a real-
istic situation, because in reality there will not be just one neuron listening, and
there will not be just one stimulus being shown. But as we have shown using simula-
tions (Guyonneau et al., 2004), when many neurons are present and inhibitory
connections exist between the neurons such that as soon as one of the neurons fires,
it prevents any other neurons from firing, only one neuron will be allowed to learn
in response to a given stimulus. This will lead the system to act as a competitive
learning mechanism in which different neurons will become selective to different
patterns. The power of such a mechanism is illustrated by a study that looked at
STDP-based learning in a model visual system that was stimulated by a set of images
taken from the Caltech face database—a set of several hundred photographs of
human faces seen on highly varied backgrounds (Masquelier and Thorpe, 2007). It
was found that, even though there was no explicit instruction, the neurons at the
top end of the visual pathway ended up selective for facelike features, simply
because these were the features that occurred most frequently in the inputs. It is
important to realize that this selectivity emerged in an entirely unsupervised way,
and indeed, if the system was instead stimulated with photographs of motorcycles,
the neurons became selective to parts of motorcycles instead. And when the input
image set contains equal numbers of faces, motorcycles, and other varied distractors,
only the faces and motorcycles are learned, because there is not enough similarity
between the distractors to allow the development of selective responses.
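The essential dynamic can be reproduced with a very small simulation in the same spirit (the parameters below are illustrative and are not those of the studies cited): a single neuron repeatedly receives the same wave of afferent spikes, fires as soon as the summed weight of the afferents that have already fired crosses a threshold, and then applies additive STDP. Over presentations, the high weights concentrate on the earliest-firing afferents and the response latency shortens:

import numpy as np

rng = np.random.default_rng(8)
n_afferents, threshold = 100, 15.0
a_plus, a_minus = 0.03, 0.03                     # additive STDP step sizes
latencies = np.sort(rng.uniform(0.0, 50.0, n_afferents))  # fixed spike wave (ms)
weights = np.full(n_afferents, 0.5)

def postsynaptic_spike(weights):
    """Index of the afferent whose spike drives the neuron over threshold."""
    return int(np.searchsorted(np.cumsum(weights), threshold))

print("latency before learning:",
      round(latencies[postsynaptic_spike(weights)], 1), "ms")
for _ in range(200):                             # repeated presentations
    k = postsynaptic_spike(weights)              # post spike after afferent k
    weights[:k + 1] = np.minimum(weights[:k + 1] + a_plus, 1.0)   # LTP
    weights[k + 1:] = np.maximum(weights[k + 1:] - a_minus, 0.0)  # LTD

print("latency after learning :",
      round(latencies[postsynaptic_spike(weights)], 1), "ms")
print("mean weight, 20 earliest afferents:", weights[:20].mean())
print("mean weight, 20 latest afferents  :", weights[-20:].mean())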
In that particular set of simulations, we used a hierarchical feedforward process-
ing architecture similar to one that has been successfully used to model processing
in the primate ventral stream by the MIT group (Serre et al., 2007). However, it is
important to realize that the same basic principles apply for essentially any archi-
tecture. For example, in another study, we considered the case of a neuron receiving
activity from a set of 2,000 randomly firing afferents in which there was a particular
pattern of activity lasting 50 ms, which affected a subset of the afferents and repeated
at unpredictable intervals. Remarkably, using just one neuron equipped with STDP,
we found that the neuron would reliably learn to respond to the repeating pattern,
and indeed would learn to respond within just a few milliseconds of the start of the
pattern (Masquelier et al., 2008). Furthermore, when several different neurons are
listening to the same set of afferents, and lateral inhibition between the neurons
prevents more than one neuron firing at the same time, two interesting phenomena
can occur. First, several different neurons can learn to respond to the same stimulus
pattern but at different times. Thus, with an input pattern lasting 50 ms, one neuron
might respond very close to the start of the pattern, but other neurons would learn
to respond to later parts of the same pattern, causing the neurons to “stack,” with
the result that any given pattern would result in the firing of a string of different
units (Masquelier et al., 2009). Secondly, when several different patterns are present
in the input activity, different neurons will learn to respond selectively to those different patterns.
Temporal Coding for Controlling the Proportion of Active Neurons
We have seen how STDP can cause high weights to concentrate on those inputs
that fire earliest during a repeating spatiotemporal pattern. In order for this effect
to allow responses to become selective, we need to add another mechanism, namely,
a mechanism that keeps the percentage of active neurons within bounds. Here again,
allowing neurons to use a temporal coding strategy proves to be particularly interesting. If we consider how a population of neurons will respond to a flashed visual
stimulus, it is natural to suppose that the first neurons to fire will tend to be the ones
that get the strongest input. As processing progresses, more and more neurons will
fire, but it would be relatively straightforward to include inhibitory circuits that
could control the percentage of neurons that are able to respond. The principle is
illustrated in figure 1.3 (plate 1). The basic idea is that we use STDP to select a small subset of inputs that are given strong weights; the other inputs are given low or even zero weights. For example, in this case only 4 of the 40 inputs have a high weight, and we have also fixed the number of inputs that can fire at 4 by using an inhibitory
feedback circuit that prevents more than that number of input neurons from firing.
Under these conditions it is clear that the probability of getting a given level of
activation in the output neuron can be calculated using a binomial distribution. To
simplify the calculation, assume that each of the potentiated synapses has a weight
of 1. In that case, if the input pattern is chosen randomly, and only 10 percent of the
inputs are allowed to be active, it is relatively simple to calculate the probability
that a given number of potentiated synapses are activated. There is a less than 0.4 percent chance of having 3 of the potentiated synapses active, and only once in 10,000 times would we expect to hit the maximum level of excitation. This follows
very simply from the fact that there are a very large number of ways of choosing 4
inputs out of a set of 40, only one of which would match perfectly the set of 4 that
have high weights.
The example illustrated here is obviously very much simplified compared with
the case of a real neuron that might well have 500 inputs from the previous layer.
Suppose that 50 of the synapses are potentiated (the others set at zero) and the
percentage of active inputs fixed at 10 percent. Here again, it is relatively straightforward to show that the probability of more than 10 of the 50 inputs with strong weights being active by chance is less than 1 percent. The key point is that by setting
the threshold of the output neuron appropriately, the neuron can be made to be
arbitrarily selective. It is perhaps worth stressing that these very attractive features
stem from the fact that we have controlled the percentage of active cells. If there
were no constraints on how many cells can fire, there would be nothing to stop any
output cell from going over threshold. Other authors (Furber et al., 2004) have made
similar points.
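These probabilities are easy to reproduce with the binomial treatment suggested in the text, approximating each potentiated synapse as active independently with probability 0.1. The following snippet is simply a numerical check of the figures quoted above:

from scipy.stats import binom

p_active = 0.10  # fraction of inputs allowed to fire

# 40 inputs, 4 potentiated synapses
print(binom.pmf(3, 4, p_active))   # ~0.0036: "less than 0.4 percent" chance of 3 hits
print(binom.pmf(4, 4, p_active))   # 0.0001: maximal excitation about once in 10,000

# 500 inputs, 50 potentiated synapses
print(binom.sf(10, 50, p_active))  # ~0.009: more than 10 hits by chance, under 1 percent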
The essential point is that generating high selectivity is not in itself a problem.
Once a neuron has a set of weights where only a relatively small proportion of the
incoming spikes can produce responses, and where the percentage of active inputs
is limited, producing selectivity to a particular input pattern need only require that
the threshold for firing be set well above the level of activation that could be produced by chance.

Figure 1.3 (plate 1)
[Panels A and B: input units (I) converging on an output neuron (N), with the neuron's firing threshold indicated.] Illustration of how controlling the number of active neurons in a system where only a small number of synapses have high weights can allow output neurons to be highly selective. The output neuron N receives synapses from a large number of input units (here only 40), of which only 10 percent are potentiated. During an activation cycle, the input activation pattern increases until some fixed percentage of the input units has fired. At this point, the inhibitory feedback circuit prevents further neurons from firing. In this case, the first four input units to fire are the ones with the high synaptic weights, producing maximal activity of the output neuron. By appropriately setting the threshold of the output neuron, it can be made arbitrarily selective.

The real trick is to achieve invariance, that is, the ability to respond to a wide range of variations of the same stimulus. This is clearly something that the
brain can do, as illustrated by the responses of neurons in monkey inferotemporal
cortex (Booth and Rolls, 1998; Tovee et al., 1994; Zoccolan et al., 2007), but even
more clearly by neurons in the human medial temporal lobe (Quian Quiroga et al.,
2005). Currently, the most plausible hypothesis for generating invariance involves
pooling together responses from neurons that themselves are selective to a subset
of different views. Indeed, there have been a number of interesting suggestions for
how this regrouping of related stimuli could be achieved by having mechanisms that
are sensitive to temporally correlated inputs that change relatively slowly (Foldiak,
1990; Wiskott and Sejnowski, 2002).
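A toy illustration of this pooling idea (the numbers here are purely hypothetical and do not come from the cited studies): a downstream unit that takes the maximum over several view-tuned units inherits their selectivity for the object while discarding their selectivity for the particular view.

import numpy as np

# Each row: one view-tuned unit's response to the same object seen from four views.
view_tuned = np.array([
    [0.90, 0.10, 0.00, 0.10],   # unit tuned to view 1
    [0.10, 0.80, 0.20, 0.00],   # unit tuned to view 2
    [0.00, 0.10, 0.90, 0.20],   # unit tuned to view 3
    [0.10, 0.00, 0.10, 0.85],   # unit tuned to view 4
])

# A pooling unit taking the maximum over the view-tuned units responds
# strongly whichever view is shown: an approximately view-invariant response.
print(view_tuned.max(axis=0))   # [0.9  0.8  0.9  0.85]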
A Continuum of Representations within Cortex?
What sort of picture should we expect to find within cortical areas of the human ventral processing stream? According to one view, the neurons can be thought of
as a distributed representation in which each neuron participates in representing a
wide range of potentially unrelated visual patterns. In contrast, an extreme localist
position would claim that individual neurons are effectively specialized for encoding
particular visual objects. But there is another, perhaps more biologically plausible
option. According to this view, an area such as inferotemporal cortex could
reasonably contain a full range of types of neuronal representation. These could
include a proportion of relatively uncommitted neurons that are either relatively
visually unresponsive, or capable of responding to a wide range of different
unrelated stimuli. There might also be neurons that look very much like a distributed
representation system, responding to different extents to a wide range of differing
stimuli. But there might also be some proportion of highly selective neurons that
literally will only respond to very specific (and probably highly familiar) stimuli.
Many of the existing neurophysiological data are actually consistent with this
sort of hybrid view, because it is clear that there is indeed a continuum of selectivity
(see, for example, Rolls and Tovee, 1995; Young and Yamane, 1992; Zoccolan et al.,
2007).
If we accept that such a continuum exists, it is clear that the proportions of these
different types of neuron will be a critical issue. Imaging techniques such as fMRI
are clearly limited to providing information about the average response pattern.
While more advanced approaches such as fMRI adaptation have been used to
provide evidence that neurons in particular brain areas can be selective to particular
stimulus attributes (Grill-Spector et al., 1999), drawing direct conclusions about the
degree of selectivity at the neuronal level is not simple (Sawamura et al., 2006).
Clearly, the most direct way to obtain a picture of how neurons represent visual
  • 13.
    Introduction: A GuidedTour through the Book This chapter gives an overview of the content of the book. We follow the chapters in the sequence in which they appear, summarize key findings and theoretical argu- ments, and clarify the relationships between the chapters.Along the way, we explain some basic issues of overarching importance. The book is divided into two parts: “Theory and Experiment” and “Background and Methods.” The first part describes recent primary research findings about the visual system, along with cutting-edge theory and methodological considerations. The second part provides some of the more general neuroscientific and mathemati- cal background needed for understanding the first part. Although each chapter is independent, the first part, “Theory and Experiment,” is designed to be read in sequence. The sequence roughly follows the stages of ventral-stream visual processing, which forms the focus of the book. Within this rough order, we placed closely related chapters together.We purposely interspersed theoretical and experimental chapters, and, within the latter, animal electrode recording and human fMRI studies. An overview of the chapters is given in figure I.1 and table I.1. Localist and Distributed Codes In chapter 1, Simon J. Thorpe reviews the debate about localist versus distributed neuronal coding in the context of recent experimental evidence. Early findings of neuronal selectivity to simple features at low levels of the visual hierarchy and to more complex features at higher levels suggested, by extrapolation, that there might be neurons that respond selectively to particular objects, such as one’s grandmother. On a continuum of possible coding schemes from localist to distributed, this “grand- mother cell” theory forms the localist pole. A code of grandmother cells could still have multiple neurons devoted to each object; the key feature is the high selectivity of the neurons. A grandmother-cell code is explicit in that no further processing is
  • 14.
    2 Introduction required toread out the code and conclude that a particular object is present. At the other end of the continuum is a distributed code, in which each neuron will respond to many different objects; thus, there is no single neuron that unequivocally indicates the presence of a particular object. In a distributed code, the information is in the combination of active neurons. For a population of n neurons, a localist single-neuron code can represent no more than n distinct objects, one for each neuron—and less if multiple neurons Figure I.1 Chapter overview. Along the vertical axis (arrow on the left), the chapters have been arranged roughly according to the stage of processing they focus on. Horizontally, chapters with a stronger focus on a particular stage of processing are closer to the axis on the left. Where possible, chapters related by other criteria are grouped together. For example, chapters 5 and 6 use the method of voxel-receptive-field modeling, while chapters 9 and 11–14 use the method of representational similarity analysis. Neuron and voxel icons label chapters using neuronal recordings and fMRI, respectively. Chapters focusing on theory, experiment, or methods have been visually indicated (see legend), with methods chapters marked by a gray underlay and experimental chapters with a strong methodological component marked by a partial gray underlay.
  • 15.
    A Guided Tourthrough the Book 3 Table I.1 Chapter content overview First Author, Last Author Content Type Regions Brain-Activity Measurement Content 1 Thorpe Theory, model, exp. Retina-IT Electrode Localist vs. distributed coding; spike-timing- dependent coding; plasticity 2 Nirenberg Theory, exp., methods Retina In vitro recording Ruling out retinal codes by comparing information between code and behavior 3 Poort, Roelfsema Exp. V1 Electrode Decoding stimulus features and attentional states from V1 neurons 4 Kamitani Exp., methods V1-3, MT fMRI Decoding human early visual population codes and stimulus reconstruction 5 Kay Methods, model, exp. V1-4 fMRI Voxel-receptive-field modeling for identification of natural images 6 Gallant, Wu Methods, model, exp. V1 fMRI Methodological framework for voxel-receptive-field modeling 7 Pasupathy, Brincat Exp. V4, pIT Electrode Shape-contour representation by convex/ concave curvature-feature combinations 8 Houghton, Victor Theory, methods, exp. — Electrode Measuring representational dissimilarity by spike-train edit distances 9 Op de Beeck Exp., theory IT fMRI Category modules vs. feature map; influences of task and learning 10 Hung, DiCarlo Exp., theory IT Electrode Decoding object category and identity at small latencies after stimulus onset; invariances 11 Kriegeskorte, Mur Exp., theory, model, methods IT fMRI, electrode Categoricality of object representation, comparing human and monkey; methods 12 Connolly, Haxby Exp., theory, methods IT fMRI Transformation of similarity across stages; advantages of pattern similarity analyses 13 Kravitz, Baker Exp., theory IT fMRI Object, body, and scene representations; position dependence 14 Walther, Fei-Fei Exp., theory, methods IT fMRI Distributed scene representations; decoding confusions predict behavioral confusions
  • 16.
redundantly code for the same object, as is commonly assumed. A distributed code can use combinations of neurons and code for a vast number of different objects (for binary responses, for example, there are 2^n distinct activity patterns). If the patterns used for representing objects are randomly chosen among the 2^n combinations, about half of the neurons will respond to any given object. A distributed code can also represent the stimuli with some redundancy, making it robust to damage to particular neurons. Moreover, it can represent the objects in terms of sensory or semantic properties, thus placing the objects in a multidimensional abstract space that reflects their relationships. Such an abstract space might emphasize behaviorally relevant similarities and differences in a graded or categorical manner. Although the signals indicating the presence of a particular object are distributed, the code may still be considered "explicit" if readout takes just a single step—for example, a downstream neuron that computes a linear combination of the neuronal population. (Such a downstream neuron would be a localist neuron.)
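As a concrete illustration of the capacity argument above, here is a small Python sketch (illustrative only; the neuron and object counts are arbitrary assumptions, not material from any chapter). With n binary neurons, a localist code can label at most n objects, whereas a distributed code can in principle label up to 2^n, and a distributed pattern can still be read out "explicitly" by a hypothetical downstream unit in a single linear step.

```python
# Toy sketch: capacity of localist vs. distributed binary codes, plus one-step readout.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 16
n_objects = 50                      # more objects than neurons: out of reach for a localist code

print("localist capacity:   ", n_neurons)
print("distributed capacity:", 2 ** n_neurons)

# Distributed code: each object is a random binary pattern across the population;
# on average about half of the neurons respond to any given object.
codes = rng.integers(0, 2, size=(n_objects, n_neurons)).astype(float)
print("mean fraction of neurons active per object:", codes.mean())

# One-step linear readout of object 0 by a hypothetical downstream neuron:
# weight +1 for neurons that should be on, -1 for neurons that should be off.
w = 2 * codes[0] - 1
scores = codes @ w
print("object 0 wins the readout:", int(np.argmax(scores)) == 0)
```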
Note that what is called localist and distributed is fundamentally in the eye of the beholder, as it depends on the way the researcher thinks of the information to be represented. For example, consider the case of two neurons that encode the two-dimensional space of different jets of water. One neuron codes the amount of water per unit of time; the other the temperature of the water. A researcher who thinks of the space in terms of amount per unit of time and temperature will conclude that the code is localist. But a researcher who thinks of jets of water in terms of the amounts of cold and hot water per unit of time will conclude that the code is distributed. In practice, we tend to think of a code as localist if we can characterize each neuron's preferences in very simple terms; we think of the code as distributed if the description of the preference of a single neuron is complex and doesn't correspond to any concepts for describing the content that appear natural to us.

The "grandmother cell" theory did not initially have any direct empirical support. Findings of "grandmother" (or similarly highly selective) neurons were elusive. The failure to find such neurons, of course, doesn't prove that they don't exist. The idea of grandmother cells has also been criticized on theoretical grounds for failing to exploit the combinatorics. This led to a preference for more distributed coding schemes among many theorists. Indeed, distributed codes and multivariate analysis of the information they carry are a central theme of this book.

Sparse Distributed Codes

Despite the advantages of distributed codes, the appeal of highly selective single cells is not merely in the eye of the electrophysiologist who happens to record one cell at a time with a single electrode. The reason why more of the page you are reading is white than black may be the cost of ink. Similarly, the metabolic cost of neuronal activity creates an incentive for a code that is sparser (i.e., fewer cells responding to a particular object due to each cell's greater selectivity) than one that fully exploits the combinatorics. On the continuum between localist and distributed, the concept of a sparse code has emerged as a compromise that may best combine the advantages of both schemes. In a sparse code, few neurons respond to any given stimulus. And, conversely, few stimuli drive any given neuron.

It seems likely that neurophysiological recordings have been biased toward describing neurons that fire more rapidly and less selectively, making them easier to find while looking for responses. Consistent with this notion, unbiased neurophysiological recordings using electrode arrays tend to report high selectivities, suggesting sparse representations, in a variety of systems including the songbird vocal center, the mouse auditory cortex, and the human hippocampus.
Thorpe discusses additional arguments in favor of sparse coding. More recent evidence from neurophysiological recordings in the human medial temporal lobe suggests that there are neurons responding selectively to particular complex objects, for example, to Jennifer Aniston. Interestingly, the "Jennifer-Aniston cell" responded not just to one image, but to several images of the actress and even to the visual presentation of her name in writing. The cell did not respond to any other stimuli that the researchers tried. However, the relatively small number of stimuli and neurons that can be examined in such experiments (on the order of hundreds) suggests that neurons of this type might well respond to multiple particular objects. The "Jennifer-Aniston cell," then, might be more promiscuous than its exclusive preference for the actress among the sampled set of stimuli would suggest. Thorpe (citing Rafi Malach) refers to this as the "totem-pole cell" theory, where a cell has multiple distinct preferences like the faces on a totem pole.

It is important to note that descriptions like "Jennifer-Aniston cell" or "totem-pole cell" are likely to be caricatures that oversimplify the nature of these neurons. The underlying computations are more complex and much less well understood than those of early visual neurons.

In a distributed but sparse code, different objects are represented by largely disjoint sets of cells. This may render the code robust to interference between objects. Interference of multiple simultaneously present objects (i.e., the superposition of their representations) could create ambiguity in a maximally distributed code. Interference could also erase memories: If each neuron is activated by many different objects, then spike-timing-dependent plasticity might wash away a memory that is not reactivated over a long time. Highly selective neurons, Thorpe argues, could maintain a memory over decades without the need for reactivation. Their high selectivity would protect them from interference. He suggests that the brain might contain neuronal "dark matter," that is, neurons so selective that they may not fire for years and are virtually impossible to elicit a response from in a neurophysiological experiment.

Sampling Limitations: Few Stimuli, Few Response Channels

With current techniques, our chances of activating neuronal "dark matter," or of ever finding the other loves of the "Jennifer-Aniston cell," are slim. This reminds us of a basic challenge for our field: our limited ability to sample brain responses to visual stimuli. High-resolution imaging and advances in multi-electrode array recording have greatly increased the amount of information we can acquire about brain-activity patterns. However, our measurements will not fully capture the information present in neuronal activity patterns in the foreseeable future. The subsample we take always consists in a tiny proportion of the information that would be required
to fully describe the spatiotemporal activity pattern in a given brain region. Electrode recording and fMRI tap into population activity in fundamentally different ways (which we discuss further at the end of this overview). fMRI gives us a temporally smoothed and spatially strongly blurred (and locally distorted) depiction of activity (i.e., the hemodynamic response), with a single voxel reflecting the average activity across hundreds of thousands of neurons (and possibly other cell types). Neuronal recording gives us spatiotemporally precise information, but only for a vanishingly small subset of the neurons in the region of interest (and possibly biased toward certain neuronal types over others). In terms of information rates, fMRI and electrode recording are similarly coarse: An fMRI acquisition might provide us with, say, 100,000 channels sampled once per second, and an electrode array can record from, say, 100 channels sampled 1,000 times per second.

We subsample not only the response space but also the stimulus space. Typical studies present only hundreds of stimuli (give or take an order of magnitude). In fMRI, the stimuli are often grouped into just a handful of categories, and only category-average response patterns are analyzed. However, to characterize the high-dimensional continuous space of images, a much larger number of stimuli is needed. Consider a digital grayscale image defined by 64 × 64 pixels (4,096 pixels) with intensities ranging from 0 to 255 (a pretty small image by today's standards). The number of possible such images is huge: 256^4,096 (~10^10,000). The more relevant subset of "natural" images is much smaller, but this subset is still huge and ill defined. To complicate matters, the concept of "visual object" is inherently vague and implies the prior theoretical assumption that scenes are somehow parsed into constituent objects.

Repeated presentations of the same stimulus sample help distinguish signal from noise in the responses. Noise inevitably corrupts our data to some degree. The number of responses sampled limits the complexity of the models we can fit to the data. A model that is realistically complex, given what we know about the brain, is often unrealistic to fit, given the amount of data we have. To fit such a model would be to pretend that the data provide more information than they do, and generalization of our predictions to new data sets would suffer (see the discussion of bias versus variance in chapters 18 and 19). Both subsampling of the response pattern and limited model complexity cause us to underestimate the stimulus information present in a brain region's activity patterns. Our estimates are therefore usually lower bounds on the information actually present.

Retina: Rate Code Ruled Out

Sheila Nirenberg describes an interesting exception to the rule of lower bounds on activity-pattern information (chapter 2). She describes a study in which an upper
bound could be estimated. Neuronal recordings performed in vitro captured the continuous activity of the entire retinal population representing the stimulus. Nirenberg and colleagues then tested different hypothetical codes, each of which was based on a different set of features of the spike trains (thus retaining a different subset of the total information). Because the recordings arguably captured the full population information, any code that retained less information than present in the animal's behavior (as assessed in vivo) could be ruled out. Spike-rate and spike-timing codes did not have all the information reflected in behavior, whereas a temporal-correlation code did the trick.

Unfortunately, studies of cortical visual population codes are faced with a more complicated situation, where our limited ability to measure the activity pattern (a small sample of neurons measured or voxels that blur the pattern) is compounded by multiple parallel pathways. For example, current technology does not allow us to record from all the neurons in V1 that respond to a particular stimulus. Moreover, if a given hypothetical code (e.g., a rate code) suggested the absence in V1 of stimulus information reflected in behavior, the code could still not be ruled out, because the information might enter the cortex by another route, bypassing V1. The other studies reviewed in this book, therefore, cannot rule out codes by Nirenberg's rigorous method. When population activity is subsampled, absence of evidence for particular information is not evidence of absence of this information. The focus, then, is on the positive results, that is, the information that can be shown to be present.

Early Visual Cortex: Stimulus Decoding and Reconstruction

In chapter 3, Jasper Poort, Arezoo Pooresmaeili, and Pieter R. Roelfsema describe a study showing that physical stimulus features as well as attentional states can be successfully decoded from multiple neurons in monkey V1. They find that stimulus features and attentional states are reflected in separate sets of neurons, demonstrating that V1 is not just a low-level stimulus-driven representation. The results of Poort and colleagues illustrate a simple synergistic effect of multiple neurons that even linear decoders can benefit from: noise cancelation. Neuron A may not respond to a particular stimulus feature and carry no information about that feature by itself. However, if its noise fluctuations are correlated with the noise of another neuron B which does respond to the feature, then subtracting the activity of A from B (with a suitable weight) can reduce the noise in B and allow better decoding. Such noise cancelation is automatically achieved with linear decoders, such as the Fisher linear discriminant. Although the decoding is based on a linear combination of the neurons, the information in the ensemble of neurons does not simply add up across neurons and cannot be fully appreciated by considering the neurons one by one.
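The noise-cancelation effect just described is easy to reproduce in a toy simulation. The sketch below is illustrative only (the response model, noise levels, and threshold decoder are assumptions, not taken from the chapter): neuron A carries no stimulus signal by itself, yet subtracting it from neuron B improves decoding because the two neurons share a noise source.

```python
# Toy simulation of noise cancelation: A has noise only, B has signal plus shared noise.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20000
stim = rng.integers(0, 2, n_trials)                             # two stimulus classes

shared_noise = rng.normal(0, 1.0, n_trials)                     # noise common to A and B
b = 0.5 * stim + shared_noise + rng.normal(0, 0.3, n_trials)    # B: signal + noise
a = shared_noise + rng.normal(0, 0.3, n_trials)                 # A: noise only, no signal

def accuracy(score):
    # Decode by thresholding at the midpoint between the two class means.
    thresh = 0.5 * (score[stim == 0].mean() + score[stim == 1].mean())
    return ((score > thresh).astype(int) == stim).mean()

print("decode from B alone:  ", round(accuracy(b), 3))
print("decode from B minus A:", round(accuracy(b - a), 3))      # shared noise is canceled
```

The subtraction B minus A is exactly the kind of weighted linear combination that a Fisher linear discriminant would converge on in this situation.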
Like Poort and colleagues, Yukiyasu Kamitani (chapter 4) describes studies decoding physical stimulus properties and attentional states from early visual cortex. However, Kamitani's studies use fMRI in humans to analyze the information in visual areas V1–4 and MT+. All these areas allowed significant decoding of motion direction. Grating orientation information, by contrast, was strongest in V1 and then gradually diminished in V2–4; it was not significant in MT+. Beyond stimulus features, Kamitani was able to decode which of two superimposed gratings a subject was paying attention to.

These findings are roughly consistent with results from monkey electrode recordings. Their generalization to human fMRI is significant because it was not previously thought that fMRI might be sensitive to fine-grained neuronal patterns, such as V1 orientation columns. The decodability of grating orientation from V1 voxel patterns is all the more surprising because Kamitani did not use high-resolution fMRI, but more standard (3 mm)^3 voxels. The chapter discusses a possible explanation for the apparent "hyperacuity" of fMRI: Each voxel may average across neurons preferring all orientations, but that does not mean that all orientations are exactly equally represented in the sample. If a slight bias in each voxel carries some information, then pattern analysis can recover it by combining the evidence across multiple voxels.

From decoding orientation and motion direction, Kamitani moves on to reconstruction of arbitrary small pixel shapes from early visual brain activity. This is a much harder feat, because of the need to generalize to novel instances from a large set of possible stimuli. In retinotopic mapping, we attempt to predict the response of each voxel separately as a function of the stimulus pattern. Conversely, we could attempt to reconstruct a pixel image by predicting each pixel from the response pattern. However, Kamitani predicts the presence of a stimulus feature extended over multiple stimulus pixels from multiple local response voxels. The decoded stimulus features are then combined to form the stimulus reconstruction. This multivariate-to-multivariate approach is key to the success of the reconstruction, suggesting that dependencies on both sides, among stimulus pixels and among response voxels, matter to the representation.

Early Visual Cortex: Encoding and Decoding Models

While Kamitani focuses on fMRI decoding models, the following two chapters describe how fMRI encoding models can be used to study visual representations. Kendrick N. Kay (chapter 5) gives an introduction to fMRI voxel-receptive-field modeling (also known as "population-receptive-field modeling"). In this technique, a separate computational model is fitted to predict the response of each voxel to novel stimuli.
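The logic of such an encoding model can be sketched in a few lines. The code below is a schematic illustration with simulated data; the ridge penalty, feature counts, and noise level are arbitrary assumptions rather than the pipeline of the studies discussed here. A linear mapping from stimulus features to each voxel is fitted on training stimuli and then evaluated by correlating predicted with measured responses for held-out stimuli.

```python
# Schematic encoding-model (voxel-receptive-field) fit with held-out evaluation.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_features, n_voxels = 800, 100, 50, 30

# Simulated stimulus feature representations and voxel responses.
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
true_w = rng.normal(size=(n_features, n_voxels))
Y_train = X_train @ true_w + rng.normal(scale=3.0, size=(n_train, n_voxels))
Y_test = X_test @ true_w + rng.normal(scale=3.0, size=(n_test, n_voxels))

lam = 10.0                                    # ridge penalty (illustrative hyperparameter)
w_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                        X_train.T @ Y_train)

pred = X_test @ w_hat
corr_per_voxel = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print("median held-out prediction correlation:",
      round(float(np.median(corr_per_voxel)), 2))
```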
Similar techniques have been applied to neuronal recording data to characterize each neuron's response behavior as a function of the visual stimulus. Kay argues in favor of voxel-receptive-field modeling by contrasting it against two more traditional methods of fMRI analysis: the investigation of response profiles across different stimuli (e.g., tuning curves or category-average activations) and pattern-classification decoding of population activity. He reviews a recent study in which voxel-receptive-field modeling was used to predict early visual responses to natural images. The study confirms what is known about V1, namely that the representation can be modeled as a set of detectors of Gabor-like small visual features varying in location, orientation, and spatial frequency.

Kay's study is an example of a general fMRI methodology developed in the lab of Jack Gallant (the senior author of the study). Jack L. Gallant, Shinji Nishimoto, Thomas Naselaris, and Michael C. K. Wu (chapter 6) present this general methodology, which combines encoding (i.e., voxel-receptive-field) and decoding models. First, each of a number of computational models is fitted to each voxel on the basis of measured responses to as many natural stimuli as possible. Then the performance of each model (how much of the non-noise response variance it explains) is assessed by comparing measured to predicted responses for novel stimuli not used in fitting the model. The direction in which a model operates (encoding or decoding) is irrelevant to the goal of detecting a dependency between stimulus and response pattern (a point elaborated upon by Marieke Mur and Nikolaus Kriegeskorte in chapter 20). However, Gallant's discussion suggests that the direction of the model predictions should match the direction of the information flow in the system: If we are modeling the relationship between stimulus and brain response, an encoding approach allows us to use computational models of brain information processing (rather than the generic statistical models typically used for decoding, which are not meant to mimic brain function). The computational models can be evaluated by the amount of response variance they explain. Decoding models, on the other hand, are well suited for investigating readout of a representation by other brain regions and for relating population activity to behavioral responses. For example, if the noise component of a region's brain activity predicts the noise component of a behavioral response (e.g., categorization errors; see chapter 14), this suggests that the region may be part of the pathway that computes the behavioral responses.

Midlevel Vision: Curvature Representation in V4 and Posterior IT

Moving up the visual hierarchy, Anitha Pasupathy and Scott L. Brincat (chapter 7) explore the representation of visual shapes between the initial cortical stage of V1 and V2 and higher-level object representations in inferior temporal (IT) cortex. At this intermediate level, we expect the representational features to be more complex
than Gabor filters or moving edges, but less complex than the types of features often found to drive IT cells. Pasupathy and Brincat review a study that explores the representation of object shape by electrode recordings of single-neuron responses to sample stimuli from a continuous parameterized space of binary closed shapes. Results suggest that a V4 neuron represents the presence of a particular curvature at a particular angular position of a closed shape's contour. A posterior IT neuron appears to combine multiple V4 responses and represent the presence of a combination of convex and concave curvatures at particular angular positions. The pattern of responses of either region allowed the decoding of the stimulus (as a position within the parameterized stimulus space). This study nicely illustrates how we can begin to quantitatively and mechanistically understand the transformations that take place along the ventral visual stream.

What Aspect of Brain Activity Serves to "Represent" Mental Content?

When we analyze information represented in patterns of activity, we usually make assumptions about what aspect of the activity patterns serves to represent the information in the context of the brain's information processing. A popular assumption is that spiking rates of neurons carry the information represented by the pattern. While there is a lot of evidence that spike rates are an important part of the picture, experiments like those Nirenberg describes in chapter 2 show that we miss functionally relevant information if we consider only spike rates.

Conor Houghton and Jonathan Victor (chapter 8) consider the general question of how we should measure the "representational distance" between two spatiotemporal neuronal activity patterns. In a theoretical chapter at the interface between mathematics and neuroscience, they consider metrics of dissimilarity comparing activity patterns that consist in multiple neurons' spike trains. The aim is to find out which metric captures the functionally relevant differences between activity patterns. Houghton and Victor focus on "edit distances" (including the "earth mover's distance"), which measure the distance between two patterns in terms of the "work" (i.e., the total amount of changes) required to transform one pattern into another. Jonathan Victor had previously proposed metrics to characterize the distance between single-neuron spike trains. Here this work is extended to populations of neurons, suggesting a rigorous and systematic approach to understanding neuronal coding.
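For single-neuron spike trains, one widely used member of this family of edit distances (the metric previously proposed by Victor and colleagues) can be computed with a short dynamic program. The sketch below is a minimal illustrative implementation under the usual cost convention (inserting or deleting a spike costs 1; moving a spike by dt costs q|dt|); the population-level metrics discussed in the chapter build on this idea.

```python
# Minimal sketch of a spike-train edit distance for two single-neuron spike trains.
def spike_distance(a, b, q=1.0):
    """Edit distance between two sorted lists of spike times (cost q per unit time shift)."""
    n, m = len(a), len(b)
    # dp[i][j]: cost of transforming the first i spikes of a into the first j spikes of b
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = float(i)                  # delete i spikes
    for j in range(1, m + 1):
        dp[0][j] = float(j)                  # insert j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                                  # delete a[i-1]
                           dp[i][j - 1] + 1,                                  # insert b[j-1]
                           dp[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))   # shift a spike
    return dp[n][m]

# Shift 0.5 -> 0.6 costs 0.2 (q = 2), deleting the spike at 0.9 costs 1: total ~1.2.
print(spike_distance([0.1, 0.5, 0.9], [0.1, 0.6], q=2.0))
```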
Inferior Temporal Cortex: A Map of Complex Object Features

Moving farther down the ventral stream, Hans P. Op de Beeck discusses high-level object representations in inferior temporal (IT) cortex in the monkey and in the human (chapter 9). This is the first chapter to review the findings of macroscopic regions selective for object categories (including faces and places). Face-selective neurons had been found in monkey-IT electrode recordings decades earlier. However, the clustering of such responses in macroscopic regions found in consistent anatomical locations along the ventral stream was discovered by fMRI, first in humans and later in monkeys. It has been suggested that these regions are "areas" or "modules," terms that imply well-defined anatomical and functional boundaries, which have yet to be demonstrated.

The proposition that the higher-level ventral stream might be composed of category-selective (i.e., semantic) modules sparked a new debate about localist versus distributed coding within the fMRI community. The new debate in fMRI concerned a larger spatial scale (the overall activation of entire brain regions, not single neurons) and also a larger representational scale (the regions represented categories, not particular objects). Nonetheless, the theoretical arguments are analogous at both scales. Just as the functional role of highly selective single neurons remains contentious, it has yet to be resolved whether the higher ventral stream consists of a set of distinct category modules or a continuous map of visual and/or semantic object features. Op de Beeck argues that the finding of category-selective regions might be accommodated under a continuous-feature-map model. He reviews evidence suggesting that the feature map reflects the perceptual similarity space and subjective interpretations of the visual stimuli, and that it can be altered by visual experience.

Chou Hung and James DiCarlo (chapter 10) describe a study in which they repeatedly presented seventy-seven grayscale object images in rapid succession (a different image every 200 ms) while sequentially recording from more than three hundred locations in monkey anterior IT. The images were from eight categories, including monkey and human faces, bodies, and inanimate objects. Single-cell responses to object images have been studied intensely for decades, showing that single neurons exhibit only weak object-category selectivity and limited tolerance to accidental properties. From a computational perspective, however, the more relevant question is what information can be read out from the neuronal population activity by downstream neurons. Single-neuron analyses can only hint at the answer. Hung and DiCarlo therefore analyzed the response patterns across object scales and locations by linear decoding. This approach provides a lower-bound estimate (as explained above) of the information available for immediate biologically plausible readout.

The category (among 8) and identity (among 77) of an image could be decoded with high accuracy (94 percent and 70 percent correct, respectively), far above chance level. Once fitted, a linear decoder generalized reasonably well across
substantial scale changes (2 octaves) and small position changes (4 deg visual angle). The decoder also generalized to novel category exemplars (i.e., exemplars not used in fitting), and worked well even when based on a 12.5-ms temporal window (capturing just 0–2 spikes per neuron) at 125-ms latency. Category and identity information appeared to be concentrated in the same set of neurons, and both types of information appeared at about the same latency (around 100 ms after stimulus onset, as revealed by a sliding temporal-window decoding analysis). Hung and DiCarlo found only minimal task and training effects at the level of the population. This is in contrast to some earlier studies, which focused on changes in particular neurons during more attention-demanding tasks. From a methodological perspective, Hung and DiCarlo's study is exemplary for addressing a wide range of basic questions by applying a large number of well-motivated pattern-information analyses to population response patterns elicited by a set of object stimuli.

Representational Similarity Structure of IT Object Representations

Classifier decoding can address how well a set of predefined categories can be read out, but not whether the representation is inherently organized by those categories. Nikolaus Kriegeskorte and Marieke Mur (chapter 11) review a study of the similarity structure of the IT representations of 92 object images in humans, monkeys, and computational models. Kriegeskorte and Mur show that the response patterns elicited by the ninety-two objects form clusters corresponding to conventional categories. The two main clusters correspond to animate and inanimate objects; the animates are further subdivided into faces and bodies. The response-pattern dissimilarity matrices reveal a striking match of the structure of the representation between human and monkey. In both species, IT appears to emphasize the same basic categorical divisions. Moreover, even within categories the dissimilarity structure is correlated between human and monkey. IT object similarity was not well accounted for by several computational models designed to mimic either low-level features (e.g., pixel images, processed versions of the images, features modeling V1 simple and complex cells) or more complex features (e.g., natural image patches) thought to reside in IT. This suggests that the IT features might be optimized to emphasize particular behaviorally important category distinctions.

In terms of methods, the chapter shows that studying the similarity structure of response patterns to a sizable set of visual stimuli ("representational similarity analysis") can allow us to discover the organization of the representational space and to compare it between species, even when different measurement techniques are used (here, fMRI in humans and cell recordings in monkeys). Like voxel-receptive-field modeling (see chapters 5 and 6, discussed earlier), this technique allows us to incorporate computational models of brain information processing into the analysis of population response patterns, so as to directly test the models.
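A bare-bones version of representational similarity analysis is easy to sketch. The code below is illustrative only and uses simulated data; the correlation-distance RDMs and the Spearman comparison are common choices assumed here, not necessarily those of the study: compute a representational dissimilarity matrix for each data set over the same stimuli, then compare the two matrices.

```python
# Minimal representational similarity analysis on simulated response patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stimuli = 92

# Two simulated measurements of the same stimuli (e.g., fMRI voxels and neurons);
# here they share a latent structure by construction.
latent = rng.normal(size=(n_stimuli, 10))
patterns_a = latent @ rng.normal(size=(10, 300)) + rng.normal(scale=2.0, size=(n_stimuli, 300))
patterns_b = latent @ rng.normal(size=(10, 120)) + rng.normal(scale=2.0, size=(n_stimuli, 120))

rdm_a = pdist(patterns_a, metric="correlation")   # 1 - Pearson r for each stimulus pair
rdm_b = pdist(patterns_b, metric="correlation")

rho, _ = spearmanr(rdm_a, rdm_b)                  # compare the two dissimilarity structures
print("RDM similarity (Spearman rho):", round(float(rho), 2))
```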
Andrew C. Connolly, M. Ida Gobbini, and James V. Haxby (chapter 12) discuss three virtues of studying object similarity structure: it provides an abstract characterization of representational content, it can be estimated on the basis of different data sources, and it can help us understand the transformation of the representational space across stages of processing. They describe a human fMRI study of the similarity structure of category-average response patterns and how it is transformed across stages of processing from early visual to ventral temporal cortex. The similarity structure in early visual cortex can be accounted for by low-level features. It is then gradually transformed from early visual cortex, through the lateral occipital region, to ventral temporal cortex. Ventral temporal cortex emphasizes categorical distinctions.

Connolly and colleagues also report that the replicability of the similarity structure of the category-average response patterns increases gradually from early visual cortex to ventral temporal cortex. This may reflect the fact that category-average patterns are less distinct in early visual cortex. Similarity structure was found to be replicable in all three brain regions, within as well as across subjects. Replicability did not strongly depend on the number of voxels included in the region of interest (100–1,000 voxels, selected by visual responsiveness).

The theme of representational similarity analysis continues in the chapter by Dwight J. Kravitz, Annie W.-Y. Chan, and Chris I. Baker (chapter 13), who review three related human fMRI studies of ventral-stream object representations. The first study shows that the object representations in ventral-stream regions are highly dependent on the retinal position of the object. Despite the larger receptive fields found in inferior temporal cortex (compared to early visual regions), these high-level object representations are not entirely position invariant. The second study shows that particular images of body parts are most distinctly represented in body-selective regions when they are presented in a "natural" retinal position—assuming central fixation of a body as a whole (e.g., right torso front view in the left visual field). This suggests a role for visual experience in shaping position-dependent high-level object representations. The third study addresses the representation of scenes and suggests that the major categorical distinction emphasized by scene-selective cortex is that between open (e.g., outdoor) and closed (e.g., indoor) scenes. In terms of methods, Kravitz and colleagues emphasize the usefulness of ungrouped-events designs (i.e., designs that do not assume a grouping of the stimuli a priori), and they describe a straightforward and very powerful split-half approach to representational similarity analysis.

The representation of scenes in the human brain is explored further in the chapter by Dirk B. Walther, Diane M. Beck, and Li Fei-Fei (chapter 14). These authors investigate the pattern representations of subcategories of scenes (including mountains, forests, highways, and buildings) with fMRI in humans. They relate the confusability
of the brain response patterns (when linearly decoded) to behavioral confusions among the subcategories. This shows that early visual representations, though they distinguish scene subcategories, do not reflect behavioral confusions, while representations in higher-level object- and scene-selective regions do. In terms of methods, this chapter introduces the attractive method of relating confusions (a particular type of error) between behavioral classification tasks and response-pattern classification analyses, so as to assess to what extent a given region might contribute to a perceptual decision process.

In chapter 15, John-Dylan Haynes discusses how fMRI studies of consciousness can benefit from pattern-information analyses. A central theme in empirical consciousness research is the search for neural correlates of consciousness (NCCs). Classical fMRI studies on NCCs have focused on univariate correlations between regional-average activation and some aspect of consciousness. For example, regional-average activation in area hMT+/V5 has been shown to be related to conscious percepts of visual motion. However, finding a regional-average-activation NCC does not address whether the specific content of the conscious percept (e.g., the direction of the motion) is encoded in the brain region in question. Combining the idea of an NCC with multivariate population decoding can allow us to relate specific conscious percepts (e.g., upward visual motion flow) to specific patterns of brain activity (e.g., a particular population pattern in hMT+/V5) in human fMRI. Beyond the realm of consciousness, we return to this point at a more general level in chapter 20, where we consider how classical fMRI studies use regional-average activation to infer the "involvement" of a brain region in some task component, whereas pattern-information fMRI studies promise to reveal a region's representational content, whether the organism is conscious of that content or not.

Vision as a Hierarchical Model for Inferring Causes by Recurrent Bayesian Inference

In chapter 16, the final chapter of the "Theory and Experiment" section, Karl Friston outlines a comprehensive mathematical theory of perceptual processing. The chapter starts by reviewing the theory of probabilistic population codes. A population code is probabilistic if the activity pattern represents not just one particular state of the external world, but an entire probability distribution of possible states. On one hand, bistable perceptual phenomena (e.g., binocular rivalry) suggest that the visual system, when faced with ambiguous input, chooses one possible interpretation (and explores alternatives only sequentially in time). On the other hand, there is evidence for a probabilistic representation of confidence. These findings suggest a code that is probabilistic but unimodal. Friston argues that the purpose of vision is to infer the causes of the visual input (e.g., the objects in the world that cause the light patterns falling on the retina), and that different regions represent causes at different levels of abstraction. He interprets the hierarchy of visual regions as a hierarchical statistical model of the causes of visual input. The model combines top-down and bottom-up processing to arrive at an interpretation of the input. The top-down component consists in prediction of the sensory input from hypotheses about its causes (or prediction of lower-level causes from higher-level causes). The predicted information is "explained away" by subtracting its representation out at each stage, so that the remaining bottom-up signals convey the prediction errors, that is, the component of the input that requires further processing to be accommodated in the final interpretation of the input. Friston suggests that perceptual inference and learning can proceed by an empirical Bayesian mechanism. The chapter closes by reviewing some initial evidence in support of the model.
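The core loop of this predictive-coding idea can be caricatured in a few lines. The sketch below is a deliberate simplification for illustration (a single linear generative mapping, gradient-style updates, arbitrary sizes and learning rate) and not Friston's full hierarchical scheme: the current estimate of the causes generates a top-down prediction of the input, the bottom-up signal carries the prediction error, and the estimate is updated until the input is explained away.

```python
# Caricature of predictive coding: infer causes by iteratively reducing prediction error.
import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(20, 3))                              # generative (top-down) mapping: causes -> input
true_cause = np.array([1.0, -2.0, 0.5])
u = G @ true_cause + rng.normal(scale=0.05, size=20)      # noisy sensory input

v = np.zeros(3)                                           # current estimate of the causes
lr = 0.02
for _ in range(1000):
    prediction_error = u - G @ v                          # bottom-up signal: what the prediction missed
    v += lr * G.T @ prediction_error                      # update the estimate to reduce the error

print("recovered causes:", np.round(v, 2))                # approximately [1.0, -2.0, 0.5]
```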
In the second part of the book, "Background and Methods," we collect chapters that provide essential background knowledge for understanding the first part. These chapters describe the neuroscientific background, the mathematical methods, and the different ways of measuring brain-activity patterns.

A Primer on Vision

In chapter 17, Kendra Burbank and Gabriel Kreiman give a general introduction to the primate visual system, which will be a useful entry point for researchers from other fields. They describe the cortical visual hierarchy, in which simple local image features are detected first, before signals converge for analysis of more complex and more global features. In low-level (or "early") representations, neurons respond to simple generic local stimulus features such as edges, and the cortical map is retinotopically organized, with each neuron responsive to inputs from a small patch of the retina (known as the neuron's "receptive field"). In higher-level regions, neurons respond to more complex, larger stimulus features that occur in natural images and are less sensitive to the precise retinal position of the features (i.e., larger receptive fields). The system can be globally divided into a ventral stream and a dorsal stream, where the ventral "what" stream (the focus of this book) appears to represent what the object is (object recognition) and the dorsal "where" stream appears to represent spatial relationships and motion.

Tools for Analyzing Population Codes: Statistical Learning and Information Theory

Jed Singer and Gabriel Kreiman (chapter 18) give a general introduction to statistical learning and pattern classification. This chapter should provide a useful entry
point for neuroscientists. Statistical learning is a field at the interface between statistics, computer science, artificial intelligence, and computational neuroscience, which provides important tools for analysis of brain-activity patterns. Moreover, some of its algorithms can serve as models of brain information processing (e.g., artificial neural networks) or are inspired by the brain at some level of abstraction. A key technique is pattern classification, where a set of training patterns is used to define a model that divides a multivariate space of possible input patterns into regions corresponding to different classes. The simplest case is linear classification, where a hyperplane is used to divide the space.

In pattern classification as in other statistical pursuits, more complex models (i.e., models with more parameters to be fitted to the data) can overfit the data. A model is overfitted if it represents noise-dominated fine-scale features of the data. Overfitting has a depressing and important consequence: a complex model can perform worse at prediction than a simple model, even when the complex model is correct and the simple model is incorrect. The complex correct model will be more easily "confused" by the noise (i.e., overfitted to the data), while the simple model may gain more from its stability than it loses from being somewhat incorrect. This can happen even if the complex model subsumes the simple model as a special case. The phenomenon is also known as the bias-variance tradeoff: The simple model in our example has an incorrect bias, but it performs better because of its lower variance (i.e., noise dependence). As scientists, we like our models "as simple as possible, but no simpler," as Albert Einstein said. Real-life prediction from limited data, however, favors a healthy dose of oversimplification.
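A small simulation makes the bias-variance point concrete. The setup below is illustrative (the cubic ground truth, sample sizes, and noise level are assumptions): the true function is a cubic, so the degree-3 model is correct and subsumes the incorrect degree-1 model, yet with little noisy data the simpler model typically achieves the lower test error.

```python
# Bias-variance demonstration: a correct complex model vs. an incorrect simple model.
import numpy as np

rng = np.random.default_rng(5)

def simulate_once(n_train=10, noise=1.0):
    x_tr = rng.uniform(-2, 2, n_train)
    y_tr = x_tr + 0.1 * x_tr**3 + rng.normal(0, noise, n_train)   # the truth is cubic
    x_te = rng.uniform(-2, 2, 200)
    y_te = x_te + 0.1 * x_te**3 + rng.normal(0, noise, 200)
    mse = {}
    for degree in (1, 3):                                          # simple vs. "correct" model
        coeffs = np.polyfit(x_tr, y_tr, degree)
        mse[degree] = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    return mse

# Average over many small data sets; with these settings the simpler (degree-1)
# model typically shows the lower mean test error despite its incorrect bias.
results = [simulate_once() for _ in range(2000)]
for degree in (1, 3):
    print(f"degree {degree}: mean test MSE = {np.mean([r[degree] for r in results]):.2f}")
```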
In brain science, pattern classification is used to "decode" population activity patterns, that is, to predict stimuli from response patterns. This is the most widely used approach to multivariate analysis of population codes. Tutorial introductions to this method are given by Ethan Meyers and Gabriel Kreiman for neural data (chapter 19) and by Marieke Mur and Nikolaus Kriegeskorte for fMRI data (chapter 20). These chapters provide step-by-step guides and discuss the neuroscientific motivation of particular analysis choices.

Pattern analyses are needed to detect information interactively encoded by multiple responses. In addition, they combine the evidence across multiple responses, thus boosting statistical power and providing useful summary measures. The combination of evidence would be useful even if interactive information were absent. These advantages apply to both neuronal and fMRI data, but in different ways. Single-neuron studies miss interactively encoded information, and perhaps also effects that are weak and widely distributed. However, they can still contribute to our understanding of population codes within a brain region. Arguably, most of what we know about population codes today has been learned from single-neuron studies.

The single-voxel scenario is quite different, as discussed by Mur and Kriegeskorte. In addition to the hemodynamic nature of the fMRI signal and its low spatial resolution, single-voxel fMRI analyses have very little power because of the physiological and instrumental noise and because of the need to account for multiple testing carried out across many voxels. As we make the voxels smaller to pick up more fine-grained activity patterns within a region, we get (1) more and (2) noisier voxels. The combination of weaker effects and stronger correction for multiple tests leaves single-voxel analysis severely underpowered. Pattern-information analysis recovers power by combining the evidence across voxels. Classical fMRI studies have used regional averaging (or smoothing) to boost power. This approach enables us to detect overall regional activations at the cost of missing fine-grained pattern information. Regional-average activation is taken to indicate the "involvement" of a region in a task component (or in the processing of a stimulus category). However, the region remains a black box with respect to its internal processes and representations. The pattern-information approach promises to enable us to look into each region and reveal its representational content, even with fMRI.

Whether we use neuronal recordings or fMRI, we wish to reveal the information the code carries. If pattern classification provides above-chance decoding of the stimuli, then we know that there is mutual information between the stimulus and the response pattern. However, pattern classification is limited by the assumptions of the classification model. Moreover, the categorical nature of the output (i.e., predefined classes) leads to a loss of probabilistic information about class membership and does not address the representation of continuous stimulus properties. It would be desirable to detect stimulus information in a less biased fashion and to quantify its amount in bits.

Stefano Panzeri and Robin A. A. Ince (chapter 21) describe a framework for information theoretic analysis of population codes. Information theory can help us understand the relationships between neurons and how they jointly represent behaviorally relevant stimulus properties. If the neurons carry independent information, the population information is the sum of the information values for single neurons. To the extent that different neurons carry redundant information, the population information will be less than that sum. To the extent that the neurons synergistically encode information, the population information can be greater than the sum. The case of synergistic information was described earlier in the context of chapter 3: If neurons A and B share noise, but not signal, A can be used to cancel B's noise. Subtracting out the noise improves the signal-to-noise ratio and increases the information. Panzeri and Ince place these effects in a general mathematical framework, in which the mutual information between the stimulus and the population response pattern is decomposed into additive components, which correspond to the sum of the information values for single neurons and the synergistic offset (which can be positive or negative and is further decomposed into signal- and noise-related subcomponents).
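A deliberately extreme toy case shows how a population can carry more information than the sum over its neurons. In the discrete example below (an illustration, not the chapter's formalism), neuron A encodes only a shared binary noise source and neuron B encodes the stimulus XOR that noise, so each neuron alone carries zero bits about the stimulus while the pair carries one full bit.

```python
# Toy synergy example: zero information per neuron, one bit for the pair.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits from a list of (x, y) outcomes taken as the full, equiprobable distribution."""
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
               for (x, y), c in p_xy.items())

states = [(s, noise) for s in (0, 1) for noise in (0, 1)]      # equiprobable (stimulus, noise)
resp_a = {(s, n): n for s, n in states}                        # neuron A reports the noise only
resp_b = {(s, n): s ^ n for s, n in states}                    # neuron B reports stimulus XOR noise

pairs_a = [(s, resp_a[(s, n)]) for s, n in states]
pairs_b = [(s, resp_b[(s, n)]) for s, n in states]
pairs_ab = [(s, (resp_a[(s, n)], resp_b[(s, n)])) for s, n in states]

print("I(S;A)   =", mutual_information(pairs_a))               # 0.0 bits
print("I(S;B)   =", mutual_information(pairs_b))               # 0.0 bits
print("I(S;A,B) =", mutual_information(pairs_ab))              # 1.0 bit
```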
The abstract beauty of the mathematical concept of information lies in its generality. In empirical neuroscience, the necessarily finite amount of data requires us to sacrifice some of the generality in favor of stable estimates (i.e., to reduce the error variance of our estimates by accepting some bias). However, information theory is key to the investigation of population coding not only at the level of data analysis, but also at the level of neuroscientific theory.

What We Measure with Electrode Recordings and fMRI

The experimental studies described in this book relied on brain-activity data from electrode recordings and fMRI. We can analyze the response patterns from these measurement techniques with the same mathematical methods, and there is evidence that they suggest a broadly consistent view of brain function (e.g., chapter 11). However, fMRI and electrode recordings measure fundamentally different aspects of brain activity. Moreover, the two kinds of signal have been shown to be dissociated in certain situations. The final chapter, by Philipp Berens, Nikos K. Logothetis, and Andreas S. Tolias (chapter 22), reviews the relationship between neuronal spiking, local field potentials, and the blood-oxygen-level-dependent (BOLD) fMRI signal, which reflects the local hemodynamic response thought to serve the function of adjusting the energy supply for neuronal activity.

Neuronal spikes represent the output signal of neurons. They are sharp and short events, and thus reflected mainly in the high temporal-frequency band of the electrical signal recorded with an invasive extracellular electrode in the brain. The high band (e.g., >600 Hz) of electrode recordings reflects spikes of multiple neurons very close to the electrode's tip (<200 micrometers away) and is known as the multi-unit activity (MUA).

The low temporal-frequency band (e.g., <200 Hz) of electrode recordings is known as the local field potential (LFP). Compared to the MUA, the LFP is a more complex composite of multiple processes. It appears to reflect the summed excitatory and inhibitory synaptic activity in a more extended region around the tip of the electrode (approaching the spatial scale of high-resolution-fMRI voxels). The LFP is therefore thought to reflect the input and local processing of a region, whereas the MUA is thought to reflect the spiking output. The LFP is also more strongly correlated with the BOLD fMRI signal than the MUA. Berens and colleagues describe what is currently known about the highly complex relationships among these three very different kinds of brain-activity measurement.
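As a rough sketch of how these bands are separated in practice (filter types, orders, and cutoffs below are assumptions for illustration; conventions vary across labs), a broadband extracellular signal can be split into a low-frequency LFP band and a high-frequency band from which multi-unit spiking is detected.

```python
# Illustrative band separation of a broadband extracellular signal into LFP and MUA bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 30000.0                                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
raw = np.random.default_rng(6).normal(size=t.size)             # stand-in for a broadband recording

lfp_sos = butter(4, 200.0, btype="lowpass", fs=fs, output="sos")    # LFP band: below ~200 Hz
mua_sos = butter(4, 600.0, btype="highpass", fs=fs, output="sos")   # MUA band: above ~600 Hz

lfp = sosfiltfilt(lfp_sos, raw)
mua_band = sosfiltfilt(mua_sos, raw)
print("LFP-band std:", round(float(lfp.std()), 3),
      "MUA-band std:", round(float(mua_band.std()), 3))
```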
Grandmother Cells and Distributed Representations
Simon J. Thorpe

Summary

It is generally accepted that a typical visual stimulus will be represented by the activity of many millions of neurons distributed across many regions of the visual cortex. However, there is still a long-running debate about the extent to which information about individual objects and events can be read out from the responses of individual neurons. Is it conceivable that neurons could respond selectively and in an invariant way to specific stimuli—the idea of "grandmother cells"? Recent single-unit recording studies in the human medial temporal lobe seem to suggest that such neurons do indeed exist, but there is a problem, because the hit rate for finding such cells seems too high. In this chapter, I will look at some of the implications of this work and raise the possibility that the cortical structures that provide the input to these hippocampal neurons could well contain both highly distributed and highly localist coding. I will discuss how a combination of STDP and temporal coding can allow highly selective responses to develop to frequently encountered stimuli. Finally, I will argue that "grandmother cell" coding has some specific advantages not shared by conventional distributed codes. Specifically, I will suggest that when a neuron becomes very selective, its spontaneous firing rate may drop to virtually zero, thus allowing visual memories to be maintained for decades without the need for reactivation.

Introduction

The Distributed vs. Localist Representation Debate

One of the longest-running and thorniest debates in the history of research on the brain concerns the nature of the representations that the brain uses to represent objects, and specifically the question of whether individual neurons may encode specific objects and events (Barlow, 1972; Gross, 2002). The debate has recently
received new impetus, with the publication of a significant review paper by Jeff Bowers (Bowers, 2009), which has been followed by a series of commentaries (Plaut and McClelland, 2010; Quiroga and Kreiman, 2010). In addition, there have been a series of fascinating studies of single-unit responses from the human medial temporal lobe that have raised numerous questions about the link between single-unit activity and perception (Quian Quiroga et al., 2005).

There is clearly something special that happens when we recognize a familiar visual stimulus. Virtually any such visual stimulus will doubtless activate millions, maybe hundreds of millions, of neurons in our visual system. Many of these are presumably involved in generic processing tasks that will take place irrespective of whether the image is recognized or not. For example, simple cells in V1 will presumably signal the presence of an edge with a particular orientation at a particular point in the visual field, irrespective of whether the corresponding object can be recognized or not. But most scientists would probably accept that at some level in the brain, quite possibly at relatively high levels in the visual hierarchy, there are neurons that are directly involved in encoding the presence of the object. The debate concerns the way in which those neurons do the encoding. One view, currently quite popular among scientists, is that the representation at the neuronal level is distributed across large numbers of neurons, none of which needs to be specifically tuned to a particular object. At the other extreme, some researchers have proposed that for some highly familiar objects, there may be neurons that respond very selectively to that object—a view often jokingly referred to as "grandmother cell" coding.

This difference between "local" and "distributed" coding models has become a very hot topic in recent years, partly because there have been a number of reports of single neurons recorded from the medial temporal lobe of human patients undergoing presurgical investigations for the treatment of intractable epilepsy. At first glance, some of these cells seem to have many of the properties that one might expect to find if grandmother cell coding were true. But, as some of the authors of the studies have pointed out, there are a number of puzzling features of these cells that do not seem to fit with the simple grandmother cell view.

In this chapter, my plan is to look in more detail at some of these issues. I will argue that while the cells reported in the human medial temporal lobe may not be what one would predict for a true localist coding scheme, there may be other explanations of the results. Furthermore, I will argue that there are some good computational arguments for using grandmother cell–like coding in at least some cases. However, I will also argue strongly that there is no requirement to choose in favor of only one type of coding. Rather, as I have argued previously, it is likely that the brain simultaneously uses both highly distributed and localist encoding, quite possibly within the same bit of neocortex.
A Test Case: Recognition in RSVP Streams

To make the nature of the debate clear, consider the following experiment, which you can try on yourself by downloading a set of movie files from the following site: <ftp://www.cerco.ups-tlse.fr/Simon/Movies>. Each movie sequence is a string of highly varied photographs of animals drawn from a range of sources that are presented in a Rapid Serial Visual Presentation (RSVP) sequence at 10 frames per second. Ever since the classic studies of Molly Potter in the 1970s, we have known that our visual systems can process images at this rate and that we can spot a particular target image (such as a "boat" or a "baby") very effectively under such conditions, even when only a verbal description of the stimulus is provided (Potter, 1975, 1976). In the case of the demo sequences, all the images are of animals except one, and the task is to report whatever image in the sequence does not fit.

Whenever I have shown the first example sequence during lectures, almost everyone will immediately notice that the sequence contains an image of the Mona Lisa. In a sense, the fact that the Mona Lisa is easy to spot may not be that surprising, since it is effectively one of the most familiar images in Western civilization. Each of us has almost certainly seen it hundreds if not thousands of times. And, given that it is a 2D painting, the visual pattern that it produces on our retina is relatively stable with changes of viewing angle. Size is certainly not prespecified, since we can recognize the Mona Lisa at essentially any scale, but relative to most 3D objects that we encounter, its appearance is nevertheless relatively standardized, making it possible to imagine that recognition could be achieved with a relatively simple "pattern-matching" approach. However, by looking at the other movies, you will be convinced, I hope, that this sort of effortless recognition occurs for many other types of object. For example, one of the sequences contains a photograph of the Statue of Liberty. Again, almost everyone will notice its presence in the sequence of images, despite the fact that (unlike the Mona Lisa) there are effectively an infinite number of different viewing angles that would work. This certainly makes life harder for the recognition mechanism that is being used by the visual system. In another of the sequences, there is a scene from a restaurant with someone dressed up as Mickey Mouse. Again, a remarkably high number of people who see the sequence will immediately report that they saw Mickey Mouse, even though they had no reason to expect such an image. Indeed, the other "distractor" images in the sequence are all animals, and in a sense, so is Mickey Mouse (albeit a somewhat special one). Why, then, do people almost invariably notice the intruder in the sequence? My personal view is that they notice such intruders because some sort of high-level representation is activated in the brain, and that this representation gets noticed because it does not fit with the rest of the context.
There are a number of points that we can make on the basis of this sort of demonstration. One point concerns the question of whether all the images in the sequence need to be processed fully by the visual system, or whether it might be sufficient to process only the one (Mona Lisa, Statue of Liberty, or Mickey Mouse) that we actually notice. This seems to me to be very implausible. It is not because the other twenty or thirty images in the sequence, presented at ten images per second, cannot be reliably reported that they were not fully processed. Indeed, it seems difficult to imagine how the brain could determine whether any particular image in the sequence was worth noticing without processing them all fully. Rather, it seems more likely that all the images are being processed and that the intelligence of our visual system is demonstrated by the fact that we are automatically able to determine whether or not a particular image is sufficiently important to merit being made the center of attention.

Some Distinctions

A key issue that I want to address here concerns the nature of the neural representations that are activated in such a situation. A first point to make is that there are good reasons to believe that the brain could potentially use a complete range of representational schemes. Consider a classic 3-layer neural network architecture composed of a layer of input units, a layer of output units, and between them a layer of so-called hidden units. To make the problem clear, imagine that the input layer corresponds to a simple 8 × 8 retina, and the output layer corresponds to a set of 128 responses, with one response for each of the 128 members of the ASCII character set (see figure 1.1). The desired input–output function is to generate the appropriate output when a particular character is presented on the "retina." Theoretical studies have shown that with sufficient units in the hidden layer, such a system can implement any arbitrary input–output function, but there are many different ways in which the function could be implemented at the level of the hidden units. One would be to use just 7 hidden units and the 7-bit ASCII code, illustrated in figure 1.1. This is a perfect illustration of a completely distributed coding strategy, since to know what is being represented, it is necessary to know the state of all 7 hidden units. None of the units actually "means" anything on its own, and an experimenter who was recording the response of any of the neurons to changing inputs would be unable to make sense of why the neuron was active for any given stimulus, because effectively, the assignment of each neuron to the set of stimuli is arbitrary. Each unit will be active for 50 percent of the input patterns, which means that the coding is in fact very efficient. Indeed, that is precisely why the ASCII code was chosen as a way to encode text-based information within computers. Note also that there are a very large number of different ways in which the 7 units could be used to represent all 128 characters—each is effectively as good as any other.
    Grandmother Cells andDistributed Representations 27 At the other end of the spectrum, one could imagine a hidden layer with 128 units, one for each of the characters in the set. For any one input pattern, only one of the units need be active. This would be an example of extreme localist coding— effectively corresponding to grandmother cell coding. Clearly, both extremes of representation could potentially be used for representation at the neuronal level. Is there any way of deciding which if any of these different schemes is actually used in the brain? Reading out from Distributed Representations The distinction between distributed and grandmother cell–based representations has become an increasingly hot issue in recent years for a number of reasons. One is that recent advances in multivoxel-based analysis of fMRI data mean that it has been possible to show how, using the distributed pattern of activity over a large number of voxels, one can make inferences about the stimulus identity and category. Such data provide a very clear demonstration that information can indeed be extracted from the distributed pattern of activity within a cortical region. This has been demon- strated using fMRI voxel activation levels (Kriegeskorte et al., 2007) but more recently,the approach has been extended to recordings from intracerebral electrodes in epileptic patients (Liu et al., 2009), as well as to single-unit recording studies from both monkey inferotemporal cortex (Kiani et al.,2007) and the medial temporal lobe in humans (Quian Quiroga et al.,2007).One particularly significant aspect of this sort Figure 1.1 Distributed coding in the ASCII code. Each of the seven units in the hidden layer participates to the coding of one of the 128 characters in the ASCII set. However, none of these units has activity that is specifically related to any particular stimulus.
One particularly significant aspect of this sort of work is that the decoding strategies used, although sophisticated, do not require the existence of mechanisms that could not be implemented with relatively simple neural circuits. Suppose that it can be demonstrated that by applying a classification algorithm to the activation levels of a few hundred voxels in some part of the ventral processing pathway it is possible to make some judgment about the stimulus—for example, whether or not a face is present, and maybe even whether the face is familiar or not. If the classification procedure could be implemented using a neural circuit, then it follows that the brain could also derive the same information simply by forwarding the same pattern of activation to another brain area where neurons could potentially learn to make the same distinction. It might also be that individual neurons within the region would also be in a position to learn to make the same categorical distinctions. In this case, the “readout” neurons would show a more local representation, as is the case for the third layer in figure 1.1. Does the fact that it is possible to extract information from the activity patterns of large numbers of neurons (Quian Quiroga and Panzeri, 2009) mean that there is no need for the brain to make information explicit at the level of individual cells? In other words, does this recent work actually allow us to say anything about whether or not grandmother cells really exist?

A Highly Specialized Neuron in the Orbitofrontal Cortex

One reason why I think one should be careful before assuming that it will be possible to derive all the information needed to interpret the function of a particular brain region from imaging studies comes from my own experience as a doctoral student in Edmund Rolls’s lab at Oxford in the late 1970s. We were recording single-unit activity in the monkey orbitofrontal cortex (OFC) using behavioral tasks that we thought were likely to be interesting given the known effects of OFC lesions. Specifically, lesioned animals (and humans) were known to have severe problems in task shifting, for example, when performing visual discrimination tasks with reversals. We therefore explicitly tested this by training monkeys to perform a go/no-go visual discrimination task for fruit juice reward, and then periodically reversing the rule. Thus, initially the monkey might be responding to a green “go” stimulus and withholding responses to a red “no-go” stimulus. However, we then reversed the contingency, with the result that the monkey made one “mistake” and received a drop of saline after responding to the green stimulus. After weeks or months of training, he would immediately reverse his strategy and start responding systematically to the other stimulus, until the next reversal occurred. More than three hundred neurons were recorded during the performance of this reversal task, most of which failed to show any significant activity changes related to the task. However, a handful of cells showed some quite remarkable responses during reversal, and one particular cell showed a quite amazing response, illustrated in figure 1.2 (Thorpe et al., 1983).
Figure 1.2
Activity of a single neuron in monkey orbitofrontal cortex with activity very specifically related to reversals in a go/no-go visual discrimination task. On each trial, the monkey is shown one of two different stimuli through a shutter that opens for 1 second. One of the stimuli means that he can lick a tube for fruit juice reward, whereas the other stimulus means that he should not lick in order to avoid receiving saline. Licks are indicated by the inverted triangles. Without warning, the meanings of the two stimuli are inverted (“Reversal”), at which point the monkey makes one mistake. This was followed by a strong burst of activity from the neuron that lasted several seconds. A second smaller burst occurred on the next correct trial. Adapted from Thorpe et al. 1983.

Following each reversal of the rule, the neuron showed a very strong increase in firing rate that lasted for several seconds. There was even a second burst of firing following the first correctly performed trial with the new rule. It is therefore difficult to imagine that the neuron is simply responding to the punishment, or to the making of an error. Rather, the neuron appeared to form part of a highly specific circuit that was specifically related to the performance of the reversal task. Given that the monkey was a true expert at performing the task, having spent months performing such reversals, I cannot help thinking that maybe the existence of such a neuron is a direct reflection of the automaticity of the behavior following training. In the current context, it is particularly important to realize that only one such cell was found out of hundreds recorded. It is therefore relatively unlikely that the activity of the neuron could be seen at the level of more global measures of brain activation, such as event-related potential recording or fMRI. Many other examples of highly specific but rare responses in individual neurons can be found in the literature.
The Problem of Inferring Neuronal Selectivity from Global Measures

The existence of this sort of highly specific, yet rare, neuronal response within a cortical area raises an important issue. Global activity measures certainly provide evidence to support the idea that a particular brain region could be involved in performing a certain cognitive task. However, it is probably impossible to make inferences about the degree of specialization of individual neurons on the basis of these global measures. In principle, one can even imagine a situation where the global activation measures provide no evidence for selectivity whatsoever, and yet where there might still be strong selectivity at the single-unit level. Indeed, the reverse can also be true, since there are cases when there is “global” activity in the absence of spiking activity (Sirotin and Das, 2009).

Consider some of the very interesting fMRI-based studies that have shown that it is possible to read out the orientation of a grating from the pattern of activation seen across voxels in V1 (Haynes and Rees, 2005; Kamitani and Tong, 2005). Such techniques rely on the existence of local variations in preferred orientation within V1. While the size of the orientation columns is small relative to the resolution of the fMRI technique, there are nevertheless sufficient variations in local selectivity to allow the technique to be used. However, it is important to realize that it did not have to be that way. If the neurons selective to different orientations were really mixed up completely at the local level, nothing would be visible at the level of the voxels, because each voxel would contain neurons coding all the different orientations. Thus, if information can be extracted from looking at the relatively coarse pattern of activity seen in imaging studies, this is certainly compatible with the hypothesis that the structure is involved in the processing of the stimulus attributes. However, the opposite is not true. The absence of differential fMRI activation does not imply that the structure has no role to play.

Grandmother Cells in the Human Medial Temporal Lobe?

Having made a few general points about the problems of relating results from imaging studies with responses seen at the single-unit level, I would now like to move on to the interpretation of the fascinating series of papers that have described the responses of single units in the human medial temporal lobe. The studies demonstrate that such neurons can have remarkably invariant responses to a wide range of different stimuli that effectively correspond to the same object or concept. One of the earliest such studies was a paper from 2000 describing neurons that would respond to a wide range of different photographs of animals (Kreiman et al., 2000), but over the past few years we have seen reports of neurons that would respond to
many different photographs of a particular actress (the famous “Jennifer-Aniston cell”), or even to the name of the person written in text (Quian Quiroga et al., 2005). And it is now clear that the same individual neuron can fire selectively to the same stimulus presented via a number of different sensory modalities—vision, text, voice (Quian Quiroga et al., 2009), implying a truly remarkable degree of invariance. Another fascinating result is the fact that when the stimuli are masked, and the duration of the presentation is so short that the subject can only report the nature of the stimulus on some limited percentage of trials, there is a remarkably high correlation on individual trials between whether the neuron responds and whether the subject can report the nature of the stimulus (Quian Quiroga et al., 2008b).

At first sight, such results might appear to provide strong support for the notion of grandmother cell coding. While it is true that there have not actually been any reports of neurons that fire exclusively to the patient’s grandmother, the neurons do tend to respond best to stimuli that are personally relevant to the patient, responding in particular to members of the patient’s family or members of the experimental team (Viskontas et al., 2009). However, there is a problem with such a view, which stems from the fact that the hit rate for finding such cells appears to be much higher than one would expect if that part of the brain was really using such an explicitly localist coding scheme.

The critical issue is the number of different objects that the system needs to be able to encode. One widespread source of confusion concerning localist coding is the belief that it would require having one neuron to code every possible stimulus that can be identified. As I have argued previously (Thorpe, 1995; Thorpe and Imbert, 1989b; Thorpe, 2002), this can be easily demonstrated to be erroneous. Consider the output of the retina via the optic nerve, which contains roughly 1 million axons. Even if we only consider the situation where each axon can either be “on” or “off,” this means that there are 2^1,000,000 possible patterns that can be presented. If we assume that we need one neuron to encode each one of these patterns, this would need roughly 10^300,000 neurons. Assuming a reasonable size for each neuron, this would require that the brain be larger than the known universe—clearly, not the strategy used by natural selection! The error in the argument comes from assuming that we need to know the total number of possible stimuli. In fact, it is the number of visual categories that is the real number of interest. There are no hard and fast numbers for this, but Irving Biederman suggested about 30,000 distinct visual categories (Biederman, 1987), and I myself have suggested a somewhat higher number based on the number of entries in a large encyclopedia (Thorpe and Imbert, 1989a).
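As a quick check of the numbers in this argument (my own arithmetic, not part of the original text):

$$2^{1{,}000{,}000} = 10^{\,1{,}000{,}000 \times \log_{10} 2} \approx 10^{\,301{,}030},$$

so one dedicated neuron per retinal pattern would indeed require on the order of $10^{300{,}000}$ neurons, against roughly $10^{11}$ neurons in a human brain and some $10^{80}$ atoms in the observable universe.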
If we suppose that the real number is something like 100,000, this would mean that the probability that any given cell could be activated by a given familiar stimulus should on average be 1 in 100,000, assuming all stimuli to be equally represented. However, it is clear that in the human MTL recording studies, the chances of finding a cell that responds appear to be much higher than this. Indeed, the probability of activation of a given cell by the sorts of stimuli used in these experiments has been estimated to be 0.54 percent (Waydo et al., 2006).

This point is well made in the paper entitled “Sparse but not ‘grandmother cell’ coding in the medial temporal lobe” (Quian Quiroga et al., 2008a). In a typical experiment, the researchers are able to record from a few dozen cells simultaneously. During a morning session, they show a set of roughly 100 photographs about five times each to the patient in a random order. Often, they will find one or two cells that respond well to one of the images. Let us suppose that one of the effective images was a photograph of Bill Clinton. During the lunch break, the researchers then constitute a new set of test images, including several other images of Bill Clinton that are then used to analyze the neuronal responses during an afternoon session. This is how they are then able to confirm that a single cell is able to respond to a wide range of different images (and even text strings or speech) that correspond to the same object. The critical point is that the hit rate during the morning session is much higher than one would expect if each neuron in the medial temporal lobe were a grandmother cell in a sort of library containing hundreds of thousands of possible objects.

Distributed Coding and the Totem Pole Cell Hypothesis

In their paper, Quian Quiroga and colleagues conclude that even if individual cells can respond in an invariant way to a highly diverse set of different stimuli that correspond to the same object, it is highly probable that the same cell might also respond to other completely unrelated objects. Rafi Malach has called this idea the totem pole cell hypothesis (Malach, personal communication). According to this idea, each cell has a number of different “faces,” and might simultaneously be able to respond invariantly to, say, “Bill Clinton” but also to some other completely unrelated stimuli—such as the “Taj Mahal” or an episode of The Simpsons, for example. Clearly, the probability that the experimenters might hit on two or more totally unrelated stimuli just by chance would be very low. Nevertheless, cells responding to two separate stimuli have been seen occasionally, so the idea is a real possibility that deserves to be tested more explicitly. Note that Rafi Malach’s totem pole cell hypothesis is a clear case where object identity could only be deduced if one has access to the responses of multiple neurons. Thus, if one such “totem-pole cell” responded to Bill Clinton, the Taj Mahal, and the Simpsons, and another cell responded to another set of stimuli including Bill Clinton, the fact that both cells responded on a given trial could be used to determine that Bill Clinton was present.
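The logic of this multi-cell readout can be shown in a few lines (a toy illustration with invented tuning sets, not data from the recordings):

```python
# Toy illustration of reading out identity from "totem pole" cells: each cell responds
# to several unrelated concepts, but the intersection of the tuning sets of the cells
# that fired on a trial can still single out one stimulus.
cell_tuning = {
    "cell_1": {"Bill Clinton", "Taj Mahal", "The Simpsons"},
    "cell_2": {"Bill Clinton", "Eiffel Tower", "spiders"},
    "cell_3": {"Taj Mahal", "Jennifer Aniston"},
}

def decode(active_cells):
    """Return the set of concepts consistent with every cell that fired."""
    candidates = None
    for cell in active_cells:
        tuning = cell_tuning[cell]
        candidates = tuning if candidates is None else candidates & tuning
    return candidates if candidates else set()

print(decode({"cell_1", "cell_2"}))  # {'Bill Clinton'}: unambiguous despite coarse cells
print(decode({"cell_1"}))            # still ambiguous between cell_1's three concepts
```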
An Alternative Hypothesis for the Significance of MTL Responses: Temporal Tagging

While the high-hit-rate issue appears to argue against the idea that the neurons in the human hippocampus are truly instances of grandmother cell coding, the totem pole hypothesis is perhaps not the only option for explaining the phenomenon. Given the well-known implication of medial temporal lobe structures in memory, it may be interesting to think of how the responses of such neurons might fit within an alternative, memory-related hypothesis. Suppose that one of the key roles of the hippocampus is to keep track of a subset of all the possible objects and events that we are able to recognize, namely, those that have been experienced in the relatively recent past. According to this view, the neurons in the hippocampus are not a dictionary of all the objects that can be recognized, but rather a dictionary of recently experienced events.

Although speculative, this hypothesis fits a number of interesting features of the medial temporal lobe. First, it has been known for decades that synapses in the hippocampus are very plastic and show long-term potentiation (LTP) following strong activation (Bliss and Collingridge, 1993; Bliss and Lømo, 1973). This potentiation can last for days and even weeks, meaning that a sensory input that is repeated is likely to produce a stronger response to the second presentation, even when the interval between presentations is a matter of weeks. Second, in a study of neuronal responses in a region close to the anterior thalamus that could potentially receive information from the medial temporal lobe, Rolls and colleagues described neurons that had the remarkable property of responding strongly to effectively any visual stimulus that had been seen recently (Rolls et al., 1982). A particularly remarkable finding is the fact that such neurons could have visual responses with latencies as short as 130 ms. This form of invariant response to familiar stimuli is a major challenge for computational models, because it implies that there must be massive convergence from higher-order visual areas to allow such a general response. How might this be achieved? It is just conceivable that somehow all possible visual stimuli converge in one processing stage to produce a generic “familiarity” response, but this seems unlikely. Alternatively, it might be that the brain determines familiarity individually for recently encountered objects before putting them all together. Could this be what is seen at the level of the single-unit responses in the human medial temporal lobe?

One of the strategies used by the team performing the human MTL recordings when choosing the initial set of images for testing is to specifically ask the patients for information about their favorite TV programs and movies, together with their preferred actors. This strategy could well be one of the reasons for the high success rates seen in the experiments but leaves open the issue of whether it is the fact that
the patients are highly familiar with the stimuli or whether it is the fact that they may have seen them relatively recently that is critical. It appears that the neurons can sometimes respond strongly even on the very first presentation of a particular photograph during the morning recording session (Pedreira et al., 2010), and this might be taken as evidence that recency is not critical for obtaining a response. However, as far as I am aware, it would be difficult to rule out the possibility that the patient has seen the stimulus elsewhere in the relatively recent past.

The hypothesis is therefore that these hippocampal neurons may effectively be keeping track of a relatively limited subset of all possible objects that can be recognized, namely, those that have been experienced within, say, the past few weeks. The precise duration of this temporal tagging period may not be strictly fixed, but could potentially be related to the maximal duration of LTP mechanisms, that is to say, periods that could extend to several weeks. The critical question would now be to estimate the total number of different objects, scenes, and events that are typically tracked during this period. It may well be thousands or maybe more, but it certainly will be a lot smaller than the total number of objects that we are capable of recognizing, which will probably be orders of magnitude higher. If the numbers really are in the range of thousands, then this might well be able to account for the anomalously high hit rate seen in the hippocampus. Clearly, if during any recording session it is possible to record from several dozen neurons, and each is tested with 100 different images, many of which are likely to correspond to stimuli that have been experienced recently, it would be enough to have a system that was only tracking a few thousand stimuli to be relatively confident about finding at least one neuron that could be activated during any given experiment.

One critical implication of this view of the hippocampus is that a neuron that currently shows highly selective and invariant responses to a particular input (for example, “Jennifer Aniston”) is not required to remain selective to that particular stimulus indefinitely. Specifically, if one were able to record again from the same cell two years later (clearly a technical impossibility), it might well be found to respond to something completely different. The idea is that a neuron will remain selective for a particular stimulus as long as that stimulus is reexperienced reasonably frequently, that is to say, every few weeks or so.
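Some rough arithmetic (my own illustrative numbers, not estimates from the published studies) shows why the size of the tracked set matters so much for the expected hit rate. Assume, purely for illustration, that each cell codes a single item drawn from a dictionary of size D and that every test image depicts some item in that dictionary:

```python
# Back-of-the-envelope hit-rate calculation under simple (assumed) conditions:
# each cell is dedicated to one item out of a dictionary of size D, and every
# test image shows some item from that dictionary.
def p_session_hit(dictionary_size, n_cells=50, n_images=100):
    p_cell = 1 - (1 - 1 / dictionary_size) ** n_images   # cell's item appears among the images
    return 1 - (1 - p_cell) ** n_cells                    # at least one recorded cell responds

for d in (100_000, 5_000, 2_000):
    print(f"D = {d:>7}: P(at least one responsive cell per session) ~ {p_session_hit(d):.2f}")

# D = 100,000 (all recognizable objects): responsive cells should be rare (~0.05 per session).
# D of a few thousand (recently experienced items): finding one becomes routine (~0.6-0.9).
```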
We therefore have at least two quite different views about the highly selective responses reported in the human medial temporal lobe, and the puzzling fact that the hit rate for finding these cells seems to be excessively high—too high to be compatible with the idea that the hippocampus contains a complete set of grandmother-type cells. The first is the suggestion by Rafi Malach that the cells may be multifaceted, like totem poles, and each capable of responding to many totally different objects. The second option is that the cells may only be responding to a subset of the large number of recognizable objects, namely, the subset corresponding to objects that have been seen or experienced in the relatively recent past. And, of course, these options are not mutually exclusive.

The Origin of Highly Selective Responses

It is important to realize that for both models, we still need to explain how the neurons are able to achieve this high level of selectivity. More specifically, the fundamental question concerns the nature of the coding scheme being used by the neurons that provide the input to such neurons. We know that the neurons in the hippocampus will get their visual inputs from structures in the ventral stream, including areas such as perirhinal and entorhinal cortex. And before that, these structures in turn will be receiving input from the hierarchy of processing areas that make up the ventral stream. What sort of coding is being used in these structures?

Here, there are clearly a number of theoretical possibilities. To make things clear, consider a hypothetical neuron that responds selectively to any visual input that corresponds to a particular person—whether it is the patient’s brother or a well-known celebrity. What sorts of neurons are providing the inputs to such a cell? One possibility is that the hippocampal cell can somehow generate invariant responses to a wide range of physically different visual inputs directly from a true distributed code at the previous layer. If this is the case, then it is clear that this would involve mechanisms that we currently cannot understand. In contrast, one way of obtaining an invariant response that we can understand involves pooling together the outputs of a population of cells, each of which is selective to a particular instance of the stimulus. This sort of pooling mechanism has already been suggested for generating selectivity to heads that is invariant to the angle of view, based on pooling together different responses, each of which is selective to a particular viewing angle (Perrett et al., 1987). Indeed, this form of view-specific recognition mechanism now seems to be increasingly seen as the most plausible way of generating invariance (Wallis and Rolls, 1997). But if true, it is clear that the neurons providing the input would themselves have to be quite selective, and indeed they might look quite a lot like the hypothetical grandmother cell, although without the remarkable invariance seen in the medial temporal lobe neurons.

In the end, an answer to this sort of question will have to wait until we have single-unit recordings from the cortical regions that provide the inputs to the medial temporal lobe. For the time being, such recordings are simply not available, largely for technical reasons. Few researchers working in humans have been able to record from individual neurons in the ventral stream structures that provide the highly processed information. Obviously, there are far more data available from work on monkeys, but it is difficult to extrapolate from one species to the other. While we can be confident that the human patients can indeed recognize “Jennifer Aniston,”
we have no idea whether monkeys would make the same sort of judgment of individual identity. Furthermore, there is even some recent evidence demonstrating that the frequency selectivity of neurons in the human auditory cortex is considerably sharper than has ever been reported in previous studies in mammals, with the exception of bats (Bitterman et al., 2008). This raises the intriguing possibility that selectivity in human cortical areas may be higher than has generally been seen in animal studies.

The Latency Problem—and a Hypothesis

There is another feature of the medial temporal lobe neurons that poses a major puzzle, namely, the latency at which they respond. Typically, their latency is around 300 ms (Mormann et al., 2008), although sometimes onset latencies can be even longer. Such values are substantially higher than the 100 ms latencies typically seen for neuronal responses in monkey inferotemporal cortex. It is even substantially longer than the 120–130 ms latency for ultrafast saccades to animals that have been reported recently in humans (Kirchner and Thorpe, 2006) and the 100–110 ms latency seen for selective saccades toward human faces (Crouzet et al., 2010). Intriguingly, one of the human single-unit recording studies compared onset latency in four different structures and found that while units in the hippocampus, amygdala, and entorhinal cortex all tended to start firing at similar times, those in the parahippocampal cortex started firing substantially earlier, from as little as 150–200 ms (Mormann et al., 2008). This suggests that there may be a significant delay before activation within the hippocampus can start, leaving the way open for more complex processing mechanisms than would be predicted by a simple feedforward pass through the relatively small number of intervening stages.

What might explain these latency differences? It certainly does not fit the simple idea that the hippocampus is simply pooling the output of several “view-tuned” IT neurons. There are two synapses between the later stages of IT and the hippocampus, perhaps three if the circuits are more complicated than currently believed. Conduction delays and postsynaptic integration times simply cannot explain this additional delay. Remember that in the primate ventral stream, the latency difference between V1 and IT is only about 40 ms, despite the fact that this includes the time for processing in intermediate structures such as V2 and V4. Indeed, if the estimates of conduction velocities for cortico-cortical connections are correct (namely, 1–2 m/s), the physical distance between V1 and IT would account for a substantial proportion of that 40 ms difference, implying that the integration time at each processing stage must be remarkably short, probably only a few milliseconds.

I would like to make what I believe to be a novel hypothesis, which appears to make considerable sense. Could it be that the hippocampus contains a mechanism
that will respond when a particular pattern of activation in the cortex is maintained for some minimal duration? Suppose that this minimal duration was 150–200 ms. This would mean that neurons in the neocortex that are only activated briefly will not result in activation in the hippocampus, even though processing in the ventral stream may have been complete. The fact that processing in the ventral stream can be very rapid has been beautifully demonstrated by neurophysiological studies of responses to RSVP sequences (Keysers et al., 2001). These authors showed that even at 72 frames per second, neurons in IT can show a transient “blip” of activation around 100 ms after their preferred visual stimulus has been shown, demonstrating that a single feedforward wave of processing can be enough. However, these “blips” of activation are generally too weak to reach consciousness. Suppose, though, that if the activity in inferotemporal neurons can be maintained long enough for the hippocampal circuits to kick in, even such briefly presented stimuli can potentially be stored in episodic memory by leaving a trace in the hippocampus. If the same stimulus is presented again later, it would be recognized as familiar if and only if there is also a response in the hippocampus, which effectively would mean that the cortical activation on a previous presentation had been maintained for the critical period.

In fact, this suggestion can also be related to the highly controversial question of the neural correlates of consciousness (Dehaene and Naccache, 2001; Lamme, 2006; Tononi and Koch, 2008). Many authors have suggested that conscious perception may be related to synchronous activation across multiple cortical areas, or maybe specific types of oscillatory activity (Uhlhaas et al., 2009). The suggestion here is considerably simpler. According to this view, conscious perception and the storing of episodic memory traces could simply be gated by the duration of activation within the cortex, with the hippocampus acting as a form of gatekeeper, measuring the duration of activation within the cortex and selecting those patterns that are maintained for longer than the minimum time. It could be that hippocampal circuits are specifically designed for performing this function.

Mechanisms for Generating Highly Selective Responses

Let us now return to the nature of the cortical inputs that provide the inputs to the medial temporal lobe, and the sort of selectivity that might be present. Given that the simplest hypothesis for generating invariant responses would involve combining the outputs of neuronal mechanisms that are themselves quite selective, let us now consider the question of whether we can propose any neurophysiologically plausible mechanism for producing selectivity. I suspect that one reason why “grandmother cell” encoding is often considered to be implausible lies in the lack of generally accepted mechanisms that could lead
to the development of such strong selectivity. It is probably fair to say that few if any such mechanisms are currently known. However, in this section, we will have a look at one possible mechanism that could potentially explain why neurons might become selective to frequently experienced stimuli. Two specific mechanisms will be introduced—one depending on spike-timing-dependent plasticity (STDP), the other using temporal coding to control the proportion of active neurons.

STDP-Based Learning

Spike-timing-dependent plasticity is a phenomenon that first became prominent in the late 1990s (Song et al., 2000) and has since generated a great deal of interest among the modeling community (Dan and Poo, 2006). One common type of STDP rule changes synaptic weight as a consequence of the relative timing of pre- and postsynaptic spikes—inputs that fire before the postsynaptic spike are strengthened, whereas those that fire afterward are depressed. This sort of mechanism is known to reinforce synapses that tend to fire in synchrony, and this effect has been extensively studied. But another significant finding is that STDP systematically concentrates high synaptic strengths on early-firing inputs (Guyonneau et al., 2005). Thus, if a neuron repeatedly receives waves of spikes via its afferents and these waves of spikes occur in a pattern that tends to repeat, over time the neuron will end up with all the highest synaptic weights located on the afferents that fired first. We have found that this simple mechanism can allow neurons to learn to respond selectively to repeating stimuli, and this may provide a simple mechanism for generating selectivity.

To realize why this occurs, it is necessary to introduce another natural feature of any integrate-and-fire neuron, namely, the fact that it tends to fire earlier when activated strongly. With a weak input, the neuron can take a long time to reach threshold, whereas with strong inputs, the threshold is reached rapidly. As I first pointed out two decades ago, this means that when one considers a population of neurons, information can be contained in the order in which cells fire (Thorpe, 1990). However, only relatively recently has the idea been directly tested experimentally (see, for example, Gollisch and Meister, 2008). When an image is flashed on the retina, each cell will charge up and reach its threshold for spike initiation, but the time at which the neuron spikes will vary, with the most strongly activated cells (corresponding to the points in the image where the local contrast is highest) firing first. As a consequence, the order of firing will encode information about the image, even under conditions where each cell only fires one spike. Theoretical studies have demonstrated how this sort of order-based information can be a very efficient way to transmit information (VanRullen and Thorpe, 2001), considerably more efficient than conventional rate-based coding, especially if the underlying rate is encoded by a random Poisson-like process (Gautrais and Thorpe, 1998).
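The following schematic simulation (a simplified sketch with assumed parameter values, not the published models) illustrates the two ingredients just described: contrast converted into firing order, and an STDP-like rule that ends up concentrating the strong weights on the earliest-firing afferents of a repeated pattern.

```python
# Schematic sketch: intensity-to-order coding plus a simplified STDP-like update.
# Afferents fire in order of decreasing "contrast"; inputs that fire before the
# postsynaptic spike are potentiated, later ones are depressed, so after repeated
# presentations the strong weights sit on the earliest-firing afferents.
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 100
pattern = rng.random(n_inputs)            # fixed "image": one contrast value per afferent
firing_order = np.argsort(-pattern)       # strongest inputs fire first

w = np.full(n_inputs, 0.5)
w_max, a_plus, a_minus = 1.0, 0.05, 0.03
k = 10                                    # assume the cell reaches threshold after k input spikes

for _ in range(200):                      # repeated presentations of the same pattern
    fired_before_post = np.isin(np.arange(n_inputs), firing_order[:k])
    w = np.where(fired_before_post,
                 w + a_plus * (w_max - w),          # pre before post: potentiation
                 np.maximum(0.0, w - a_minus * w))  # pre after post: depression

# The k largest weights now sit on the k earliest-firing (highest-contrast) afferents.
print(set(np.argsort(-w)[:k]) == set(firing_order[:k]))   # True
```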
Now, let us consider what would happen if a neuron equipped with STDP “listens” to the output of a retina-like structure using this sort of coding. If there were only one neuron connected to the retina, and the same image was repeatedly flashed on the retina, the STDP rule would end up concentrating high weights on the earliest-firing cells in the retina. Since these will correspond to the highest-contrast parts of the image, the neuron will end up with a set of synaptic connections that will make it selective to the specific pattern that has been shown. Of course, this is not a realistic situation, because in reality there will not be just one neuron listening, and there will not be just one stimulus being shown. But as we have shown using simulations (Guyonneau et al., 2004), when many neurons are present and inhibitory connections exist between the neurons such that as soon as one of the neurons fires, it prevents any other neurons from firing, only one neuron will be allowed to learn in response to a given stimulus. This will lead the system to act as a competitive learning mechanism in which different neurons will become selective to different patterns.

The power of such a mechanism is illustrated by a study that looked at STDP-based learning in a model visual system that was stimulated by a set of images taken from the Caltech face database—a set of several hundred photographs of human faces seen on highly varied backgrounds (Masquelier and Thorpe, 2007). It was found that, even though there was no explicit instruction, the neurons at the top end of the visual pathway ended up selective for facelike features, simply because these were the features that occurred most frequently in the inputs. It is important to realize that this selectivity emerged in an entirely unsupervised way, and indeed, if the system was instead stimulated with photographs of motorcycles, the neurons became selective to parts of motorcycles instead. And when the input image set contains equal numbers of faces, motorcycles, and other varied distractors, only the faces and motorcycles are learned, because there is not enough similarity between the distractors to allow the development of selective responses.

In that particular set of simulations, we used a hierarchical feedforward processing architecture similar to one that has been successfully used by the MIT group to model processing in the primate ventral stream (Serre et al., 2007). However, it is important to realize that the same basic principles apply for essentially any architecture. For example, in another study, we considered the case of a neuron receiving activity from a set of 2,000 randomly firing afferents in which there was a particular pattern of activity lasting 50 ms, which affected a subset of the afferents and repeated at unpredictable intervals. Remarkably, using just one neuron equipped with STDP, we found that the neuron would reliably learn to respond to the repeating pattern, and indeed would learn to respond within just a few milliseconds of the start of the pattern (Masquelier et al., 2008). Furthermore, when several different neurons are listening to the same set of afferents, and lateral inhibition between the neurons prevents more than one neuron firing at the same time, two interesting phenomena
can occur. First, several different neurons can learn to respond to the same stimulus pattern but at different times. Thus, with an input pattern lasting 50 ms, one neuron might respond very close to the start of the pattern, but other neurons would learn to respond to later parts of the same pattern, causing the neurons to “stack,” with the result that any given pattern would result in the firing of a string of different units (Masquelier et al., 2009). Secondly, when several different patterns are present in the input activity, different neurons will learn to respond selectively to those different patterns.

Temporal Coding for Controlling the Proportion of Active Neurons

We have seen how STDP can cause high weights to concentrate on those inputs that fire earliest during a repeating spatiotemporal pattern. In order for this effect to allow responses to become selective, we need to add another mechanism, namely, a mechanism that keeps the percentage of active neurons within bounds. Here again, allowing neurons to use a temporal coding strategy proves to be particularly interesting. If we consider how a population of neurons will respond to a flashed visual stimulus, it is natural to suppose that the first neurons to fire will tend to be the ones that get the strongest input. As processing progresses, more and more neurons will fire, but it would be relatively straightforward to include inhibitory circuits that could control the percentage of neurons that are able to respond.

The principle is illustrated in figure 1.3 (plate 1). The basic idea is that we use STDP to select a small subset of inputs that are given strong weights—the other inputs are given low or even zero weights. For example, in this case only 4 of the 40 inputs have a high weight, and we have also fixed the number of inputs that can fire at 4 by using an inhibitory feedback circuit that prevents more than that number of input neurons from firing. Under these conditions it is clear that the probability of getting a given level of activation in the output neuron can be calculated using a binomial distribution. To simplify the calculation, assume that each of the potentiated synapses has a weight of 1. In that case, if the input pattern is chosen randomly, and only 10 percent of the inputs are allowed to be active, it is relatively simple to calculate the probability that a given number of potentiated synapses are activated. There is a less than 0.4 percent chance of having 3 of the potentiated synapses active, and only once in 10,000 times would we expect to hit the maximum level of excitation. This follows very simply from the fact that there are a very large number of ways of choosing 4 inputs out of a set of 40, only one of which would match perfectly the set of 4 that have high weights. The example illustrated here is obviously very much simplified compared with the case of a real neuron that might well have 500 inputs from the previous layer. Suppose that 50 of the synapses are potentiated (the others set at zero) and the percentage of active inputs fixed at 10 percent. Here again, it is relatively
straightforward to show that the probability of more than 10 of the 50 inputs with strong weights being active by chance is less than 1 percent. The key point is that by setting the threshold of the output neuron appropriately, the neuron can be made to be arbitrarily selective. It is perhaps worth stressing that these very attractive features stem from the fact that we have controlled the percentage of active cells. If there were no constraints on how many cells can fire, there would be nothing to stop any output cell from going over threshold. Other authors (Furber et al., 2004) have made similar points. The essential point is that generating high selectivity is not in itself a problem. Once a neuron has a set of weights where only a relatively small proportion of the incoming spikes can produce responses, and where the percentage of active inputs is limited, producing selectivity to a particular input pattern need only require that the threshold for firing be set well above the level of activation that could be produced by chance. The real trick is to achieve invariance, that is, the ability to respond to a wide range of variations of the same stimulus.

Figure 1.3 (plate 1)
Illustration of how controlling the number of active neurons in a system where only a small number of synapses have high weights can allow output neurons to be highly selective. The output neuron N receives synapses from a large number of inputs (here only 40), of which only 10 percent are potentiated. During an activation cycle, the input activation pattern increases until some fixed percentage of the input units has fired. At this point, the inhibitory feedback circuit prevents further neurons from firing. In this case, the first four input units to fire are the ones with the high synaptic weights, producing maximal activity of the output neuron. By appropriately setting the threshold of the output neuron, it can be made arbitrarily selective.
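The probabilities quoted above are easy to verify (my own check of the arithmetic, assuming, as in the text, that inputs are active independently with probability 0.1):

```python
# Verifying the binomial probabilities quoted in the text: potentiated inputs are
# assumed to be active independently with probability p = 0.1.
from math import comb

def binom_tail(n, p, k_min):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# 40-input example with 4 potentiated synapses:
print(binom_tail(4, 0.1, 3))    # ~0.0037: "less than 0.4 percent" for 3 or more active
print(0.1 ** 4)                 # 0.0001:  "once in 10,000" for hitting the maximum
# 500-input example with 50 potentiated synapses:
print(binom_tail(50, 0.1, 11))  # ~0.009:  "less than 1 percent" for more than 10 active
```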
This is clearly something that the brain can do, as illustrated by the responses of neurons in monkey inferotemporal cortex (Booth and Rolls, 1998; Tovee et al., 1994; Zoccolan et al., 2007), but even more clearly by neurons in the human medial temporal lobe (Quian Quiroga et al., 2005). Currently, the most plausible hypothesis for generating invariance involves pooling together responses from neurons that themselves are selective to a subset of different views. Indeed, there have been a number of interesting suggestions for how this regrouping of related stimuli could be achieved by having mechanisms that are sensitive to temporally correlated inputs that change relatively slowly (Foldiak, 1990; Wiskott and Sejnowski, 2002).

A Continuum of Representations within Cortex?

What sort of picture should we expect to find within cortical areas in the human ventral processing stream? According to one view, the neurons can be thought of as a distributed representation in which each neuron participates in representing a wide range of potentially unrelated visual patterns. In contrast, an extreme localist position would claim that individual neurons are effectively specialized for encoding particular visual objects. But there is another, perhaps more biologically plausible option. According to this view, an area such as inferotemporal cortex could reasonably contain a full range of types of neuronal representation. These could include a proportion of relatively uncommitted neurons that are either relatively visually unresponsive or capable of responding to a wide range of different unrelated stimuli. There might also be neurons that look very much like a distributed representation system, responding to different extents to a wide range of differing stimuli. But there might also be some proportion of highly selective neurons that literally will only respond to very specific (and probably highly familiar) stimuli. Many of the existing neurophysiological data are actually consistent with this sort of hybrid view, because it is clear that there is indeed a continuum of selectivity (see, for example, Rolls and Tovee, 1995; Young and Yamane, 1992; Zoccolan et al., 2007).

If we accept that such a continuum exists, it is clear that the proportions of these different types of neuron will be a critical issue. Imaging techniques such as fMRI are clearly limited to providing information about the average response pattern. While more advanced approaches such as fMRI adaptation have been used to provide evidence that neurons in particular brain areas can be selective to particular stimulus attributes (Grill-Spector et al., 1999), drawing direct conclusions about the degree of selectivity at the neuronal level is not simple (Sawamura et al., 2006). Clearly, the most direct way to obtain a picture of how neurons represent visual
    Random documents withunrelated content Scribd suggests to you:
  • 56.
    General Remarks uponthe differences of exhalations and absorptions. 67 ARTICLE FIRST. GENERAL ARRANGEMENT OF THE EXHALANTS. I. Origin, Course and Termination. Different hypotheses respecting these vessels.—What observation shows us concerning them. 69 II. Division of the Exhalants. They can be referred to three classes.—Table of these classes and their division. 71 III. Difference of the Exhalations. 73 ARTICLE SECOND. PROPERTIES, FUNCTIONS AND DEVELOPMENT OF THE EXHALANT SYSTEM. I. Properties. We are ignorant of those of texture.—The organic are very evident in it. 74 Characters of the Vital Properties.—They vary in each system. —Consequences as it regards functions. ib. II. Of Natural Exhalations. They are all derived from the vital properties.—They vary consequently like these properties.—Proofs.—Of sympathetic exhalations. 75 III. Of Preternatural Exhalations. Sanguineous exhalation.—Hemorrhage of the excrementitious exhalants.—Hemorrhage from the skin.—Hemorrhages from the mucous surfaces.—They take place by exhalation.— Proofs.—Experiments.—Of active and passive hemorrhages. —Differences between hemorrhages by rupture and by 78
  • 57.
    exhalation, between thoseof the capillaries and those of the great vessels. Hemorrhages of the recrementitious exhalants.— Hemorrhages of the serous surfaces.—Observations concerning dead bodies.—Cellular hemorrhages.—Other hemorrhages of the exhalants. 85 Preternatural exhalations, not sanguineous.—Varieties of the exhaled fluids, according to the state of the vital forces of the exhalants.—Different examples of these varieties. 87 IV. Of the preternatural development of the exhalants. It is especially in cysts that it takes place.—The secreted fluids are never preternaturally poured out like the exhaled. —Why.—Of the natural emunctories. 88 ABSORBENT SYSTEM. GENERAL OBSERVATIONS. ARTICLE FIRST. OF THE ABSORBENT VESSELS. I. Origin of the Absorbents. Table of absorptions.—Of external absorptions.—Of internal absorptions.—Of the nutritive absorptions.—It is impossible to know the mode of origin of the absorbents.—Interlacing of the branches. 91 II. Course of the Absorbents. Their division into two layers, superficial and deep-seated.— Their arrangement in the extremities and the trunk. 95 Forms of the absorbents in their course.—They are cylindrical, full of knots, &c.—Consequences of these forms. —The absorbents have not as great capacity during life as in the dead body. 97
  • 58.
    Of the capacityof the absorbents in their course.—Manner of ascertaining it.—Extreme varieties which it exhibits.— Capacity of the absorbents compared with that of the veins. 99 Anastomoses of the absorbents in their course.—Different modes of these anastomoses.—Remarks upon the lymphatic circulation. 102 Remarks upon the difference of dropsies that are produced by the increase of exhalation, and those that are the effect of a diminution of absorption.—Cases that may be referred to one or the other cause. 104 III. Termination of the Absorbents. Trunks of termination.—Their disproportion with the branches.—Consequences.—Difficulties in regard to the motion of the lymph.—Remarks upon venous absorption. 105 IV. Structure of the Absorbents. Exterior texture.—Vessels.—Peculiar membrane.—Valves.— Uses of these last. 109 ARTICLE SECOND. LYMPHATIC GLANDS. I. Situation, Size, Forms, &c. Varieties of their number and situation in the different regions.—Relation with the cellular texture.—Varieties from age, sex, &c. 111 II. Organization. Colour.—Its varieties.—Particular arrangement about the bronchia. 114 Common parts.—External cellular texture.—Cellular membrane.—Vessels. 115 Peculiar texture.—Density.—Cells.—Contained fluid.— Properties and phenomena of this texture.—Interlacing of the absorbents. 116
  • 59.
    ARTICLE THIRD. PROPERTIES OFTHE ABSORBENT SYSTEM. I. Properties of Texture. 118 II. Vital Properties. Animal sensibility.—Its phenomena in the vessels and the glands.—Organic properties.—Their duration after death.— Remarks upon the absorbent faculty of dead bodies. 119 Characters of the vital properties.—Life is very evident in this system.—Its disposition to inflammation.—Character which this affection has in it. 122 Differences of the vital properties in the absorbent vessels and their glands.—These differences are remarkable.—Their influence upon diseases. 123 Sympathies.—Sympathies of the glands.—Sympathies of the vessels.—Remarks upon the engorgements of the lymphatic glands. 124 ARTICLE FOURTH. OF ABSORPTION. I. Influence of the Vital Forces upon this Function. All depends on the organic properties. 128 II. Varieties of Absorption. Different examples.—Of resolution.—Of the absorption of morbific principles. 129 III. Motion of the Fluids in the Absorbents. Laws of this motion.—It is not subject to any reflux.—Why. 132 IV. Of Absorption in the different Ages. It appears that the internal and external absorptions are opposite at the two extreme ages.—Remarks. 134 V. Preternatural Absorption. Absorption of certain fluids different from those naturally absorbed.—Absorption in the cysts. 138
  • 60.
    SYSTEMS PECULIAR TOCERTAIN APPARATUS. GENERAL OBSERVATIONS. Differences of the systems peculiar to certain apparatus, from those common to all.—Characters of the first.—Their distribution in the apparatus. 139 OSSEOUS SYSTEM. GENERAL OBSERVATIONS. ARTICLE FIRST. OF THE FORMS OF THE OSSEOUS SYSTEM. DIVISION OF THE BONES. I. Of the Long Bones. Relation of their position with their general uses.—External forms of the body and the extremities.—Internal forms.— Medullary canal.—Its situation, extent and form.—Its use.— It disappears in the first periods of callus.—It is shorter in proportion in childhood. 144 II. Of the Flat Bones. Relations of their situation and external forms with the general use of forming the cavities.—Internal forms. 147 III. Of the Short Bones. Position.—Internal and external forms.—General uses. 149 IV. Of the Bony Eminences. Their division into those, 1st, of articulation; 2d, of insertion; 3d, of reflection; 4th, of impression.—Remarks upon each of these divisions.—Relations of the second with the muscular force.—How these last are formed. 150 V. Of the Osseous Cavities.
  • 61.
    Their division intothose, 1st, of insertion; 2d, of reception; 3d, of sliding; 4th, of impression; 5th, of transmission; 6th, of nutrition.—Particular remarks upon each division.—Of the three kinds of canals of nutrition. 153 ARTICLE SECOND. ORGANIZATION OF THE OSSEOUS SYSTEM. I. Texture Peculiar to the Osseous System. Common division of this texture. Texture with cells.—How it is formed.—When it is formed.—Of the cells and their communications.—Experiments. 156 Compact texture.—Arrangement of its fibres.—Their formation.—Experiments to ascertain their direction.—The osseous layers do not exist.—Proofs.—Influence of rickets upon the compact texture. 158 Arrangement of the two osseous textures in the three kinds of Bones.—Arrangement of the compact texture.—Two kinds of texture with cells in the long bones.—Proportion of the common texture with cells and the compact texture in the short and broad bones.—The same proportion examined in the cavities and the osseous eminences. 161 Of the composition of the osseous texture.—There are two principal bases.—Of the saline calcareous substance.— Experiments.—Nature of this substance.—Experiments to ascertain the gelatinous substance.—Different relations of each of these substances with vitality. 164 II. Common Parts which enter into the organization of the Osseous System. Three orders of blood vessels.—Arrangement of each.— Experiments.—Proportions according to age.— Communication.—Proofs of the existence of the cellular texture. 167
  • 62.
    ARTICLE THIRD. PROPERTIES OFTHE OSSEOUS SYSTEM. I. Physical Properties. Elasticity.—It is in the inverse ratio of the age. 171 II. Properties of Texture. Different examples of contractility and extensibility.— Characters of these properties. 171 III. Vital Properties. They are obscure. 173 Characters of these properties.—Slowness of their development.—Their influence upon diseases. 174 Sympathies.—Their character is always chronic.—General remark upon sympathies. 175 Seat of the vital properties.—They are not seated in the calcareous substance.—They exist only in the gelatinous.— Experiment which proves it. 177 ARTICLE FOURTH. OF THE ARTICULATIONS OF THE OSSEOUS SYSTEM. I. Division of the Articulations. Moveable Articulations.—Observations upon their Motions.— 1st. Opposition; it is extensive or confined.—2d. Circumduction; a motion composed of all those of opposition.—3d. Rotation; a motion upon the axis.—4th. Sliding. 180 Immoveable articulations.—They are on surfaces in juxta- position, inserted into each other or implanted. 182 Table of the Articulations. 183 II. Observations upon the Moveable Articulations. First genus.—Situation.—Form of the surfaces.—Rotation and circumduction are inversely in the humerus and the femur. —Why. 184
  • 63.
    Second genus.—Form ofthe surfaces.—Motions. 186 Third genus.—Diminution of the motions.—Direction in which they take place. 187 Fourth genus.—Motions still less. 189 Fifth genus.—Remarkable obscurity of the motions. 190 III. Observations upon the Immoveable Articulations. Situation, forms of each order.—Relation of the structure to the uses. 191 IV. Of the means of Union between the Articular Surfaces. Union of the immoveable Articulations.—Cartilages of union. 193 Union of moveable articulations.—Ligaments and muscles considered as articular bands. 194 ARTICLE FIFTH. DEVELOPMENT OF THE OSSEOUS SYSTEM. Remarks. 195 I. State of the Osseous System during Growth. Mucous State.—What should be understood by it. 195 Cartilaginous State.—Period and mode of its development.— Of this state in the broad bones. 197 Osseous State.—Its phenomena.—Its period. 198 Progress of the osseous state in the long bones; 1st, in the middle; 2d, in the extremities. 200 Progress of the osseous state in the broad bones.—Varieties according to the bones.—Formation of the ossa wormiana. ib. Progress of the osseous state in the short bones. 202 II. State of the Osseous System after its Growth. Increase in thickness.—Composition and decomposition after the termination of growth in thickness.—Experiments.— State of the bones in old age. 203 III. Peculiar Phenomena of the Development of the Callus.
  • 64.
    1st. Fleshy granulations.—2d.Adhesions of these granulations.—3d. Exhalation of gelatine and then of phosphate of lime. 206 IV. Peculiar Phenomena of the Development of the Teeth. Organization of the teeth.—Hard portion of the teeth.— Enamel.—Experiment which distinguishes it from bone.—Its thickness.—Its nature.—Reflections upon its organization.— Osseous portion.—Its form.—Cavity of the tooth. 209 Soft portion of the tooth.—Its spongy nature.—Its acute sensibility.—Remarks upon its different sympathies. 211 First dentition considered before cutting.—Follicle.— Membrane of this follicle analogous to the serous membranes.—Albuminous nature of the fluid which lubricates it.—Mode of development of the osseous tooth upon the follicle.—Number of the first teeth. 213 First dentition considered at the period of cutting.—Mode of cutting.—Accidents.—Their causes. 216 Second dentition considered before cutting.—Formation of the second follicle. 217 Second dentition considered at the period of cutting.—Fall of the first teeth.—Appearance of the second. Phenomena subsequent to the cutting of the second teeth.— Growth in length and thickness.—Fall of the teeth earlier than the death of the bones.—Why.—State of the jaws after the fall of the teeth. 219 V. Particular Phenomena of the Development of the Sesamoid Bones. General arrangement of the sesamoid bones.—Situation.— Forms. 221 Fibro-cartilaginous state.—Osseous state.—Phenomena of the patella.—Use of the sesamoid bones. 222 MEDULLARY SYSTEM. Division of this system. 225
  • 65.
    ARTICLE FIRST. MEDULLARY SYSTEMOF THE FLAT AND SHORT BONES, AND THE EXTREMITIES OF THE LONG ONES. I. Origin and Conformation. It is an expansion of the vessels of the second order. 225 II. Organization. There is no medullary membrane.—Vascular interlacing. 226 III. Properties. There are only organic ones.—Experiments. 227 IV. Development. There is no medullary oil in infancy.—Proofs.—Experiments. 227 ARTICLE SECOND. MEDULLARY SYSTEM OF THE MIDDLE OF THE LONG BONES. I. Conformation. It is like the cellular. 229 II. Organization. The medullary membrane is not an expansion of the periosteum.—Its vessels. 230 III. Properties. Properties of texture.—Vital properties.—Animal sensibility.— Vitality more active than in the bones. 231 IV. Development. How the medullary membrane is formed.—The marrow of the infant is wholly different from that of the adult.—Proofs. 233 Functions.—The marrow is exhaled.—Its alterations.—Its relations with the nutrition of the bone.—Necrosis.—The marrow is foreign to the synovia. 234 CARTILAGINOUS SYSTEM.
What must be understood by cartilage. 237
ARTICLE FIRST. OF THE FORMS OF THE CARTILAGINOUS SYSTEM.
I. Forms of the Cartilages of the Moveable Articulations.
Internal and external surfaces.—Relations of the two corresponding cartilages.—Peculiar characters of these cartilages in each kind of moveable articulations. 238
II. Forms of the Cartilages of the Immoveable Articulations. 241
III. Forms of the Cartilages of the Cavities. 242
ARTICLE SECOND. ORGANIZATION OF THE CARTILAGINOUS SYSTEM.
I. Texture peculiar to the Cartilaginous System.
Fibres.—Remarkable resistance of the cartilaginous texture to putrefaction, maceration, &c.—Stewing and desiccation of this texture.—Its various alterations. 243
II. Parts common to the Organization of the Cartilaginous Texture.
Cellular texture.—Means of seeing it.—Absence of blood vessels.—White vessels.—Their colour in jaundice. 245
ARTICLE THIRD. PROPERTIES OF THE CARTILAGINOUS SYSTEM.
I. Physical Properties.
Elasticity.—It appears to be owing to the superabundance of gelatine.—Proofs. 247
II. Properties of Texture.
They are very obscure. 248
III. Vital Properties.
They are inconsiderable, as well as the sympathies. 249
Character of the Vital Properties.—All the phenomena over which they preside have a chronic progress.—General observations upon the reunion of the parts. 250
ARTICLE FOURTH. DEVELOPMENT OF THE CARTILAGINOUS SYSTEM.
I. State of the Cartilaginous System in the First Age.
Predominance of gelatine in the early periods.—Property which the cartilages then have of becoming red by maceration.—Vascular layers between the cartilage and the bone.—Cause which limits ossification in the cartilage.—Development of the cartilages of the cavities. 252
II. State of the Cartilaginous System in the after Ages.
Different character which the gelatine assumes.—Ossification of the cartilages in old age.—Those of the cavities are the soonest ossified. 255
III. Preternatural Development of the Cartilaginous System.
Tendency of the membrane of the spleen to become the seat of it.—Preternatural cartilages of the articulations. 257
FIBROUS SYSTEM.
GENERAL OBSERVATIONS.
ARTICLE FIRST. OF THE FORMS AND DIVISIONS OF THE FIBROUS SYSTEM.
The fibrous forms are either membranous or in fasciæ. 259
I. Of the Fibrous Organs of a Membranous Form.
Fibrous membranes.—Fibrous capsules.—Fibrous sheaths.—Aponeuroses. 260
II. Of the Fibrous Organs in the form of Fasciæ.
1st. Tendons.—2d. Ligaments. 262
III. Table of the Fibrous System.
Analogy of the different organs of this system.—The periosteum is the common centre of these organs. 262
ARTICLE SECOND. ORGANIZATION OF THE FIBROUS SYSTEM.
I. Of the Texture peculiar to the Organization of the Fibrous System.
Peculiar nature of the fibrous texture.—Its extreme resistance.—Phenomena of this resistance.—It can be overcome.—Difference of the fibrous and muscular textures.—Experiments upon the fibrous texture subjected to maceration, ebullition, putrefaction, the action of the acids, the digestive juices, &c. 264
II. Of the Common Parts which enter into the Organization of the Fibrous System.
Cellular texture.—Blood vessels.—Their varieties according to the organs. 270
ARTICLE THIRD. PROPERTIES OF THE FIBROUS SYSTEM.
I. Physical Properties.
II. Properties of Texture.
Extensibility.—Peculiar law to which it is subjected there.
Contractility.—It is almost nothing.—When it is manifested. 272
III. Vital Properties.
Animal sensibility.—Singular mode of putting it in action by distension.—Consequence of this peculiar phenomenon to the fibrous texture. 274
Character of the vital properties.—The vital activity is more evident in this system than in the preceding.—It appears that the fibrous texture does not suppurate. 277
Sympathies.—Examples of those of the animal and the organic properties. 279
ARTICLE FOURTH. DEVELOPMENT OF THE FIBROUS SYSTEM.
I. State of the Fibrous System in the First Age.
The fibres are wanting in most of the fibrous organs of the fœtus.—Softness of these organs at this age.—Varieties of development.—Remarks upon rheumatism. 281
II. State of the Fibrous System in the After Ages.
Phenomena of the adult.—General stiffness in old age. 283
III. Preternatural Development of the Fibrous System.
Various tumours exhibit fibres analogous to those of this system. 284
ARTICLE FIFTH. OF THE FIBROUS MEMBRANES IN GENERAL.
I. Forms of the Fibrous Membranes.
Their double surface.—These membranes are like moulds of their respective organs.—Researches respecting that of the corpus cavernosum.—Experiments which show that it differs essentially from the subjacent spongy texture.—Other researches upon that of the testicle. 285
II. Organization of the Fibrous Membranes. 288
III. Of the Periosteum.
Of its Form.
Its two surfaces.—Their adhesion to the bones. 289
Organization of the periosteum.—Preternatural development of its fibres in elephantiasis.—Its connexions with the fibrous bodies in infancy. 291
Development of the periosteum.
Functions of the Periosteum.—In what way it assists ossification.—It relates as much to the fibrous organs as to the bones. 292
IV. Perichondrium.
Experiments upon this membrane. 294
ARTICLE SIXTH. OF THE FIBROUS CAPSULES.
I. Forms of the Fibrous Capsules.
They are very few.—Arrangement of the two principal ones.—Canal between them and the synovial capsule. 295
II. Functions of the Fibrous Capsules. 296
ARTICLE SEVENTH. OF THE FIBROUS SHEATHS.
Their division. 297
I. Partial Fibrous Sheaths.
Their form.—Their arrangement.—Why the flexor tendons are alone provided with them. 297
II. General Fibrous Sheaths. 299
ARTICLE EIGHTH. OF THE APONEUROSES.
I. Of the Aponeuroses for Covering.
Their division. 299
Aponeuroses for general covering. 300
Forms.—They are accommodated to the extremities, &c. ib.
Tensor muscles.—Organization.—Examples of the tensor muscles.—Their uses relative to the aponeuroses.—Analogy with the tendons and difference from them.—Arrangement of the fibres. 301
Functions. 302
Aponeuroses for partial covering.—Examples.—General uses of these aponeuroses. 303
II. Of the Aponeuroses of Insertion.
Aponeuroses of insertion with a broad surface.—Their origin.—Their uses.—The identity of their nature with that of the tendons.—Experiments. 304
Aponeuroses of insertion in the form of an arch.—They are rare.—They exist where vessels pass through.—They do not compress them. 305
Aponeuroses of insertion with separate fibres. 306
ARTICLE NINTH. OF THE TENDONS.
I. Form of the Tendons.
Relation of the uses with the forms.—Union with the fleshy fibres. 307
II. Organization of the Tendons.
Method of seeing their fibres advantageously.—They appear to be destitute of blood vessels.—Their tendency to be penetrated with the phosphate of lime. 309
ARTICLE TENTH. OF THE LIGAMENTS.
I. Ligaments with Regular Fasciæ.
General arrangement. 311
II. Ligaments with Irregular Fasciæ. 312
FIBRO-CARTILAGINOUS SYSTEM.
Organs which compose it. 315
ARTICLE FIRST. OF THE FORMS OF THE FIBRO-CARTILAGINOUS SYSTEM.
Division into three classes of the organs of this system.—Characters of each class. 315
ARTICLE SECOND. ORGANIZATION OF THE FIBRO-CARTILAGINOUS SYSTEM.
I. Texture peculiar to the Organization of the Fibro-Cartilaginous System.
It arises, 1st, from a fibrous substance; 2d, from a cartilaginous one.—It owes its resistance to the first and its elasticity to the second.—Action of caloric, air and water upon the fibro-cartilaginous texture.—It reddens by maceration.—Absence of the perichondrium upon most of the fibro-cartilages. 317
II. Parts common to the Organization of the Fibro-Cartilaginous System. 320
ARTICLE THIRD. PROPERTIES OF THE FIBRO-CARTILAGINOUS SYSTEM.
I. Physical Properties.
Elasticity and suppleness united. 320
II. Properties of Texture.
Extensibility.—It is quite evident in it.—Contractility.—Difference from elasticity. 321
III. Vital Properties.
They are inconsiderable.—Influence of the obscurity of these forces upon the properties of the fibro-cartilages. 322
ARTICLE FOURTH. DEVELOPMENT OF THE FIBRO-CARTILAGINOUS SYSTEM.
I. State of this System in the First Age.
Mode of development of the three classes. 323
II. State of this System in the after Ages.
General rigidity of these organs.—Consequences.—Ossification of the fibro-cartilages rare. 325
MUSCULAR SYSTEM OF ANIMAL LIFE.
Difference between the muscles of the two lives.—Observations upon those of animal life. 327
ARTICLE FIRST. OF THE FORMS OF THE MUSCULAR SYSTEM OF ANIMAL LIFE.
Division of these muscles into long, broad and short. 327
I. Forms of the Long Muscles.
Place which they occupy.—Their division.—Their separation and reunion.—Peculiar forms of the long muscles of the spine. 328
II. Forms of the Broad Muscles.
Where they are situated.—Thickness.—Peculiar forms of the broad pectoral muscles. 330
III. Forms of the Short Muscles.
Where they are found.—Their arrangement.—Remarks upon the three species of muscles. 331
ARTICLE SECOND. ORGANIZATION OF THE MUSCULAR SYSTEM OF ANIMAL LIFE.
I. Texture peculiar to this Organization.
Arrangement of this texture into fasciculi.—Its division into fibres.—Length of the fleshy fibres compared with that of the muscle.—Their direction.—Their figure.—Their softness.—Ease of their rupture in the dead body.—Difficulty in the living. 332
Composition of the muscular texture.—Action of the air in desiccation and putrefaction.—Action of cold water.—Maceration and its products.—Ease with which the colouring substance is removed.—Analogy of the remaining texture with the fibrin of the blood.—Relation of the forces with this texture.—Action of boiling water.—Some peculiar phenomena of common boiled flesh.—Roasting of the fleshy texture.—Singular affinity of the digestive juices to this sort of texture.—General observations.—Influence of sex and the genital organs upon the fleshy texture. 336
II. Parts common to the Organization of this System.
Cellular texture.—Manner in which it envelops the fibres.—Its uses for muscular motion.—Experiment.—Fatty muscles. 343
Blood vessels.—Arteries.—Of the blood of the muscles.—Of their colour.—Free and combined state of the colouring substance.—Veins.—Remarks upon the injection of them. 346
Nerves.—There are hardly any but those of animal life.—Their difference in the extensors and the flexors.—Manner in which the nerves penetrate the muscles. 348
ARTICLE THIRD. PROPERTIES OF THE MUSCULAR SYSTEM OF ANIMAL LIFE.
I. Properties of Texture.
Extensibility. This property is continually in action.—It is in proportion to the length of the fibres.—Its exercise in diseases. 350
Contractility of texture.—Phenomena of the antagonists.—Distinction in these phenomena of that which belongs to the vital properties from that which belongs to those of texture.—Of the contractility of texture in diseases.—Extent and quickness of the contractions.—They continue after death.—Essential differences between the contractility of texture and horny hardening. Their parallel. 352
II. Vital Properties.
Properties of animal life.—Sensibility.—Most of the ordinary agents do not develop it.—It is put into action by repeated contractions.—Of the sensation of lassitude.—Sensibility of the muscles in their affections. 359
Animal Contractility.—It should be considered in three relations. 361
Animal contractility considered in the brain.—The principle of this property exists in this organ.—Proofs drawn from observation.—Proofs derived from diseases.—Proofs borrowed from experiments upon animals.—Cases in which the brain is foreign to the muscles. 362
Animal contractility considered in the nerves.—Influence of the spinal marrow upon this property.—Observations and experiments.—Influence of the nerves.—Observations and experiments.—All the nerves do not transmit equally the different irradiations of the brain.—Direction of the propagation of the nervous influence. 367
Animal contractility considered in the muscles.—Necessary conditions in the muscle for it to contract.—Obstacles to contraction.—Various experiments. 374
Causes which bring into action animal contractility.—Division of these causes.—Of the will.—Of the involuntary causes.—Direct excitement.—Sympathetic excitement.—Influence of the passions.—Remarks upon the motion of the fœtus. 374
Duration of the animal contractility after death.—Various experiments.—Consequences relative to respiration.—Variety of the duration of this property.—How it is extinguished. 379
Organic Properties.—Organic sensibility and insensible organic contractility.—Sensible organic contractility.—Various experiments upon this last property.—Phenomena of irritations.—In order to study this contractility the animal contractility must be destroyed.—How this is done.—Various modes of contraction. 382
Sympathies.—The animal sensibility is the property especially brought into action by them.—General Remarks.—Sympathies of animal sensibility.—The organic properties are rarely brought into action. 386
Characters of the vital properties.—Different remarks upon these characters. 388
ARTICLE FOURTH. PHENOMENA OF THE ACTION OF THE MUSCULAR SYSTEM OF ANIMAL LIFE.
I. Force of the Muscular Contraction.
Difference according as it is put into action by stimuli or by the cerebral influence.—Experiments.—Influence of muscular organization upon contraction.—The laws of nature the reverse of those of mechanics in the production of motions.—Multiplication of forces.—Uncertainty of calculations upon this point. 390
II. Quickness of the Contractions.
Varieties according as the contractions are, 1st, from stimuli; 2d, from nervous action.—Different degrees of quickness in different individuals.—Influence of habit upon this degree. 395
III. Duration of the Contractions. 397
IV. State of the Muscles in Contraction.
Different phenomena which they then experience.—Essential remark upon the different modes of contraction. 398
V. Motions imparted by the Muscles.
Simple Motions.—1st. In the muscles with a straight direction.—How we determine the uses of these muscles.—2d. In the muscles with a reflected direction.—3d. In those with a circular direction. 400
Compound Motions.—Almost every motion is compound.—How.—Different examples of compound motions.—Antagonist muscles. 403
VI. Phenomena of the Relaxation of the Muscles.
They are opposite to the preceding. 406
ARTICLE FIFTH. DEVELOPMENT OF THE MUSCULAR SYSTEM OF ANIMAL LIFE.
I. State of this System in the Fœtus.
It contains but little blood.—Slight contractility at this age.—Influence upon these phenomena, of the blood which then penetrates the muscles.—These organs are then slender and weak. 407
II. State of this System during Growth.
Sudden effect of the red blood which penetrates the muscles, and of the other irritations which are connected with it.—Colour of the Muscles.—Period of the brightest colour.—Varieties of the action of reagents on the fleshy texture of young animals. 410
III. State of this System after Growth.
The thickness constantly increases.—The external forms are more evident.—Colour in the adult.—Innumerable variety. 413
IV. State of this System in Old Age.
Increase of density.—Diminution of cohesion.—Phenomena of the vacillation of the muscles.—Atrophous muscles. 416
V. State of the System at Death.
Relaxation or stiffness of the muscles. 419
END OF CONTENTS TO VOL. II.
VOLUME THIRD.
MUSCULAR SYSTEM OF ORGANIC LIFE.
GENERAL OBSERVATIONS.
ARTICLE FIRST. FORMS OF THE MUSCULAR SYSTEM OF ORGANIC LIFE.
PAGE
Curved direction of the fibres.—They do not arise from the fibrous system.—Varieties of the muscular forms, according to the organs. 4
ARTICLE SECOND. ORGANIZATION OF THE MUSCULAR SYSTEM OF ORGANIC LIFE.
General difference of organization from the preceding muscles. 5
I. Peculiar Texture.
General arrangement of the muscular fibre.—Analogy with the preceding and difference. 6
II. Common Parts.
Cellular Texture.—Blood vessels.—Nerves of the ganglions and of the brain.—Proportion of each class. 8
ARTICLE THIRD. PROPERTIES OF THE MUSCULAR SYSTEM OF ORGANIC LIFE.
I. Properties of Texture.