Language and Lateralization

Overview

One of the most remarkable features of complex cortical functions in humans is the ability to associate arbitrary symbols with specific meanings to express thoughts and emotions, to ourselves and others, by means of language. Indeed, the achievements of human culture rest largely on this skill, and a person who for one reason or another fails to develop a facility for language as a child is severely incapacitated. Studies of patients with damage to specific cortical regions indicate that the linguistic abilities of the human brain depend on the integrity of several specialized areas of the association cortices in the temporal and frontal lobes. In the vast majority of people, these primary language functions are located in the left hemisphere: the linkages between speech sounds and their meanings are mainly represented in the left temporal cortex, and the circuitry for the motor commands that organize the production of meaningful speech is mainly found in the left frontal cortex. Despite this left-sided predominance, the emotional (affective) content of language is governed largely by the right hemisphere. Studies of congenitally deaf individuals have shown further that the cortical areas devoted to sign language are the same as those that organize spoken and heard communication. The regions of the brain devoted to language are therefore specialized for symbolic representation and communication, rather than for heard and spoken language as such.

Understanding the functional localization and hemispheric lateralization of language is especially important in clinical practice. The loss of language is such a devastating blow that neurologists and neurosurgeons make every effort to identify and preserve the cortical areas involved in its comprehension and production.
Language Is Both Localized and Lateralized

It has been known for more than a century that two more or less distinct regions in the frontal and temporal association cortices of the left cerebral hemisphere are especially important for normal human language. That language abilities are localized is perhaps to be expected. What is striking, however, is how unequally these language functions are represented in the two cerebral hemispheres. Although functional lateralization has already been introduced in the unequal contributions of the parietal lobes to attention and of the temporal lobes to recognizing different categories of objects, it was in studies of language that this once controversial concept was first firmly established. Such functional asymmetry is referred to as hemispheric lateralization and, because language is so important, has given rise to the misleading idea that one hemisphere in humans is actually “dominant” over the other, namely the hemisphere in which the major capacity for language resides. The true significance of lateralization, however, lies in the efficient subdivision of complex functions between the hemispheres, rather than in any superiority of one hemisphere over the other. Indeed, a safe presumption is that every region of the brain is doing something important! The representation of language in the association cortices is clearly distinct from the circuitry concerned with the motor control of the mouth, tongue, larynx, and pharynx, the structures that produce speech sounds; it is also distinct from the circuits underlying the auditory perception of spoken words and the visual perception of written words in the primary auditory and visual cortices, respectively. The neural substrate for language transcends these essential motor and sensory functions in that its main concern is a system of symbols: spoken and heard, written and read (or, in the case of sign language, gestured and seen). The essence of language, then, is symbolic representation.
The obedience to grammatical rules and the use of appropriate emotional tone are recognizable regardless of the particular mode of representation and expression (a point that is especially important in comparing human language with the communicative abilities of great apes, which suggest how language may have evolved in the brains of our prehominid ancestors; Box A).

Summary

Neurological, neuropsychological, and electrophysiological methods have all been used to localize linguistic function in the human brain. This effort began in the nineteenth century with the correlation of clinical signs and symptoms with the location of brain lesions determined postmortem. In the twentieth century, additional clinical observations, together with studies of split-brain patients, mapping at neurosurgery, sodium amytal anesthesia of a single hemisphere, and noninvasive imaging techniques such as PET and fMRI, greatly extended knowledge about the localization of language. Together, these various approaches show that the perisylvian cortices of the left hemisphere are especially important for normal language in the vast majority of humans. The right hemisphere also contributes to language, most obviously by giving it emotional tone. The similarity of the deficits after comparable brain lesions in congenitally deaf patients and their speaking counterparts strongly supports the idea that the cortical representation of language is independent of the means of its expression or perception (spoken and heard, versus gestured and seen). The specialized language areas that have been identified to date are evidently the major components of a widely distributed set of brain regions that allow humans to communicate effectively by means of symbols and concepts.
THE CONNECTIONS BETWEEN THOUGHT AND LANGUAGE

All living beings communicate. In fact, the moment two animals encounter each other, they exchange visual, auditory, and olfactory signals that create mental images in their nervous systems. Every animal thus constructs its own mental representation of the world, to which it responds through adaptive behaviour. Some species communicate through a code of gestures; the “dance” of the bees is one example. Other species communicate through a code of sounds: primates, for instance, use their vocal cords to produce various auditory signals. Compared with visual signals, auditory signals have two advantages: they can be perceived at night and over longer distances. Human language, which also uses sounds, is thus only one form of communication among many. But it is a very sophisticated one: to speak with other people is to agree, arbitrarily, that particular series of sounds designate particular things. The big advantage of spoken language over grunts and cries is that this precise association between arbitrary combinations of sounds and objects lets speakers refer to these objects even when they are not physically present. For these arbitrary conventions to make sense, a group of humans must agree on them. Every one of the many different languages spoken in the world constitutes a set of agreed-upon conventions that establish equivalences between sounds and things. We can thus say that the use of an articulate language is the characteristic that defines the human species: there is no human society that does not have a language, and no species other than human beings that has one. The production and understanding of language thus constitute one of the most specific functions of the human brain. A parrot can imitate the sounds of human language but will never communicate abstract concepts with these sounds.
Similarly, experimental attempts to teach great apes the rudiments of language have met with rather limited success. So what is the connection between thought and language in human beings? When we think, do we really always use language to do so? And if we do, why is it still often so hard to express our thoughts clearly? Might speaking a given language predispose people to think in a certain way? What about deaf people who communicate with sign language: when they are thinking to themselves, do they use signs? Though it is hard to evaluate precisely how language contributes to the development of our thoughts, we do know that it remains the tool par excellence for sharing these thoughts with other people.

Birds share 'language' gene with humans
30/3/04, by Duke University Medical Center

A nearly identical version of a gene whose mutation produces an inherited language deficit in humans is a key component of the song-learning machinery in birds. The researchers, who published their findings in the 31 March 2004 issue of the Journal of Neuroscience, said that their finding will aid research on how genes contribute to the architecture and function of the brain circuitry for singing in birds. The lead researchers included neurobiologist Erich Jarvis of Duke University Medical Center and Constance Scharff of the Max Planck Institute for Molecular Genetics in Germany. According to Jarvis, the search for the gene, called FOXP2, began when other researchers reported that mutation of the human version of the gene was responsible for a deficit in language production. “In affected humans, the mutation causes a very specific dysfunction,” said Jarvis. “These people have largely normal motor coordination, but an inability to correctly pronounce words or form them into grammatically correct sentences.
What's more, they have trouble understanding complex language.”

Feature: The FOXP2 story

When evolutionary geneticists compared the DNA sequence of the normal human FOXP2 gene with those of nonhuman primates and other species, they found that humans have a specific sequence variation not found in any other mammal, said Jarvis. “Since birdsong is a learned vocal behaviour like speech, we decided to find out if a version with this same variation was present in vocal-learning birds,” said Jarvis. Particularly significant, he said, is that FOXP2 is a type of gene that regulates many other genes, making it an ideal candidate for a gene in which a single change during evolution could create a cascade of changes influencing an advance such as speech or birdsong. “One advantage of using vocal-learning birds,” said Kazuhiro Wada, also at Duke, “is that there are thousands of songbird species and several hundred parrot and hummingbird species that have vocal learning, but only one primate species that has vocal learning, us humans. Thus, vocal-learning birds provide a rich source of material for evolutionary comparisons.” In their studies, Jarvis and Wada at Duke, and Scharff and Sebastian Haesler in Germany, compared brain expression of the FOXP2 gene in birds that are vocal learners with birds that are not. The vocal learners included species of finches, song sparrows, canaries, black-capped chickadees, parakeets and hummingbirds; the vocal nonlearners included ring doves. The researchers also studied the gene in crocodiles, the closest living relatives of birds. Researchers in another laboratory, headed by Stephanie White at UCLA, published a paper in the same issue of the Journal of Neuroscience comparing the expression of FOXP2 and its close relative FOXP1 in songbirds and humans.
Scharff, Jarvis and their colleagues confirmed that all the non-mammals they studied, including crocodiles, did have a FOXP2 gene. And although the genes in humans and song-learning birds were almost identical (98 per cent), the song-learning birds did not have the specific variation characteristic of humans. “Thus, this human-specific mutation is not necessarily required for vocal learning, at least not in birds,” said Jarvis. “Or perhaps there's another variation in the songbird gene that also leads to vocal learning.” The researchers did find that the FOXP2 gene was expressed in the same area of the brain, the basal ganglia, in both humans and song-learning birds. Most importantly, they found the FOXP2 gene to be expressed at higher levels in the vocal-learning nucleus of the basal ganglia of song learners at times in the bird's life when it is learning song. This critical learning period might be during early development, as in zebra finches, or during seasonal changes in song learning, as in canaries. Jarvis emphasised that the discovery of FOXP2's role represents only the beginning of a major effort to explore the genetic machinery underlying vocal learning, and he cautioned that it has not been proven that FOXP2 is required for vocal learning. “We definitely don't think that FOXP2 is the single causal gene for vocal learning,” he said. “The difference between vocal learners and nonlearners, whether between humans and nonhuman primates or between learning and nonlearning birds, is most likely to arise in the connections of forebrain areas to the motor neurons that control the voice. It is intriguing, though, that an ancient gene like FOXP2 appears to have something to do with learned vocalisations both in humans and in birds.”

Adapted from a news release by Duke University Medical Center.
[Figure A: Section of a keyboard showing the lexical symbols used to study symbolic communication in great apes. (From Savage-Rumbaugh et al., 1998.)]

[Figure B: The brains of great apes are remarkably similar to those of humans, including the regions that, in humans, support language. The areas comparable to Broca's area and Wernicke's area are indicated.]

Do Apes Have Language?

Over the centuries, theologians, natural philosophers, and a good many modern neuroscientists have argued that language is uniquely human, an extraordinary behavior that sets us qualitatively apart from our fellow animals. The gradual accumulation of evidence over the last 75 years demonstrating highly sophisticated systems of communication in species as diverse as bees, birds, and whales has made this point of view increasingly untenable, at least in a broad sense. Until recently, however, human language has appeared unique in its semantic aspect, that is, in the human ability to associate specific meanings with arbitrary symbols, ad infinitum. In the dance of the honeybee described so beautifully by Karl von Frisch, for example, each symbolic movement made by a foraging bee that returns to the hive encodes only a single meaning, whose expression and appreciation are hardwired into the nervous systems of the actor and the respondents. A series of controversial studies in great apes, however, has indicated that the rudiments of human symbolic communication are indeed evident in the behavior of our closest relatives. Although early efforts were sometimes patently misguided (some initial attempts to teach chimpanzees to speak were without merit simply because chimpanzees lack the necessary vocal apparatus), modern work on this issue has shown that when chimpanzees are given the means to communicate symbolically, they demonstrate some surprising talents. While techniques have varied, most psychologists who study chimps have used some form of manipulable symbols that can be arranged to express ideas in an interpretable manner.
For example, chimps can be trained to manipulate tiles or other symbols such as the gestures of sign language to represent words and syntactical constructs, allowing them to communicate simple demands, questions, and even spontaneous expressions. The most remarkable results have come from increasingly sophisticated work with chimps using symbolic keyboards (figure A). With appropriate training, chimps can choose from as many as 400 different symbols to construct expressions, allowing the researchers to have something resembling a rudimentary conversation with their charges. The more accomplished of these animals are alleged to have “vocabularies” of several thousand words or phrases, equivalent to the speech abilities of a child of about 3 or 4 years of age. Given the challenge this work presents to some long-held beliefs about the superiority and evolutionary independence of human language, it is not surprising that these claims continue to stir up a great deal of debate and are not universally accepted. Nonetheless, the issues raised certainly deserve careful consideration by anyone interested in human language abilities, and how our remarkable symbolic skills may have evolved from the capabilities of our ancestors. The pressure for the evolution of some form of symbolic communication in great apes seems clear enough. Ethologists studying chimpanzees in the wild have described extensive social communication based on gestures, the manipulation of objects, and facial expressions. This intricate social intercourse is likely to be the antecedent of human language; one need only think of the importance of gestures and facial expressions as ancillary aspects of our own speech to appreciate this point. (The sign language studies described later in the text are also pertinent here.) 
Whether the regions of the temporal, parietal, and frontal cortices that support human language also serve these symbolic functions in the brains of great apes (figure B) is an important question that remains to be tackled. Although much uncertainty remains, only someone given to extraordinary anthropocentrism would, in light of this evidence, continue to argue that symbolic communication is a uniquely human attribute. Even in many species that are quite distant from humans in evolutionary terms (frogs, for example), the brain is left-lateralized for vocalization. In chimpanzees, lateralization of the anatomical areas corresponding to Broca's and Wernicke's areas already exists, even though it does not yet correspond to a language function. And like the majority of humans, the majority of chimpanzees use their right hand in preference to their left. These asymmetries in the other primates represent persuasive evidence of the ancient phylogenetic origin of lateralization in the human brain. The expansion of the prefrontal cortex in humans might in part reflect its role in the production of language.

Other Animals Appear to Lack Homologs of Human Language, but Language May Nonetheless Have Evolved by Darwinian Natural Selection

One might think that if language evolved by gradual Darwinian natural selection it must have a precursor in other animals. But the natural communication systems of nonhuman animals are strikingly unlike human language. They are based on one of three designs: a finite repertoire of calls (e.g., one to warn of predators, one to announce a claim to territory), a continuous analog signal that registers the magnitude of some condition (e.g., the distance a bee dances signals the distance of a food source), or sequences of randomly ordered responses that serve as variations on a theme (as in birdsong). There is no hint of the discrete, infinite combinatorial system of meaningful elements seen in human language.
Some animals can be trained to mimic certain aspects of human language in artificial settings. In several famous and controversial demonstrations, chimpanzees and gorillas have been taught to use some hand signs based on American Sign Language (though never its grammar), to manipulate colored switches or tokens, and to carry out some simple spoken commands. Parrots and dolphins have also learned to recognize or produce ordered sequences of sounds or other signals. Such studies have taught us much about the cognitive categories of nonhuman species, but the relevance of these animal behaviors to human language is questionable. It is not a matter of whether one wants to call the trained artificial systems “language.” That is not a scientific question but a matter of definition: how far are we willing to stretch the meaning of the word language? The scientific question, and the only one relevant to whether these trained behaviors can serve as an animal model for language, is whether the abilities are homologous to human language, that is, whether the two cognitive systems show the same basic organization owing to descent from a single system in a common ancestor. For example, biologists do not debate whether the wing-like structures of gliding rodents (flying squirrels) may be called “genuine wings” or something else (an uninformative question of definition). These structures are not homologous to the wings of bats because they have a different anatomical plan, reflecting a different evolutionary history. Bat wings are modifications of the hands of a common mammalian ancestor; the “wings” of a flying squirrel are folds of skin stretched between its fore- and hindlimbs. The two structures are merely similar in function, or analogous, but not homologous. Though the artificial signaling systems taught to animals have some analogies to human language (e.g., they are used in communication and sometimes involve combining basic signals), it seems unlikely that they are homologous.
Chimpanzees require extensive teaching contrived by another species (humans) to acquire rudimentary abilities, mostly limited to a small number of signs, strung together in repetitive, quasi-random sequences, used with the intent of requesting food or tickling. The core design of human language, particularly the formation of words, phrases, and sentences according to a single plan that supports both production and comprehension, fails to emerge as the chimps interact with each other and, as far as we know, cannot be taught to them. All this contrasts sharply with human children, who learn thousands of words spontaneously; combine them in novel structured sequences in which every word has a role; respect the word order, case marking, and agreement of the adult language; use sentences for a variety of nonutilitarian purposes, such as commenting on interesting objects; and creatively interpret the grammatical complexity of the input they receive (reflected in their systematic errors and in their creation of novel sign languages and creoles). Nevertheless, this lack of homology does not cast doubt on a Darwinian, gradualist account of language evolution. Humans did not evolve directly from chimpanzees; both evolved from a common ancestor, probably around 6 to 8 million years ago. This time span leaves about 300,000 generations in which language could have evolved gradually in the lineage leading to humans after it split off from the lineage leading to chimpanzees. Presumably language evolved in the human lineage because of two related adaptations in our ancestors: they developed technology to exploit the local environment within their lifetimes, and they engaged in extensive cooperation. These preexisting features of the hominid lifestyle may have set the stage for language, because language would have allowed evolving hominids to benefit by sharing hard-won knowledge with their kin and exchanging it with their neighbors. The specific origins of language are obscure.
Homo habilis, which lived about 2.5 to 2 million years ago, left behind caches of stone tools that may have marked home bases or butchering stations. The sharing of the skills required to make the tools, and the social coordination required to share the caches, may have made it necessary for H. habilis to put simple language to use, though this is speculation. The skulls of H. habilis bear faint imprints of the gyral patterns of their brains, and some of the areas involved in language in modern humans (Broca's area and the supramarginal and angular gyri) are visible and are larger in the left hemisphere. We cannot be sure, of course, that these areas were used for language, since they have homologs even in monkeys, but the homologs play no role in the monkeys' vocal communication. Homo erectus, which spread from Africa across much of the Old World from 1.5 million to 500,000 years ago, controlled fire and used a stereotyped kind of stone hand ax. It is easy to imagine some form of language contributing to such successes, though again we cannot be sure. The first Homo sapiens, thought to have appeared about 200,000 years ago and to have moved out of Africa 100,000 years ago, had skulls similar to ours and much more elegant and complex tools showing considerable regional variation. They almost certainly had language, given that their anatomy suggests they were biologically very similar to modern humans and that all modern humans have language. The major human races probably diverged about 50,000 years ago, which puts a late boundary on the emergence of language, because all modern races have indistinguishable language abilities.
LEARNING TO SPEAK

Unlike learning to walk, where you can name the exact date when a baby first lets go of its parent's hand and walks on its own, learning to speak a language is a gradual process that spans a number of years. But it still represents a kind of minor miracle as the child's speech becomes richer and more articulate every day. To understand how a child starts learning to speak, we must go back and see how its senses begin to develop during the first weeks of life in the womb, because it is through these senses that the child constructs its first mental representations of the world, which it will subsequently refine by means of language. Once the child is born, its memory develops and helps it to move beyond the “here and now” limits of the sensory world. The child discovers memories of pleasant and unpleasant events and learns how to act on other people's minds to reinvoke the pleasant sensations while avoiding the unpleasant ones. The first way that babies do this is through their repertoire of facial expressions, cries, and babbling. These are called the prelinguistic stages of communication, and they correspond roughly to the first year of postnatal life. Though not all children go through the same stages at the same ages, scientists have been able to determine some approximate ages for various milestones in language development. For the first two months after birth, babies make only reflexive or quasi-reflexive vocalizations, a mixture of cries and vegetative sounds (yawns, sighs, etc.). At around 3 months, the baby's vocal output is best described as babbling: sounds produced in no specific way. The child thus produces its first rudimentary syllables at just about the same time as its first smile, which is the first sign of social communication. From 3 to 8 months, the baby's babbling evolves.
The baby starts playing with its voice, producing sounds that are very high-pitched, very low-pitched, very loud, or very soft. Between 5 and 10 months, what is known as “canonical babbling” first appears, marking the culmination of prelinguistic development. The child now has the ingredients for future structured language: well-formed strings of consonant/vowel syllables, such as /bababa/, /mamama/, /papapa/, /tabada/, and so on. At age 6 to 8 months, babies also begin to acquire the elements of prosody (melody and rhythm) specific to the language they hear spoken around them. The consonants and vowels of their canonical babbling then begin to reflect certain specific features of this language. It is at this age, for example, that Japanese babies stop being able to distinguish the sounds of “r” and “l”. Between 7 and 12 months, babies begin to understand simple, familiar orders accompanied by gestures. It is also now that “mixed babbling” begins, as babies begin to pronounce actual words in the midst of their babbling. At around 11 to 13 months, all of the sounds that children produce belong to their mother tongue. They make increasingly frequent use of gestures and changes in intonation to impart meaning to “proto-words”. Gradually, these gestures supporting proto-words give way to auditory labels that other people understand: that is, to actual words. Just before the child says its first words, however, it must pass an important milestone: pointing with its finger. Until the age of about 10 months, a baby that is stuck in its high chair and wants an object it can see but cannot reach will express this desire by gesturing with its arm, palm open and downward, displaying great agitation, making intense vocalizations, and looking back and forth between the object and its mother. But between 11 and 13 months, the baby's attitude changes radically as it manages to point with its index finger to identify the object of its desire.
This simple gesture is immensely powerful, because it actually plants an idea in the other person's mental world. In fact, when a baby starts pointing with its finger, it has understood the principle of speech; all it has to do now is learn how to make certain “auditory gestures” with its mouth and tongue instead of pointing with its finger. Pointing with one's index finger thus seems to be a mandatory step toward speaking one's first word (even though not all children who learn to point necessarily go on to develop language). When children are learning to speak, they do not immediately master all the subtle articulatory movements involved in making the sounds of their language. Thus, until at least age 5, it is quite common and entirely normal for children to say things like “wowwipop” (lollipop), “betht” (best), and “twees” (trees). What are the relative roles of heredity and environment in the acquisition of language? First of all, it seems obvious that language is not completely genetic. Human beings speak a great many different languages, and young children can learn any of them easily if exposed to them early enough in life. But the opposite is just as true: children cannot learn any language unless they are exposed to it during a specific critical period that is genetically determined. There are several other universal characteristics that are inherent in language or in language learning. Consequently, as is so often the case with human behaviours, the true nature of language is a combination of nature and nurture.

Around the end of their first year of life, children realize that they have their own inner psychic landscapes and that they can share them with other people.
At that point children become part of the world of intersubjectivity, in which they no longer respond solely to stimuli from the inside, such as hunger, or from the outside, such as their parents' smiles, but also to their own conception of other people's mental worlds. Children at this stage have understood that words are not used just to produce a pleasant flow of sounds; they are actually symbols used to designate things, often things that are absent. Thus children are no longer stuck with tangible reality. They can form their own representation of the world. This is the psychic context in which children speak their first words. Their very first words refer to the people on whom they imprint (Mommy, Daddy, Grandma, etc.). Next come words about the objects in their daily lives. Only after that come words about objects that are absent both to the children themselves and to other people. It is around the age of 10 months that a baby usually says its first word, usually “mama” or “dada”, scarcely distinguishable from the babble around it. A small word for an adult, but a considerable achievement for the baby, who has come quite a long way since the moment its parents' sperm and egg cells met. At age 1, the baby knows a handful of words, and at 18 months, from 30 to 50. Of course, every child develops its vocabulary at its own pace, with the process generally accelerating so that the child knows over 100 words at 21 months and over 200 at 2 years.

[Figure: Approximate number of words in a child's vocabulary from birth to age 3. Note the exponential learning curve during these first years of life.]

During this period of lexical development, children express themselves with individual words in isolation or with groups of two or three words. These first words are often actually sentences contracted into a single word, because they refer not solely to an object but to an action or a situation.
To interpret these word/sentences properly, you have to know the context in which they are being used. But gradually, children’s use of words becomes freed from the present context. Children develop the concept of object permanence and become able to form a mental image of something without having it there in front of them. At age 2, children have a nearly complete understanding of the language they hear, and when they want something, they ask for it by formulating a request orally. Children’s first two- or three-word sentences begin to follow rules of syntax, but do not include pronouns or articles, and their use of verbs is simplified (French-speaking children, for example, still use only the infinitive form). From age 2 to age 5, children master the syntax of their mother tongue. They do so without ever learning the rules explicitly, but simply through exposure to the regular structures in other people’s speech. One proof of this process is that the errors that children make at this stage are very regular as well. For example, having observed that most English verbs form the past tense through the addition of the sound “-ed”, a child might say “I goed” instead of “I went”. At about age 3, children’s distortions of words have disappeared almost completely, and the basic subject-verb-object syntactic structure is in place. Their vocabulary now includes nearly 1000 words, and they have mastered the use of the pronoun “I”. Children at this age love listening to stories and asking questions and are starting to tell about things that they have seen or done. At about age 4, children’s words come in a torrent, composed largely of incessant questions. Children can now talk about the concept of time (yesterday, today, and tomorrow), and they make more and more use of prepositions. Thus, at age 4, the primary components of language are normally in place, and so it is at this age that specific language disorders can be detected.
At age 5, relative pronouns and conjunctions appear. Children can conjugate verbs and in general handle language more subtly, even though some minor imperfections persist. Children also learn to say things in a way that is more appropriate to the context. They acquire this ability as they gain some distance from their own perceptions and realize that other people do not necessarily see the world the same way they do. At age 6, children use more and more nouns, verbs, and adjectives. Their vocabulary now totals over 2500 words. Despite some variations from child to child, the average age at which these various language abilities are acquired and the sequence in which they are acquired remain constant from one culture to the next. Another thing common to all cultures is that the ability to learn another language diminishes considerably after puberty. When a child begins to talk, its conversation is directed at other people, of course, but also at itself. Piaget and Vygotsky called this second type of language “egocentric discourse”, and children appear to use it to “think out loud”. It is a sort of monologue in which children seem to explain their actions to themselves, often after having completed them. This egocentric discourse, however, seems to gradually become internalized when the child is 3 to 5 years old. Once this discourse has become completely internal, children have learned not only how to talk with words, but also how to think with them. A number of studies have attempted to establish a connection between gestures and words (studies involving mirror neurons, for example). Another study has shown that performative gestures demonstrating the transition from intention to attention can be observed in babies 9 to 13 months old. Starting at 14 months of age, articulate language, for which the gestures laid the groundwork, takes over from them. There thus appears to be a continuity between gestures and spoken words.
Suppose a mother gives her child a new pair of shoes, and then somebody telephones the home, speaks to the child, and asks him what he just got. The child’s reaction will depend on his age. At age 3, all children will try to show their new shoes to the telephone. At age 4, all or almost all children will use words instead of the object itself. Children will often go on to elaborate, perhaps saying that the shoes are a pretty colour, or that it was nice of Mommy to buy them, and so on. In this way, children begin to use words to act on the mental images and the inner world of the person with whom they are talking. Children acquire the sense of being themselves at the age of about 5 months, well before they learn to speak. Their emerging use of language, however, depends on two things: their neurological maturation, and their cultural and linguistic environment. For example, when a child is 3 or 4 years old, the word “dead” means what happens when he points his finger at an adult and says “Bang! Bang!”: the adult makes a face, falls on the floor, and stops moving. By the time this child is 5 or 6, though, his nervous system has matured enough for him to begin to imagine people and things being very distant physically. “Dead” now means a state of someone’s being very far away somewhere, for a long, long time. Somewhere between the ages of 7 and 10, connections form in the child’s brain between the prefrontal lobe, which is responsible for anticipation, and the limbic system, which manages memories. Now the child can understand the concept of time, and the word “dead” refers to something absolute. Thus it takes seven to 10 years for this word to acquire its “adult” meaning. This maturation of language is also affected by genetic and environmental constraints. For instance, a child who loses her mother at an early age may experience an accelerated maturation of the word “dead” and understand it completely by age 4 or 5. 
Conversely, children who grow up in an environment that is so safe and free of any frightening ideas as to be almost stifling may remain attached to a very childish conception of the word “dead” for a longer time. Thus language maturation depends on both endogenous and exogenous factors.

LEARNING TO SPEAK A FIRST AND SECOND LANGUAGE

Research by authors such as the American linguist Noam Chomsky has shown that for human language to be as sophisticated as it is, the brain must contain some mechanisms that are partly preprogrammed for this purpose. Babies are born with a language-acquisition faculty that lets them master thousands of words and complex rules of grammar in just a few years. This is not the case for our closest primate cousins, who have never succeeded in learning more than a few hundred symbols and a few simple sentences. Until babies are about one year old, they cannot utter anything but babble. This limitation is due to the immaturity of their temporal lobe, which includes Wernicke’s area. This area, by associating words with their meanings, is directly involved in the memorization of the signs used in language. The acquisition of vocabulary during the first years of life seems to closely track the maturation of Wernicke’s area, which eventually enables adults to maintain a vocabulary of some 50 000 to 250 000 words. Our ability to retain such an impressive number of words involves two different types of memory, depending on whether the language in question is our mother tongue or a second language that we learned later in life. To learn our mother tongue, we rely on procedural memory (also known as implicit memory), the same kind that is involved when we learn skills that become automatic, like riding a bike or tying our shoelaces.
Because we are so immersed in our mother tongue, we end up using it just as automatically, without even being aware of the rules that govern it. In contrast, to learn a second language, we must usually make a conscious effort to store the vocabulary and the grammatical rules of this language in our memories. When learned in this way, a second language depends on declarative memory (also known as explicit memory). Sometimes, however, people learn a second language “in the street” without having to pay much attention. In this case, the learning process is much the same as it was for their first language and, as in that case, is handled by procedural memory. In fact, the more the method used to teach a second language is based on communicating and practicing, the more the students who learn it will rely on procedural memory when using it. Conversely, the more formal and systematic the method used to teach the second language, the more the students will rely on declarative memory. Learning and using a language may thus involve applying either implicit linguistic skills or explicit metalinguistic knowledge. Because each of these skill sets is supported by different structures in the brain (see sidebar), language disorders can affect people’s first languages and second languages selectively. Following brain injuries, bilingual people may selectively lose the use of one of their two languages. But the language that they retain is not necessarily their mother tongue, nor is it necessarily the language that they spoke most fluently before their accident. Some types of brain damage can make people amnesic without affecting their ability to speak their mother tongue (which depends on procedural memory). But other types can cause serious problems in someone’s automatic use of speech without affecting their ability to remember a language that they learned consciously (using declarative memory). Other observations have been made that reflect this same distinction. 
For example, some people who have aphasia seem to recover their second language more successfully than their first, whereas some people with amnesia lose access to their second language completely. Alzheimer’s patients retain those language functions based on procedural memory but lose those, such as vocabulary, that are based on declarative memory. But even in one’s mother tongue, not all aspects of language rely on procedural memory. It is believed, for example, that the lexicon for a person’s first language, which consists of the association of groups of phonemes with meanings, may have close connections with declarative memory. Vocabulary thus seems to constitute a special aspect of language: the great apes are capable of learning a large number of symbols related to words; “wild children” who are deprived of language at the start of their lives can also learn many words, but comparatively little syntax; and people who have anterograde amnesia, though they can acquire new motor or cognitive skills, cannot learn new words. While the lexicon for a person’s first language depends on declarative memory, which involves the parietal and temporal lobes, the grammar of this language depends on procedural memory, which involves the frontal lobes and the basal ganglia. Procedural memory is used for unconscious learning of motor and cognitive skills that involve chronological sequences of operations. This description clearly applies to grammatical operations, which consist in sequencing the lexical elements of a language in real time. Broca’s area, the supplementary motor area, and the premotor cortex of the left hemisphere, all of which participate in the production of language, are activated when you repeat words mentally without saying them out loud. In this way, you continually refresh your “phonological buffer” and thus increase the time that you can hold this information in your verbal memory.
Thus these frontal areas of the left hemisphere are involved in actively maintaining information in working memory. Some studies of children with reading problems have shown that these problems were actually due to difficulties in understanding syntax, which were in turn caused by deficiencies in the children’s working memory. It is also working memory that lets you understand especially long or complex sentences such as “The clown who is carrying the little boy kisses the little girl.” More specifically, working memory lets you keep this verbal information in mind long enough for the sequence of words in the sentence to assume a meaning. When a child is one year old, the temporal lobe that includes Wernicke’s area is still very immature and has scarcely more than 50% of the surface area that it will have when the child becomes an adult. Moreover, the central part of this lobe, which in adults is associated with lexical storage, is scarcely 20% of its adult size. The same thing goes for the inferior parietal lobule, which is connected to Wernicke’s area and enables words to be assigned to visual, auditory, and somatosensory events. The neurons of this lobule show relatively little myelinization during the first year of life, and its surface is less than 40% of an adult’s. By the age of about 20 months, when the child can speak nearly 100 words and understand twice as many, the surface of the temporal lobe has grown to about 65% of an adult’s. At age 30 months, when the child has mastered about 500 words, its temporal lobe is 85% of the size of an adult’s. The maturation of Wernicke’s area thus seems to be one factor that contributes to the growth of a child’s lexical capacities. Procedural (implicit) memory for language depends on the integrity of the cerebellum, the corpus striatum, and other basal ganglia, as well as on a particular area in the left perisylvian cortex.
Implicit language skills also seem to call on the limbic system, which governs emotions and motivations. Declarative (explicit) memory, on the other hand, depends on the integrity of the hippocampus, the medial temporal lobe, and large areas of the associative cortex in both hemispheres. The neuronal phenomenon of the activation threshold is not associated with any particular system of the brain but affects all the higher functions, including language skills. The neural substrate of any mental representation requires a certain frequency of nerve impulses to reach its activation threshold, that is, to generate action potentials itself. Whenever someone uses a particular word or syntactic construction, their activation threshold for it is lowered and its subsequent reuse is facilitated. Conversely, if a neural circuit remains inactive, its activation threshold gradually increases. The same effects are also seen at the molecular level, in two phenomena that play a role in the activation threshold: long-term potentiation (LTP) and long-term depression (LTD).

Complex Language Develops Spontaneously in Children

According to Darwin, “Man has an instinctive tendency to speak, as we see in the babble of our young children; while no child has an instinctive tendency to brew, bake, or write.” In the first year of life children work on sounds. They begin to make language-like sounds at 5-7 months, babble in well-formed syllables at 7-8 months, and gibber in sentence-like streams by the first year. In their first few months, they can discriminate speech sounds, including ones that are not used in their parents' language and that their parents do not normally discriminate (for example, Japanese babies can discriminate r and l). By 10 months they discriminate phonemes much as their parents do.
This tuning of speech perception to the specific ambient language precedes the first words, so it must be based on the infant performing sophisticated acoustic analyses, rather than on the infant correlating the sounds of words with their meanings. A child's first words are spoken around the time of his or her first birthday, and the rate of word learning increases suddenly around age 18 months, which is also the age at which children first string words into combinations such as More outside and Allgone doggie. Children at age 2 begin to speak in rich phrase structures and master the grammatical vocabulary of their language (articles, prepositions, etc.). By age 3, children use grammatical words correctly most of the time, use most of the constructions of the spoken language appropriately, and in general become fluent and expressive conversational partners. Although children make many errors, their errors occur in a minority of the words they use and are highly systematic. Indeed, this fact confirms what we might have guessed from the fact that children are so fluent and creative: children must be engaging in sophisticated grammatical analysis of their parents' speech rather than merely imitating them. Take, for example, this error of a 3-year-old: Mommy, why did he dis it appear? First, it shows that children do not merely record stretches of sound but are hyperalert for word boundaries. This child misanalyzed disappear as dis appear. Second, this error shows that children work to classify words in grammatical categories. The child's newly extracted appear is being used not as the verb an adult uses but as a unit of speech that the child has hypothesized from the context, namely, a particle (examples of particles are the second words in blow away and take apart). Third, it shows how children look for grammatical subcategories that determine the interaction between words and grammar.
The child creatively converted an intransitive verb meaning “vanish” to a transitive verb meaning “cause to vanish,” in conformity with a widespread pattern in English grammar: The ice melted/She melted the ice; The ball bounced/She bounced the ball, and so on.

Languages Are Learned and the Capacity to Learn Language Is Innate

Language, like other cognitive abilities, cannot be attributed entirely to either innate structure or learning. Clearly, learning plays a crucial role: any child will acquire any language he or she is exposed to. Likewise, “wild children” who are abandoned by parents to survive in forests, or who are raised in silent environments by deranged parents, are always mute. But learning cannot happen without some innate mechanism that does the learning, and other species exposed to the same input as a child fail to learn at all. In 1959 Chomsky proposed a then-revolutionary hypothesis that children possess innate neural circuitry specifically dedicated to the acquisition of language. The hypothesis is still controversial; some psychologists and linguists believe that the innate capacity for language is merely a general capacity to learn patterns, not a specific system for language, and that the brain areas subserving these skills have no properties that are tailored specifically to the design of language. Several kinds of evidence have been adduced to support Chomsky's hypothesis. First, there are the gross facts of the distribution of language across the species. People in technologically primitive cultures, helpless 3-year-olds, and poorly educated adults in our culture all master complex grammar when they first acquire language, and they do so without special training sequences or feedback. Indeed, when children in a social environment are deprived of a bona fide language, they create one of their own; this is how the sign languages of the deaf arise.
Similarly, in the eighteenth and nineteenth centuries, slave children living on plantations and children in other mixed-culture societies who were exposed to crude pidgin languages (choppy strings of words) used by their parents developed full-fledged languages, called creoles, from the pidgin languages. In all these cases the languages children master or create follow the universal design of language described earlier in this chapter. In addition, language and more general intelligence are dissociated from one another in several kinds of pathological conditions. Children with a heritable syndrome called specific language impairment can have high intelligence, intact hearing, and normal social skills but have long-lasting difficulty in speaking and understanding according to the grammatical rules of their language. Conversely, children with certain kinds of mental retardation can express their confabulated or childlike thoughts in fluent, perfectly grammatical language and score at normal levels on tests of comprehension of complex sentences. These dissociations, in which complex language abilities are preserved despite compromised intelligence, can appear in people who have hydrocephalus caused by spina bifida and in people with Williams syndrome, a form of retardation associated with a defective stretch of chromosome 7. Finally, grammar has a partly quirky design that cuts across the categories underlying concepts and reasoning. Consider the statements It is raining and Pat is running. Both “it” and “Pat” have the same grammatical function; both words serve as subjects of the sentence and both can be inverted with the verb to form a question (Is it raining? Is Pat running?) despite the fact that “it” is a grammatical placeholder without cognitive content. Though the sentence The child seems to be sleepy can be shortened to The child seems sleepy, the nearly identical sentence The child seems to be sleeping cannot be shortened to The child seems sleeping.
Subtleties such as these, which emerge in all speakers without specially tailored training sequences or systematic feedback, are consequences of the design of grammar; they cannot be predicted from principles governing what makes sense or what is easy or difficult to understand. In sum, children seem to acquire language using abilities that are more specific than general intelligence but not so specific as the capacity to speak a given language—English, Japanese, and so on. What, then, is innate? Presumably, it is some kind of neural system that analyzes communicative signals from other people, not as arbitrary sequences of sound or behavior but according to the design of language. By following this design a child learns a lexicon of bidirectional arbitrary pairings of sound and meaning and several kinds of grammatical rules. One kind assembles phonological elements into words; other kinds assemble words into bigger words, phrases, and sentences according to the principles of phrase structure, grammatical categories and subcategories, case and agreement, anaphora, long-distance dependencies, and movement transformations. Presumably all these abilities come from adaptations of the human brain that arose in the course of human evolution.
Figure 24.2. A critical period for learning language is shown by the decline in language ability (fluency) of non-native speakers of English as a function of their age upon arrival in the United States. The ability to score well on tests of English grammar and vocabulary declines from approximately age 7 onward. (After Johnson and Newport, 1989.)

The Development of Language: A Critical Period in Humans

Many animals communicate by means of sound, and some (humans and songbirds are examples) learn these vocalizations. There are, in fact, provocative similarities in the development of human language and birdsong. Most animal vocalizations, like alarm calls in mammals and birds, are innate, and require no experience to be correctly produced. For example, quails raised in isolation or deafened at birth so that they never hear conspecifics nonetheless produce the full repertoire of species-specific vocalizations. In contrast, humans obviously require extensive postnatal experience to produce and decode speech sounds that are the basis of language. Importantly, this linguistic experience, to be effective, must occur in early life. The requirement for hearing and practicing during a critical period is apparent in studies of language acquisition in congenitally deaf children. Whereas most babies begin producing speechlike sounds at about 7 months (babbling), congenitally deaf infants show obvious deficits in their early vocalizations, and such individuals fail to develop language if not provided with an alternative form of symbolic expression (such as sign language). If, however, these deaf children are exposed to sign language at an early age (from approximately six months onward), they begin to “babble” with their hands just as a hearing infant babbles audibly. This suggests that, regardless of the modality, early experience shapes language behavior (Figure 24.1).
Children who have acquired speech but subsequently lose their hearing before puberty also suffer a substantial decline in spoken language, presumably because they are unable to hear themselves talk and thus lose the opportunity to refine their speech by auditory feedback. Examples of pathological situations in which normal children were never exposed to a significant amount of language make much the same point. In one well-documented case, a girl was raised by deranged parents until the age of 13 under conditions of almost total language deprivation. Despite intense subsequent training, she never learned more than a rudimentary level of communication. This and other examples of so-called “feral children” starkly define the importance of early experience. In contrast to the devastating effects of deprivation on children, adults retain their ability to speak and comprehend language even if decades pass without exposure or speaking. In short, the normal acquisition of human speech is subject to a critical period: The process is sensitive to experience or deprivation during a restricted period of life (before puberty) and is refractory to similar experience or deprivations in adulthood. On a more subtle level, the phonetic structure of the language an individual hears during early life shapes both the perception and production of speech. Many of the thousands of human languages and dialects use appreciably different repertoires of speech elements called phonemes to produce spoken words (examples are the phonemes “ba” and “pa” in English). Very young human infants can perceive and discriminate between differences in all human speech sounds, and are not innately biased towards the phonemes characteristic of any particular language. However, this universal appreciation does not persist. For example, adult Japanese speakers cannot reliably distinguish between the /r/ and /l/ sounds in English, presumably because this phonemic distinction is not present in Japanese. 
Nonetheless, 4-month-old Japanese infants can make this discrimination as reliably as 4-month-olds raised in English-speaking households (as indicated by increased suckling frequency or head turning in the presence of a novel stimulus). By 6 months of age, however, infants show preferences for phonemes in their native language over those in foreign languages, and by the end of their first year no longer respond to phonetic elements peculiar to non-native languages. The ability to perceive these phonemic contrasts evidently persists for several more years, as evidenced by the fact that children can learn to speak a second language without accent and with fluent grammar until about age 7 or 8. After this age, however, performance gradually declines no matter what the extent of practice or exposure (Figure 24.2). A number of changes in the developing brain could explain these observations. One possibility is that experience acts selectively to preserve the circuits in the brain that perceive phonemes and phonetic distinctions. The absence of exposure to non-native phonemes would then result in a gradual atrophy of the connections representing those sounds, accompanied by a declining ability to distinguish between them. In this formulation, circuits that are used are retained, whereas those that are unused get weaker (and eventually disappear). Alternatively, experience could promote the growth of rudimentary circuitry pertinent to the experienced sounds. The reality, however, is considerably more complex than either of these scenarios suggests. Experiments by Patricia Kuhl and her colleagues have demonstrated that as a second language is acquired, the brain gradually groups sounds according to their similarity with phonemes in the native language.
For example, when asked to categorize a continuous spectrum of artificial phonemes between /r/ and /l/, native English speakers, but not Japanese speakers, tend to perceive sounds as all sounding like either /r/ or /l/, a phenomenon that Kuhl has likened to a “perceptual magnet.” Related but varying sounds (defined by their audiographic spectrum) are evidently grouped together and eventually perceived as representing the same phoneme. Without ongoing experience during the critical period, this process fails to occur. Interestingly, the “baby-talk” or “parentese” used by adults speaking to young children actually emphasizes these phonetic distinctions compared to normal speech among adults. Thus, learning language during the critical period for its development entails an amplification and reshaping of innate biases by appropriate postnatal experience.
Brain Ventricles and the Concept of Mind

Nemesius (circa 320), bishop of Emesa, a city in Syria, embraced Galen’s ideas and located the intellectual faculties in the cerebral ventricles. In his book “On the Nature of Man”, a treatise of physiology modeled on Greek medicine, he argued that the soul could not be localized, but the functions of the mind could. The cerebral ventricles were supposed to be responsible for mental operations, from sensation to memorization. The first pair of ventricles were the seat of the “common senses”; they were thought to analyze the information originating in the sense organs. The resultant images were carried to the middle ventricle, the seat of reason, thinking, and wisdom. Then the last ventricle, the seat of memory, came into action. Up to the Middle Ages, figures depicting the brain would show the ventricles in great detail. The idea that spirits wandered in the ventricles, favored by the Church, prevailed up to the Renaissance. In a book published in the thirteenth century, named “On the Properties of Things”, a compilation made by Bartholomew the Englishman, it is stated that “the anterior cavity is soft and moist in order to facilitate association of sensual perceptions and imagination. The middle cell must also be warm, since thinking is a process of separation of pure from impure, comparable to digestion, and heat is known to be the main factor in digestion. The posterior cell, however, is a place for cold storage in which a cool and dry atmosphere must allow for the stocking of goods. That is why the cerebellum is harder, i.e. less medullary and airy, than the rest of the brain”.
Figure 1-1 According to the nineteenth-century doctrine of phrenology, complex traits such as combativeness, spirituality, hope, and conscientiousness are controlled by specific areas in the brain, which expand as the traits develop. This enlargement of local areas of the brain was thought to produce characteristic bumps and ridges on the overlying skull, from which an individual's character could be determined. This map, taken from a drawing of the early 1800s, purports to show 35 intellectual and emotional faculties in distinct areas of the skull and the cerebral cortex underneath.

In fact, by as early as the end of the eighteenth century the first attempts had been made to bring together biological and psychological concepts in the study of behavior. Franz Joseph Gall, a German physician and neuroanatomist, proposed three radical new ideas. First, he advocated that all behavior emanated from the brain. Second, he argued that particular regions of the cerebral cortex controlled specific functions. Gall asserted that the cerebral cortex did not act as a single organ but was divided into at least 35 organs (others were added later), each corresponding to a specific mental faculty. Even the most abstract of human behaviors, such as generosity, secretiveness, and religiosity, were assigned their spot in the brain. Third, Gall proposed that the center for each mental function grew with use, much as a muscle bulks up with exercise. As each center grew, it purportedly caused the overlying skull to bulge, creating a pattern of bumps and ridges on the skull that indicated which brain regions were most developed (Figure 1-1). Rather than looking within the brain, Gall sought to establish an anatomical basis for describing character traits by correlating the personality of individuals with the bumps on their skulls. His psychology, based on the distribution of bumps on the outside of the head, became known as phrenology.
The child prodigy Marie-Jean-Pierre Flourens received his medical degree at Montpellier at the age of 19. As a promising young physician, Flourens was asked to investigate Gall’s controversial views on cerebral localization. To test Gall’s assertions, Flourens developed ablation as a procedure to explore the workings of the brain. By removing anatomically defined areas of the brain of an animal and watching its behaviour, he thought he might localize certain functions. Flourens did not favour the idea of cerebral localization; he concluded that the brain functioned as a whole, and thus arose the concept of ‘cerebral equipotentiality’ . This work culminated in his 1824 Recherches expérimentales sur les propriétés et les fonctions du système nerveux. His techniques were, however, crude and imperfect, and his experiments were mainly on birds. Much criticism and debate ensued. A gifted man, Flourens also advanced the physiology of the vestibular apparatus and described the anaesthetic properties of ether. In the late 1820s, Flourens subjected Gall's ideas to experimental analysis. By systematically removing Gall's functional centers from the brains of experimental animals, Flourens attempted to isolate the contributions of each “cerebral organ” to behavior. From these experiments he concluded that specific brain regions were not responsible for specific behaviors, but that all brain regions, especially the cerebral hemispheres of the forebrain, participated in every mental operation. Any part of the cerebral hemisphere, he proposed, was able to perform all the functions of the hemisphere. Injury to a specific area of the cerebral hemisphere would therefore affect all higher functions equally. 
In 1823 Flourens wrote: “All perceptions, all volitions occupy the same seat in these (cerebral) organs; the faculty of perceiving, of conceiving, of willing merely constitutes therefore a faculty which is essentially one.” The rapid acceptance of this belief (later called the aggregate-field view of the brain) was based only partly on Flourens's experimental work. It also represented a cultural reaction against the reductionist view that the human mind has a biological basis: the notion that there was no soul, that all mental processes could be reduced to actions within different regions of the brain.
The process of identifying the parts of the brain that are involved in language began in 1861, when Paul Broca, a French surgeon, examined the brain of a recently deceased patient who had had an unusual disorder. Though he had been able to understand spoken language and did not have any motor impairments of the mouth or tongue that might have affected his ability to speak, he could neither speak a complete sentence nor express his thoughts in writing. The only articulate sound he could make was the syllable “tan”, which had come to be used as his name. When Broca autopsied Tan’s brain, he found a sizable lesion in the left inferior frontal cortex. Subsequently, Broca studied eight other patients, all of whom had similar language deficits along with lesions in the frontal lobe of the left hemisphere. This led him to make his famous statement that “we speak with the left hemisphere” and to identify, for the first time, the existence of a “language centre” in the posterior portion of the frontal lobe of this hemisphere. Now known as Broca’s area, this was in fact the first area of the brain to be associated with a specific function—in this case, language. Broca Aphasia Results From a Large Frontal Lobe Lesion Broca aphasia is not a single entity. True persisting Broca aphasia is a syndrome resulting from damage to Broca's area (the left inferior frontal gyrus, which contains Brodmann's areas 44 and 45); surrounding frontal fields (the external aspect of Brodmann's area 6, and areas 8, 9, 10, and 46); the underlying white matter, insula, and basal ganglia (Figure 59-2); and a small portion of the anterior superior temporal gyrus. The patient's speech is labored and slow, articulation is impaired, and the melodic intonation of normal speech is lacking (Table 59-2). Yet patients have considerable success at verbal communication even when their words are difficult to understand because the selection of words, especially nouns, is often correct. 
Verbs, as well as grammatical words such as conjunctions, are less well selected and may be missing altogether. Another major sign of Broca aphasia is a defect in the ability to repeat complex sentences spoken by the examiner. In general, patients with Broca aphasia appear to comprehend the words and sentences they hear, but the comprehension is only partial, as we shall see below. When damage is restricted to Broca's area alone, or to its subjacent white matter, a condition now known as Broca area aphasia , the patient develops a milder and transient aphasia rather than true Broca aphasia.
Ten years later, Carl Wernicke, a German neurologist, discovered another part of the brain, this one involved in understanding language, in the posterior portion of the left temporal lobe. People who had a lesion at this location could speak, but their speech was often incoherent and made no sense. Wernicke's observations have been confirmed many times since. Neuroscientists now agree that running around the lateral sulcus (also known as the fissure of Sylvius) in the left hemisphere of the brain, there is a sort of neural loop that is involved both in understanding and in producing spoken language. At the frontal end of this loop lies Broca's area , which is usually associated with the production of language, or language outputs . At the other end (more specifically, in the superior posterior temporal lobe), lies Wernicke's area , which is associated with the processing of words that we hear being spoken, or language inputs. Broca's area and Wernicke's area are connected by a large bundle of nerve fibres called the arcuate fasciculus .
The Geschwind Model To repeat a word that is heard , we can hypothesize that information must first get to the primary auditory cortex. From the primary auditory cortex, information is transmitted to the posterior speech area, including Wernicke's area. From Wernicke's area, information travels to Broca's area, then to the primary motor cortex to produce the utterance. To speak a word that is read , information must first get to the primary visual cortex. From the primary visual cortex, information is transmitted to the posterior speech area, including Wernicke's area (via the angular gyrus, which mediates between visual and auditory aspects of language). From Wernicke's area, information travels to Broca's area, then to the primary motor cortex. The primary difference, naturally, is that only in the case of reading is the visual cortex centrally involved. Further work with techniques such as fMRI is needed to explore the many unanswered questions regarding the parts of the brain that are involved in language processing, from perception and articulation to syntactic structure and lexical meaning.
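The serial routing just described can be summarized as two hypothesized processing chains. The short Python sketch below is purely illustrative: the station names follow the text, while the dictionary, function, and task labels are assumptions introduced for demonstration.

```python
# Hypothesized information flow in the classic Wernicke-Geschwind model,
# written as ordered lists of cortical "stations" (illustrative only).
ROUTES = {
    "repeat_heard_word": [
        "primary auditory cortex",
        "Wernicke's area",
        "Broca's area",
        "primary motor cortex",
    ],
    "speak_read_word": [
        "primary visual cortex",
        "angular gyrus",  # mediates between visual and auditory language
        "Wernicke's area",
        "Broca's area",
        "primary motor cortex",
    ],
}

def pathway(task: str) -> str:
    """Return the hypothesized processing stations, in order, for a task."""
    return " -> ".join(ROUTES[task])

print(pathway("repeat_heard_word"))
# primary auditory cortex -> Wernicke's area -> Broca's area -> primary motor cortex
```

Note that the two routes differ only in their first stations (auditory versus visual input, with the angular gyrus as a relay in reading), which is exactly the point the model makes.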
Figure 27.5. Cortical mapping of the language areas in the left cerebral cortex during neurosurgery. (A) Location of the classical language areas. (B) Evidence for the variability of language representation among individuals. This diagram summarizes data from 117 patients whose language areas were electrically mapped at the time of surgery. The number in each circle indicates the percentage of the patients who showed interference with language in response to stimulation at that site. Note also that many of the sites that elicited interference fall outside the classic language areas. (B after Ojemann et al., 1989.) Mapping Language Function The pioneering work of Broca and Wernicke, and later Geschwind and Sperry, clearly established differences in hemispheric function. Several techniques have since been developed that allow hemispheric attributes to be assessed in neurological patients with an intact corpus callosum, and in normal subjects. One method that has long been used for the assessment of language lateralization was devised in the 1960s by Juhn Wada at the Montreal Neurological Institute. In the so-called Wada test, a short-acting anesthetic (e.g., sodium amytal) is injected into the left carotid artery; this procedure transiently “anesthetizes” the left hemisphere and thus tests the functional capabilities of the affected half of the brain. If the left hemisphere is indeed dominant for language, then the patient becomes transiently aphasic while carrying out an ongoing verbal task like counting (the anesthetic is rapidly diluted by the circulation, but not before its local effects can be observed). 
Less invasive (but less definitive) ways to test the cognitive abilities of the two hemispheres include positron emission tomography, functional magnetic resonance imaging (see Box C in Chapter 1 ), and the sort of tachistoscopic presentation used so effectively by Sperry and his colleagues (even when the hemispheres are normally connected, subjects show delayed verbal responses and other differences when the right hemisphere receives the instruction). These techniques have all confirmed the hemispheric lateralization of language functions. More importantly, they have provided valuable diagnostic tools to determine in preparation for neurosurgery which hemisphere is “eloquent”: although most individuals have the major language functions in the left hemisphere, a few—about 3% of the population—do not (the latter are much more often left-handed; see Box C ). Once the appropriate hemisphere is known, the neurosurgeon can map language functions more definitively by electrical stimulation of the cortex during the surgery. By the 1930s, the neurosurgeon Wilder Penfield and his colleagues at the Montreal Neurological Institute had already carried out a more detailed localization of cortical capacities in a large number of patients (see Chapter 9 ). Penfield used electrical mapping techniques adapted from neurophysiological work in animals to delineate the language areas of the cortex prior to the removal of brain tissue in the treatment of tumors or epilepsy. Such intraoperative mapping guaranteed that the cure would not be worse than the disease and has been widely used ever since. As a result, considerable new information about language localization emerged. 
Penfield's observations, together with more recent studies performed by George Ojemann and his group at the University of Washington, have generally confirmed the conclusions inferred from postmortem correlations and other approaches: a large region of the perisylvian cortex of the left hemisphere is clearly involved in language production and comprehension ( Figure 27.5 ). A surprise, however, was the variability in language localization from patient to patient in such studies. Thus, Ojemann found that the brain regions involved in language are only approximately those indicated by most textbook treatments, and that their exact locations differ unpredictably among individuals. Indeed, bilingual patients do not necessarily use the same bit of cortex for storing the names of the same objects in two different languages! Moreover, although single neurons in the temporal cortex in and around Wernicke's area respond preferentially to spoken words, they do not show preferences for a particular word. Rather, a wide range of words can elicit a response in any given neuron. While performing brain operations without general anaesthesia, neurosurgeons such as Wilder Penfield and George Ojemann discovered that applying electrical stimuli directly to the areas of the cortex that are involved in language can disrupt language functions in certain very specific ways. If, for example, while a subject is speaking, a weak stimulus is applied to a part of the left hemisphere that corresponds to Broca’s area, the subject will begin to show hesitations in his or her speech. But if a strong enough stimulus is applied to the same area, the person becomes unable to speak at all. Curiously, stimulating sites that are close to Broca’s area sometimes produces different effects, while stimulating sites that are farther away may produce similar ones. 
This observation has led many researchers to assert that the areas of the brain that are involved in language are probably far more complex than the Wernicke-Geschwind model proposes.
Broca’s area is generally defined as comprising Brodmann areas 44 and 45, which lie anterior to the premotor cortex in the inferior posterior portion of the frontal lobe. Though both area 44 and area 45 contribute to verbal fluency, each seems to have a separate function, so that Broca’s area can be divided into two functional units. Area 44 (the posterior part of the inferior frontal gyrus) seems to be involved in phonological processing and in language production as such; this role would be facilitated by its position close to the motor centres for the mouth and the tongue. Area 45 (the anterior part of the inferior frontal gyrus) seems more involved in the semantic aspects of language. Thus, though it is not directly involved in accessing meaning itself, Broca’s area plays a role in verbal memory (selecting and manipulating semantic elements).
Wernicke’s area lies in the left temporal lobe and, like Broca’s area, is no longer regarded as a single, uniform anatomical/functional region of the brain. By analyzing data from numerous brain-imaging experiments, researchers have now distinguished three sub-areas within Wernicke’s area. The first responds to spoken words (including the individual’s own) and other sounds. The second responds only to words spoken by someone else but is also activated when the individual recalls a list of words. The third sub-area seems more closely associated with producing speech than with perceiving it. All of these findings are still compatible, however, with the general role of Wernicke’s area, which relates to the representation of phonetic sequences, regardless of whether the individual hears them, generates them himself or herself, or recalls them from memory. Wernicke’s area, of which the planum temporale is a key anatomical component, is located on the superior temporal gyrus, in the superior portion of Brodmann area 22. This is a strategic location, given the language functions that Wernicke’s area performs. It lies between the primary auditory cortex (Brodmann areas 41 and 42) and the inferior parietal lobule. This lobule is composed mainly of two distinct regions: caudally, the angular gyrus (area 39), which itself is bounded by the visual occipital areas (areas 17, 18, and 19), and dorsally, the supramarginal gyrus (area 40) which arches over the end of the lateral sulcus, adjacent to the inferior portion of the somatosensory cortex.
Figure 59-1 Language-related areas in the human brain. A. A highly simplified view of the primary language areas of the brain is indicated in this lateral view of the exterior surface of the left hemisphere. Broca's area (B) is adjacent to the region of the motor cortex (precentral gyrus) that controls the movements of facial expression, articulation, and phonation. Wernicke's area (W) lies in the posterior superior temporal lobe near the primary auditory cortex (superior temporal gyrus). Wernicke's and Broca's areas are joined by a bidirectional pathway, the arcuate fasciculus (brown arrow). These regions are part of a complex network of areas that all contribute to normal language processing. B. A more elaborate modern view of the language areas in a lateral view of the left hemisphere. These language areas contain three functional language systems: the implementation system, the mediational system, and the conceptual system. Two of these are illustrated here. The implementation system is made up of several regions located around the left sylvian fissure. It includes the classical language areas (B = Broca's area; W = Wernicke's area) and the adjoining supramarginal gyrus (Sm), angular gyrus (AG), auditory cortex (A), motor cortex (M), and somatosensory cortex (Ss). The posterior and anterior components of the implementation system, respectively Wernicke's area and Broca's area, are interconnected by the arcuate fasciculus. The mediational system surrounds the implementation system like a belt (blue areas). The regions identified so far are located in the left temporal pole (TP), left inferotemporal cortex (It), and left prefrontal cortex (Pf). The left basal ganglia complex (not pictured) is an integral part of the language implementation system. (Courtesy of H. Damasio.) 
The Study of Aphasia Led to the Discovery of Critical Brain Areas Related to Language The lack of a homolog to language in other species precludes the attempt to model language in animals, and our understanding of the neural basis of language must be pieced together from other sources. By far the most important source has been the study of language disorders, known as aphasias , which are caused by focal brain lesions that result, most frequently, from stroke or head injury. The early study of the aphasias paved the way for a number of important discoveries on the neural basis of language processing. First, it suggested that in a majority of individuals language depends principally on left hemisphere rather than on right hemisphere structures. All but a few right-handed individuals have left cerebral dominance for language, and so do most left-handed individuals. All told, about 96% of people depend on the left hemisphere for language processing related to grammar, the lexicon, phonemic assembly, and phonetic production. Even languages such as American Sign Language, which rely on visuomotor signs rather than auditory speech signs, depend primarily on the left hemisphere. Second, the early study of aphasia revealed that damage to each of two cortical areas, one in the lateral frontal region, the other in the posterior superior temporal lobe, was associated with a major and linguistically different profile of language impairment. The two cortical areas are Broca's area and Wernicke's area (Figure 59-1). These findings allowed neurologists to develop a model of language that has become known as the Wernicke-Geschwind model. The earliest version of this model had the following components. First, two areas of the brain, Wernicke's and Broca's, were assumed to have the burden of processing the acoustic images of words and the articulation of speech, respectively. 
Second, the arcuate fasciculus was thought to be a unidirectional pathway that brought information from Wernicke's area to Broca's area. Third, both Wernicke's and Broca's areas were presumed to interact with the polymodal association areas. After a spoken word was processed in the auditory pathways and the auditory signals reached Wernicke's area, the word's meaning was evoked when brain structures beyond Wernicke's area were activated. Similarly, nonverbal meanings were converted into acoustic images in Wernicke's area and turned into vocalizations after such images were transferred by the arcuate fasciculus into Broca's area. Finally, reading and writing ability both depended on Wernicke's and Broca's areas, which, in the case of reading, received visual input from left visual cortices and, in the case of writing, could produce motor output from Exner's area (in the premotor region above Broca's area). This general model formed the basis for a useful classification of the aphasias (Table 59-1) and provided a framework for the investigation of the neural basis of language processes. However, several decades of new lesion studies and research in psycholinguistics and experimental neuropsychology have shown that the general model has important limitations. In particular, much progress has come from the advent of new methodologies, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), event-related electrical potentials (ERP), and the direct recording of electrical potentials from the exposed cerebral cortex of patients undergoing surgery for the management of intractable epilepsy. Each of these techniques has contributed to a better definition of the areas important for the performance of language tasks. As a result of these advances, it now is apparent that the roles of Wernicke's and Broca's areas are not as clear as they first appeared. 
Similarly, the arcuate fasciculus is now appreciated to be a bidirectional system that joins a broad expanse of sensory cortices with prefrontal and premotor cortices. Finally, a variety of other regions in the left hemisphere, both cortical and subcortical, have been found to be critically involved in language processing. These include higher-order association cortices in the left frontal, temporal, and parietal regions, which seem to be involved in mediating between concepts and language; selected cortex in the left insular region thought to be related to speech articulation; and prefrontal and cingulate areas that implement executive control and mediation of necessary memory and attentional processes. The processing of language requires a large network of interacting brain areas. The modern framework that has emerged from this work suggests that three large systems interact closely in language perception and production. One system is formed by the language areas of Broca and Wernicke, selected areas of insular cortex, and the basal ganglia. Together, these structures constitute a language implementation system. The implementation system analyzes incoming auditory signals so as to activate conceptual knowledge and also ensures phonemic and grammatical construction as well as articulatory control. This implementation system is surrounded by a second system, the mediational system , made up of numerous separate regions in the temporal, parietal, and frontal association cortices (Figure 59-1). The mediational regions act as third-party brokers between the implementation system and a third system, the conceptual system , a collection of regions distributed throughout the remainder of higher-order association cortices, which support conceptual knowledge.
In the 1980s, the American neurologist Marsel Mesulam proposed an alternative to the Wernicke-Geschwind model for understanding the brain’s language circuits. Mesulam’s model posits a hierarchy of networks in which information is processed by levels of complexity. For example, when you perform simple language processes such as reciting the months of the year in order, the motor and premotor areas for language are activated directly. But when you make a statement that requires a more extensive semantic and phonological analysis, other areas come into play first. When you hear words spoken, they are perceived by the primary auditory cortex, then processed by unimodal associative areas of the cortex: the superior and anterior temporal lobes and the opercular part of the left inferior frontal gyrus. According to Mesulam’s model, these unimodal areas then send their information on to two separate sites for integration. One of these is the temporal pole of the paralimbic system, which provides access to the long-term memory system and the emotional system. The other is the posterior terminal portion of the superior temporal sulcus, which provides access to meaning. The triangular and orbital parts of the inferior frontal gyrus (the gyrus is divided into three parts: opercular, triangular, and orbital) also play a role in semantic processing. Mesulam does, however, still believe that there are two “epicentres” for semantic processing, i.e., Broca’s area and Wernicke’s area. This new conception of these two areas is consistent with the fact that they often work synchronously when the brain is performing a word processing task, which supports the idea that there are very strong connections between them. 
Mesulam’s concept of epicentres resembles that of convergence zones as proposed by other authors: zones where information obtained through various sensory modalities can be combined. This combining process is achieved through the forming of cell assemblies: groups of interconnected neurons whose synapses have been strengthened by their simultaneous firing, in accordance with Hebb’s law. This concept of language areas as convergence zones where neuronal assemblies are established thus accords a prominent place to epigenetic influences in the process of learning a language. Unquestionably, one of these convergence zones is the left inferior parietal lobule, which comprises the angular gyrus and the supramarginal gyrus. In addition to receiving information from the right hemisphere, the left inferior parietal lobule also integrates emotional associations from the amygdala and the cingulate gyrus. One important idea in Mesulam’s model is that the function of a brain area dedicated to language is not fixed but rather varies according to the “neural context”. In other words, the function of a particular area depends on the task to be performed, because these areas do not always activate the same connections between them. For instance, the left inferior frontal gyrus interacts with different areas depending on whether it is processing the sound of a word or its meaning. This networked type of organization takes us beyond the “one area = one function” equation and explains many of the sometimes highly specific language disorders. For example, some people cannot state the names of tools or the colours of objects. Other people can explain an object’s function without being able to say its name, and vice versa.
Figure 27.6. Language-related regions of the left hemisphere mapped by positron emission tomography (PET) in a normal human subject. Language tasks such as listening to words and generating words elicit activity in Broca's and Wernicke's areas, as expected. However, there is also activity in primary and association sensory and motor areas for both active and passive language tasks. These observations indicate that language processing involves cortical regions other than the classic language areas. (From Posner and Raichle, 1994.) Despite these advances, neurosurgical studies are complicated by their intrinsic difficulty and to some extent by the fact that the brains of the patients in whom they are carried out are not normal. The advent of positron emission tomography in the 1980s, and more recently functional magnetic resonance imaging, has allowed the investigation of the language regions in normal subjects by noninvasive imaging. Recall that these techniques reveal the areas of the brain that are active during a particular task because the related electrical activity increases local metabolic activity and therefore local blood flow. The results of this approach, particularly in the hands of Marc Raichle, Steve Petersen, and their colleagues at Washington University in St. Louis, have challenged excessively rigid views of localization and lateralization of linguistic function. Although high levels of activity occur in the expected regions, large areas of both hemispheres are activated in word recognition or production tasks (Figure 27.6). Some scientists believe that over the course of evolution, language remained under limbic control until the inferior parietal lobule evolved and became a convergence zone that provides a wealth of inputs to Broca’s area. 
Some scientists also think that it was the emergence of the inferior parietal lobule that gave humans the ability to break down the sounds that they heard so as to make sense of them and, conversely, to express sounds in a sequential manner so as to convey meaning. In this way, primitive emotional and social vocalizations would have eventually come to be governed by grammatical rules of organization to create what we know as modern language. Lastly, a number of researchers now reject classic locationist models of language such as Geschwind’s and Mesulam’s. Instead, they conceptualize language, and cognitive functions in general, as being distributed across anatomically separate areas that process information in parallel (rather than serially, from one “language area” to another). Even those researchers who embrace this view that linguistic information is processed in parallel still accept that the primary language functions, both auditory and articulatory, are localized to some extent. This concept of a parallel, distributed processing network for linguistic information constitutes a distinctive epistemological paradigm that is leading to the reassessment of certain functional brain imaging studies. The proponents of this paradigm believe that the extensive activation of various areas in the left hemisphere and the large number of psychological processes involved make it impossible to associate specific language functions with specific anatomical areas of the brain. For example, the single act of recalling words involves a highly distributed network that is located primarily in the left brain and that includes the inferolateral temporal lobe, the inferior posterior parietal lobule, the premotor areas of the frontal lobe, the anterior cingulate gyrus, and the supplementary motor area. 
According to this paradigm, with such a widely distributed, parallel processing network, there is no way to ascribe specific functions to each of these structures that contribute to the performance of this task. Brain-imaging studies have shown to what a large extent cognitive tasks such as those involving language correspond to a complex pattern of activation of various areas distributed throughout the cortex. That a particular area of the brain becomes activated when the brain is performing certain tasks therefore does not imply that this area constitutes the only clearly defined location for a given function. In the more distributed model of cognitive functions that is now increasingly accepted by cognitive scientists, all it means is that the neurons in this particular area of the brain are more involved in this particular task than their neighbours. It in no way excludes the possibility that other neurons located elsewhere, and sometimes even quite far from this area, may be just as involved. Thus, just because the content of a word is encoded in a particular neuronal assembly does not necessarily mean that all of the neurons in this assembly are located at the same place in the brain. On the contrary, understanding or producing a spoken or written word can require the simultaneous contribution of several modalities (auditory, visual, somatosensory, and motor). Hence the interconnected neurons in the assembly responsible for this task may be distributed across the various cortexes dedicated to these modalities. In contrast, the neuronal assemblies involved in encoding grammatical functions appear to be less widely distributed. It may therefore be that the brain processes language functions in two ways simultaneously: in parallel mode by means of distributed networks, and in serial mode by means of localized convergence zones.
Figure 59-4 Positron emission tomography images compare the adjusted mean activity in the brain during separate tasks: naming of unique persons, animals, and tools. All sections are axial (horizontal) with left hemisphere structures on the right half of each image. The search volume (the section of the brain sampled in the analysis) includes inferotemporal and temporal pole regions (enclosed by the dotted lines). Red areas are statistically significant activity after correction for multiple comparisons. There are distinct patterns of activation in the left inferotemporal and temporal pole regions for each task. (Courtesy of H. Damasio.) Beyond the Classic Language Areas: Other Brain Areas Are Important for Language The anatomical correlates of the classical aphasias comprise only a restricted map of language-related areas in the brain. The past decade of research on aphasia has uncovered numerous other language-related centers in the cerebral cortex and in subcortical structures. Some are located in the left temporal region. For example, until recently the anterior temporal and inferotemporal cortices, in either the left or the right hemisphere, had not been associated with language. Recent studies reveal that damage to left temporal cortices (Brodmann's areas 21, 20, and 38) causes severe and pure naming defects—impairments of word retrieval without any accompanying grammatical, phonemic, or phonetic difficulty. When the damage is confined to the left temporal pole (Brodmann's area 38) the patient has difficulty recalling the names of unique places and persons but not names for common things. When the lesions involve the left midtemporal sector (areas 21 and 20) the patient has difficulty recalling both unique and common names. Finally, damage to the left posterior inferotemporal sector causes a deficit in recalling words for particular types of items—tools and utensils—but not words for natural things or unique entities. 
Recall of words for actions or spatial relationships is not compromised. These findings suggest that the left temporal cortices contain neural systems that access words denoting various categories of things, but not words denoting the actions of those things or their relationships to other entities. Localization of a brain region that mediates word-finding for classes of things has been inferred from several kinds of evidence: examination of patients with brain lesions from stroke, head injury, herpes encephalitis, and degenerative processes such as Alzheimer disease and Pick disease; functional imaging studies of normal individuals; and electrical stimulation of these same temporal cortices during surgical interventions (Figure 59-4).
For many years, scientists' understanding of how the brain processes language was rather simple: they believed that Wernicke's area interpreted the words that we hear or read, then relayed this information via a dense bundle of fibers to Broca's area, which generated any words that we spoke in response. But subsequent brain-imaging experiments have revealed the existence of a third region of the brain that is also indispensable for language. This region is the inferior parietal lobule, also known as "Geschwind's territory" in honor of the American neurologist Norman Geschwind, who foresaw its importance as early as the 1960s. Brain-imaging studies have now shown that the inferior parietal lobule is connected by large bundles of nerve fibers to both Broca's area and Wernicke's area. Information might therefore travel between these two areas either directly, via the arcuate fasciculus, or by a second, parallel route that passes through the inferior parietal lobule. The inferior parietal lobule of the left hemisphere lies at a key location in the brain, at the junction of the auditory, visual, and somatosensory cortices, with which it is massively connected. In addition, the neurons in this lobule have the distinctive property of being multimodal, meaning that they can process different kinds of stimuli (auditory, visual, sensorimotor, etc.) simultaneously. This combination of traits makes the inferior parietal lobule an ideal candidate for apprehending the multiple properties of spoken and written words: their sound, their appearance, their function, and so on. This lobule may thus help the brain to classify and label things, which is a prerequisite for forming concepts and thinking abstractly. The inferior parietal lobule is one of the last structures of the human brain to have developed in the course of evolution.
This structure appears to exist in rudimentary form in the brains of other primates, which indicates that language may have evolved through changes in existing neural networks rather than through the emergence of completely new structures in the brain. The inferior parietal lobule is also one of the last structures to mature in human children, and there are reasons to believe that it may play a key role in the acquisition of language. The late maturation of this structure would explain, among other things, why children cannot begin to read and write until they are 5 or 6 years old. Over the years, and especially since the advent of brain-imaging technologies, scientists' beliefs regarding the anatomical and functional boundaries of Broca's area, Wernicke's area, and the inferior parietal lobule have changed a great deal. But it seems more and more likely that, as with other systems in the brain, these variations reflect the existence of several sub-regions with distinct functions. When you hear a sound, your brain first automatically determines whether it comes from a human voice or from some other source. Next, if the source is a human voice, your brain decides whether the sound is a syllable or not, and lastly, whether it is a real word or a pseudo-word (a group of sounds that has no meaning). By capturing functional brain images while subjects listened to these various sounds, researchers have been able to distinguish between the areas of the brain that are involved in simply listening to sounds and those that are involved in understanding them. Though the terms Broca's area and Wernicke's area are in common use, it is important to remember that the boundaries of these areas are not clearly defined and may vary from one individual to another. It should also be noted that these areas may be involved in functions other than language.
Originally thought of as a "center for language," Broca's area is now regarded more as one part of a complex network involved in semantic, syntactic, and phonological processing, and even in tasks not related to language (such as movement). This new concept suggests that subdivisions of this area may exist, or even that it can really be defined only in more abstract terms. Thus Broca's area may eventually come to be seen as a historical concept that has no true anatomical or functional correlate.
The supramarginal gyrus seems to be involved in phonological and articulatory processing of words, whereas the angular gyrus (together with the posterior cingulate gyrus) seems more involved in semantic processing. The right angular gyrus appears to be active as well as the left, revealing that the right hemisphere also contributes to semantic processing of language. Together, the angular and supramarginal gyri constitute a multimodal associative area that receives auditory, visual, and somatosensory inputs. The neurons in this area are thus very well positioned to process the phonological and semantic aspects of language that enable us to identify and categorize objects. The language areas of the brain are distinct from the circuits responsible for auditory perception of the words we hear or visual perception of the words we read. The auditory cortex lets us recognize sounds, an essential prerequisite for understanding language. The visual cortex, which lets us consciously see the outside world, is also crucial for language, because it enables us to read words and to recognize objects as the first step in identifying them by name.
Figure 59-5 A. Lesions of 25 patients with deficits in planning articulatory movements were computer-reconstructed and overlapped. All patients had lesions that included a small section of the insula, an area of cortex underneath the frontal, temporal, and parietal lobes. The area of infarction shared by all patients is depicted here in dark purple. B. The lesions of 19 patients without a deficit in planning articulatory movements were also reconstructed and overlapped. Their lesions completely spare the precise area that was infarcted in the patients with the articulatory deficit. (For this figure, left hemisphere lesions were reconstructed on the left side of the image.)

Another area not included in the classical model is a small section of the insula, the island of cortex buried deep inside the cerebral hemispheres (Figure 59-5). Recent evidence suggests that this area is important for planning or coordinating the articulatory movements necessary for speech. Patients who have lesions in this area have difficulty pronouncing phonemes in their proper order; they usually produce combinations of sounds that are very close to the target word. These patients have no difficulty perceiving speech sounds or recognizing their own errors, and they do not have trouble finding the word, only producing it. This area is also damaged in patients with true Broca aphasia and accounts for much of their articulatory deficits.
The frontal cortices on the mesial surface of the left hemisphere, which include the supplementary motor area and the anterior cingulate region (also known as Brodmann's area 24), play an important role in the initiation and maintenance of speech. They are also important to attention and emotion and thus can influence many higher functions. Damage to these areas does not cause an aphasia in the proper sense but impairs the initiation of movement (akinesia) and causes mutism, the complete absence of speech. Mutism is a rarity in aphasic patients and is seen only during the very early stages of the condition. Patients with akinesia and mutism fail to communicate by words, gestures, or facial expression. They have an impairment of the drive to communicate, rather than aphasia.
The percentage of left-handers in the normal population as a function of age (based on more than 5000 individuals). Taken at face value, these data indicate that right-handers live longer than left-handers. Another possibility, however, is that the paucity of elderly left-handers at present may simply reflect changes over the decades in the social pressures on children to become right-handed. (From Coren, 1992.)

Handedness

Approximately 9 out of 10 people are right-handed, a proportion that appears to have been stable over thousands of years and across all cultures in which handedness has been examined. Handedness is usually assessed by having individuals answer a series of questions about preferred manual behaviors, such as "Which hand do you use to write?"; "Which hand do you use to throw a ball?"; or "Which hand do you use to brush your teeth?" Each answer is given a value, depending on the preference indicated, providing a quantitative measure of the inclination toward right- or left-handedness. Anthropologists have determined the incidence of handedness in ancient cultures by examining artifacts; the shape of a flint ax, for example, can indicate whether it was made by a right- or left-handed individual. Handedness in antiquity has also been assessed by examining the incidence of figures in artistic representations who are using one hand or the other. Based on this evidence, our species appears always to have been a right-handed one. Moreover, handedness is probably not peculiar to humans; many studies have demonstrated paw preference in animals ranging from mice to monkeys that is, at least in some ways, similar to human handedness. Whether an individual is right- or left-handed has a number of interesting consequences. As will be obvious to left-handers, the world of human artifacts is in many respects a right-handed one. Implements such as scissors, knives, coffee pots, and power tools are constructed for the right-handed majority.
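Questionnaire scoring of this sort is usually summarized as a single laterality quotient, as in the widely used Edinburgh Handedness Inventory, where LQ = 100 × (R − L) / (R + L). A minimal sketch of such a score (the per-item values here are illustrative, not the inventory's exact scheme):

```python
def laterality_quotient(responses):
    """Summarize handedness questionnaire answers as a laterality quotient.

    Each response scores hand preference for one task: positive values for
    right-hand preference, negative for left (e.g. +2 always right, +1
    usually right, -1 usually left, -2 always left). Returns a value from
    -100 (strongly left-handed) to +100 (strongly right-handed).
    """
    right = sum(r for r in responses if r > 0)
    left = -sum(r for r in responses if r < 0)
    if right + left == 0:
        return 0.0
    return 100.0 * (right - left) / (right + left)

# Example: a respondent who always uses the right hand for 9 of 10 tasks
# and usually the left hand for the remaining one.
scores = [2] * 9 + [-1]
print(laterality_quotient(scores))  # about +89.5: strongly right-handed
```

Scores near zero indicate mixed-handedness, which is why such inventories treat handedness as a continuum rather than a simple binary.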
Books and magazines are also designed for right-handers (compare turning this page with your left and right hands), as are golf clubs and guitars. Perhaps as a consequence of this bias, the accident rate for left-handers in all categories (work, home, sports) is higher than for right-handers. The rate of traffic fatalities among left-handers is also greater than that among right-handers. However, there are also some advantages to being left-handed. For example, an inordinate number of international fencing champions have been left-handed. The reason is straightforward: since the majority of any individual's opponents will be right-handed, the average fencer, whether right- or left-handed, is less prepared to parry thrusts from left-handers. One of the most hotly debated questions about the consequences of handedness in recent years has been whether being left-handed entails a diminished life expectancy. No one disputes the fact that there is currently a surprisingly small number of left-handers among the elderly (see figure). These data have come from studies of the general population and have been supported by information gleaned from The Baseball Encyclopedia, in which longevity and other characteristics of a large number of healthy left- and right-handers have been recorded because of interest in the U.S. national pastime. Two explanations of this peculiar finding have been put forward. Stanley Coren and his collaborators at the University of British Columbia have argued that these statistics reflect a higher mortality rate among left-handers, partly as a result of increased accidents, but also because of other data that show left-handedness to be associated with a variety of pathologies. In this regard, Coren and others have suggested that left-handedness may arise from developmental problems in the pre- and/or perinatal period. If true, then a further rationale for decreased longevity would have been identified.
An alternative explanation, however, is that the diminished number of left-handers among the elderly primarily reflects sociological factors, namely a greater acceptance of left-handed children today than earlier in the twentieth century. In this view, there are fewer older left-handers now because parents, teachers, and others encouraged right-handedness in earlier generations. Although this controversy continues, the weight of the evidence favors the sociological explanation. The relationship between handedness and other lateralized functions, language in particular, has long been a source of confusion. It is unlikely that there is any direct relationship between language and handedness, despite much speculation to the contrary. The most straightforward evidence on this point comes from the results of the Wada test (the injection of sodium amytal into one carotid artery to determine the hemisphere in which language function is located; see text on this page). The large number of such tests carried out for clinical purposes indicates that about 97% of humans, including the majority of left-handers, have their major language functions in the left hemisphere (although right-hemispheric dominance for language is much more common among left-handers). Since most left-handers have language function on the side of the brain opposite the one controlling their preferred hand, it is hard to argue for any strict relationship between these two lateralized functions. In all likelihood, handedness, like language, is first and foremost an example of the advantage of assigning any specialized function to one side of the brain or the other to make maximum use of the available neural circuitry in a brain of limited size.
When split-brain patients stare at the "X" in the center of the screen, visual information projected on the right side of the screen goes to the patient's left hemisphere, which controls language. When asked what they see, patients can reply correctly. When split-brain patients stare at the "X" in the center of the screen, visual information projected on the left side of the screen goes to the patient's right hemisphere, which does not control language. When asked what they see, patients cannot name the object but can pick it out by touch with the left hand.

A Dramatic Confirmation of Language Lateralization

Until the 1960s, observations about language localization and lateralization were based primarily on patients with brain lesions of varying severity, location, and etiology. The inevitable uncertainties of clinical findings allowed more than a few skeptics to argue that language function (or other complex cognitive functions) might not be lateralized (or even localized) in the brain. Definitive evidence supporting the inferences from neurological observations came from studies of patients whose corpus callosum and anterior commissure had been severed as a treatment for medically intractable epileptic seizures. (Recall that a certain fraction of severe epileptics are refractory to medical treatment, and that interrupting the connection between the two hemispheres remains an effective way of treating epilepsy in highly selected patients; see Box C in Chapter 25.) In such patients, investigators could assess the function of the two cerebral hemispheres independently, since the major axon tracts that connect them had been interrupted.
The first studies of these so-called split-brain patients were carried out by Roger Sperry and his colleagues at the California Institute of Technology in the 1960s and 1970s, and established the hemispheric lateralization of language beyond any doubt; this work also demonstrated many other functional differences between the left and right hemispheres and continues to stand as an extraordinary contribution to the understanding of brain organization. To evaluate the functional capacity of each hemisphere in split-brain patients, it is essential to provide information to one side of the brain only. Sperry, Michael Gazzaniga (who was a key collaborator in this work), and others devised several simple ways to do this, the most straightforward of which was to ask the subject to use each hand independently to identify objects without any visual assistance (Figure 27.3A). Recall from Chapter 9 that somatic sensory information from the right hand is processed by the left hemisphere, and vice versa. By asking the subject to describe an item being manipulated by one hand or the other, the language capacity of the relevant hemisphere could be examined. Such testing showed clearly that the two hemispheres differ in their language ability (as expected from the postmortem correlations described earlier). Using the left hemisphere, split-brain patients were able to name objects held in the right hand without difficulty. In contrast, and quite amazingly, an object held in the left hand could not be named! Using the right hemisphere, subjects could produce only an indirect description of the object that relied on rudimentary words and phrases rather than the precise lexical symbol for the object (for instance, "a round thing" instead of "a ball"), and some could not provide any verbal account of what they held in their left hand.
Observations using special techniques to present visual information to the hemispheres independently (a method called tachistoscopic presentation; see Figure 27.3B) showed further that the left hemisphere can respond to written commands, whereas the right hemisphere can respond only to nonverbal stimuli (e.g., pictorial instructions or, in some cases, rudimentary written commands). These distinctions reflect broader hemispheric differences summarized by the statement that the left hemisphere in most humans is specialized for processing verbal and symbolic material important in communication, whereas the right hemisphere is specialized for visuospatial and emotional processing (Figure 27.3B). The ingenious work of Sperry and his colleagues on split-brain patients put an end to the century-long controversy about language lateralization; in most individuals, the left hemisphere is unequivocally the seat of the major language functions (although see Box C). It would be wrong to imagine, however, that the right hemisphere has no language capacity. As noted, in some individuals the right hemisphere can produce rudimentary words and phrases, and it is normally the source of the emotional coloring of language. Moreover, the right hemisphere in at least some split-brain patients understands language to a modest degree, since it can respond to simple visual commands presented tachistoscopically. Consequently, Broca's conclusion that we speak with our left brain is not strictly correct; it would be more accurate to say that one speaks very much better with the left hemisphere than with the right, and that the contributions of the two hemispheres to language are markedly different.
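The contralateral routing logic that these experiments exploit can be summarized in a toy model. This is purely illustrative (the function names and the simplified "typical case" assumptions of left-hemisphere language are ours, not part of the experimental protocol):

```python
# Toy model of split-brain testing logic. Assumes the typical case:
# language resides in the left hemisphere, and each visual half-field
# projects to the opposite (contralateral) hemisphere.

def receiving_hemisphere(visual_field):
    """A stimulus in one visual half-field projects to the opposite hemisphere."""
    return "left" if visual_field == "right" else "right"

def can_name(visual_field, callosum_intact):
    """A stimulus can be named only if its information reaches the left
    (language) hemisphere, either directly or via an intact corpus callosum."""
    hemisphere = receiving_hemisphere(visual_field)
    return hemisphere == "left" or callosum_intact

# Split-brain patient: can name stimuli in the right visual field only.
print(can_name("right", callosum_intact=False))  # True
print(can_name("left", callosum_intact=False))   # False
# Intact brain: stimuli in either field can be named.
print(can_name("left", callosum_intact=True))    # True
```

The same contralateral logic applies to the tactile version of the task, with each hand's somatosensory input standing in for a visual half-field.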
The brain's anatomical asymmetry, its lateralization for language, and the phenomenon of handedness are all clearly interrelated, but their influences on one another are complex. Though about 90% of people are right-handed, and about 95% of right-handers have their language areas on the left side of their brains, that still leaves 5% of right-handers who are either right-lateralized for language or have their language areas distributed between their two hemispheres. And then there are the left-handers, among whom all of these patterns can be found, including left-lateralization. Some scientists suggest that the left hemisphere's dominance for language evolved from this hemisphere's better control over the right hand. The circuits controlling this "skillful hand" may have evolved so as to take control over the motor circuits involved in language. Broca's area, in particular, is basically a premotor module of the neocortex and coordinates muscle-contraction patterns that are related to other things besides language. Brain-imaging studies have shown that several structures involved in language processing are larger in the left hemisphere than in the right. For instance, Broca's area in the left frontal lobe is larger than the homologous area in the right hemisphere. But the greatest asymmetries are found mainly in the posterior language areas, such as the planum temporale and the angular gyrus. Two other notable asymmetries are the larger protrusions of the frontal lobe on the right side and of the occipital lobe on the left. These protrusions might, however, be due to a slight rotation of the hemispheres (counterclockwise, as seen from above) rather than to a difference in the volume of these areas. They are known as the right-frontal and left-occipital petalias ("petalia" originally referred to the indentation that such a protrusion makes on the inside of the skull).
The structures involved in producing and understanding language seem to be laid down in accordance with genetic instructions that come into play as neuronal migration proceeds in the human embryo. Nevertheless, the two hemispheres can remain just about equipotent until language acquisition occurs. Normally, the language specialization develops in the left hemisphere, which matures slightly earlier. The earlier, more intense activity of the neurons in the left hemisphere would then lead both to right-handedness and to the control of language functions by this hemisphere. But if the left hemisphere is damaged or defective, language can be acquired by the right hemisphere. An excess of testosterone in newborns due to stress at the time of birth may be one of the most common causes of slower development in the left hemisphere, resulting in greater participation by the right. This hypothesis of a central role for testosterone is supported by experiments showing that cortical asymmetry in rats is altered if the rodents are injected with testosterone at birth. The hormonal hypothesis would also help explain why two-thirds of all left-handed persons are male.
Figure 24.1. Manual "babbling" in deaf infants raised by deaf, signing parents compared to manual babble in hearing infants. Babbling was judged by scoring hand positions and shapes that showed some resemblance to the components of American Sign Language. In deaf infants, meaningful hand shapes increase as a percentage of manual activity between ages 10 and 14 months. Hearing children raised by hearing, speaking parents do not produce similar hand shapes. (After Petitto and Marentette, 1991.)

The Development of Language: A Critical Period in Humans

Many animals communicate by means of sound, and some (humans and songbirds are examples) learn these vocalizations. There are, in fact, provocative similarities in the development of human language and birdsong (Box B). Most animal vocalizations, like alarm calls in mammals and birds, are innate and require no experience to be correctly produced. For example, quails raised in isolation or deafened at birth so that they never hear conspecifics nonetheless produce the full repertoire of species-specific vocalizations. In contrast, humans obviously require extensive postnatal experience to produce and decode the speech sounds that are the basis of language. Importantly, this linguistic experience must occur in early life to be effective. The requirement for hearing and practicing during a critical period is apparent in studies of language acquisition in congenitally deaf children. Whereas most babies begin producing speechlike sounds at about 7 months (babbling), congenitally deaf infants show obvious deficits in their early vocalizations, and such individuals fail to develop language if not provided with an alternative form of symbolic expression (such as sign language; see Chapter 27). If, however, these deaf children are exposed to sign language at an early age (from approximately six months onward), they begin to "babble" with their hands just as a hearing infant babbles audibly.
This suggests that, regardless of the modality, early experience shapes language behavior (Figure 24.1). Children who have acquired speech but subsequently lose their hearing before puberty also suffer a substantial decline in spoken language, presumably because they are unable to hear themselves talk and thus lose the opportunity to refine their speech by auditory feedback. Examples of pathological situations in which normal children were never exposed to a significant amount of language make much the same point. In one well-documented case, a girl was raised by deranged parents until the age of 13 under conditions of almost total language deprivation. Despite intense subsequent training, she never learned more than a rudimentary level of communication. This and other examples of so-called "feral children" starkly define the importance of early experience. In contrast to the devastating effects of deprivation on children, adults retain their ability to speak and comprehend language even if decades pass without exposure or practice. In short, the normal acquisition of human speech is subject to a critical period: the process is sensitive to experience or deprivation during a restricted period of life (before puberty) and is refractory to similar experience or deprivation in adulthood. On a more subtle level, the phonetic structure of the language an individual hears during early life shapes both the perception and the production of speech. Many of the thousands of human languages and dialects use appreciably different repertoires of speech elements, called phonemes, to produce spoken words (examples are the phonemes "ba" and "pa" in English). Very young human infants can perceive and discriminate among differences in all human speech sounds and are not innately biased toward the phonemes characteristic of any particular language. However, this universal appreciation does not persist.
For example, adult Japanese speakers cannot reliably distinguish between the /r/ and /l/ sounds in English, presumably because this phonemic distinction is not present in Japanese. Nonetheless, 4-month-old Japanese infants can make this discrimination as reliably as 4-month-olds raised in English-speaking households (as indicated by increased suckling frequency or head turning in the presence of a novel stimulus). By 6 months of age, however, infants show preferences for phonemes in their native language over those in foreign languages, and by the end of their first year they no longer respond to phonetic elements peculiar to non-native languages. The ability to perceive these phonemic contrasts evidently persists for several more years, as evidenced by the fact that children can learn to speak a second language without accent and with fluent grammar until about age 7 or 8. After this age, however, performance gradually declines no matter what the extent of practice or exposure (Figure 24.2). A number of changes in the developing brain could explain these observations. One possibility is that experience acts selectively to preserve the circuits in the brain that perceive phonemes and phonetic distinctions. The absence of exposure to non-native phonemes would then result in a gradual atrophy of the connections representing those sounds, accompanied by a declining ability to distinguish between them. In this formulation, circuits that are used are retained, whereas those that are unused grow weaker (and eventually disappear). Alternatively, experience could promote the growth of rudimentary circuitry pertinent to the experienced sounds. The reality, however, is considerably more complex than either of these scenarios suggests. Experiments by Patricia Kuhl and her colleagues have demonstrated that as a second language is acquired, the brain gradually groups sounds according to their similarity to phonemes in the native language.
For example, when asked to categorize a continuous spectrum of artificial phonemes between /r/ and /l/, native English speakers, but not Japanese speakers, tend to perceive the sounds as all sounding like either /r/ or /l/, a phenomenon that Kuhl has likened to a "perceptual magnet." Related but varying sounds (defined by their audiographic spectrum) are evidently grouped together and eventually perceived as representing the same phoneme. Without ongoing experience during the critical period, this process fails to occur. Interestingly, the "baby talk" or "parentese" used by adults speaking to young children actually emphasizes these phonetic distinctions compared to normal speech among adults. Thus, learning language during the critical period for its development entails an amplification and reshaping of innate biases by appropriate postnatal experience.
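The grouping Kuhl describes can be caricatured as nearest-prototype classification. In this toy sketch, the one-dimensional continuum coordinates and prototype positions are invented for illustration; real phoneme categories live in a multidimensional acoustic space:

```python
# Toy sketch of the "perceptual magnet" idea: sounds along a continuum are
# assimilated to the nearest phoneme prototype of the listener's language.
# Coordinates are illustrative, not measured acoustic data.

def perceive(stimulus, prototypes):
    """Map a point on an acoustic continuum to the nearest phoneme prototype."""
    return min(prototypes, key=lambda p: abs(stimulus - prototypes[p]))

# Hypothetical listeners: English has two prototypes on the /r/-/l/
# continuum (scaled 0 to 1); Japanese has a single category spanning it.
english = {"/r/": 0.2, "/l/": 0.8}
japanese = {"/r-l/": 0.5}

continuum = [i / 10 for i in range(11)]
print([perceive(x, english) for x in continuum])   # splits into /r/ then /l/
print([perceive(x, japanese) for x in continuum])  # all one category
```

The point of the caricature is that intermediate stimuli are "pulled" onto whichever prototype the native language provides, so distinctions absent from that inventory simply cannot be represented.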
Signing deficits in congenitally deaf individuals who had learned sign language from birth and later suffered lesions of the language areas in the left hemisphere. Left hemisphere damage produced signing problems in these patients analogous to the aphasias seen after comparable lesions in hearing, speaking patients. In this example, the patient (upper panels) is expressing the sentence "We arrived in Jerusalem and stayed there." Compared to a normal control (lower panels), he cannot properly control the spatial orientation of the signs. The direction of the correct signs and the aberrant direction of the "aphasic" signs are indicated in the upper left-hand corner of each panel. (After Bellugi et al., 1989.)

Sign Language

The account so far has already implied that language localization and lateralization do not simply reflect brain specializations for hearing and speaking; the language regions of the brain appear to be more broadly organized for processing symbols. Strong support for this conclusion has come from studies of sign language in individuals deaf from birth. American Sign Language has all the components (e.g., grammar and emotional tone) of spoken and heard language. Based on this knowledge, Ursula Bellugi and her colleagues at the Salk Institute examined the localization of sign language in patients who had suffered localized lesions of either the left or the right hemisphere. All these individuals were prelingually deaf, had been signing throughout their lives, had deaf spouses, were members of the deaf community, and were right-handed. The patients with left-hemisphere lesions, which in each case involved the language areas of the frontal and/or temporal lobes, had measurable deficits in sign production and comprehension when compared to normal signers of similar age (Figure 27.8).
In contrast, the patients with lesions in approximately the same areas of the right hemisphere did not have sign "aphasias." Instead, as predicted from other studies of subjects with normal hearing, visuospatial and other abilities (e.g., emotional processing and the emotional tone of signing) were impaired. Although the number of subjects studied was necessarily small (deaf signers with lesions of the language areas are understandably difficult to find), the capacity for signed and seen communication is evidently represented predominantly in the left hemisphere, in the same areas as spoken language. This evidence confirms that the language regions of the brain are specialized for the representation of symbolic communication rather than for heard and spoken language per se. The capacity for seen and signed communication, like its heard and spoken counterpart, emerges in early infancy. Careful observation of babbling in hearing (and, eventually, speaking) infants shows the production of a predictable pattern of sounds related to the ultimate acquisition of spoken language. Thus, babbling represents an early behavior that prefigures true language, indicating that an innate capacity for language imitation is a key part of the learning process. The congenitally deaf offspring of deaf, signing parents "babble" with their hands in gestures that are apparently the forerunners of signs (see Figure 24.1). As with vocal babbling, the amount of manual "babbling" increases with age until the child begins to form accurate, meaningful signs. These observations indicate that the strategy for acquiring the rudiments of symbolic communication from parental or other cues—regardless of the means of expression—is similar in deaf and hearing individuals. These developmental facts are also pertinent to the possible antecedents of language in the nonverbal communication of great apes (see Box A).
Image of the brain of a woman who is deciding whether or not certain words rhyme. As can be seen, the right hemisphere is very active. (Source: Shaywitz and Shaywitz, Yale Medical School.)

The Right Cerebral Hemisphere Is Important for Prosody and Pragmatics
In almost all right-handers, and in a smaller majority of left-handers, linguistic abilities—phonology, the lexicon, and grammar—are concentrated in the left hemisphere. This conclusion is supported by numerous studies of patients with brain lesions and studies of electrical and metabolic activity in the cerebral hemispheres of normal people. In “split-brain” patients, whose corpus callosum has been sectioned to control epilepsy, the right hemisphere occasionally has rudimentary abilities to comprehend or read words, but syntactic abilities are poor, and in many cases the right hemisphere has no lexical or grammatical abilities at all. Nonetheless, the right cerebral hemisphere does play a role in language. In particular, it is important for communicative and emotional prosody (stress, timing, and intonation). Patients with right anterior lesions may produce inappropriate intonation in their speech; those with right posterior lesions have difficulty interpreting the emotional tone of others' speech. In addition, the right hemisphere plays a role in the pragmatics of language. Patients with damage in the right hemisphere have difficulty incorporating sentences into a coherent narrative or conversation and using appropriate language in particular social settings. They often do not understand jokes. These impairments make it difficult for patients with right hemisphere damage to function effectively in social situations, and these patients are sometimes shunned because of their odd behavior. When adults with severe neurological disease have the entire left hemisphere removed, they suffer a permanent and catastrophic loss of language.
In contrast, when the left hemisphere of an infant is removed, the child learns to speak fluently. Adults do not have this plasticity of function, and this age difference is consistent with other findings that suggest there is a critical period for language development in childhood. Children can acquire several languages perfectly, whereas most adults who take up a new language are saddled with a foreign accent and permanent grammatical errors. When children are deprived of language input—whether because their parents are deaf and do not sign, or because of extreme neglect—they can catch up fully if exposed to language before puberty, but they are strikingly inept if the first exposure comes later. Despite the remarkable ability of the right hemisphere to take on responsibility for language in young children, it appears to be less suited for the task than the left hemisphere. One study of a small number of children in whom one hemisphere had been removed revealed that the children with only a right hemisphere were impaired in language (and other aspects of intellectual functioning) compared with children who had only a left hemisphere, who were less impaired overall. Like people with Broca aphasia, children with only a right hemisphere comprehend most sentences in conversation but have trouble interpreting more complex constructions, such as sentences in the passive voice. A child with only a left hemisphere, in contrast, has no difficulty even with complex sentences. Among those scientists who argue that the brain’s language processing system is distributed across various structures, some, such as Philip Lieberman, believe that the basal ganglia play a very important role in language. These researchers further believe that other subcortical structures traditionally regarded as involved in motor control, such as the cerebellum and the thalamus, also contribute to language processing.
These views stand in opposition to Chomsky’s on the exceptional nature of human language and fall squarely within an adaptationist, evolutionary perspective.

More on the Role of the Right Hemisphere in Language
Since exactly the same cytoarchitectonic areas exist in the cortex of both hemispheres, a puzzling issue remains: what do the comparable areas in the right hemisphere actually do? In fact, language deficits often do occur following damage to the right hemisphere. Most obvious is an absence of the normal emotional and tonal components—called prosodic elements—of language, which impart additional meaning (often quite critical) to verbal communication (see Chapter 29). These deficiencies, referred to as aprosodias, are associated with right-hemisphere lesions of the cortical areas that correspond to Broca's and Wernicke's areas in the left hemisphere. The aprosodias emphasize that although the left hemisphere (or, better put, distinct cortical regions within that hemisphere) figures prominently in the comprehension and production of language for most humans, other regions, including areas in the right hemisphere, are needed to generate the full richness of everyday speech. In summary, whereas the classically defined regions of the left hemisphere operate more or less as advertised, a variety of more recent studies have shown that other left- and right-hemisphere areas clearly make a significant contribution to the generation and comprehension of language. Prosody refers to the intonation and stress with which the phonemes of a language are pronounced. People with aprosodia (an impairment in the use of prosody caused by right-hemisphere damage, or RHD) cannot use intonation and stress to effectively express the emotions they actually feel. As a result, they speak and behave in a way that seems flat and emotionless. The second category of pragmatic communication disorders that can be caused by RHD affects the organization of discourse according to the rules that govern its construction.
In some individuals, these disorders take the form of a reduced ability to interpret the signs that establish the context for a communication, the nuances conveyed by certain words, the speaker’s intentions or body language, or the applicable social conventions. With regard to social conventions, for example, people generally do not address their boss the same way they would their brother, but people with certain kinds of RHD have difficulty making this distinction. Last but not least among the types of pragmatic communication disorders caused by RHD are disorders in the understanding of non-literal language. It is estimated that fewer than half of the sentences we speak express our meaning literally, or at least not entirely. For instance, whenever we use irony, metaphors, or other forms of indirect language, people’s ability to understand our actual meaning depends on their ability to interpret our intentions. To understand irony, for example, people must apply two levels of awareness, just as they must do to understand jokes. First, they must understand the speaker’s state of mind, and second, they must understand the speaker’s intentions as to how his or her words should be construed. Someone who is telling a joke wants the words not to be taken seriously, while someone who is speaking ironically wants the listener to perceive their actual meaning as the opposite of their literal one. Metaphors, too, express an intention that belies a literal interpretation of the words concerned. If a student turns to a classmate and says “This prof is a real sleeping pill,” the classmate will understand the implicit analogy between the pill and the prof and realize that the other student finds this prof boring. But someone with RHD that affects their understanding of non-literal language might not get this message.
When experimental subjects are asked to identify the emotional content of recorded sentences that are played back into only one of their ears, they perform better if these sentences are played into their left ear (which sends them to the right hemisphere) than into their right (which sends them to the left hemisphere).
Women have the reputation of being able to talk and listen while doing all sorts of things at the same time, whereas men supposedly prefer to talk or hear about various things in succession rather than simultaneously. Brain-imaging studies may now have revealed an anatomical substrate for this behavioural difference by demonstrating that language functions tend to place more demands on both hemispheres in women while being more lateralized (and mainly left-lateralized) in men. Women also have more nerve fibres connecting the two hemispheres of their brains, which also suggests that more information is exchanged between them. Females have an advantage over males as regards several different kinds of verbal ability. For example, females’ speech is more fluid: they can pronounce more words or sentences in a given amount of time. Also, language disorders are more common among boys than among girls, regardless of the type of education received. For example, four times as many boys as girls suffer from stuttering, dyslexia, and autism. Males’ higher levels of testosterone, which delays the development of the left hemisphere, might partly explain these differences, though other factors may also come into play.

Accroding to a stduy by Cmabrigde Uvinertisy, the odrer of the ltteers in a wrod is not ipmrotnat; the olny ipmrotnat thnig is that the frist and lsat ltteers be in the rihgt palce. The rset can be in toatl dsiaarry and you can sitll raed the wrods wtih no pborelm. That’s bacesue the hamun biran does not raed erevy ltteer istlef, but the wrod as a wlohe.

Are you outraged at the number of spelling mistakes in the preceding paragraph? Well, don’t let that stop you from appreciating how well your brain works, because those mistakes didn’t keep you from reading and understanding the paragraph anyway! Studies have shown that among heterosexual couples, certain communication problems arise because men and women use language for different purposes.
Each sex therefore applies its own criteria to interpret what the other is saying, which can result in misunderstandings. For example, women may misconstrue men’s natural tendency to be less verbally expressive in conjugal relationships as a sign of rejection or indifference. Likewise, men may tend to misinterpret women’s desire to discuss these relationships as an attempt to control them. Another classic example is when a woman simply wants to talk about her problems, and her male partner immediately starts offering solutions. Some authors see communication between male and female partners as so challenging that it amounts to an attempt at intercultural communication!
Language and Communication
Communication in Lower Animals
- Bees have a complex sign language for communicating the direction of food sources.
- Songbirds share the gene FOXP2, which is also expressed in the basal ganglia of humans.
Do Apes Have Language?
- Apes can learn a limited number of signs and can comprehend some symbols, but they cannot speak.
- Great apes have brain areas similar to those that support language in humans.
The Development of Language: A Critical Period in Humans
- A critical period for learning language is shown by the decline in language ability (fluency) of non-native speakers of English as a function of their age upon arrival in the United States.
Brain as a hollow organ: Nemesius (circa 320), Nature of Man
- The cerebral ventricles were supposed to be responsible for mental operations.
- Language was not represented.
Language area in phrenology: 19th century, Franz Joseph Gall and J. G. Spurzheim, The Anatomy and Physiology of the Nervous System in General, and of the Brain in Particular
Aggregate field view of the brain (Flourens, 1820)
Experimental physiologist: ‘a large section of the cerebral lobes can be removed without loss of function. As more is removed, all functions weaken and gradually disappear. Thus the cerebral lobes operate in unison for the full exercise of their functions ... The cerebral cortex functioned as an indivisible whole ... [housing] an ‘‘essentially single faculty’’ of perception, judgement and will ... the last refuge of the soul’ (Flourens, cited by Changeux, p. 17).
Broca’s Aphasia (Paul Broca, 1861)
- Autopsy of a patient who could understand language and had a normal speech apparatus but could not speak or write a sentence.
- The only articulate sound he could make was “tan.”
- After autopsying eight similar patients, all with lesions in the left frontal lobe, Broca made his famous statement that “we speak with the left hemisphere.”
- This was the first identification of a language center, in the left frontal lobe.
Wernicke’s Aphasia
- Ten years later, Carl Wernicke, a German neurologist, discovered another part of the brain involved in understanding language, in the posterior portion of the left temporal lobe.
- People who had a lesion at this location could speak, but their speech was often incoherent and made no sense.
Cortical mapping of the language areas in the left cerebral cortex during neurosurgery (A) Location of the classical language areas. (B) Evidence for the variability of language representation among individuals. This diagram summarizes data from 117 patients whose language areas were electrically mapped at the time of surgery. The number in each circle indicates the percentage of the patients who showed interference with language in response to stimulation at that site. Note also that many of the sites that elicited interference fall outside the classic language areas. (B after Ojemann et al., 1989.)
Broca’s Area
- Area 44 (the posterior part of the inferior frontal gyrus) is involved in phonological processing and in language production as such; this role would be facilitated by its position close to the motor centres for the mouth and the tongue.
- Area 45 (the anterior part of the inferior frontal gyrus) seems more involved in the semantic aspects of language. Though not directly involved in accessing meaning, Broca’s area therefore plays a role in verbal memory (selecting and manipulating semantic elements).
Wernicke’s Area
Wernicke’s area lies in the superior temporal gyrus, in the superior portion of Brodmann area 22, and has three subdivisions:
- The first responds to spoken words (including the individual’s own) and other sounds.
- The second responds only to words spoken by someone else but is also activated when the individual recalls a list of words.
- The third sub-area seems more closely associated with producing speech than with perceiving it.
Language-related areas in the human brain: H. Damasio
- The implementation system is made up of several regions located around the left sylvian fissure. It includes the classical language areas (B = Broca's area; W = Wernicke's area) and the adjoining supramarginal gyrus (Sm), angular gyrus (AG), auditory cortex (A), motor cortex (M), and somatosensory cortex (Ss). The posterior and anterior components of the implementation system, respectively Wernicke's area and Broca's area, are interconnected by the arcuate fasciculus.
- The mediational system surrounds the implementation system like a belt (blue areas). The regions identified so far are located in the left temporal pole (TP), left inferotemporal cortex (It), and left prefrontal cortex (Pf). The left basal ganglia complex (not pictured) is an integral part of the language implementation system.
Marsel Mesulam’s Model of Language (1980)
- Simple, automatic language tasks, such as rhyming or reciting the days of the week, require the motor and premotor areas.
- Hearing a word engages the primary (unimodal) auditory cortex in the superior and anterior temporal lobe.
- The unimodal areas send information to:
  - the paralimbic system, for long-term memory and the emotional system;
  - the posterior superior temporal sulcus, for meaning;
  - the triangular and orbital portions of the inferior frontal gyrus, which also play a role in semantic processing.
- The two classical epicenters (Broca’s and Wernicke’s areas) still function in semantic processing.
Geschwind’s Territory
- The inferior parietal lobule is connected by large bundles of nerve fibres to both Broca’s area and Wernicke’s area.
- Information might therefore travel between these last two areas either directly, via the arcuate fasciculus, or by a second, parallel route that passes through the inferior parietal lobule.
- The inferior parietal lobule is one of the last structures of the human brain to have developed in the course of evolution.
Inferior Parietal Lobule and Language
- The supramarginal gyrus seems to be involved in phonological and articulatory processing of words, whereas the angular gyrus (together with the posterior cingulate gyrus) seems more involved in semantic processing.
- The right angular gyrus appears to be active as well as the left, thus revealing that the right hemisphere also contributes to semantic processing of language.
The insula is important for planning and coordinating articulatory movements
- A. Lesions of 25 patients with deficits in planning articulatory movements were computer-reconstructed and overlapped. All patients had lesions that included a small section of the insula, an area of cortex underneath the frontal, temporal, and parietal lobes. The area of infarction shared by all patients is depicted here in dark purple.
- B. The lesions of 19 patients without a deficit in planning articulatory movements were also reconstructed and overlapped. Their lesions completely spare the precise area that was infarcted in the patients with the articulatory deficit.
Area 24: Initiation and Maintenance of Speech
- These areas are also important to attention and emotion and thus can influence many higher functions.
- Damage to these areas does not cause an aphasia in the proper sense but impairs the initiation of movement (akinesia) and causes mutism, the complete absence of speech.
- Mutism is a rarity in aphasic patients and is seen only during the very early stages of the condition.
- Patients with akinesia and mutism fail to communicate by words, gestures, or facial expression. They have an impairment of the drive to communicate, rather than aphasia.
Manual “babbling” in deaf infants raised by deaf, signing parents, compared to manual babble in hearing infants
- Babbling was judged by scoring hand positions and shapes that showed some resemblance to the components of American Sign Language.
- In deaf infants, meaningful hand shapes increase as a percentage of manual activity between ages 10 and 14 months.
- Hearing children raised by hearing, speaking parents do not produce similar hand shapes.
Sign Language
- Signing deficits occur in congenitally deaf individuals who had learned sign language from birth and later suffered lesions of the language areas in the left hemisphere.
- Left-hemisphere damage produced signing problems in these patients analogous to the aphasias seen after comparable lesions in hearing, speaking patients.
The Right Cerebral Hemisphere Is Important for Prosody and Pragmatics
- Prosody refers to the intonation and stress with which the phonemes of a language are pronounced.
- Understanding the organization of discourse: the signs that establish the context for a communication.
- Understanding non-literal language, such as irony or metaphors.
(Figure: a woman deciding whether or not certain words rhyme.)
Women and Language
- Females’ speech is more fluid: they can pronounce more words or sentences in a given amount of time.
- Women have the reputation of being able to talk and listen while doing all sorts of things at the same time, whereas men supposedly prefer to talk or hear about various things in succession rather than simultaneously.
- Brain-scan studies suggest that language functions are more widespread across both hemispheres in women, while in men they are more left-lateralized.
- Women also have more nerve fibres connecting the two hemispheres of their brains, which also suggests that more information is exchanged between them.
- Males’ higher levels of testosterone, which delays the development of the left hemisphere, might partly explain sex differences in language.
- Four times as many boys as girls suffer from stuttering, dyslexia, and autism.