Here are five ideas from the cognitive revolution that have revamped how we think and talk about minds.
The first idea: The mental world can be grounded in the physical world by the concepts of information, computation, and feedback. A great divide between {32} mind and matter has always seemed natural because behavior appears to have a different kind of trigger than other physical events. Ordinary events have causes, it seems, but human behavior has reasons. I once participated in a BBC television debate on whether "science can explain human behavior. " Arguing against the resolution was a philosopher who asked how we might explain why someone was put in jail. Say it was for inciting racial hatred. The intention, the hatred, and even the prison, she said, cannot be described in the language of physics. There is simply no way to define "hatred" or "jail" in terms of the movements of particles. Explanations of behavior are like narratives, she argued, couched in the intentions of actors -- a plane completely separate from natural science. Or take a simpler example. How might we explain why Rex just walked over to the phone? We would not say that phone-shaped stimuli caused Rex's limbs to swing in certain arcs. Rather, we might say that he wanted to speak to his friend Cecile and knew that Cecile was home. No explanation has as much predictive power as that one. If Rex was no longer on speaking terms with Cecile, or if he remembered that Cecile was out bowling that night, his body would not have risen off the couch.
For millennia the gap between physical events, on the one hand, and meaning, content, ideas, reasons, and intentions, on the other, seemed to cleave the universe in two. How can something as ethereal as "inciting hatred" or "wanting to speak to Cecile" actually cause matter to move in space? But the cognitive revolution unified the world of ideas with the world of matter using a powerful new theory: that mental life can be explained in terms of information, computation, and feedback. Beliefs and memories are collections of information -- like facts in a database, but residing in patterns of activity and structure in the brain. Thinking and planning are systematic transformations of these patterns, like the operation of a computer program. Wanting and trying are feedback loops, like the principle behind a thermostat: they receive information about the discrepancy between a goal and the current state of the world, and then they execute operations that tend to reduce the difference. The mind is connected to the world by the sense organs, which transduce physical energy into data structures in the brain, and by motor programs, by which the brain controls the muscles.
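The thermostat analogy can be made concrete with a minimal sketch. This is only an illustration of the idea of a feedback loop, not a model from the text; every name in it (goal, sense, act, the toy "room") is invented for the example.

```python
# A minimal sketch of the thermostat-style feedback loop described above.
# All names are illustrative placeholders: the point is only that "wanting"
# can be cast as reducing a measured discrepancy between a goal and the world.

def feedback_loop(goal, sense, act, tolerance=0.5, max_steps=100):
    """Repeatedly compare the sensed state to the goal and act to shrink the gap."""
    for _ in range(max_steps):
        state = sense()                # transduce the world into data
        error = goal - state           # discrepancy between goal and current state
        if abs(error) <= tolerance:    # close enough: the "want" is satisfied
            return state
        act(error)                     # operation that tends to reduce the difference
    return sense()

# Toy usage: a "room" nudged toward a set point.
room = {"temp": 15.0}
final = feedback_loop(
    goal=20.0,
    sense=lambda: room["temp"],
    act=lambda err: room.__setitem__("temp", room["temp"] + 0.2 * err),
)
print(round(final, 2))   # settles near 20.0
```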
This general idea may be called the computational theory of mind. It is not the same as the "computer metaphor" of the mind, the suggestion that the mind literally works like a human-made database, computer program, or thermostat.
It says only that we can explain minds and human-made information processors using some of the same principles. It is just like other cases in which the natural world and human engineering overlap. A physiologist might invoke the same laws of optics to explain how the eye works and how a camera works without implying that the eye is like a camera in every detail.
The computational theory of mind does more than explain the existence {33} of knowing, thinking, and trying without invoking a ghost in the machine (though that would be enough of a feat). It also explains how those processes can be intelligent -- how rationality can emerge from a mindless physical process. If a sequence of transformations of information stored in a hunk of matter (such as brain tissue or silicon) mirrors a sequence of deductions that obey the laws of logic, probability, or cause and effect in the world, they will generate correct predictions about the world. And making correct predictions in pursuit of a goal is a pretty good definition of "intelligence. "3
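A hedged toy example of that point: the little interpreter below knows nothing about rain or wetness; it only shuffles symbols according to if-then rules. Yet because the rules mirror true regularities in the world, the symbols it adds amount to correct "predictions." The facts and rules are invented for illustration.

```python
# Blind forward chaining: mechanical symbol manipulation that happens to
# mirror valid inference, so true inputs yield true outputs.

facts = {"raining", "window_open"}
rules = [
    ({"raining", "window_open"}, "floor_wet"),   # if raining and window open -> floor wet
    ({"floor_wet"}, "slippery"),                 # if floor wet -> slippery
]

changed = True
while changed:                    # no understanding here, just pattern matching
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # includes 'slippery': a correct prediction produced mechanically
```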
Of course there is no new thing under the sun, and the computational theory of mind was foreshadowed by Hobbes when he described mental activity as tiny motions and wrote that "reasoning is but reckoning. " Three and a half centuries later, science has caught up to his vision. Perception, memory, imagery, reasoning, decision making, language, and motor control are being studied in the lab and successfully modeled as computational paraphernalia such as rules, strings, matrices, pointers, lists, files, trees, arrays, loops, propositions, and networks. For example, cognitive psychologists are studying the graphics system in the head and thereby explaining how people "see" the solution to a problem in a mental image. They are studying the web of concepts in long-term memory and explaining why some facts are easier to recall than others. They are studying the processor and memory used by the language system to learn why some sentences are a pleasure to read and others a difficult slog.
And if the proof is in the computing, then the sister field of artificial intelligence is confirming that ordinary matter can perform feats that were supposedly performable by mental stuff alone. In the 1950s computers were already being called "electronic brains" because they could calculate sums, organize data, and prove theorems. Soon they could correct spelling, set type, solve equations, and simulate experts on restricted topics such as picking stocks and diagnosing diseases. For decades we psychologists preserved human bragging rights by telling our classes that no computer could read text, decipher speech, or recognize faces, but these boasts are obsolete. Today software that can recognize printed letters and spoken words comes packaged with home computers. Rudimentary programs that understand or translate sentences are available in many search engines and Help programs, and they are steadily improving. Face-recognition systems have advanced to the point that civil libertarians are concerned about possible abuse when they are used with security cameras in public places.
Human chauvinists can still write off these low-level feats. Sure, they say, the input and output processing can be fobbed off onto computational modules, but you still need a human user with the capacity for judgment, reflection, and creativity. But according to the computational theory of mind, these capacities are themselves forms of information processing and can be implemented in a computational system. In 1997 an IBM computer called Deep
{34} Blue defeated the world chess champion Garry Kasparov, and unlike its predecessors, it did not just evaluate trillions of moves by brute force but was fitted with strategies that intelligently responded to patterns in the game. Newsweek called the match "The Brain's Last Stand. " Kasparov called the outcome "the end of mankind. "
You might still object that chess is an artificial world with discrete moves and a clear winner, perfectly suited to the rule-crunching of a computer. People, on the other hand, live in a messy world offering unlimited moves and nebulous goals. Surely this requires human creativity and intuition -- which is why everyone knows that computers will never compose a symphony, write a story, or paint a picture. But everyone may be wrong. Recent artificial intelligence systems have written credible short stories,4 composed convincing Mozart-like symphonies,5 drawn appealing pictures of people and landscapes,6 and conceived clever ideas for advertisements. 7
None of this is to say that the brain works like a digital computer, that artificial intelligence will ever duplicate the human mind, or that computers are conscious in the sense of having first-person subjective experience. But it does suggest that reasoning, intelligence, imagination, and creativity are forms of information processing, a well-understood physical process. Cognitive science, with the help of the computational theory of mind, has exorcised at least one ghost from the machine.
A second idea: The mind cannot be a blank slate, because blank slates don't do anything. As long as people had only the haziest concept of what a mind was or how it might work, the metaphor of a blank slate inscribed by the environment did not seem too outrageous. But as soon as one starts to think seriously about what kind of computation enables a system to see, think, speak, and plan, the problem with blank slates becomes all too obvious: they don't do anything. The inscriptions will sit there forever unless something notices patterns in them, combines them with patterns learned at other times, uses the combinations to scribble new thoughts onto the slate, and reads the results to guide behavior toward goals. Locke recognized this problem and alluded to something called "the understanding," which looked at the inscriptions on the white paper and carried out the recognizing, reflecting, and associating. But of
course explaining how the mind understands by invoking something called "the understanding" is circular.
This argument against the Blank Slate was stated pithily by Gottfried Wilhelm Leibniz (1646-1716) in a reply to Locke. Leibniz repeated the empiricist motto "There is nothing in the intellect that was not first in the senses," then added, "except the intellect itself. "8 Something in the mind must be innate, if it is only the mechanisms that do the learning. Something has to see a world of objects rather than a kaleidoscope of shimmering pixels. Something has to infer the content of a sentence rather than parrot back the exact wording. {35} Something has to interpret other people's behavior as their attempts to achieve goals rather than as trajectories of jerking arms and legs.
In the spirit of Locke, one could attribute these feats to an abstract noun -- perhaps not to "the understanding" but to "learning," "intelligence," "plasticity," or "adaptiveness. " But as Leibniz remarked, to do so is to " [save appearances] by fabricating faculties or occult qualities,. . . and fancying them to be like little demons or imps which can without ado perform whatever is wanted, as though pocket watches told the time by a certain horological faculty without needing wheels, or as though mills crushed grain by a fractive faculty without needing anything in the way of millstones. "9 Leibniz, like Hobbes (who had influenced him), was ahead of his time in recognizing that intelligence is a form of information processing and needs complex machinery to carry it out. As we now know, computers don't understand speech or recognize text as they roll off the assembly line; someone has to install the right software first. The same is likely to be true of the far more demanding performance of the human being. Cognitive modelers have found that mundane challenges like walking around furniture, understanding a sentence, recalling a fact, or guessing someone's intentions are formidable engineering problems that are at or beyond the frontiers of artificial intelligence. The suggestion that they can be solved by a lump of Silly Putty that is passively molded by something called "culture" just doesn't cut the mustard.
This is not to say that cognitive scientists have put the nature-nurture debate completely behind them; they are still spread out along a continuum of opinion on how much standard equipment comes with the human mind. At one end are the philosopher Jerry Fodor, who has suggested that all concepts might be innate (even "doorknob" and "tweezers"), and the linguist Noam Chomsky, who believes that the word "learning" is misleading and we should say that children "grow" language instead. 10 At the other end are the connectionists, including Rumelhart, McClelland, Jeffrey Elman, and Elizabeth Bates, who build relatively simple computer models and train the living daylights out of them. 11 Fans locate the first extreme, which originated at the Massachusetts Institute of Technology, at the East Pole, the mythical place from which all directions are west. They locate the second extreme, which originated at the University of California, San Diego, at the West Pole, the mythical place from which all directions are east. (The names were suggested by Fodor during an MIT seminar at which he was fulminating against a "West Coast theorist" and someone pointed out that the theorist worked at Yale, which is, technically, on the East Coast. )12
But here is why the East Pole-West Pole debate is different from the ones that preoccupied philosophers for millennia: neither side believes in the Blank Slate. Everyone acknowledges that there can be no learning without innate circuitry to do the learning. In their West Pole manifesto Rethinking Innateness, {36} Bates and Elman and their coauthors cheerfully concede this point: "No learning rule can be entirely devoid of theoretical content nor can the tabula ever be completely rasa."13 They explain:
There is a widespread belief that connectionist models (and modelers) are committed to an extreme form of empiricism; and that any form of innate knowledge is to be avoided like the plague. . . . We obviously do not subscribe to this point of view. . . . There are good reasons to believe that some kinds of prior constraints [on learning models] are necessary. In fact, all connectionist models necessarily make some assumptions which must be regarded as constituting innate constraints. 14
The disagreements between the two poles, though significant, are over the details: how many innate learning networks there are, and how specifically engineered they are for particular jobs. (We will explore some of these disagreements in Chapter 5. )
A third idea: An infinite range of behavior can be generated by finite combinatorial programs in the mind. Cognitive science has undermined the Blank Slate and the Ghost in the Machine in another way. People can be forgiven for scoffing at the suggestion that human behavior is "in the genes" or "a product of evolution" in the senses familiar from the animal world. Human acts are not selected from a repertoire of knee-jerk reactions like a fish attacking a red spot or a hen sitting on eggs. Instead, people may worship goddesses, auction kitsch on the Internet, play air guitar, fast to atone for past sins, build forts out of lawn chairs, and so on, seemingly without limit. A glance at National Geographic shows that even the strangest acts in our own culture do not exhaust what our species is capable of. If anything goes, one might think, then perhaps we are Silly Putty, or unconstrained agents, after all.
But that impression has been made obsolete by the computational approach to the mind, which was barely conceivable in the era in which the Blank Slate arose. The clearest example is the Chomskyan revolution in
language. 15 Language is the epitome of creative and variable behavior. Most utterances are brand-new combinations of words, never before uttered in the history of humankind. We are nothing like Tickle Me Elmo dolls who have a fixed list of verbal responses hard-wired in. But, Chomsky pointed out, for all its open-endedness language is not a free-for-all; it obeys rules and patterns. An English speaker can utter unprecedented strings of words such as Every day new universes come into existence, or He likes his toast with cream cheese and ketchup, or My car has been eaten by wolverines. But no one would say Car my been eaten has wolverines by or most of the other possible orderings of English words. Something in the head must be capable of generating not just any combinations of words but highly systematic ones. {37}
That something is a kind of software, a generative grammar that can crank out new arrangements of words. A battery of rules such as "An English sentence contains a subject and a predicate," "A predicate contains a verb, an object, and a complement," and "The subject of eat is the eater" can explain the boundless creativity of a human talker. With a few thousand nouns that can fill the subject slot and a few thousand verbs that can fill the predicate slot, one already has several million ways to open a sentence. The possible combinations quickly multiply out to unimaginably large numbers. Indeed, the repertoire of sentences is theoretically infinite, because the rules of language use a trick called recursion. A recursive rule allows a phrase to contain an example of itself, as in She thinks that he thinks that they think that he knows and so on, ad infinitum. And if the number of sentences is infinite, the number of possible thoughts and intentions is infinite too, because virtually every sentence expresses a different thought or intention. The combinatorial grammar for language meshes with other combinatorial programs in the head for thoughts and intentions. A fixed collection of machinery in the mind can generate an infinite range of behavior by the muscles. 16

Once one starts to think about mental software instead of physical behavior, the radical differences among human cultures become far smaller, and that leads to a fourth new idea: Universal mental mechanisms can underlie superficial variation across cultures. Again, we can use language as a paradigm case of the open-endedness of behavior. Humans speak some six thousand mutually unintelligible languages. Nonetheless, the grammatical programs in their minds differ far less than the actual speech coming out of their mouths. We have known for a long time that all human languages can convey the same kinds of ideas. The Bible has been translated into hundreds of non-Western languages, and during World War II the U. S. Marine Corps conveyed secret messages across the Pacific by having Navajo Indians translate them to and from their native language. The fact that any language can be used to convey any proposition, from theological parables to military directives, suggests that all languages are cut from the same cloth.
Chomsky proposed that the generative grammars of individual languages are variations on a single pattern, which he called Universal Grammar. For example, in English the verb comes before the object (drink beer) and the preposition comes before the noun phrase (from the bottle). In Japanese the object comes before the verb (beer drink) and the noun phrase comes before the preposition, or, more accurately, the postposition (the bottle from). But it is a significant discovery that both languages have verbs, objects, and pre- or postpositions to start with, as opposed to having the countless other conceivable kinds of apparatus that could power a communication system. And it is even more significant that unrelated languages build their phrases by assembling a head (such as a verb or preposition) and a complement (such as a noun {38} phrase) and assigning a consistent order to the two. In English the head comes first; in Japanese the head comes last. But everything else about the structure of phrases in the two languages is pretty much the same. And so it goes with phrase after phrase and language after language. The common kinds of heads and complements can be ordered in 128 logically possible ways, but 95 percent of the world's languages use one of two: either the English ordering or its mirror image the Japanese ordering. 17 A simple way to capture this uniformity is to say that all languages have the same grammar except for a parameter or switch that can be flipped to either the "head-first" or "head-last" setting. The linguist Mark Baker has recently summarized about a dozen of these parameters, which succinctly capture most of the known variation among the languages of the world. 18
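As a rough illustration of these last two ideas, and not a model of any real grammar, the sketch below builds phrases from a head and a complement, lets a single head-direction switch flip English-like order into Japanese-like order, and uses a recursive rule (a sentence embedded inside a verb phrase) to make the repertoire unbounded. The vocabulary and rule names are invented for the example.

```python
import random

# A toy generative grammar: one parameter orders head and complement,
# and recursion (a sentence inside a verb phrase) makes the set of
# possible sentences infinite in principle.

NOUNS = ["Rex", "Cecile", "the wolverine"]
VERBS_TRANS = ["sees", "likes"]
VERBS_CLAUSE = ["thinks that", "knows that"]   # take a whole sentence as complement

def phrase(head, complement, head_first=True):
    """Order a head and its complement according to the head-direction parameter."""
    return f"{head} {complement}" if head_first else f"{complement} {head}"

def sentence(depth=0, head_first=True):
    subject = random.choice(NOUNS)
    # Recursion: sometimes embed another sentence as the verb's complement.
    if depth < 2 and random.random() < 0.5:
        vp = phrase(random.choice(VERBS_CLAUSE), sentence(depth + 1, head_first), head_first)
    else:
        vp = phrase(random.choice(VERBS_TRANS), random.choice(NOUNS), head_first)
    return f"{subject} {vp}"

print(sentence(head_first=True))    # English-like: "Rex thinks that Cecile likes the wolverine"
print(sentence(head_first=False))   # Japanese-like: complements precede their heads
```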
Distilling the variation from the universal patterns is not just a way to tidy up a set of messy data. It can also provide clues about the innate circuitry that makes learning possible. If the universal part of a rule is embodied in the neural circuitry that guides babies when they first learn language, it could explain how children learn language so easily and uniformly and without the benefit of instruction. Rather than treating the sound coming out of Mom's mouth as just an interesting noise to mimic verbatim or to slice and dice in arbitrary ways, the baby listens for heads and complements, pays attention to how they are ordered, and builds a grammatical system consistent with that ordering.

This idea can make sense of other kinds of variability across cultures. Many anthropologists sympathetic to social constructionism have claimed that emotions familiar to us, like anger, are absent from some cultures. 19 (A few anthropologists say there are cultures with no emotions at all!)20 For example, Catherine Lutz wrote that the Ifaluk (a Micronesian people) do not experience our "anger" but instead undergo an experience they call song. Song is a state of dudgeon triggered by a moral infraction such as breaking a taboo or acting in a cocky manner. It licenses one to shun, frown at, threaten, or gossip about the offender, though not to attack him physically. The target of song
experiences another emotion allegedly unknown to Westerners: metagu, a state of dread that impels him to appease the song-ful one by apologizing, paying a fine, or offering a gift.
The philosophers Ron Mallon and Stephen Stich, inspired by Chomsky and other cognitive scientists, point out that the issue of whether to call Ifaluk song and Western anger the same emotion or different emotions is a quibble about the meaning of emotion words: whether they should be defined in terms of surface behavior or underlying mental computation. 21 If an emotion is defined by behavior, then emotions certainly do differ across cultures. The Ifaluk react emotionally to a woman working in the taro gardens while menstruating or to a man entering a birthing house, and we do not. We react emotionally to someone shouting a racial epithet or raising the middle finger, but {39} as far as we know, the Ifaluk do not. But if an emotion is defined by mental mechanisms -- what psychologists like Paul Ekman and Richard Lazarus call "affect programs" or "if-then formulas" (note the computational vocabulary) -- we and the Ifaluk are not so different after all. 22 We might all be equipped with a program that responds to an affront to our interests or our dignity with an unpleasant burning feeling that motivates us to punish or to exact compensation. But what counts as an affront, whether we feel it is permissible to glower in a particular setting, and what kinds of retribution we think we are entitled to, depend on our culture. The stimuli and responses may differ, but the mental states are the same, whether or not they are perfectly labeled by words in our language.
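One way to picture the "if-then formula" reading of emotion is the toy sketch below: a single rule maps perceived affronts to an anger-like state, and only the culturally learned list of what counts as an affront differs between groups. The trigger lists are crude placeholders loosely based on the examples above, not anything from the cited ethnography.

```python
# Same universal mechanism, culture-specific triggers.

AFFRONTS = {
    "ifaluk":  {"breaking a taboo", "acting cocky", "refusing to share"},
    "western": {"racial epithet", "raised middle finger"},
}

def affect_program(event, culture):
    """Universal if-then formula; only the learned trigger set varies."""
    if event in AFFRONTS[culture]:          # appraisal: is this an affront here?
        return "anger-like state: motivated to punish or exact compensation"
    return "no emotional response"

print(affect_program("acting cocky", "ifaluk"))     # fires (locally labeled 'song')
print(affect_program("acting cocky", "western"))    # same machinery, different triggers
```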
And as in the case of language, without some innate mechanism for mental computation, there would be no way to learn the parts of a culture that do have to be learned. It is no coincidence that the situations that provoke song among the Ifaluk include violating a taboo, being lazy or disrespectful, and refusing to share, but do not include respecting a taboo, being kind and deferential, and standing on one's head. The Ifaluk construe the first three as similar because they evoke the same affect program -- they are perceived as affronts. That makes it easier to learn that they call for the same reaction and makes it more likely that those three would be lumped together as the acceptable triggers for a single emotion.
The moral, then, is that familiar categories of behavior -- marriage customs, food taboos, folk superstitions, and so on -- certainly do vary across cultures and have to be learned, but the deeper mechanisms of mental computation that generate them may be universal and innate. People may dress differently, but they may all strive to flaunt their status via their appearance. They may respect the rights of the members of their clan exclusively or they may extend that respect to everyone in their tribe, nation-state, or species, but all divide the world into an in-group and an out-group.
They may differ in which outcomes they attribute to the intentions of conscious beings, some allowing only that artifacts are deliberately crafted, others believing that illnesses come from magical spells cast by enemies, still others believing that the entire world was brought into being by a creator. But all of them explain certain events by invoking the existence of entities with minds that strive to bring about goals. The behaviorists got it backwards: it is the mind, not behavior, that is lawful.
A fifth idea: The mind is a complex system composed of many interacting parts. The psychologists who study emotions in different cultures have made another important discovery. Candid facial expressions appear to be the same everywhere, but people in some cultures learn to keep a poker face in polite company. 23 A simple explanation is that the affect programs fire up facial expressions in the same way in all people, but a separate system of "display rules" governs when they can be shown. {40}
The difference between these two mechanisms underscores another insight of the cognitive revolution. Before the revolution, commentators invoked enormous black boxes such as "the intellect" or "the understanding," and they made sweeping pronouncements about human nature, such as that we are essentially noble or essentially nasty. But we now know that the mind is not a homogeneous orb invested with unitary powers or across-the-board traits. The mind is modular, with many parts cooperating to generate a train of thought or an organized action. It has distinct information-processing systems for filtering out distractions, learning skills, controlling the body, remembering facts, holding information temporarily, and storing and executing rules. Cutting across these data-processing systems are mental faculties (sometimes called multiple intelligences) dedicated to different kinds of content, such as language, number, space, tools, and living things. Cognitive scientists at the East Pole suspect that the content-based modules are differentiated largely by the genes;24 those at the West Pole suspect they begin as small innate biases in attention and then coagulate out of statistical patterns in the sensory input. 25 But those at both poles agree that the brain is not a uniform meatloaf. Still another layer of information-processing systems can be found in the affect programs, that is, the systems for motivation and emotion.
The upshot is that an urge or habit coming out of one module can be translated into behavior in different ways -- or suppressed altogether -- by some other module. To take a simple example, cognitive psychologists believe that a module called the "habit system" underlies our tendency to produce certain responses habitually, such as responding to a printed word by pronouncing it silently. But another module, called the "supervisory attention system," can override it and focus on the information relevant to a stated problem, such as naming the color of the ink the word is printed in, or thinking up an action that goes with the word. 26 More generally, the interplay of mental systems can
explain how people can entertain revenge fantasies that they never act on, or can commit adultery only in their hearts. In this way the theory of human nature coming out of the cognitive revolution has more in common with the Judeo-Christian theory of human nature, and with the psychoanalytic theory proposed by Sigmund Freud, than with behaviorism, social constructionism, and other versions of the Blank Slate. Behavior is not just emitted or elicited, nor does it come directly out of culture or society. It comes from an internal struggle among mental modules with differing agendas and goals.
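The habit-versus-supervision interplay can be sketched in miniature as follows; the module names echo the terms used above, but the code is an illustrative caricature of the override idea, not a model from the cited work.

```python
# A fast habitual response (read the word) that a supervisory module can
# override when the stated task demands it (name the ink color).

def habit_system(stimulus):
    return stimulus["word"]            # the practiced, automatic response

def supervisory_attention(stimulus, task):
    if task == "name the ink color":   # override the habit when the task requires it
        return stimulus["ink_color"]
    return habit_system(stimulus)      # otherwise let the habit run

stimulus = {"word": "RED", "ink_color": "green"}   # a classic Stroop-style item
print(supervisory_attention(stimulus, task="read the word"))        # RED
print(supervisory_attention(stimulus, task="name the ink color"))   # green
```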
The idea from the cognitive revolution that the mind is a system of universal, generative computational modules obliterates the way that debates on human nature have been framed for centuries. It is now simply misguided to ask whether humans are flexible or programmed, whether behavior is universal or varies across cultures, whether acts are learned or innate, whether we are essentially good or essentially evil. Humans behave flexibly because they are
{41} programmed: their minds are packed with combinatorial software that can generate an unlimited set of thoughts and behavior. Behavior may vary across cultures, but the design of the mental programs that generate it need not vary. Intelligent behavior is learned successfully because we have innate systems that do the learning. And all people may have good and evil motives, but not everyone may translate them into behavior in the same way.
The second bridge between mind and matter is neuroscience, especially cognitive neuroscience, the study of how cognition and emotion are implemented in the brain. 27 Francis Crick wrote a book about the brain called The Astonishing Hypothesis, alluding to the idea that all our thoughts and feelings, joys and aches, dreams and wishes consist in the physiological activity of the brain. 28 Jaded neuroscientists, who take the idea for granted, snickered at the title, but Crick was right: the hypothesis is astonishing to most people the first time they stop to ponder it. Who cannot sympathize with the imprisoned Dmitri Karamazov as he tries to make sense of what he has just learned from a visiting academic?
Imagine: inside, in the nerves, in the head -- that is, these nerves are there in the brain . . . (damn them! ) there are sort of little tails, the little tails of those nerves, and as soon as they begin quivering . . . that is, you see, I look at something with my eyes and then they begin quivering, those little tails . . . and when they quiver, then an image appears . . . it doesn't appear at once, but an instant, a second, passes . . . and then something like a moment appears; that is, not a moment -- devil take the moment! -- but an image; that is, an object, or an action, damn it! That's why I see and then think, because of those tails, not at all because I've got a soul, and that I am some sort of image and likeness. All that is nonsense! Rakitin explained it all to me yesterday, brother, and it simply bowled me over. It's magnificent, Alyosha, this science! A new man's arising -- that I understand. . . . And yet I am sorry to lose God! 29
Dostoevsky's prescience is itself astonishing, because in 1880 only the rudiments of neural functioning were understood, and a reasonable person could have doubted that all experience arises from quivering nerve tails. But no longer. One can say that the information-processing activity of the brain causes the mind, or one can say that it is the mind, but in either case the evidence is overwhelming that every aspect of our mental lives depends entirely on physiological events in the tissues of the brain.
When a surgeon sends an electrical current into the brain, the person can have a vivid, lifelike experience. When chemicals seep into the brain, they can alter the person's perception, mood, personality, and reasoning. When a patch
{42} of brain tissue dies, a part of the mind can disappear: a neurological patient may lose the ability to name tools, recognize faces, anticipate the outcome of his behavior, empathize with others, or keep in mind a region of space or of his own body. (Descartes was thus wrong when he said that "the mind is entirely indivisible" and concluded that it must be completely different from the body. ) Every emotion and thought gives off physical signals, and the new technologies for detecting them are so accurate that they can literally read a person's mind and tell a cognitive neuroscientist whether the person is imagining a face or a place. Neuroscientists can knock a gene out of a mouse (a gene also found in humans) and prevent the mouse from learning, or insert extra copies and make the mouse learn faster. Under the microscope, brain tissue shows a staggering complexity -- a hundred billion neurons connected by a hundred trillion synapses -- that is commensurate with the staggering complexity of human thought and experience. Neural network modelers have begun to show how the building blocks of mental computation, such as storing and retrieving a pattern, can be implemented in neural circuitry. And when the brain dies, the person goes out of existence. Despite concerted efforts by Alfred Russel Wallace and other Victorian scientists, it is apparently not possible to communicate with the dead.
Educated people, of course, know that perception, cognition, language, and emotion are rooted in the brain. But it is still tempting to think of the brain as it was shown in old educational cartoons, as a control panel with gauges and levers operated by a user -- the self, the soul, the ghost, the person, the "me. " But cognitive neuroscience is showing
that the self, too, is just another network of brain systems.
The first hint came from Phineas Gage, the nineteenth-century railroad worker familiar to generations of psychology students. Gage was using a yard-long spike to tamp explosive powder into a hole in a rock when a spark ignited the powder and sent the spike into his cheekbone, through his brain, and out the top of his skull. Phineas survived with his perception, memory, language, and motor functions intact. But in the famous understatement of a co-worker, "Gage was no longer Gage. " A piece of iron had literally turned him into a different person, from courteous, responsible, and ambitious to rude, unreliable, and shiftless. It did this by impaling his ventromedial prefrontal cortex, the region of the brain above the eyes now known to be involved in reasoning about other people. Together with other areas of the prefrontal lobes and the limbic system (the seat of the emotions), it anticipates the consequences of one's actions and selects behavior consonant with one's goals. 30
Cognitive neuroscientists have not only exorcised the ghost but have shown that the brain does not even have a part that does exactly what the ghost is supposed to do: review all the facts and make a decision for the rest of the brain to carry out. 31 Each of us feels that there is a single "I" in control. But that {43} is an illusion that the brain works hard to produce, like the impression that our visual fields are rich in detail from edge to edge. (In fact, we are blind to detail outside the fixation point. We quickly move our eyes to whatever looks interesting, and that fools us into thinking that the detail was there all along. ) The brain does have supervisory systems in the prefrontal lobes and anterior cingulate cortex, which can push the buttons of behavior and override habits and urges. But those systems are gadgets with specific quirks and limitations; they are not implementations of the rational free agent traditionally identified with the soul or the self.
One of the most dramatic demonstrations of the illusion of the unified self comes from the neuroscientists Michael Gazzaniga and Roger Sperry, who showed that when surgeons cut the corpus callosum joining the cerebral hemispheres, they literally cut the self in two, and each hemisphere can exercise free will without the other one's advice or consent. Even more disconcertingly, the left hemisphere constantly weaves a coherent but false account of the behavior chosen without its knowledge by the right. For example, if an experimenter flashes the command "WALK" to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), the person will comply with the request and begin to walk out of the room. But when the person (specifically, the person's left hemisphere) is asked why he just got up, he will say, in all sincerity, "To get a Coke" -- rather than "I don't really know" or "The urge just came over me" or "You've been testing me for years since I had the surgery, and sometimes you get me to do things but I don't know exactly what you asked me to do. " Similarly, if the patient's left hemisphere is shown a chicken and his right hemisphere is shown a snowfall, and both hemispheres have to select a picture that goes with what they see (each using a different hand), the left hemisphere picks a claw (correctly) and the right picks a shovel (also correctly). But when the left hemisphere is asked why the whole person made those choices, it blithely says, "Oh, that's simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed. "32
The spooky part is that we have no reason to think that the baloney-generator in the patient's left hemisphere is behaving any differently from ours as we make sense of the inclinations emanating from the rest of our brains. The conscious mind -- the self or soul -- is a spin doctor, not the commander in chief. Sigmund Freud immodestly wrote that "humanity has in the course of time had to endure from the hands of science three great outrages upon its naïve self-love": the discovery that our world is not the center of the celestial spheres but rather a speck in a vast universe, the discovery that we were not specially created but instead descended from animals, and the discovery that often our conscious minds do not control how we act but merely tell us a story about our actions. He was right about the cumulative impact, but it was {44} cognitive neuroscience rather than psychoanalysis that conclusively delivered the third blow.
Cognitive neuroscience is undermining not just the Ghost in the Machine but also the Noble Savage. Damage to the frontal lobes does not only dull the person or subtract from his behavioral repertoire but can unleash aggressive attacks. 33 That happens because the damaged lobes no longer serve as inhibitory brakes on parts of the limbic system, particularly a circuit that links the amygdala to the hypothalamus via a pathway called the stria terminalis. Connections between the frontal lobe in each hemisphere and the limbic system provide a lever by which a person's knowledge and goals can override other mechanisms, and among those mechanisms appears to be one designed to generate behavior that harms other people. 34
Nor is the physical structure of the brain a blank slate. In the mid-nineteenth century the neurologist Paul Broca discovered that the folds and wrinkles of the cerebral cortex do not squiggle randomly like fingerprints but have a recognizable geometry. Indeed, the arrangement is so consistent from brain to brain that each fold and wrinkle can be given a name. Since that time neuroscientists have discovered that the gross anatomy of the brain -- the sizes, shapes, and connectivity of its lobes and nuclei, and the basic plan of the cerebral cortex -- is largely shaped by the genes in normal prenatal development. 35 So is the quantity of gray matter in the different regions of the brains of
different people, including the regions that underlie language and reasoning. 36
This innate geometry and cabling can have real consequences for thinking, feeling, and behavior. As we shall see in a later chapter, babies who suffer damage to particular areas of the brain often grow up with permanent deficits in particular mental faculties. And people born with variations on the typical plan have variations in the way their minds work. According to a recent study of the brains of identical and fraternal twins, differences in the amount of gray matter in the frontal lobes are not only genetically influenced but are significantly correlated with differences in intelligence. 37 A study of Albert Einstein's brain revealed that he had large, unusually shaped inferior parietal lobules, which participate in spatial reasoning and intuitions about number. 38 Gay men are likely to have a smaller third interstitial nucleus in the anterior hypothalamus, a nucleus known to have a role in sex differences. 39 And convicted murderers and other violent, antisocial people are likely to have a smaller and less active prefrontal cortex, the part of the brain that governs decision making and inhibits impulses. 40 These gross features of the brain are almost certainly not sculpted by information coming in from the senses, which implies that differences in intelligence, scientific genius, sexual orientation, and impulsive violence are not entirely learned.
Indeed, until recently the innateness of brain structure was an embarrassment {45} for neuroscience. The brain could not possibly be wired by the genes down to the last synapse, because there isn't nearly enough information in the genome to do so. And we know that people learn throughout their lives, and products of that learning have to be stored in the brain somehow. Unless you believe in a ghost in the machine, everything a person learns has to affect some part of the brain; more accurately, learning is a change in some part of the brain. But it was difficult to find the features of the brain that reflected those changes amid all that innate structure. Becoming stronger in math or motor coordination or visual discrimination does not bulk up the brain the way becoming stronger at weightlifting bulks up the muscles.
Now, at last, neuroscience is beginning to catch up with psychology by discovering changes in the brain that underlie learning. As we shall see, the boundaries between swatches of cortex devoted to different body parts, talents, and even physical senses can be adjusted by learning and practice. Some neuroscientists are so excited by these discoveries that they are trying to push the pendulum in the other direction, emphasizing the plasticity of the cerebral cortex. But for reasons that I will review in Chapter 5, most neuroscientists believe that these changes take place within a matrix of genetically organized structure. There is much we don't understand about how the brain is laid out in development, but we know that it is not indefinitely malleable by experience.
THE THIRD BRIDGE between the biological and the mental is behavioral genetics, the study of how genes affect behavior. 41 All the potential for thinking, learning, and feeling that distinguishes humans from other animals lies in
the information contained in the DNA of the fertilized ovum. This is most obvious when we compare species. Chimpanzees brought up in a human home do not speak, think, or act like people, and that is because of the information in the ten megabytes of DNA that differ between us. Even the two species of chimpanzees, common chimps and bonobos, which differ in just a few tenths of one percent of their genomes, part company in their behavior, as zookeepers first discovered when they inadvertently mixed the two. Common chimps are among the most aggressive mammals known to zoology, bonobos among the most peaceable; in common chimps the males dominate the females, in bonobos the females have the upper hand; common chimps have sex for procreation, bonobos for recreation. Small differences in the genes can lead to large differences in behavior. They can affect the size and shape of the different parts of the brain, their wiring, and the nanotechnology that releases, binds, and recycles hormones and neurotransmitters.
The importance of genes in organizing the normal brain is underscored by the many ways in which nonstandard genes can give rise to nonstandard minds. When I was an undergraduate an exam question in Abnormal Psychology asked, "What is the best predictor that a person will become schizophrenic? " {46} The answer was, "Having an identical twin who is schizophrenic. " At the time it was a trick question, because the reigning theories of schizophrenia pointed to societal stress, "schizophrenogenic mothers," double binds, and other life experiences (none of which turned out to have much, if any, importance); hardly anyone thought about genes as a possible cause. But even then the evidence was there: schizophrenia is highly concordant within pairs of identical twins, who share all their DNA and most of their environment, but far less concordant within pairs of fraternal twins, who share only half their DNA (of the DNA that varies in the population) and most of their environment. The trick question could be asked -- and would have the same answer -- for virtually every cognitive and emotional disorder or difference ever observed. Autism, dyslexia, language delay, language impairment, learning disability, left-handedness, major depressions, bipolar illness, obsessive-compulsive disorder, sexual orientation, and many other conditions run in families, are more concordant in identical than in fraternal twins, are better predicted by people's biological relatives than by their adoptive relatives, and are poorly predicted by any measurable feature of the environment. 42
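The logic of the twin comparison can be put in a few lines. The pair counts below are invented placeholders, not data from any study; the point is only that the gap between identical-twin and fraternal-twin concordance, rather than either figure on its own, is what implicates the genes.

```python
# Toy illustration of the twin-concordance logic: both kinds of twins share
# most of their environment, so a large MZ-DZ gap points to the shared genes.

def concordance(both_affected, one_affected):
    """Pairwise concordance: of pairs with at least one affected twin,
    the fraction in which both are affected."""
    return both_affected / (both_affected + one_affected)

mz = concordance(both_affected=48, one_affected=52)   # hypothetical identical-twin pairs
dz = concordance(both_affected=17, one_affected=83)   # hypothetical fraternal-twin pairs

print(f"MZ concordance: {mz:.2f}, DZ concordance: {dz:.2f}")
# The MZ-DZ gap, not either number alone, carries the genetic signal.
```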
Genes not only push us toward exceptional conditions of mental functioning but scatter us within the normal range, producing much of the variation in ability and temperament that we notice in the people around us. The famous Chas Addams cartoon from The New Yorker is only a slight exaggeration:
[Cartoon by Charles Addams. (C) The New Yorker Collection 1981, cartoonbank.com. All rights reserved.]
Identical twins think and feel in such similar ways that they sometimes suspect they are linked by telepathy. When separated at birth and reunited as adults they say they feel they have known each other all their lives. Testing confirms that identical twins, whether separated at birth or not, are eerily alike (though far from identical) in just about any trait one can measure. They are similar in verbal, mathematical, and general intelligence, in their degree of life satisfaction, and in personality traits such as introversion, agreeableness, neuroticism, conscientiousness, and openness to experience. They have similar attitudes toward controversial issues such as the death penalty, religion, and modern music. They resemble each other not just in paper-and-pencil tests but in consequential behavior such as gambling, divorcing, committing crimes, getting into accidents, and watching television. And they boast dozens of shared idiosyncrasies such as giggling incessantly, giving interminable answers to simple questions, dipping buttered toast in coffee, and -- in the case of Abigail van Buren and Ann Landers -- writing indistinguishable syndicated advice columns. The crags and valleys of their electroencephalograms (brainwaves) are as alike as those of a single person recorded on two occasions, and the wrinkles of their brains and distribution of gray matter across cortical areas are also similar. 43
The effects of differences in genes on differences in minds can be measured, and the same rough estimate -- substantially greater than zero, but substantially less than 100 percent -- pops out of the data no matter what measuring stick is used. Identical twins are far more similar than fraternal twins, whether they are raised apart or together; identical twins raised apart are highly similar; biological siblings, whether raised together or apart, are far more similar than adoptive siblings. Many of these conclusions come from massive studies in Scandinavian countries where governments keep huge databases on their citizens, and they employ the best-validated measuring instruments known to psychology. Skeptics have offered alternative explanations that try to push the effects of the genes to zero -- they suggest that identical twins separated at birth may have been placed in similar adoptive homes, that they may have contacted each other before being tested, that they look alike and hence may have been treated alike, and that they shared a womb in addition to their genes. But as we shall see in the chapter on children, these explanations have all been tested and rejected. Recently a new kind of evidence may be piled on the heap. "Virtual twins" are the mirror
{47} image of identical twins raised apart: they are unrelated siblings, one or both adopted, who are raised together from infancy. Though they are the same age and are growing up in the same family, the psychologist Nancy Segal found that their IQ scores are barely correlated. 44 One father in the study said that despite efforts to treat them alike, the virtual twins are "like night and day."
The first idea: The mental world can be grounded in the physical world by the concepts of information, computation, and feedback. A great divide between {32} mind and matter has always seemed natural because behavior appears to have a different kind of trigger than other physical events. Ordinary events have causes, it seems, but human behavior has reasons. I once participated in a BBC television debate on whether "science can explain human behavior. " Arguing against the resolution was a philosopher who asked how we might explain why someone was put in jail. Say it was for inciting racial hatred. The intention, the hatred, and even the prison, she said, cannot be described in the language of physics. There is simply no way to define "hatred" or "jail" in terms of the movements of particles. Explanations of behavior are like narratives, she argued, couched in the intentions of actors -- a plane completely separate from natural science. Or take a simpler example. How might we explain why Rex just walked over to the phone? We would not say that phone-shaped stimuli caused Rex's limbs to swing in certain arcs. Rather, we might say that he wanted to speak to his friend Cecile and knew that Cecile was home. No explanation has as much predictive power as that one. If Rex was no longer on speaking terms with Cecile, or if he remembered that Cecile was out bowling that night, his body would not have risen off the couch.
For millennia the gap between physical events, on the one hand, and meaning, content, ideas, reasons, and intentions, on the other, seemed to cleave the universe in two. How can something as ethereal as "inciting hatred" or "wanting to speak to Cecile" actually cause matter to move in space? But the cognitive revolution unified the world of ideas with the world of matter using a powerful new theory: that mental life can be explained in terms of information, computation, and feedback. Beliefs and memories are collections of information -- like facts in a database, but residing in patterns of activity and structure in the brain. Thinking and planning are systematic transformations of these patterns, like the operation of a computer program. Wanting and trying are feedback loops, like the principle behind a thermostat: they receive information about the discrepancy between a goal and the current state of the world, and then they execute operations that tend to reduce the difference. The mind is connected to the world by the sense organs, which transduce physical energy into data structures in the brain, and by motor programs, by which the brain controls the muscles.
This general idea may be called the computational theory of mind. It is not the same as the "computer metaphor" of the mind, the suggestion that the mind literally works like a human-made database, computer program, or thermostat.
? ? ? It says only that we can explain minds and human-made information processors using some of the same principles. It is just like other cases in which the natural world and human engineering overlap. A physiologist might invoke the same laws of optics to explain how the eye works and how a camera works without implying that the eye is like a camera in every detail.
The computational theory of mind does more than explain the existence {33} of knowing, thinking, and trying without invoking a ghost in the machine (though that would be enough of a feat). It also explains how those processes can be intelligent -- how rationality can emerge from a mindless physical process. If a sequence of transformations of information stored in a hunk of matter (such as brain tissue or silicon) mirrors a sequence of deductions that obey the laws of logic, probability, or cause and effect in the world, they will generate correct predictions about the world. And making correct predictions in pursuit of a goal is a pretty good definition of "intelligence. "3
Of course there is no new thing under the sun, and the computational theory of mind was foreshadowed by Hobbes when he described mental activity as tiny motions and wrote that "reasoning is but reckoning. " Three and a half centuries later, science has caught up to his vision. Perception, memory, imagery, reasoning, decision making, language, and motor control are being studied in the lab and successfully modeled as computational paraphernalia such as rules, strings, matrices, pointers, lists, files, trees, arrays, loops, propositions, and networks. For example, cognitive psychologists are studying the graphics system in the head and thereby explaining how people "see" the solution to a problem in a mental image. They are studying the web of concepts in long-term memory and explaining why some facts are easier to recall than others. They are studying the processor and memory used by the language system to learn why some sentences are a pleasure to read and others a difficult slog.
And if the proof is in the computing, then the sister field of artificial intelligence is confirming that ordinary matter can perform feats that were supposedly performable by mental stuff alone. In the 1950s computers were already being called "electronic brains" because they could calculate sums, organize data, and prove theorems. Soon they could correct spelling, set type, solve equations, and simulate experts on restricted topics such as picking stocks and diagnosing diseases. For decades we psychologists preserved human bragging rights by telling our classes that no computer could read text, decipher speech, or recognize faces, but these boasts are obsolete. Today software that can recognize printed letters and spoken words comes packaged with home computers. Rudimentary programs that understand or translate sentences are available in many search engines and Help programs, and they are steadily improving. Face-recognition systems have advanced to the point that civil libertarians are concerned about possible abuse when they are used with security cameras in public places.
Human chauvinists can still write off these low-level feats. Sure, they say, the input and output processing can be fobbed off onto computational modules, but you still need a human user with the capacity for judgment, reflection, and creativity. But according to the computational theory of mind, these capacities are themselves forms of information processing and can be implemented in a computational system. In 1997 an IBM computer called Deep
{34} Blue defeated the world chess champion Garry Kasparov, and unlike its predecessors, it did not just evaluate trillions of moves by brute force but was fitted with strategies that intelligently responded to patterns in the game. Newsweek called the match "The Brain's Last Stand. " Kasparov called the outcome "the end of mankind. "
You might still object that chess is an artificial world with discrete moves and a clear winner, perfectly suited to the rule-crunching of a computer. People, on the other hand, live in a messy world offering unlimited moves and nebulous goals. Surely this requires human creativity and intuition -- which is why everyone knows that computers will never compose a symphony, write a story, or paint a picture. But everyone may be wrong. Recent artificial intelligence systems have written credible short stories,4 composed convincing Mozart-like symphonies,5 drawn appealing pictures of people and landscapes,6 and conceived clever ideas for advertisements. 7
None of this is to say that the brain works like a digital computer, that artificial intelligence will ever duplicate the human mind, or that computers are conscious in the sense of having first-person subjective experience. But it does suggest that reasoning, intelligence, imagination, and creativity are forms of information processing, a well- understood physical process. Cognitive science, with the help of the computational theory of mind, has exorcised at least one ghost from the machine.
A second idea: The mind cannot be a blank slate, because blank slates don't do anything. As long as people had only the haziest concept of what a mind was or how it might work, the metaphor of a blank slate inscribed by the environment did not seem too outrageous. But as soon as one starts to think seriously about what kind of computation enables a system to see, think, speak, and plan, the problem with blank slates becomes all too obvious: they don't do anything. The inscriptions will sit there forever unless something notices patterns in them, combines them with patterns learned at other times, uses the combinations to scribble new thoughts onto the slate, and reads the results to guide behavior toward goals. Locke recognized this problem and alluded to something called "the understanding," which looked at the inscriptions on the white paper and carried out the recognizing, reflecting, and associating. But of
course explaining how the mind understands by invoking something called "the understanding" is circular.
This argument against the Blank Slate was stated pithily by Gottfried Wilhelm Leibniz (1646-1716) in a reply to Locke. Leibniz repeated the empiricist motto "There is nothing in the intellect that was not first in the senses," then added, "except the intellect itself. "8 Something in the mind must be innate, if it is only the mechanisms that do the learning. Something has to see a world of objects rather than a kaleidoscope of shimmering pixels. Something has to infer the content of a sentence rather than parrot back the exact wording. {35} Something has to interpret other people's behavior as their attempts to achieve goals rather than as trajectories of jerking arms and legs.
In the spirit of Locke, one could attribute these feats to an abstract noun -- perhaps not to "the understanding" but to "learning," "intelligence," "plasticity," or "adaptiveness." But as Leibniz remarked, to do so is to "[save appearances] by fabricating faculties or occult qualities . . . and fancying them to be like little demons or imps which can without ado perform whatever is wanted, as though pocket watches told the time by a certain horological faculty without needing wheels, or as though mills crushed grain by a fractive faculty without needing anything in the way of millstones."9 Leibniz, like Hobbes (who had influenced him), was ahead of his time in recognizing that intelligence is a form of information processing and needs complex machinery to carry it out. As we now know, computers don't understand speech or recognize text as they roll off the assembly line; someone has to install the right software first. The same is likely to be true of the far more demanding performance of the human being. Cognitive modelers have found that mundane challenges like walking around furniture, understanding a sentence, recalling a fact, or guessing someone's intentions are formidable engineering problems that are at or beyond the frontiers of artificial intelligence. The suggestion that they can be solved by a lump of Silly Putty that is passively molded by something called "culture" just doesn't cut the mustard.
This is not to say that cognitive scientists have put the nature-nurture debate completely behind them; they are still spread out along a continuum of opinion on how much standard equipment comes with the human mind. At one end are the philosopher Jerry Fodor, who has suggested that all concepts might be innate (even "doorknob" and "tweezers"), and the linguist Noam Chomsky, who believes that the word "learning" is misleading and we should say that children "grow" language instead. 10 At the other end are the connectionists, including Rumelhart, McClelland, Jeffrey Elman, and Elizabeth Bates, who build relatively simple computer models and train the living daylights out of them. 11 Fans locate the first extreme, which originated at the Massachusetts Institute of Technology, at the East Pole, the mythical place from which all directions are west. They locate the second extreme, which originated at the University of California, San Diego, at the West Pole, the mythical place from which all directions are east. (The names were suggested by Fodor during an MIT seminar at which he was fulminating against a "West Coast theorist" and someone pointed out that the theorist worked at Yale, which is, technically, on the East Coast. )12
But here is why the East Pole-West Pole debate is different from the ones that preoccupied philosophers for millennia: neither side believes in the Blank Slate. Everyone acknowledges that there can be no learning without innate circuitry to do the learning. In their West Pole manifesto Rethinking Innateness, {36} Bates and Elman and their coauthors cheerfully concede this point: "No learning rule can be entirely devoid of theoretical content nor can the tabula ever be completely rasa."13 They explain:
There is a widespread belief that connectionist models (and modelers) are committed to an extreme form of empiricism; and that any form of innate knowledge is to be avoided like the plague. . . . We obviously do not subscribe to this point of view. . . . There are good reasons to believe that some kinds of prior constraints [on learning models] are necessary. In fact, all connectionist models necessarily make some assumptions which must be regarded as constituting innate constraints. 14
The disagreements between the two poles, though significant, are over the details: how many innate learning networks there are, and how specifically engineered they are for particular jobs. (We will explore some of these disagreements in Chapter 5. )
A third idea: An infinite range of behavior can be generated by finite combinatorial programs in the mind. Cognitive science has undermined the Blank Slate and the Ghost in the Machine in another way. People can be forgiven for scoffing at the suggestion that human behavior is "in the genes" or "a product of evolution" in the senses familiar from the animal world. Human acts are not selected from a repertoire of knee-jerk reactions like a fish attacking a red spot or a hen sitting on eggs. Instead, people may worship goddesses, auction kitsch on the Internet, play air guitar, fast to atone for past sins, build forts out of lawn chairs, and so on, seemingly without limit. A glance at National Geographic shows that even the strangest acts in our own culture do not exhaust what our species is capable of. If anything goes, one might think, then perhaps we are Silly Putty, or unconstrained agents, after all.
But that impression has been made obsolete by the computational approach to the mind, which was barely conceivable in the era in which the Blank Slate arose. The clearest example is the Chomskyan revolution in
language. 15 Language is the epitome of creative and variable behavior. Most utterances are brand-new combinations of words, never before uttered in the history of humankind. We are nothing like Tickle Me Elmo dolls who have a fixed list of verbal responses hard-wired in. But, Chomsky pointed out, for all its open-endedness language is not a free-for-all; it obeys rules and patterns. An English speaker can utter unprecedented strings of words such as Every day new universes come into existence, or He likes his toast with cream cheese and ketchup, or My car has been eaten by wolverines. But no one would say Car my been eaten has wolverines by or most of the other possible orderings of English words. Something in the head must be capable of generating not just any combinations of words but highly systematic ones. {37}
That something is a kind of software, a generative grammar that can crank out new arrangements of words. A battery of rules such as "An English sentence contains a subject and a predicate," "A predicate contains a verb, an object, and a complement," and "The subject of eat is the eater" can explain the boundless creativity of a human talker. With a few thousand nouns that can fill the subject slot and a few thousand verbs that can fill the predicate slot, one already has several million ways to open a sentence. The possible combinations quickly multiply out to unimaginably large numbers. Indeed, the repertoire of sentences is theoretically infinite, because the rules of language use a trick called recursion. A recursive rule allows a phrase to contain an example of itself, as in She thinks that he thinks that they think that he knows and so on, ad infinitum. And if the number of sentences is infinite, the number of possible thoughts and intentions is infinite too, because virtually every sentence expresses a different thought or intention. The combinatorial grammar for language meshes with other combinatorial programs in the head for thoughts and intentions. A fixed collection of machinery in the mind can generate an infinite range of behavior by the muscles. 16

Once one starts to think about mental software instead of physical behavior, the radical differences among human cultures become far smaller, and that leads to a fourth new idea: Universal mental mechanisms can underlie superficial variation across cultures. Again, we can use language as a paradigm case of the open-endedness of behavior. Humans speak some six thousand mutually unintelligible languages. Nonetheless, the grammatical programs in their minds differ far less than the actual speech coming out of their mouths. We have known for a long time that all human languages can convey the same kinds of ideas. The Bible has been translated into hundreds of non-Western languages, and during World War II the U. S. Marine Corps conveyed secret messages across the Pacific by having Navajo Indians translate them to and from their native language. The fact that any language can be used to convey any proposition, from theological parables to military directives, suggests that all languages are cut from the same cloth.
Chomsky proposed that the generative grammars of individual languages are variations on a single pattern, which he called Universal Grammar. For example, in English the verb comes before the object (drink beer) and the preposition comes before the noun phrase (from the bottle). In Japanese the object comes before the verb (beer drink) and the noun phrase comes before the preposition, or, more accurately, the postposition (the bottle from). But it is a significant discovery that both languages have verbs, objects, and pre- or postpositions to start with, as opposed to having the countless other conceivable kinds of apparatus that could power a communication system. And it is even more significant that unrelated languages build their phrases by assembling a head (such as a verb or preposition) and a complement (such as a noun {38} phrase) and assigning a consistent order to the two. In English the head comes first; in Japanese the head comes last. But everything else about the structure of phrases in the two languages is pretty much the same. And so it goes with phrase after phrase and language after language. The common kinds of heads and complements can be ordered in 128 logically possible ways, but 95 percent of the world's languages use one of two: either the English ordering or its mirror image the Japanese ordering. 17 A simple way to capture this uniformity is to say that all languages have the same grammar except for a parameter or switch that can be flipped to either the "head-first" or "head-last" setting. The linguist Mark Baker has recently summarized about a dozen of these parameters, which succinctly capture most of the known variation among the languages of the world. 18
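For readers who want to see the combinatorial point in concrete form, here is a minimal sketch in Python of a toy generative grammar. The vocabulary and rules are invented for illustration, and nothing here pretends to match Chomsky's actual formalism; the sketch only shows how a handful of finite rules plus one recursive rule yield an open-ended set of word strings, with a single switch standing in for the head-order parameter just described.

    import random

    # Toy generative grammar (illustrative only). One recursive rule gives an
    # unbounded repertoire of strings; the HEAD_FIRST switch plays the role of
    # the head-order parameter: True gives English-like order (verb before
    # object), False gives Japanese-like order (object before verb).

    HEAD_FIRST = True

    NOUNS = ["Rex", "Cecile", "the wolverine", "my car"]
    VERBS = ["sees", "likes", "fears"]
    EMBEDDING_VERBS = ["thinks", "knows"]   # verbs that take a whole clause

    def phrase(head, complement):
        # Combine a head and its complement in the order set by the parameter.
        return f"{head} {complement}" if HEAD_FIRST else f"{complement} {head}"

    def noun_phrase():
        return random.choice(NOUNS)

    def verb_phrase(depth):
        # The recursive rule: with some probability, embed an entire clause,
        # as in "thinks that [clause]" -- in principle without limit.
        if depth > 0 and random.random() < 0.5:
            return phrase(random.choice(EMBEDDING_VERBS), "that " + clause(depth - 1))
        return phrase(random.choice(VERBS), noun_phrase())

    def clause(depth=2):
        return f"{noun_phrase()} {verb_phrase(depth)}"

    if __name__ == "__main__":
        for _ in range(3):
            print(clause())

Even this toy, with four nouns and five verbs, generates thousands of distinct strings at modest embedding depth, and flipping HEAD_FIRST reorders every phrase at once. The point is only that a small, fixed rule set plus recursion yields open-ended but systematic output, not that the mind runs anything like this script.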
Distilling the variation from the universal patterns is not just a way to tidy up a set of messy data. It can also provide clues about the innate circuitry that makes learning possible. If the universal part of a rule is embodied in the neural circuitry that guides babies when they first learn language, it could explain how children learn language so easily and uniformly and without the benefit of instruction. Rather than treating the sound coming out of Mom's mouth as just an interesting noise to mimic verbatim or to slice and dice in arbitrary ways, the baby listens for heads and complements, pays attention to how they are ordered, and builds a grammatical system consistent with that ordering.

This idea can make sense of other kinds of variability across cultures. Many anthropologists sympathetic to social constructionism have claimed that emotions familiar to us, like anger, are absent from some cultures. 19 (A few anthropologists say there are cultures with no emotions at all! )20 For example, Catherine Lutz wrote that the Ifaluk (a Micronesian people) do not experience our "anger" but instead undergo an experience they call song. Song is a state of dudgeon triggered by a moral infraction such as breaking a taboo or acting in a cocky manner. It licenses one to shun, frown at, threaten, or gossip about the offender, though not to attack him physically. The target of song
experiences another emotion allegedly unknown to Westerners: metagu, a state of dread that impels him to appease the song-ful one by apologizing, paying a fine, or offering a gift.
The philosophers Ron Mallon and Stephen Stich, inspired by Chomsky and other cognitive scientists, point out that the issue of whether to call Ifaluk song and Western anger the same emotion or different emotions is a quibble about the meaning of emotion words: whether they should be defined in terms of surface behavior or underlying mental computation. 21 If an emotion is defined by behavior, then emotions certainly do differ across cultures. The Ifaluk react emotionally to a woman working in the taro gardens while menstruating or to a man entering a birthing house, and we do not. We react emotionally to someone shouting a racial epithet or raising the middle finger, but {39} as far as we know, the Ifaluk do not. But if an emotion is defined by mental mechanisms -- what psychologists like Paul Ekman and Richard Lazarus call "affect programs" or "if-then formulas" (note the computational vocabulary) -- we and the Ifaluk are not so different after all. 22 We might all be equipped with a program that responds to an affront to our interests or our dignity with an unpleasant burning feeling that motivates us to punish or to exact compensation. But what counts as an affront, whether we feel it is permissible to glower in a particular setting, and what kinds of retribution we think we are entitled to, depend on our culture. The stimuli and responses may differ, but the mental states are the same, whether or not they are perfectly labeled by words in our language.
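To make the "if-then formula" reading concrete, here is a minimal sketch in Python. The trigger lists and names are drawn loosely from the surrounding discussion and are otherwise hypothetical; this is not a model published by Ekman, Lazarus, or Lutz. It illustrates only the logical point: the machinery can be universal while the culturally learned triggers and sanctioned responses differ.

    from dataclasses import dataclass

    @dataclass
    class AffectProgram:
        affronts: set      # culturally learned triggers
        responses: tuple   # culturally sanctioned expressions

        def react(self, event):
            # Universal core: an affront yields the same internal state everywhere.
            if event in self.affronts:
                return ("unpleasant burning feeling", self.responses)
            return ("no reaction", ())

    western_anger = AffectProgram(
        affronts={"racial epithet", "raised middle finger"},
        responses=("glower", "demand an apology"),
    )
    ifaluk_song = AffectProgram(
        affronts={"breaking a taboo", "acting cocky", "refusing to share"},
        responses=("frown at", "shun", "gossip about the offender"),
    )

    print(western_anger.react("raised middle finger"))
    print(ifaluk_song.react("acting cocky"))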
And as in the case of language, without some innate mechanism for mental computation, there would be no way to learn the parts of a culture that do have to be learned. It is no coincidence that the situations that provoke song among the Ifaluk include violating a taboo, being lazy or disrespectful, and refusing to share, but do not include respecting a taboo, being kind and deferential, and standing on one's head. The Ifaluk construe the first three as similar because they evoke the same affect program -- they are perceived as affronts. That makes it easier to learn that they call for the same reaction and makes it more likely that those three would be lumped together as the acceptable triggers for a single emotion.
The moral, then, is that familiar categories of behavior -- marriage customs, food taboos, folk superstitions, and so on -- certainly do vary across cultures and have to be learned, but the deeper mechanisms of mental computation that generate them may be universal and innate. People may dress differently, but they may all strive to flaunt their status via their appearance. They may respect the rights of the members of their clan exclusively or they may extend that respect to everyone in their tribe, nation-state, or species, but all divide the world into an in-group and an out-group.
They may differ in which outcomes they attribute to the intentions of conscious beings, some allowing only that artifacts are deliberately crafted, others believing that illnesses come from magical spells cast by enemies, still others believing that the entire world was brought into being by a creator. But all of them explain certain events by invoking the existence of entities with minds that strive to bring about goals. The behaviorists got it backwards: it is the mind, not behavior, that is lawful.
A fifth idea: The mind is a complex system composed of many interacting parts. The psychologists who study emotions in different cultures have made another important discovery. Candid facial expressions appear to be the same everywhere, but people in some cultures learn to keep a poker face in polite company. 23 A simple explanation is that the affect programs fire up facial expressions in the same way in all people, but a separate system of "display rules" governs when they can be shown. {40}
The difference between these two mechanisms underscores another insight of the cognitive revolution. Before the revolution, commentators invoked enormous black boxes such as "the intellect" or "the understanding," and they made sweeping pronouncements about human nature, such as that we are essentially noble or essentially nasty. But we now know that the mind is not a homogeneous orb invested with unitary powers or across-the-board traits. The mind is modular, with many parts cooperating to generate a train of thought or an organized action. It has distinct information-processing systems for filtering out distractions, learning skills, controlling the body, remembering facts, holding information temporarily, and storing and executing rules. Cutting across these data-processing systems are mental faculties (sometimes called multiple intelligences) dedicated to different kinds of content, such as language, number, space, tools, and living things. Cognitive scientists at the East Pole suspect that the content-based modules are differentiated largely by the genes;24 those at the West Pole suspect they begin as small innate biases in attention and then coagulate out of statistical patterns in the sensory input. 25 But those at both poles agree that the brain is not a uniform meatloaf. Still another layer of information-processing systems can be found in the affect programs, that is, the systems for motivation and emotion.
The upshot is that an urge or habit coming out of one module can be translated into behavior in different ways -- or suppressed altogether -- by some other module. To take a simple example, cognitive psychologists believe that a module called the "habit system" underlies our tendency to produce certain responses habitually, such as responding to a printed word by pronouncing it silently. But another module, called the "supervisory attention system," can override it and focus on the information relevant to a stated problem, such as naming the color of the ink the word is printed in, or thinking up an action that goes with the word. 26 More generally, the interplay of mental systems can
explain how people can entertain revenge fantasies that they never act on, or can commit adultery only in their hearts. In this way the theory of human nature coming out of the cognitive revolution has more in common with the Judeo-Christian theory of human nature, and with the psychoanalytic theory proposed by Sigmund Freud, than with behaviorism, social constructionism, and other versions of the Blank Slate. Behavior is not just emitted or elicited, nor does it come directly out of culture or society. It comes from an internal struggle among mental modules with differing agendas and goals.
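A toy sketch of that kind of override, using the module labels from the passage above but an entirely invented structure, might look like this: the habit system's default is to read the printed word, and the supervisory attention system overrides it when the stated task is to name the ink color instead.

    def habit_system(stimulus):
        # Default, well-practiced response: pronounce the printed word.
        return stimulus["word"]

    def supervisory_attention_system(stimulus, task):
        if task == "name ink color":
            return stimulus["ink"]           # override the habitual response
        return habit_system(stimulus)        # otherwise let the habit run

    stimulus = {"word": "RED", "ink": "green"}
    print(supervisory_attention_system(stimulus, task="read the word"))   # RED
    print(supervisory_attention_system(stimulus, task="name ink color"))  # green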
The idea from the cognitive revolution that the mind is a system of universal, generative computational modules obliterates the way that debates on human nature have been framed for centuries. It is now simply misguided to ask whether humans are flexible or programmed, whether behavior is universal or varies across cultures, whether acts are learned or innate, whether we are essentially good or essentially evil. Humans behave flexibly because they are
{41} programmed: their minds are packed with combinatorial software that can generate an unlimited set of thoughts and behavior. Behavior may vary across cultures, but the design of the mental programs that generate it need not vary. Intelligent behavior is learned successfully because we have innate systems that do the learning. And all people may have good and evil motives, but not everyone may translate them into behavior in the same way.
The second bridge between mind and matter is neuroscience, especially cognitive neuroscience, the study of how cognition and emotion are implemented in the brain. 27 Francis Crick wrote a book about the brain called The Astonishing Hypothesis, alluding to the idea that all our thoughts and feelings, joys and aches, dreams and wishes consist in the physiological activity of the brain. 28 Jaded neuroscientists, who take the idea for granted, snickered at the title, but Crick was right: the hypothesis is astonishing to most people the first time they stop to ponder it. Who cannot sympathize with the imprisoned Dmitri Karamazov as he tries to make sense of what he has just learned from a visiting academic?
Imagine: inside, in the nerves, in the head -- that is, these nerves are there in the brain . . . (damn them! ) there are sort of little tails, the little tails of those nerves, and as soon as they begin quivering . . . that is, you see, I look at something with my eyes and then they begin quivering, those little tails . . . and when they quiver, then an image appears . . . it doesn't appear at once, but an instant, a second, passes . . . and then something like a moment appears; that is, not a moment -- devil take the moment! -- but an image; that is, an object, or an action, damn it! That's why I see and then think, because of those tails, not at all because I've got a soul, and that I am some sort of image and likeness. All that is nonsense! Rakitin explained it all to me yesterday, brother, and it simply bowled me over. It's magnificent, Alyosha, this science! A new man's arising -- that I understand. . . . And yet I am sorry to lose God! 29
Dostoevsky's prescience is itself astonishing, because in 1880 only the rudiments of neural functioning were understood, and a reasonable person could have doubted that all experience arises from quivering nerve tails. But no longer. One can say that the information-processing activity of the brain causes the mind, or one can say that it is the mind, but in either case the evidence is overwhelming that every aspect of our mental lives depends entirely on physiological events in the tissues of the brain.
When a surgeon sends an electrical current into the brain, the person can have a vivid, lifelike experience. When chemicals seep into the brain, they can alter the person's perception, mood, personality, and reasoning. When a patch
{42} of brain tissue dies, a part of the mind can disappear: a neurological patient may lose the ability to name tools, recognize faces, anticipate the outcome of his behavior, empathize with others, or keep in mind a region of space or of his own body. (Descartes was thus wrong when he said that "the mind is entirely indivisible" and concluded that it must be completely different from the body. ) Every emotion and thought gives off physical signals, and the new technologies for detecting them are so accurate that they can literally read a person's mind and tell a cognitive neuroscientist whether the person is imagining a face or a place. Neuroscientists can knock a gene out of a mouse (a gene also found in humans) and prevent the mouse from learning, or insert extra copies and make the mouse learn faster. Under the microscope, brain tissue shows a staggering complexity -- a hundred billion neurons connected by a hundred trillion synapses -- that is commensurate with the staggering complexity of human thought and experience. Neural network modelers have begun to show how the building blocks of mental computation, such as storing and retrieving a pattern, can be implemented in neural circuitry. And when the brain dies, the person goes out of existence. Despite concerted efforts by Alfred Russel Wallace and other Victorian scientists, it is apparently not possible to communicate with the dead.
Educated people, of course, know that perception, cognition, language, and emotion are rooted in the brain. But it is still tempting to think of the brain as it was shown in old educational cartoons, as a control panel with gauges and levers operated by a user -- the self, the soul, the ghost, the person, the "me. " But cognitive neuroscience is showing
that the self, too, is just another network of brain systems.
The first hint came from Phineas Gage, the nineteenth-century railroad worker familiar to generations of psychology students. Gage was using a yard-long spike to tamp explosive powder into a hole in a rock when a spark ignited the powder and sent the spike into his cheekbone, through his brain, and out the top of his skull. Phineas survived with his perception, memory, language, and motor functions intact. But in the famous understatement of a co-worker, "Gage was no longer Gage. " A piece of iron had literally turned him into a different person, from courteous, responsible, and ambitious to rude, unreliable, and shiftless. It did this by impaling his ventromedial prefrontal cortex, the region of the brain above the eyes now known to be involved in reasoning about other people. Together with other areas of the prefrontal lobes and the limbic system (the seat of the emotions), it anticipates the consequences of one's actions and selects behavior consonant with one's goals. 30
Cognitive neuroscientists have not only exorcised the ghost but have shown that the brain does not even have a part that does exactly what the ghost is supposed to do: review all the facts and make a decision for the rest of the brain to carry out. 31 Each of us feels that there is a single "I" in control. But that {43} is an illusion that the brain works hard to produce, like the impression that our visual fields are rich in detail from edge to edge. (In fact, we are blind to detail outside the fixation point. We quickly move our eyes to whatever looks interesting, and that fools us into thinking that the detail was there all along. ) The brain does have supervisory systems in the prefrontal lobes and anterior cingulate cortex, which can push the buttons of behavior and override habits and urges. But those systems are gadgets with specific quirks and limitations; they are not implementations of the rational free agent traditionally identified with the soul or the self.
One of the most dramatic demonstrations of the illusion of the unified self comes from the neuroscientists Michael Gazzaniga and Roger Sperry, who showed that when surgeons cut the corpus callosum joining the cerebral hemispheres, they literally cut the self in two, and each hemisphere can exercise free will without the other one's advice or consent. Even more disconcertingly, the left hemisphere constantly weaves a coherent but false account of the behavior chosen without its knowledge by the right. For example, if an experimenter flashes the command "WALK" to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), the person will comply with the request and begin to walk out of the room. But when the person (specifically, the person's left hemisphere) is asked why he just got up, he will say, in all sincerity, "To get a Coke" -- rather than "I don't really know" or "The urge just came over me" or "You've been testing me for years since I had the surgery, and sometimes you get me to do things but I don't know exactly what you asked me to do. " Similarly, if the patient's left hemisphere is shown a chicken and his right hemisphere is shown a snowfall, and both hemispheres have to select a picture that goes with what they see (each using a different hand), the left hemisphere picks a claw (correctly) and the right picks a shovel (also correctly). But when the left hemisphere is asked why the whole person made those choices, it blithely says, "Oh, that's simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed. "32
The spooky part is that we have no reason to think that the baloney-generator in the patient's left hemisphere is behaving any differently from ours as we make sense of the inclinations emanating from the rest of our brains. The conscious mind -- the self or soul -- is a spin doctor, not the commander in chief. Sigmund Freud immodestly wrote that "humanity has in the course of time had to endure from the hands of science three great outrages upon its naïve self-love": the discovery that our world is not the center of the celestial spheres but rather a speck in a vast universe, the discovery that we were not specially created but instead descended from animals, and the discovery that often our conscious minds do not control how we act but merely tell us a story about our actions. He was right about the cumulative impact, but it was {44} cognitive neuroscience rather than psychoanalysis that conclusively delivered the third blow.
Cognitive neuroscience is undermining not just the Ghost in the Machine but also the Noble Savage. Damage to the frontal lobes does not only dull the person or subtract from his behavioral repertoire but can unleash aggressive attacks. 33 That happens because the damaged lobes no longer serve as inhibitory brakes on parts of the limbic system, particularly a circuit that links the amygdala to the hypothalamus via a pathway called the stria terminalis. Connections between the frontal lobe in each hemisphere and the limbic system provide a lever by which a person's knowledge and goals can override other mechanisms, and among those mechanisms appears to be one designed to generate behavior that harms other people. 34
Nor is the physical structure of the brain a blank slate. In the mid-nineteenth century the neurologist Paul Broca discovered that the folds and wrinkles of the cerebral cortex do not squiggle randomly like fingerprints but have a recognizable geometry. Indeed, the arrangement is so consistent from brain to brain that each fold and wrinkle can be given a name. Since that time neuroscientists have discovered that the gross anatomy of the brain -- the sizes, shapes, and connectivity of its lobes and nuclei, and the basic plan of the cerebral cortex -- is largely shaped by the genes in normal prenatal development. 35 So is the quantity of gray matter in the different regions of the brains of
different people, including the regions that underlie language and reasoning. 36
This innate geometry and cabling can have real consequences for thinking, feeling, and behavior. As we shall see in a later chapter, babies who suffer damage to particular areas of the brain often grow up with permanent deficits in particular mental faculties. And people born with variations on the typical plan have variations in the way their minds work. According to a recent study of the brains of identical and fraternal twins, differences in the amount of gray matter in the frontal lobes are not only genetically influenced but are significantly correlated with differences in intelligence. 37 A study of Albert Einstein's brain revealed that he had large, unusually shaped inferior parietal lobules, which participate in spatial reasoning and intuitions about number. 38 Gay men are likely to have a smaller third interstitial nucleus in the anterior hypothalamus, a nucleus known to have a role in sex differences. 39 And convicted murderers and other violent, antisocial people are likely to have a smaller and less active prefrontal cortex, the part of the brain that governs decision making and inhibits impulses. 40 These gross features of the brain are almost certainly not sculpted by information coming in from the senses, which implies that differences in intelligence, scientific genius, sexual orientation, and impulsive violence are not entirely learned.
Indeed, until recently the innateness of brain structure was an embarrassment {45} for neuroscience. The brain could not possibly be wired by the genes down to the last synapse, because there isn't nearly enough information in the genome to do so. And we know that people learn throughout their lives, and products of that learning have to be stored in the brain somehow. Unless you believe in a ghost in the machine, everything a person learns has to affect some part of the brain; more accurately, learning is a change in some part of the brain. But it was difficult to find the features of the brain that reflected those changes amid all that innate structure. Becoming stronger in math or motor coordination or visual discrimination does not bulk up the brain the way becoming stronger at weightlifting bulks up the muscles.
Now, at last, neuroscience is beginning to catch up with psychology by discovering changes in the brain that underlie learning. As we shall see, the boundaries between swatches of cortex devoted to different body parts, talents, and even physical senses can be adjusted by learning and practice. Some neuroscientists are so excited by these discoveries that they are trying to push the pendulum in the other direction, emphasizing the plasticity of the cerebral cortex. But for reasons that I will review in Chapter 5, most neuroscientists believe that these changes take place within a matrix of genetically organized structure. There is much we don't understand about how the brain is laid out in development, but we know that it is not indefinitely malleable by experience.
THE THIRD BRIDGE between the biological and the mental is behavioral genetics, the study of how genes affect behavior. 41 All the potential for thinking, learning, and feeling that distinguishes humans from other animals lies in
the information contained in the DNA of the fertilized ovum. This is most obvious when we compare species. Chimpanzees brought up in a human home do not speak, think, or act like people, and that is because of the information in the ten megabytes of DNA that differ between us. Even the two species of chimpanzees, common chimps and bonobos, which differ in just a few tenths of one percent of their genomes, part company in their behavior, as zookeepers first discovered when they inadvertently mixed the two. Common chimps are among the most aggressive mammals known to zoology, bonobos among the most peaceable; in common chimps the males dominate the females, in bonobos the females have the upper hand; common chimps have sex for procreation, bonobos for recreation. Small differences in the genes can lead to large differences in behavior. They can affect the size and shape of the different parts of the brain, their wiring, and the nanotechnology that releases, binds, and recycles hormones and neurotransmitters.
The importance of genes in organizing the normal brain is underscored by the many ways in which nonstandard genes can give rise to nonstandard minds. When I was an undergraduate an exam question in Abnormal Psychology asked, "What is the best predictor that a person will become schizophrenic? " {46} The answer was, "Having an identical twin who is schizophrenic. " At the time it was a trick question, because the reigning theories of schizophrenia pointed to societal stress, "schizophrenogenic mothers," double binds, and other life experiences (none of which turned out to have much, if any, importance); hardly anyone thought about genes as a possible cause. But even then the evidence was there: schizophrenia is highly concordant within pairs of identical twins, who share all their DNA and most of their environment, but far less concordant within pairs of fraternal twins, who share only half their DNA (of the DNA that varies in the population) and most of their environment. The trick question could be asked -- and would have the same answer -- for virtually every cognitive and emotional disorder or difference ever observed. Autism, dyslexia, language delay, language impairment, learning disability, left-handedness, major depressions, bipolar illness, obsessive-compulsive disorder, sexual orientation, and many other conditions run in families, are more concordant in identical than in fraternal twins, are better predicted by people's biological relatives than by their adoptive relatives, and are poorly predicted by any measurable feature of the environment. 42
Genes not only push us toward exceptional conditions of mental functioning but scatter us within the normal range, producing much of the variation in ability and temperament that we notice in the people around us. The famous Chas Addams cartoon from The New Yorker is only a slight exaggeration:
[Cartoon: (C) The New Yorker Collection 1981, Charles Addams, from cartoonbank.com. All rights reserved.]
Identical twins think and feel in such similar ways that they sometimes suspect they are linked by telepathy. When separated at birth and reunited as adults they say they feel they have known each other all their lives. Testing confirms that identical twins, whether separated at birth or not, are eerily alike (though far from identical) in just about any trait one can measure. They are similar in verbal, mathematical, and general intelligence, in their degree of life satisfaction, and in personality traits such as introversion, agreeableness, neuroticism, conscientiousness, and openness to experience. They have similar attitudes toward controversial issues such as the death penalty, religion, and modern music. They resemble each other not just in paper-and-pencil tests but in consequential behavior such as gambling, divorcing, committing crimes, getting into accidents, and watching television. And they boast dozens of shared idiosyncrasies such as giggling incessantly, giving interminable answers to simple questions, dipping buttered toast in coffee, and -- in the case of Abigail van Buren and Ann Landers -- writing indistinguishable syndicated advice columns. The crags and valleys of their electroencephalograms (brainwaves) are as alike as those of a single person recorded on two occasions, and the wrinkles of their brains and distribution of gray matter across cortical areas are also similar. 43
The effects of differences in genes on differences in minds can be measured, and the same rough estimate -- substantially greater than zero, but substantially less than 100 percent -- pops out of the data no matter what measuring stick is used. Identical twins are far more similar than fraternal twins, whether they are raised apart or together; identical twins raised apart are highly similar; biological siblings, whether raised together or apart, are far more similar than adoptive siblings. Many of these conclusions come from massive studies in Scandinavian countries where governments keep huge databases on their citizens, and they employ the best-validated measuring instruments known to psychology. Skeptics have offered alternative explanations that try to push the effects of the genes to zero -- they suggest that identical twins separated at birth may have been placed in similar adoptive homes, that they may have contacted each other before being tested, that they look alike and hence may have been treated alike, and that they shared a womb in addition to their genes. But as we shall see in the chapter on children, these explanations have all been tested and rejected. Recently a new kind of evidence may be piled on the heap. "Virtual twins" are the mirror
{47} image of identical twins raised apart: they are unrelated siblings, one or both adopted, who are raised together from infancy. Though they are the same age and are growing up in the same family, the psychologist Nancy Segal found that their IQ scores are barely correlated. 44 One father in the study said that despite efforts to treat them alike, the virtual twins are "like night and day."
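For readers who want to see how such twin correlations translate into the rough estimates mentioned above, here is a small worked example using Falconer's classic first-pass formulas. The correlations are invented for illustration, and the studies cited in this chapter typically rely on more elaborate statistical models than this.

    # Falconer-style decomposition from twin correlations (illustrative numbers only).
    r_mz = 0.75   # hypothetical identical-twin correlation on some trait
    r_dz = 0.45   # hypothetical fraternal-twin correlation on the same trait

    heritability       = 2 * (r_mz - r_dz)    # h^2: variance attributable to genes
    shared_environment = r_mz - heritability  # c^2: family environment the twins share
    unique_environment = 1 - r_mz             # e^2: everything else, plus measurement error

    print(f"h^2 = {heritability:.2f}, c^2 = {shared_environment:.2f}, e^2 = {unique_environment:.2f}")
    # With these made-up numbers: h^2 = 0.60, c^2 = 0.15, e^2 = 0.25 --
    # substantially greater than zero, but substantially less than 100 percent.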
