This accords with a common view of contemporary entertainment and politics as being all a matter of "image," or appearance, rather than of substantial content.
Actually, the doctrine of hyperreality contradicts the common view of contemporary politics and entertainment as being a matter of image and appearance. The whole point of the common view is that there is a reality separate from images, and that is what allows us to decry the images that are misleading. We can, for example, criticize an old movie that shows slaves leading happy lives, or an ad that shows a corrupt politician pretending to defend the environment. If there were no such thing as substantial content, we would have no basis for preferring an accurate documentary about slavery to an apologia for it, or preferring a good exposé of a politician to a slick campaign ad. The entry notes that images are associated with the world of publicity, advertising, and fashion, and thereby with business and profits. An image may thus be tied to "an imposed stereotype or an alternative subjective or cultural identity." Media images become mental images: people cannot help but think that women or politicians or African Americans conform to the depictions in movies and advertisements. And this elevates cultural studies and postmodernist art into forces for personal and political liberation:
The study of "images of women" or "women's images" sees this field as one in which stereotypes of women can be reinforced, parodied, or actively contested through critical analysis, alternative histories, or creative work in writing and the media committed to the production of positive counter-images. 39
I have not hidden my view that this entire line of thinking is a conceptual mess. If we want to understand how politicians or advertisers manipulate us, the last thing we should do is blur distinctions among things in the world, our perception of those things when they are in front of our eyes, the mental images of those things that we construct from memory, and physical images such as photographs and drawings.
As we saw at the beginning of this chapter, the visual brain is an immensely complicated system that was designed by the forces of evolution to give us an accurate reading of the consequential things in front of us. The "intelligent eye," as perceptual psychologists call it, does not just compute the shapes and motions of people before us. It also guesses their thoughts and intentions by noticing how they gaze at, approach, avoid, help, or hinder other objects and people. And these guesses are then measured against everything else we know about people -- what we infer from gossip, from a person's words and deeds, and from Sherlock Holmes-style deductions. The result is the knowledge base or semantic memory that also underlies our use of language.
Physical images such as photographs and paintings are devices that reflect light in patterns similar to those coming off real objects, thereby making the visual system respond as if it were really seeing those objects. Though people have long dreamed of illusions that completely fool the brain -- Descartes's evil demon, the philosopher's thought experiment in which a person does not realize he is a brain in a vat, the science-fiction writer's prophecy of perfect virtual reality like that in The Matrix -- in actuality the illusions foisted upon us by physical images are never more than partially effective. Our perceptual systems pick up on the imperfections of an image -- the brush strokes, pixels, or frame -- and our conceptual systems pick up on the fact that we are entertaining a hypothetical world that is separate from the real world. It's not that people invariably distinguish fiction from reality: they can lose themselves in fiction, or misremember something they read in a novel as something they read in the newspapers or that happened to a friend, or mistakenly believe that a stylized portrayal of a time and place is an accurate portrayal. But all of us are capable of distinguishing fictitious worlds from real ones, as we see when a two-year-old pretends that a banana is a telephone for the fun of it but at the same time understands that a banana is not literally a telephone. 40 Cognitive scientists believe that the ability to entertain propositions without necessarily believing them -- to distinguish "John believes there is a Santa Claus" from "There is a Santa Claus" -- is a fundamental ability of human cognition. 41 Many believe that a breakdown of this ability underlies the thought disorder in the syndrome called schizophrenia. 42
Finally, there are mental images, the visualizations of objects and scenes in the mind's eye. The psychologist Stephen Kosslyn has shown that the brain is equipped with a system capable of reactivating and manipulating memories of perceptual experience, a bit like Photoshop with its tools for assembling, rotating, and coloring images. 43 Like language, imagery may be used as a slave system -- a "visuospatial sketchpad" -- by the central executive of the brain, making it a valuable form of mental representation. We use mental imagery, for example, when we visualize how a chair might fit in a living room or whether a sweater would look good on a relative. Imagery is also an invaluable tool to novelists, who imagine scenes before describing them in words, and to scientists, who rotate molecules or play out forces and motions in their imagination.
Though mental images allow our experiences (including our experience of media images) to affect our thoughts and attitudes long after the original objects have gone, it is a mistake to think that raw images are downloaded into our minds and then constitute our mental lives. Images are not stored in the mind like snapshots in a shoebox; if they
were, how could you ever find the one you want? Rather, they are labeled and linked to a vast database of knowledge, which allows them to be evaluated and interpreted in terms of what they stand for. 44 Chess masters, for
example, are famous for their ability to remember games in progress, but their mental images of the board are not raw photographs. Rather, they are saturated with abstract information about the game, such as which piece is threatening which other one and which clusters of pieces form viable defenses. We know this because when a chessboard is sprinkled with pieces at random, chess masters are no better at remembering the arrangement than amateurs are. 45 When images represent real people, not just chessmen, there are even more possibilities for organizing and annotating them with information about people's goals and motives -- for example, whether the person in an image is sincere or just acting.
The reason that images cannot constitute the contents of our thoughts is that images, like words, are inherently ambiguous. An image of Lassie could stand for Lassie, collies, dogs, animals, television stars, or family values. Some other, more abstract form of information must pick out the concept that an image is taken to exemplify. Or consider the sentence Yesterday my uncle fired his lawyer (an example suggested by Dan Dennett). When understanding the sentence, Brad might visualize his own ordeals of the day before and glimpse the "uncle" slot in a family tree, then picture courthouse steps and an angry man. Irene might have no image for "yesterday" but might visualize her uncle Bob's face, a slamming door, and a power-suited woman. Yet despite these very different image sequences, both people have understood the sentence in the same way, as we could see by questioning them or asking them to paraphrase the sentence. "Imagery couldn't be the key to comprehension," Dennett points out, "because you can't draw a picture of an uncle, or of yesterday, or firing, or a lawyer. Uncles, unlike clowns and firemen, don't look different in any characteristic way that can be visually represented, and yesterdays don't look like anything at all."46 Since images are interpreted in the context of a deeper understanding of people and their relationships, the "crisis of representation," with its paranoia about the manipulation of our minds by media images, is overblown. People are not helplessly programmed with images; they can evaluate and interpret what they see using everything else they know, such as the credibility and motives of the source.
The postmodernist equating of images with thoughts has not only made a hash of several scholarly disciplines but has laid waste to the world of contemporary art. If images are the disease, the reasoning goes, then art is the cure. Artists can neutralize the power of media images by distorting them or reproducing them in odd contexts (like the ad parodies in Mad magazine or on Saturday Night Live, only not funny). Anyone familiar with contemporary art has seen the countless works in which stereotypes of women, minorities, or gay people are "reinforced, parodied, or actively contested." A prototypical example is a 1994 exhibit at the Whitney Museum in New York called "Black Male: Representations of Masculinity in Contemporary Art." It aimed to take apart the way that African American men are culturally constructed in demonizing and marginalizing visual stereotypes such as the sex symbol, the athlete, the Sambo, and the photograph in a Wanted poster. According to the catalogue essay, "The real struggle is over the power to control images." The art critic Adam Gopnik (whose mother and sister are cognitive scientists) called attention to the simplistic theory of cognition behind this tedious formula:
The show is intended to be socially therapeutic: its aim is to make you face the socially constructed images of black men, so that by confronting them -- or, rather, seeing artists confront them on your behalf -- you can make them go away. The trouble is that the entire enterprise of "disassembling social images" rests on an ambiguity in the way we use the word "image." Mental images are not really images at all, but instead consist of complicated opinions, positions, doubts, and passionately held convictions, rooted in experience and amendable by argument, by more experience, or by coercion. Our mental images of black men, white judges, the press, and so on do not take the form of pictures of the kind that you can hang up (or "deconstruct") on a museum wall.... Hitler did not hate Jews because there were pictures of swarthy Semites with big noses imprinted on his cerebellum; racism does not exist in America because the picture of O. J. Simpson on the cover of Time is too dark. The view that visual clichés shape beliefs is both too pessimistic, in that it supposes that people are helplessly imprisoned by received stereotypes, and too optimistic, in that it supposes that if you could change the images you could change the beliefs. 47
Recognizing that we are equipped with sophisticated faculties that keep us in touch with reality does not entail ignoring the ways in which our faculties can be turned against us. People lie, sometimes baldly, sometimes through insinuation and presupposition (as in the question "When did you stop beating your wife?"). People disseminate disinformation about ethnic groups, not just pejorative stereotypes but tales of exploitation and perfidy that serve to stoke moralistic outrage against them. People try to manipulate social realities like status (which exist in the mind of the beholder) to make themselves look good or to sell products.
But we can best protect ourselves against such manipulation by pinpointing the vulnerabilities of our faculties of
categorization, language, and imagery, not by denying their complexity. The view that humans are passive receptacles of stereotypes, words, and images is condescending to ordinary people and gives unearned importance to the pretensions of cultural and academic elites. And exotic pronouncements about the limitations of our faculties, such as that there is nothing outside the text or that we inhabit a world of images rather than a real world, make it impossible even to identify lies and misrepresentations, let alone to understand how they are promulgated.
Chapter 13
Out of Our Depths
A man has got to know his limitations. -- Clint Eastwood in Magnum Force
Most people are familiar with the idea that some of our ordeals come from a mismatch between the source of our passions in evolutionary history and the goals we set for ourselves today. People gorge themselves in anticipation of a famine that never comes, engage in dangerous liaisons that conceive babies they don't want, and rev up their bodies in response to stressors from which they cannot run away.
What is true for the emotions may also be true for the intellect. Some of our perplexities may come from a mismatch between the purposes for which our cognitive faculties evolved and the purposes to which we put them today. This is obvious enough when it comes to raw data processing. People do not try to multiply six-digit numbers in their heads or remember the phone number of everyone they meet, because they know their minds were not designed for the job. But it is not as obvious when it comes to the way we conceptualize the world. Our minds keep us in touch with aspects of reality -- such as objects, animals, and people -- that our ancestors dealt with for millions of years. But as science and technology open up new and hidden worlds, our untutored intuitions may find themselves at sea.
What are these intuitions? Many cognitive scientists believe that human reasoning is not accomplished by a single, general-purpose computer in the head. The world is a heterogeneous place, and we are equipped with different kinds of intuitions and logics, each appropriate to one department of reality. These ways of knowing have been called systems, modules, stances, faculties, mental organs, multiple intelligences, and reasoning engines. 1 They emerge early in life, are present in every normal person, and appear to be computed in partly distinct sets of networks in the brain. They may be installed by different combinations of genes, or they may emerge when brain tissue self-organizes in response to different problems to be solved and different patterns in the sensory input. Most likely they develop by some combination of these forces.
What makes our reasoning faculties different from the departments in a university is that they are not just broad areas of knowledge, analyzed with whatever tools work best. Each faculty is based on a core intuition that was suitable for analyzing the world in which we evolved. Though cognitive scientists have not agreed on a Gray's Anatomy of the mind, here is a tentative but defensible list of cognitive faculties and the core intuitions on which they are based:
• An intuitive physics, which we use to keep track of how objects fall, bounce, and bend. Its core intuition is the concept of the object, which occupies one place, exists for a continuous span of time, and follows laws of motion and force. These are not Newton's laws but something closer to the medieval conception of impetus, an "oomph" that keeps an object in motion and gradually dissipates. 2
• An intuitive version of biology or natural history, which we use to understand the living world. Its core intuition is that living things house a hidden essence that gives them their form and powers and drives their growth and bodily functions. 3
• An intuitive engineering, which we use to make and understand tools and other artifacts. Its core intuition is that a tool is an object with a purpose -- an object designed by a person to achieve a goal. 4
• An intuitive psychology, which we use to understand other people. Its core intuition is that other people are not objects or machines but are animated by the invisible entity we call the mind or the soul. Minds contain beliefs and desires and are the immediate cause of behavior.
• A spatial sense, which we use to navigate the world and keep track of where things are. It is based on a dead reckoner, which updates coordinates of the body's location as it moves and turns (a minimal sketch of such a dead reckoner follows this list), and a network of mental maps. Each map is organized by a different reference frame: the eyes, the head, the body, or salient objects and places in the world. 5
• A number sense, which we use to think about quantities and amounts. It is based on an ability to register exact quantities for small numbers of objects (one, two, and three) and to make rough relative estimates for larger numbers. 6
• A sense of probability, which we use to reason about the likelihood of uncertain events. It is based on the ability to track the relative frequencies of events, that is, the proportion of events of some kind that turn out one way or the other. 7
• An intuitive economics, which we use to exchange goods and favors. It is based on the concept of reciprocal exchange, in which one party confers a benefit on another and is entitled to an equivalent benefit in return.
• A mental database and logic, which we use to represent ideas and to infer new ideas from old ones. It is based on assertions about what's what, what's where, or who did what to whom, when, where, and why. The assertions are linked in a mind-wide web and can be recombined with logical and causal operators such as and, or, not, all, some, necessary, possible, and cause. 8
• Language, which we use to share the ideas from our mental logic. It is based on a mental dictionary of memorized words and a mental grammar of combinatorial rules. The rules organize vowels and consonants into words, words into bigger words and phrases, and phrases into sentences, in such a way that the meaning of the combination can be computed from the meanings of the parts and the way they are arranged. 9
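The dead reckoner in the spatial-sense entry above can be made concrete in a few lines. Here is a minimal sketch in Python (the movement commands and function names are my own illustration, not anything from the text): a navigator that estimates where it is purely by integrating its own steps and turns, with no external landmarks.

    import math

    # Hypothetical dead reckoner: position and heading are updated
    # from self-motion alone, with no landmarks or external fixes.
    x, y = 0.0, 0.0      # current position estimate
    heading = 0.0        # direction faced, in radians (0 = east)

    def turn(angle):
        """Rotate the body by `angle` radians (positive = counterclockwise)."""
        global heading
        heading += angle

    def step(distance):
        """Walk `distance` units along the current heading."""
        global x, y
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)

    # Walk a square: four legs with a quarter turn after each.
    for _ in range(4):
        step(10.0)
        turn(math.pi / 2)

    print(round(x, 6), round(y, 6))  # back near the start, (0.0, 0.0)

Errors in such a system compound with every step, which is presumably one reason the brain supplements its dead reckoner with landmark-based mental maps.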
The mind also has components for which it is hard to tell where cognition leaves off and emotion begins. These include a system for assessing danger, coupled with the emotion called fear, a system for assessing contamination, coupled with the emotion called disgust, and a moral sense, which is complex enough to deserve a chapter of its own. These ways of knowing and core intuitions are suitable for the lifestyle of small groups of illiterate, stateless people who live off the land, survive by their wits, and depend on what they can carry. Our ancestors left this lifestyle for a settled existence only a few millennia ago, too recently for evolution to have done much, if anything, to our brains. Conspicuous by their absence are faculties suited to the stunning new understanding of the world wrought by science and technology. For many domains of knowledge, the mind could not have evolved dedicated machinery, the brain and genome show no hints of specialization, and people show no spontaneous intuitive understanding either in the crib or afterward. They include modern physics, cosmology, genetics, evolution, neuroscience, embryology, economics, and mathematics.
It's not just that we have to go to school or read books to learn these subjects. It's that we have no mental tools to grasp them intuitively. We depend on analogies that press an old mental faculty into service, or on jerry-built mental contraptions that wire together bits and pieces of other faculties. Understanding in these domains is likely to be uneven, shallow, and contaminated by primitive intuitions. And that can shape debates in the border disputes in which science and technology make contact with everyday life. The point of this chapter is that together with all the moral, empirical, and political factors that go into these debates, we should add the cognitive factors: the way our
minds naturally frame issues. Our own cognitive makeup is a missing piece of many puzzles, including education, bioethics, food safety, economics, and human understanding itself.
~
The most obvious arena in which we confront native ways of thinking is the schoolhouse. Any theory of education must be based on a theory of human nature, and in the twentieth century that theory was often the Blank Slate or the Noble Savage.
Traditional education is based in large part on the Blank Slate: children come to school empty and have knowledge deposited in them, to be reproduced later on tests. (Critics of traditional education call this the "savings and loan" model.) The Blank Slate also underlies the common philosophy that the early school-age years are an opportunity zone in which social values are shaped for life. Many schools today use the early grades to instill desirable attitudes toward the environment, gender, sexuality, and ethnic diversity.
Progressive educational practice, for its part, is based on the Noble Savage. As A. S. Neill wrote in his influential book Summerhill, "A child is innately wise and realistic. If left to himself without adult suggestion of any kind, he will develop as far as he is capable of developing."10 Neill and other progressive theorists of the 1960s and 1970s argued that schools should do away with examinations, grades, curricula, and even books. Though few schools went that far, the movement left a mark on educational practice. In the method of reading instruction known as Whole Language, children are not taught which letter goes with which sound but are immersed in a book-rich environment where reading skills are expected to blossom spontaneously. 11 In the philosophy of mathematics instruction known as constructivism, children are not drilled with arithmetic tables but are enjoined to rediscover mathematical truths themselves by solving problems in groups. 12 Both methods fare badly when students' learning is assessed objectively, but advocates of the methods tend to disdain standardized testing.
An understanding of the mind as a complex system shaped by evolution runs against these philosophies. The alternative has emerged from the work of cognitive scientists such as Susan Carey, Howard Gardner, and David Geary. 13 Education is neither writing on a blank slate nor allowing the child's nobility to come into flower. Rather, education is a technology that tries to make up for what the human mind is innately bad at. Children don't have to go to school to learn to walk, talk, recognize objects, or remember the personalities of their friends, even though these tasks are much harder than reading, adding, or remembering dates in history. They do have to go to school to learn written language, arithmetic, and science, because those bodies of knowledge and skill were invented too recently for any species-wide knack for them to have evolved.
Far from being empty receptacles or universal learners, then, children are equipped with a toolbox of implements for reasoning and learning in particular ways, and those implements must be cleverly recruited to master problems for which they were not designed. That requires not just inserting new facts and skills in children's minds but debugging and disabling old ones. Students cannot learn Newtonian physics until they unlearn their intuitive impetus-based physics. 14 They cannot learn modern biology until they unlearn their intuitive biology, which thinks in terms of vital essences. And they cannot learn evolution until they unlearn their intuitive engineering, which attributes design to the intentions of a designer. 15
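The gap between the two physics can be seen in a toy simulation. In this sketch (my own illustration; the decay constant is an arbitrary choice), a Newtonian object coasting with no applied force keeps its velocity indefinitely, while an object governed by intuitive "impetus" loses its oomph and grinds to a halt:

    # Toy contrast between Newton's first law and intuitive "impetus" physics.
    # The decay rate below is an arbitrary value chosen for illustration.
    def coast(velocity, steps, impetus_decay=0.0, dt=1.0):
        """Positions of an object moving with no applied force.
        impetus_decay=0.0 conserves velocity (Newton); a positive
        value makes the 'oomph' dissipate a little each step."""
        position, path = 0.0, []
        for _ in range(steps):
            position += velocity * dt
            velocity *= 1.0 - impetus_decay   # the impetus drains away
            path.append(round(position, 1))
        return path

    print(coast(5.0, 6))                      # Newton: [5.0, 10.0, ..., 30.0]
    print(coast(5.0, 6, impetus_decay=0.5))   # impetus: [5.0, 7.5, 8.8, 9.4, 9.7, 9.8]

Unlearning, on this picture, means trading the default dissipation model for the conservation model.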
Schooling also requires pupils to expose and reinforce skills that are ordinarily buried in unconscious black boxes. When children learn to read, the vowels and consonants that are seamlessly woven together in speech must be forced into children's awareness before they can associate them with squiggles on a page. 16 Effective education may also require co-opting old faculties to deal with new demands. Snatches of language can be pressed into service to do calculation, as when we recall the stanza "Five times five is twenty-five."17 The logic of grammar can be used to grasp large numbers: the expression four thousand three hundred and fifty-seven has the grammatical structure of an English noun phrase like hat, coat, and mittens. When a student parses the number phrase she can call to mind the mental operation of aggregation, which is related to the mathematical operation of addition. 18 Spatial cognition is drafted into understanding mathematical relationships through the use of graphs, which turn data or equations into shapes. 19 Intuitive engineering supports the learning of anatomy and physiology (organs are understood as gadgets with functions), and intuitive physics supports the learning of chemistry and biology (stuff, including living stuff, is made out of tiny, bouncy, sticky objects). 20
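To see how the number-phrase parse above could cash out as computation, here is a minimal sketch (the grammar fragment and function name are my own invention, not from the text) in which the value of an English number phrase is computed from the values of its parts and the way they are combined, with aggregation playing the role of addition:

    # Hypothetical sketch: the value of an English number phrase computed
    # compositionally, the way grammar aggregates "hat, coat, and mittens."
    UNITS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
             "six": 6, "seven": 7, "eight": 8, "nine": 9}
    TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
            "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

    def number_phrase_value(phrase):
        """Multipliers scale the current group; completed groups are
        added to the running total (aggregation ~ addition)."""
        total, group = 0, 0
        for word in phrase.replace("-", " ").split():
            if word == "and":
                continue                 # "and" merely conjoins the parts
            elif word in UNITS:
                group += UNITS[word]
            elif word in TENS:
                group += TENS[word]
            elif word == "hundred":
                group *= 100             # "three hundred" = 3 x 100
            elif word == "thousand":
                total += group * 1000    # close off the thousands group
                group = 0
        return total + group

    print(number_phrase_value("four thousand three hundred and fifty-seven"))  # 4357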
Geary points out a final implication. Because much of the content of education is not cognitively natural, the process of mastering it may not always be easy and pleasant, notwithstanding the mantra that learning is fun. Children may be innately motivated to make friends, acquire status, hone motor skills, and explore the physical world, but they are not necessarily motivated to adapt their cognitive faculties to unnatural tasks like formal mathematics. A family, peer group, and culture that ascribe high status to school achievement may be needed to give a child the motive to persevere toward effortful feats of learning whose rewards are apparent only over the long term. 21
~
The layperson's intuitive psychology or "theory of mind" is one of the brain's most striking abilities. We do not treat other people as wind-up dolls but think of them as being animated by minds: nonphysical entities we cannot see or touch but that are as real to us as bodies and objects. Aside from allowing us to predict people's behavior from their beliefs and desires, our theory of mind is tied to our ability to empathize and to our conception of life and death. The difference between a dead body and a living one is that a dead body no longer contains the vital force we call a mind. Our theory of mind is the source of the concept of the soul. The ghost in the machine is deeply rooted in our way of thinking about people.
A belief in the soul, in turn, meshes with our moral convictions. The core of morality is the recognition that others have interests as we do -- that they "feel want, taste grief, need friends," as Shakespeare put it -- and therefore that they have a right to life, liberty, and the pursuit of their interests. But who are those "others"? We need a boundary that allows us to be callous to rocks and plants but forces us to treat other humans as "persons" that possess inalienable rights. Otherwise, it seems, we would place ourselves on a slippery slope that ends in the disposal of inconvenient people or in grotesque deliberations on the value of individual lives. As Pope John Paul II pointed out, the notion that every human carries infinite value by virtue of possessing a soul would seem to give us that boundary. Until recently the intuitive concept of the soul served us pretty well. Living people had souls, which come into existence at the moment of conception and leave their bodies when they die. Animals, plants, and inanimate objects do not have souls at all. But science is showing that what we call the soul -- the locus of sentience, reason, and will -- consists of the information-processing activity of the brain, an organ governed by the laws of biology. In an individual person it comes into existence gradually through the differentiation of tissues growing from a single cell.
In the species it came into existence gradually as the forces of evolution modified the brains of simpler animals. And though our concept of souls used to fit pretty well with natural phenomena -- a woman was either pregnant or not, a person was either dead or alive -- biomedical research is now presenting us with cases where the two are out of register. These cases are not just scientific curiosities but are intertwined with pressing issues such as contraception, abortion, infanticide, animal rights, cloning, euthanasia, and research involving human embryos, especially the harvesting of stem cells.
In the face of these difficult choices it is tempting to look to biology to find or ratify boundaries such as "when life begins." But that only highlights the clash between two incommensurable ways of conceiving life and mind. The intuitive and morally useful concept of an immaterial spirit simply cannot be reconciled with the scientific concept of brain activity emerging gradually in ontogeny and phylogeny. No matter where we try to draw the line between life and nonlife, or between mind and nonmind, ambiguous cases pop up to challenge our moral intuitions.
The closest event we can find to a thunderclap marking the entry of a soul into the world is the moment of conception. At that instant a new human genome is determined, and we have an entity destined to develop into a unique individual. The Catholic Church and certain other Christian denominations designate conception as the moment of ensoulment and the beginning of life (which, of course, makes abortion a form of murder). But just as a microscope reveals that a straight edge is really ragged, research on human reproduction shows that the "moment of conception" is not a moment at all. Sometimes several sperm penetrate the outer membrane of the egg, and it takes time for the egg to eject the extra chromosomes. What and where is the soul during this interval? Even when a single sperm enters, its genes remain separate from those of the egg for a day or more, and it takes yet another day or so for the newly merged genome to control the cell. So the "moment" of conception is in fact a span of twenty-four to forty-eight hours. 22 Nor is the conceptus destined to become a baby. Between two-thirds and three-quarters of them never implant in the uterus and are spontaneously aborted, some because they are genetically defective, others for no discernible reason.
Still, one might say that at whatever point during this interlude the new genome is formed, the specification of a unique new person has come into existence. The soul, by this reasoning, may be identified with the genome. But during the next few days, as the embryo's cells begin to divide, they can split into several embryos, which develop into identical twins, triplets, and so on. Do identical twins share a soul? Did the Dionne quintuplets make do with one-fifth of a soul each? If not, where did the four extra souls come from? Indeed, every cell in the growing embryo is capable, with the right manipulations, of becoming a new embryo that can grow into a child. Does a multicell embryo consist of one soul per cell, and if so, where do the other souls go when the cells lose that ability? And not only can one embryo become two people, but two embryos can become one person. Occasionally two fertilized eggs, which ordinarily would go on to become fraternal twins, merge into a single embryo that develops into a person who is a genetic chimera: some of her cells have one genome, others have another genome. Does her body house two souls? For that matter, if human cloning ever became possible (and there appears to be no technical obstacle), every cell in a person's body would have the special ability that is supposedly unique to a conceptus, namely developing into a human being. True, the genes in a cheek cell can become a person only with unnatural intervention, but that is just as true for an egg that is fertilized in vitro. Yet no one would deny that children conceived by IVF have souls.
The idea that ensoulment takes place at conception is not only hard to reconcile with biology but does not have the moral superiority credited to it. It implies that we should prosecute users of intrauterine contraceptive devices and the "morning-after pill" for murder, because they prevent the conceptus from implanting. It implies that we should divert medical research from curing cancer and heart disease to preventing the spontaneous miscarriages of vast numbers of microscopic conceptuses. It impels us to find surrogate mothers for the large number of embryos left over from IVF that are currently sitting in fertility clinic freezers. It would outlaw research on conception and early embryonic development that promises to reduce infertility, birth defects, and pediatric cancer, and research on stem cells that could lead to treatments for Alzheimer's disease, Parkinson's disease, diabetes, and spinal-cord injuries. And it flouts the key moral intuition that other people are worthy of moral consideration because of their feelings -- their ability to love, think, plan, enjoy, and suffer -- all of which depend on a functioning nervous system.
The enormous moral costs of equating a person with a conceptus, and the cognitive gymnastics required to maintain that belief in the face of modern biology, can sometimes lead to an agonizing reconsideration of deeply held beliefs. In 2001, Senator Orrin Hatch of Utah broke with his longtime allies in the anti-abortion movement and came out in favor of stem-cell research after studying the science of reproduction and meditating on his Mormon faith. "I have searched my conscience," he said. "I just cannot equate a child living in the womb, with moving toes and fingers and a beating heart, with an embryo in a freezer."23
The belief that bodies are invested with souls is not just a product of religious doctrine but embedded in people's psychology and likely to emerge whenever they have not digested the findings of biology. The public reaction to cloning is a case in point. Some people fear that cloning would present us with the option of becoming immortal,
others that it could produce an army of obedient zombies, or a source of organs for the original person to harvest when needed. In the recent Arnold Schwarzenegger movie The Sixth Day, clones are called "blanks," and their DNA gives them only a physical form, not a mind; they acquire a mind when a neural recording of the original person is downloaded into them. When Dolly the sheep was cloned in 1997, the cover of Der Spiegel showed a parade of Claudia Schiffers, Hitlers, and Einsteins, as if being a supermodel, fascist dictator, or scientific genius could be copied along with the DNA.
Clones, in fact, are just identical twins born at different times. If Einstein had a twin, he would not have been a zombie, would not have continued Einstein's stream of consciousness if Einstein had predeceased him, would not have given up his vital organs without a struggle, and probably would have been no Einstein (since intelligence is only partly heritable). The same would be true of a person cloned from a speck of Einstein. The bizarre misconceptions of cloning can be traced to the persistent belief that the body is suffused with a soul. One conception of cloning, which sets off a fear of an army of zombies, blanks, or organ farms, imagines the process to be the duplication of a body without a soul. The other, which sets off fears of a Faustian grab at immortality or of a resurrected Hitler, conceives of cloning as duplicating the body together with the soul. This conception may also underlie the longing of some bereaved parents for a dead child to be cloned, as if that would bring the child back to life. In fact, the clone would not only grow up in a different world from the one the dead sibling grew up in, but would have different brain tissue and would traverse a different line of sentient experience.
The discovery that what we call "the person" emerges piecemeal from a gradually developing brain forces us to reframe problems in bioethics. It would have been convenient if biologists had discovered a point at which the brain is fully assembled and is plugged in and turned on for the first time, but that is not how brains work. The nervous system emerges in the embryo as a simple tube and differentiates into a brain and spinal cord. The brain begins to function in the fetus, but it continues to wire itself well into childhood and even adolescence. The demand by both religious and secular ethicists that we identify the "criteria for personhood" assumes that a dividing line in brain development can be found. But any claim that such a line has been sighted leads to moral absurdities.
If we set the boundary for personhood at birth, we should be prepared to allow an abortion minutes before birth, despite the lack of any significant difference between a late-term fetus and a neonate. It seems more reasonable to draw the line at viability. But viability is a continuum that depends on the state of current biomedical technology and on the risks of impairment that parents are willing to tolerate in their child. And it invites the obvious rejoinder: if it is all right to abort a twenty-four-week fetus, then why not the barely distinguishable fetus of twenty-four weeks plus one day? And if that is permissible, why not a fetus of twenty-four weeks plus two days, or three days, and so on until birth? On the other hand, if it is impermissible to abort a fetus the day before its birth, then what about two days before, and three days, and so on, all the way back to conception?
We face the same problem in reverse when considering euthanasia and living wills at the end of life. Most people do not depart this world in a puff of smoke but suffer a gradual and uneven breakdown of the various parts of the brain and body. Many kinds and degrees of existence lie between the living and the dead, and that will become even more true as medical technology improves.
We face the problem again in grappling with demands for animal rights. Activists who grant the right to life to any sentient being must conclude that a hamburger eater is a party to murder and that a rodent exterminator is a perpetrator of mass murder. They must outlaw medical research that would sacrifice a few mice but save a million children from painful deaths (since no one would agree to drafting a few human beings for such experiments, and on this view mice have the rights we ordinarily grant to people). On the other hand, an opponent of animal rights who maintains that personhood comes from being a member of Homo sapiens is just a species bigot, no more thoughtful than the race bigots who value the lives of whites more than blacks. After all, other mammals fight to stay alive, appear to experience pleasure, and undergo pain, fear, and stress when their well-being is compromised. The great apes also share our higher pleasures of curiosity and love of kin, and our deeper aches of boredom, loneliness, and grief. Why should those interests be respected for our species but not for others?
Some moral philosophers try to thread a boundary across this treacherous landscape by equating personhood with cognitive traits that humans happen to possess. These include an ability to reflect upon oneself as a continuous locus of consciousness, to form and savor plans for the future, to dread death, and to express a choice not to die. 24 At first glance the boundary is appealing because it puts humans on one side and animals and conceptuses on the other. But it also implies that nothing is wrong with killing unwanted newborns, the senile, and the mentally handicapped, who lack the qualifying traits. Almost no one is willing to accept a criterion with those implications.
There is no solution to these dilemmas, because they arise out of a fundamental incommensurability: between our intuitive psychology, with its all-or-none concept of a person or soul, and the brute facts of biology, which tell us that the human brain evolved gradually, develops gradually, and can die gradually. And that means that moral conundrums such as abortion, euthanasia, and animal rights will never be resolved in a decisive and intuitively
satisfying way. This does not mean that no policy is defensible and that the whole matter should be left to personal taste, political power, or religious dogma. As the bioethicist Ronald Green has pointed out, it just means we have to reconceptualize the problem: from finding a boundary in nature to choosing a boundary that best trades off the conflicting goods and evils for each policy dilemma. 25 We should make decisions in each case that can be practically implemented, that maximize happiness, and that minimize current and future suffering. Many of our current policies are already compromises of this sort: research on animals is permitted but regulated; a late-term fetus is not awarded full legal status as a person but may not be aborted unless it is necessary to protect the mother's life or health. Green notes that the shift from finding boundaries to choosing boundaries is a conceptual revolution of Copernican proportions. But the old conceptualization, which amounts to trying to pinpoint when the ghost enters the machine, is scientifically untenable and has no business guiding policy in the twenty-first century.
The traditional argument against pragmatic, case-by-case decisions is that they lead to slippery slopes. If we allow abortion, we will soon allow infanticide; if we permit research on stem cells, we will bring on a Brave New World of government-engineered humans. But here, I think, the nature of human cognition can get us out of the dilemma rather than pushing us into one. A slippery slope assumes that conceptual categories must have crisp boundaries that allow in-or-out decisions, or else anything goes. But that is not how human concepts work. As we have seen, many everyday concepts have fuzzy boundaries, and the mind distinguishes between a fuzzy boundary and no boundary at all. "Adult" and "child" are fuzzy categories, which is why we could raise the drinking age to twenty-one or lower the voting age to eighteen. But that did not put us on a slippery slope in which we eventually raised the drinking age to fifty or lowered the voting age to five. Those policies really would violate our concepts of "child" and "adult," fuzzy though their boundaries may be. In the same way, we can bring our concepts of life and mind into register with biological reality without necessarily slipping down a slope.
~
When a 1999 cyclone in India left millions of people in danger of starvation, some activists denounced relief societies for distributing a nutritious grain meal because it contained genetically modified varieties of corn and soybeans (varieties that had been eaten without apparent harm in the United States). These activists are also opposed to "golden rice," a genetically modified variety that could prevent blindness in millions of children in the developing world and alleviate vitamin A deficiency in a quarter of a billion more. 26 Other activists have vandalized research facilities at which the safety of genetically modified foods is tested and new varieties are developed. For these people, even the possibility that such foods could be safe is unacceptable.
A 2001 report by the European Union reviewed eighty-one research projects conducted over fifteen years and failed to find any new risks to human health or to the environment posed by genetically modified crops. 27 This is no
surprise to a biologist. Genetically modified foods are no more dangerous than "natural" foods because they are not fundamentally different from natural foods. Virtually every animal and vegetable sold in a health-food store has been "genetically modified" for millennia by selective breeding and hybridization. The wild ancestor of carrots was a thin, bitter white root; the ancestor of corn had an inch-long, easily shattered cob with a few small, rock-hard kernels. Plants are Darwinian creatures with no particular desire to be eaten, so they did not go out of their way to be tasty, healthy, or easy for us to grow and harvest. On the contrary: they did go out of their way to deter us from eating them, by evolving irritants, toxins, and bitter-tasting compounds. 28 So there is nothing especially safe about natural foods. The "natural" method of selective breeding for pest resistance simply increases the concentration of the plant's own poisons; one variety of natural potato had to be withdrawn from the market because it proved to be toxic to people. 29 Similarly, natural flavors -- defined by one food scientist as "a flavor that's been derived with an out-of-date technology" -- are often chemically indistinguishable from their artificial counterparts, and when they are distinguishable, sometimes the natural flavor is the more dangerous one. When "natural" almond flavor, benzaldehyde, is derived from peach pits, it is accompanied by traces of cyanide; when it is synthesized as an "artificial flavor," it is not. 30
A blanket fear of all artificial and genetically modified foods is patently irrational on health grounds, and it could make food more expensive and hence less available to the poor. Where do these specious fears come from? Partly they arise from the carcinogen-du-jour school of journalism that uncritically reports any study showing elevated cancer rates in rats fed megadoses of chemicals. But partly they come from an intuition about living things that was first identified by the anthropologist James George Frazer in 1890 and has recently been studied in the lab by Paul Rozin, Susan Gelman, Frank Keil, Scott Atran, and other cognitive scientists. 31
People's intuitive biology begins with the concept of an invisible essence residing in living things, which gives them their form and powers. These essentialist beliefs emerge early in childhood, and in traditional cultures they dominate reasoning about plants and animals. Often the intuitions serve people well. They allow preschoolers to deduce that a raccoon that looks like a skunk will have raccoon babies, that a seed taken from an apple and planted with flowers in
a pot will produce an apple tree, and that an animal's behavior depends on its innards, not on its appearance. They allow traditional peoples to deduce that different-looking creatures (such as a caterpillar and a butterfly) can belong to the same kind, and they impel them to extract juices and powders from living things and try them as medicines, poisons, and food supplements. They can prevent people from sickening themselves by eating things that have been in contact with infectious substances such as feces, sick people, and rotting meat. 32
But intuitive essentialism can also lead people into error. 33 Children falsely believe that a child of English-speaking parents will speak English even if brought up in a French-speaking family, and that boys will have short hair and girls will wear dresses even if they are brought up with no other member of their sex from which they can learn those habits. Traditional peoples believe in sympathetic magic, otherwise known as voodoo. They think similar-looking objects have similar powers, so that a ground-up rhinoceros horn is a cure for erectile dysfunction. And they think that animal parts can transmit their powers to anything they mingle with, so that eating or wearing a part of a fierce animal will make one fierce.
Educated Westerners should not feel too smug. Rozin has shown that we have voodoolike intuitions ourselves. Most Americans won't touch a sterilized cockroach, or even a plastic one, and won't drink juice that the roach has touched for even a fraction of a second. 34 And even Ivy League students believe that you are what you eat. They judge that a tribe that hunts turtles for their meat and wild boar for their bristles will be good swimmers, and that a tribe that hunts turtles for their shells and wild boar for their meat will be tough fighters. 35 In his history of biology, Ernst Mayr showed that many biologists originally rejected the theory of natural selection because of their belief that a species was a pure type defined by an essence. They could not wrap their minds around the concept that species are populations of variable individuals and that one can blend into another over evolutionary time. 36
In this context, the fear of genetically modified foods no longer seems so strange: it is simply the standard human intuition that every living thing has an essence. Natural foods are thought to have the pure essence of the plant or animal and to carry with them the rejuvenating powers of the pastoral environment in which they grew. Genetically modified foods, or foods containing artificial additives, are thought of as being deliberately laced with a contaminant tainted by its origins in an acrid laboratory or factory. Arguments that invoke genetics, biochemistry, evolution, and risk analysis are likely to fall on deaf ears when pitted against this deep-rooted way of thinking.
Essentialist intuitions are not the only reason that perceptions of danger can be off the mark. Risk analysts have discovered to their bemusement that people's fears are often way out of line with objective hazards. Many people avoid flying, though car travel is eleven times more dangerous. They fear getting eaten by a shark, though they are four hundred times more likely to drown in their bathtub. They clamor for expensive measures to get chloroform and trichloroethylene out of drinking water, though they are hundreds of times more likely to get cancer from a daily peanut butter sandwich (since peanuts can carry a highly carcinogenic mold).
Actually, the doctrine of hyperreality contradicts the common view of contemporary politics and entertainment as being a matter of image and appearance. The whole point of the common view is that there is a reality separate from images, and that is what allows us to decry the images that are misleading. We can, for example, criticize an old movie that shows slaves leading happy lives, or an ad that shows a corrupt politician pretending to defend the environment. If there were no such thing as substantial content, we would have no basis for preferring an accurate documentary about slavery to an apologia for it, or preferring a good expose of a politician to a slick campaign ad. The entry notes that images are associated with the world of publicity, advertising, and fashion, and thereby with business and profits. An image may thus be tied to "an imposed stereotype or an alternative subjective or cultural identity. " Media images become mental images: people cannot help but think that women or politicians or African Americans conform to the depictions in movies and advertisements. And this elevates cultural studies and postmodernist art into forces for personal and political liberation:
The study of "images of women" or "women's images" sees this field as one in which stereotypes of women can be reinforced, parodied, or actively contested through critical analysis, alternative histories, or creative work in writing and the media committed to the production of positive counter-images. 39
I have not hidden my view that this entire line of thinking is a conceptual mess. If we want to understand how politicians or advertisers manipulate us, the last thing we should do is blur distinctions among things in the world, our perception of those things when they are in front of our eyes, the mental images of those things that we construct from memory, and physical images such as photographs and drawings.
As we saw at the beginning of this chapter, the visual brain is an immensely complicated system that was designed by the forces of evolution to give us an accurate reading of the consequential things in front of us. The "intelligent eye," as perceptual psychologists call it, does not just compute the shapes and motions of people before us. It also guesses their thoughts and intentions by noticing how they gaze at, approach, avoid, help, or hinder other objects and people. And these guesses are then measured against everything else we know about people -- what we infer from gossip, from a person's words and deeds, and from Sherlock Holmes-style deductions. The result is {215} the knowledge base or semantic memory that also underlies our use of language.
Physical images such as photographs and paintings are devices that reflect light in patterns similar to those coming off real objects, thereby making the visual system respond as if it were really seeing those objects. Though people have long dreamed of illusions that completely fool the brain -- Descartes's evil demon, the philosopher's thought experiment in which a person does not realize he is a brain in a vat, the science-fiction writer's prophecy of perfect virtual reality like in The Matrix -- in actuality the illusions foisted upon us by physical images are never more than partially effective. Our perceptual systems pick up on the imperfections of an image -- the brush strokes, pixels, or frame -- and our conceptual systems pick up on the fact that we are entertaining a hypothetical world that is separate from the real world. It's not that people invariably distinguish fiction from reality: they can lose themselves in fiction, or misremember something they read in a novel as something they read in the newspapers or that happened to a friend, or mistakenly believe that a stylized portrayal of a time and place is an accurate portrayal. But all of us are capable of distinguishing fictitious worlds from real ones, as we see when a two-year-old pretends that a banana is a telephone for the fun of it but at the same time understands that a banana is not literally a telephone. 40 Cognitive scientists believe that the ability to entertain propositions without necessarily believing them -- to distinguish "John believes there is a Santa Claus" from "There is a Santa Claus" -- is a fundamental ability of human cognition. 41 Many believe that a breakdown of this ability underlies the thought disorder in the syndrome called schizophrenia. 42 Finally, there are mental images, the visualizations of objects and scenes in the mind's eye. The psychologist Stephen Kosslyn has shown that the brain is equipped with a system capable of reactivating and manipulating memories of perceptual experience, a bit like Photoshop with its tools for assembling, rotating, and coloring images. 43 Like language, imagery may be used as a slave system -- a "visuospatial sketchpad" -- by the central executive of the brain, making it a valuable form of mental representation. We use mental imagery, for example, when we visualize how a chair might fit in a living room or whether a sweater would look good on a relative. Imagery is also an invaluable tool to novelists, who imagine scenes before describing them in words, and to scientists, who rotate molecules or play out forces and motions in their imagination.
Though mental images allow our experiences (including our experience of media images) to affect our thoughts and attitudes long after the original objects have gone, it is a mistake to think that raw images are downloaded into our minds and then constitute our mental lives. Images are not stored in the mind like snapshots in a shoebox; if they
? ? ? ? ? ? ? were, how could you ever find the one you want? Rather, they are labeled and linked to a vast database of {216} knowledge, which allows them to be evaluated and interpreted in terms of what they stand for. 44 Chess masters, for
example, are famous for their ability to remember games in progress, but their mental images of the board are not raw photographs. Rather, they are saturated with abstract information about the game, such as which piece is threatening which other one and which clusters of pieces form viable defenses. We know this because when a chessboard is sprinkled with pieces at random, chess masters are no better at remembering the arrangement than amateurs are. 45 When images represent real people, not just chessmen, there are even more possibilities for organizing and annotating them with information about people's goals and motives -- for example, whether the person in an image is sincere or just acting.
The reason that images cannot constitute the contents of our thoughts is that images, like words, are inherently ambiguous. An image of Lassie could stand for Lassie, collies, dogs, animals, television stars, or family values. Some other, more abstract form of information must pick out the concept that an image is taken to exemplify. Or consider the sentence Yesterday my uncle fired his lawyer (an example suggested by Dan Dennett). When understanding the sentence, Brad might visualize his own ordeals of the day before and glimpse the "uncle" slot in a family tree, then picture courthouse steps and an angry man. Irene might have no image for "yesterday" but might visualize her uncle Bob's face, a slamming door, and a power-suited woman. Yet despite these very different image sequences, both people have understood the sentence in the same way, as we could see by questioning them or asking them to paraphrase the sentence. "Imagery couldn't be the key to comprehension," Dennett points out, "because you can't draw a picture of an uncle, or of yesterday, or firing, or a lawyer. Uncles, unlike clowns and firemen, don't look different in any characteristic way that can be visually represented, and yesterdays don't look like anything at all. "46 Since images are interpreted in the context of a deeper understanding of people and their relationships, the "crisis of representation," with its paranoia about the manipulation of our mind by media images, is overblown. People are not helplessly programmed with images; they can evaluate and interpret what they see using everything else they know, such as the credibility and motives of the source.
The postmodernist equating of images with thoughts has not only made a hash of several scholarly disciplines but has laid waste to the world of contemporary art. If images are the disease, the reasoning goes, then art is the cure. Artists can neutralize the power of media images by distorting them or reproducing them in odd contexts (like the ad parodies in Mad magazine or on Saturday Night Live, only not funny). Anyone familiar with contemporary art has seen the countless works in which stereotypes of women, minorities, or gay {217} people are "reinforced, parodied, or actively contested. " A prototypical example is a 1994 exhibit at the Whitney Museum in New York called "Black Male: Representations of Masculinity in Contemporary Art. " It aimed to take apart the way that African American men are culturally constructed in demonizing and marginalizing visual stereotypes such as the sex symbol, the athlete, the Sambo, and the photograph in a Wanted poster. According to the catalogue essay, "The real struggle is over the power to control images. " The art critic Adam Gopnik (whose mother and sister are cognitive scientists) called attention to the simplistic theory of cognition behind this tedious formula:
The show is intended to be socially therapeutic: its aim is to make you face the socially constructed images of black men, so that by confronting them -- or, rather, seeing artists confront them on your behalf -- you can make them go away. The trouble is that the entire enterprise of "disassembling social images" rests on an ambiguity in the way we use the word "image. " Mental images are not really images at all, but instead consist of complicated opinions, positions, doubts, and passionately held convictions, rooted in experience and amendable by argument, by more experience, or by coercion. Our mental images of black men, white judges, the press, and so on do not take the form of pictures of the kind that you can hang up (or "deconstruct") on a museum wall. . . . Hitler did not hate Jews because there were pictures of swarthy Semites with big noses imprinted on his cerebellum; racism does not exist in America because the picture of O. J. Simpson on the cover of Time is too dark. The view that visual cliches shape beliefs is both too pessimistic, in that it supposes that people are helplessly imprisoned by received stereotypes, and too optimistic, in that it supposes that if you could change the images you could change the beliefs. 47
Recognizing that we are equipped with sophisticated faculties that keep us in touch with reality does not entail ignoring the ways in which our faculties can be turned against us. People lie, sometimes baldly, sometimes through insinuation and presupposition (as in the question "When did you stop beating your wife? "). People disseminate disinformation about ethnic groups, not just pejorative stereotypes but tales of exploitation and perfidy that serve to stoke moralistic outrage against them. People try to manipulate social realities like status (which exist in the mind of the beholder) to make themselves look good or to sell products.
But we can best protect ourselves against such manipulation by pinpointing the vulnerabilities of our faculties of categorization, language, and imagery, not by denying their complexity. The view that humans are passive {218} receptacles of stereotypes, words, and images is condescending to ordinary people and gives unearned importance to the pretensions of cultural and academic elites. And exotic pronouncements about the limitations of our faculties, such as that there is nothing outside the text or that we inhabit a world of images rather than a real world, make it impossible even to identify lies and misrepresentations, let alone to understand how they are promulgated.
{219} Chapter 13
Out of Our Depths
A man has got to know his limitations. -- Clint Eastwood in Magnum Force
Most people are familiar with the idea that some of our ordeals come from a mismatch between the source of our passions in evolutionary history and the goals we set for ourselves today. People gorge themselves in anticipation of a famine that never comes, engage in dangerous liaisons that conceive babies they don't want, and rev up their bodies in response to stressors from which they cannot run away.
What is true for the emotions may also be true for the intellect. Some of our perplexities may come from a mismatch between the purposes for which our cognitive faculties evolved and the purposes to which we put them today. This is obvious enough when it comes to raw data processing. People do not try to multiply six-digit numbers in their heads or remember the phone number of everyone they meet, because they know their minds were not designed for the job. But it is not as obvious when it comes to the way we conceptualize the world. Our minds keep us in touch with aspects of reality -- such as objects, animals, and people -- that our ancestors dealt with for millions of years. But as science and technology open up new and hidden worlds, our untutored intuitions may find themselves at sea.
What are these intuitions? Many cognitive scientists believe that human reasoning is not accomplished by a single, general-purpose computer in the head. The world is a heterogeneous place, and we are equipped with different kinds of intuitions and logics, each appropriate to one department of reality. These ways of knowing have been called systems, modules, stances, faculties, mental organs, multiple intelligences, and reasoning engines. 1 They emerge early in life, are present in every normal person, and appear to be computed in partly distinct sets of networks in the brain. They may be installed by different {220} combinations of genes, or they may emerge when brain tissue self-organizes in response to different problems to be solved and different patterns in the sensory input. Most likely they develop by some combination of these forces.
What makes our reasoning faculties different from the departments in a university is that they are not just broad areas of knowledge, analyzed with whatever tools work best. Each faculty is based on a core intuition that was suitable for analyzing the world in which we evolved. Though cognitive scientists have not agreed on a Gray's Anatomy of the mind, here is a tentative but defensible list of cognitive faculties and the core intuitions on which they are based:
• An intuitive physics, which we use to keep track of how objects fall, bounce, and bend. Its core intuition is the concept of the object, which occupies one place, exists for a continuous span of time, and follows laws of motion and force. These are not Newton's laws but something closer to the medieval conception of impetus, an "oomph" that keeps an object in motion and gradually dissipates. 2
• An intuitive version of biology or natural history, which we use to understand the living world. Its core intuition is that living things house a hidden essence that gives them their form and powers and drives their growth and bodily functions. 3
• An intuitive engineering, which we use to make and understand tools and other artifacts. Its core intuition is that a tool is an object with a purpose -- an object designed by a person to achieve a goal. 4
• An intuitive psychology, which we use to understand other people. Its core intuition is that other people are not objects or machines but are animated by the invisible entity we call the mind or the soul. Minds contain beliefs and desires and are the immediate cause of behavior.
• A spatial sense, which we use to navigate the world and keep track of where things are. It is based on a dead reckoner, which updates coordinates of the body's location as it moves and turns, and a network of mental maps. Each map is organized by a different reference frame: the eyes, the head, the body, or salient objects and places in the world. 5 (A minimal code sketch of dead reckoning follows this list.)
• A number sense, which we use to think about quantities and amounts. It is based on an ability to register exact quantities for small numbers of objects (one, two, and three) and to make rough relative estimates for larger numbers. 6
• A sense of probability, which we use to reason about the likelihood of uncertain events. It is based on the ability to track the relative frequencies of events, that is, the proportion of events of some kind that turn out one way or the other. 7 {221}
• An intuitive economics, which we use to exchange goods and favors. It is based on the concept of reciprocal exchange, in which one party confers a benefit on another and is entitled to an equivalent benefit in return.
• A mental database and logic, which we use to represent ideas and to infer new ideas from old ones. It is based on assertions about what's what, what's where, or who did what to whom, when, where, and why. The assertions are linked in a mind-wide web and can be recombined with logical and causal operators such as and, or, not, all, some, necessary, possible, and cause. 8 (A toy sketch of such a database also follows this list.)
• Language, which we use to share the ideas from our mental logic. It is based on a mental dictionary of memorized words and a mental grammar of combinatorial rules. The rules organize vowels and consonants into words, words into bigger words and phrases, and phrases into sentences, in such a way that the meaning of the combination can be computed from the meanings of the parts and the way they are arranged. 9
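To make the dead-reckoning idea concrete, here is a minimal sketch in Python of the computation the spatial sense is thought to perform: integrating turns and steps into an updated coordinate, with no landmarks at all. The function name and the sample path are illustrative inventions, not anything from the text.

```python
import math

def dead_reckon(moves):
    """Update an (x, y) position and heading from a sequence of
    (turn_in_degrees, step_length) self-motion signals alone --
    the path-integration computation, with no landmarks."""
    x, y, heading = 0.0, 0.0, 0.0   # start at the origin, facing "east"
    for turn, step in moves:
        heading += math.radians(turn)    # integrate rotation
        x += step * math.cos(heading)    # integrate translation
        y += step * math.sin(heading)
    return x, y

# Walk a square: four unit steps, turning 90 degrees before each of the
# last three. Dead reckoning brings the estimate back to the origin.
print(dead_reckon([(0, 1), (90, 1), (90, 1), (90, 1)]))  # ~(0.0, 0.0)
```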
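And here is a toy caricature of the mental database: assertions stored as who-did-what-to-whom tuples and queried with ordinary logical operators. The facts and predicate names are invented for illustration; the point is only that a belief can be represented without asserting its content, as with the Santa Claus example earlier in the chapter.

```python
# A toy "mental database": assertions about what's what and who did
# what to whom, stored as tuples and recombined with logical operators.
# The facts themselves are invented for illustration.
facts = {
    ("John", "believes-in", "Santa Claus"),
    ("the uncle", "fired", "his lawyer"),
}

def holds(subject, relation, obj):
    """Is this assertion represented in the database?"""
    return (subject, relation, obj) in facts

# "John believes in Santa Claus" is true here; "Santa Claus is real" is not.
print(holds("John", "believes-in", "Santa Claus"))                   # True
print(holds("John", "believes-in", "Santa Claus") and
      not holds("Santa Claus", "is", "real"))                        # True
```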
The mind also has components for which it is hard to tell where cognition leaves off and emotion begins. These include a system for assessing danger, coupled with the emotion called fear, a system for assessing contamination, coupled with the emotion called disgust, and a moral sense, which is complex enough to deserve a chapter of its own.

These ways of knowing and core intuitions are suitable for the lifestyle of small groups of illiterate, stateless people who live off the land, survive by their wits, and depend on what they can carry. Our ancestors left this lifestyle for a settled existence only a few millennia ago, too recently for evolution to have done much, if anything, to our brains. Conspicuous by their absence are faculties suited to the stunning new understanding of the world wrought by science and technology. For many domains of knowledge, the mind could not have evolved dedicated machinery, the brain and genome show no hints of specialization, and people show no spontaneous intuitive understanding either in the crib or afterward. They include modern physics, cosmology, genetics, evolution, neuroscience, embryology, economics, and mathematics.
It's not just that we have to go to school or read books to learn these subjects. It's that we have no mental tools to grasp them intuitively. We depend on analogies that press an old mental faculty into service, or on jerry-built mental contraptions that wire together bits and pieces of other faculties. Understanding in these domains is likely to be uneven, shallow, and contaminated by primitive intuitions. And that can shape debates in the border disputes in which science and technology make contact with everyday life. The point of this chapter is that together with all the moral, empirical, and political factors that go into these debates, we should add the cognitive factors: the way our {222} minds naturally frame issues. Our own cognitive makeup is a missing piece of many puzzles, including education, bioethics, food safety, economics, and human understanding itself.
~
The most obvious arena in which we confront native ways of thinking is the schoolhouse. Any theory of education must be based on a theory of human nature, and in the twentieth century that theory was often the Blank Slate or the Noble Savage.
Traditional education is based in large part on the Blank Slate: children come to school empty and have knowledge deposited in them, to be reproduced later on tests. (Critics of traditional education call this the "savings and loan" model. ) The Blank Slate also underlies the common philosophy that the early school-age years are an opportunity zone in which social values are shaped for life. Many schools today use the early grades to instill desirable attitudes toward the environment, gender, sexuality, and ethnic diversity.
Progressive educational practice, for its part, is based on the Noble Savage. As A. S. Neill wrote in his influential book Summerhill, "A child is innately wise and realistic. If left to himself without adult suggestion of any kind, he will develop as far as he is capable of developing. "10 Neill and other progressive theorists of the 1960s and 1970s argued that schools should do away with examinations, grades, curricula, and even books. Though few schools went that far, the movement left a mark on educational practice. In the method of reading instruction known as Whole Language, children are not taught which letter goes with which sound but are immersed in a book-rich environment where reading skills are expected to blossom spontaneously. 11 In the philosophy of mathematics instruction known as constructivism, children are not drilled with arithmetic tables but are enjoined to rediscover mathematical truths themselves by solving problems in groups. 12 Both methods fare badly when students' learning is assessed objectively, but advocates of the methods tend to disdain standardized testing.
An understanding of the mind as a complex system shaped by evolution runs against these philosophies. The alternative has emerged from the work of cognitive scientists such as Susan Carey, Howard Gardner, and David Geary. 13 Education is neither writing on a blank slate nor allowing the child's nobility to come into flower. Rather, education is a technology that tries to make up for what the human mind is innately bad at. Children don't have to go to school to learn to walk, talk, recognize objects, or remember the personalities of their friends, even though these tasks are much harder than reading, adding, or remembering dates in history. They do have to go to school to learn written language, arithmetic, and science, because those bodies of knowledge and skill were invented too recently for any species-wide knack for them to have evolved. {223}
Far from being empty receptacles or universal learners, then, children are equipped with a toolbox of implements for reasoning and learning in particular ways, and those implements must be cleverly recruited to master problems for which they were not designed. That requires not just inserting new facts and skills in children's minds but debugging and disabling old ones. Students cannot learn Newtonian physics until they unlearn their intuitive impetus-based physics. 14 They cannot learn modern biology until they unlearn their intuitive biology, which thinks in terms of vital essences. And they cannot learn evolution until they unlearn their intuitive engineering, which attributes design to the intentions of a designer. 15
Schooling also requires pupils to expose and reinforce skills that are ordinarily buried in unconscious black boxes. When children learn to read, the vowels and consonants that are seamlessly woven together in speech must be forced into children's awareness before they can associate them with squiggles on a page. 16 Effective education may also require co-opting old faculties to deal with new demands. Snatches of language can be pressed into service to do calculation, as when we recall the stanza "Five times five is twenty-five. "17 The logic of grammar can be used to grasp large numbers: the expression four thousand three hundred and fifty-seven has the grammatical structure of an English noun phrase like hat, coat, and mittens. When a student parses the number phrase she can call to mind the mental operation of aggregation, which is related to the mathematical operation of addition. 18 Spatial cognition is drafted into understanding mathematical relationships through the use of graphs, which turn data or equations into shapes. 19 Intuitive engineering supports the learning of anatomy and physiology (organs are understood as gadgets with functions), and intuitive physics supports the learning of chemistry and biology (stuff, including living stuff, is made out of tiny, bouncy, sticky objects). 20
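The grammar-to-arithmetic mapping can be made explicit. Here is a rough Python sketch, with a deliberately tiny lexicon and a parsing scheme simplified for this example: words inside a grammatical constituent are aggregated by addition, and multiplier words like "hundred" and "thousand" scale the constituent built so far.

```python
# Mapping the grammar of an English number phrase onto arithmetic:
# words inside a constituent are aggregated by addition, and multiplier
# words ("hundred", "thousand") scale the constituent built so far.
WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
         "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50}

def parse_number(phrase):
    total, current = 0, 0
    for word in phrase.replace("-", " ").split():
        if word == "and":
            continue                 # grammatical glue; contributes no value
        elif word == "thousand":
            total += current * 1000  # close off the thousands constituent
            current = 0
        elif word == "hundred":
            current *= 100           # "three hundred" = 3 * 100
        else:
            current += WORDS[word]   # aggregate within the constituent
    return total + current

print(parse_number("four thousand three hundred and fifty-seven"))  # 4357
```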
Geary points out a final implication. Because much of the content of education is not cognitively natural, the process of mastering it may not always be easy and pleasant, notwithstanding the mantra that learning is fun. Children may be innately motivated to make friends, acquire status, hone motor skills, and explore the physical world, but they are not necessarily motivated to adapt their cognitive faculties to unnatural tasks like formal mathematics. A family, peer group, and culture that ascribe high status to school achievement may be needed to give a child the motive to persevere toward effortful feats of learning whose rewards are apparent only over the long term. 21
~
The layperson's intuitive psychology or "theory of mind" is one of the brain's most striking abilities. We do not treat other people as wind-up dolls but think of them as being animated by minds: nonphysical entities we cannot see or touch but that are as real to us as bodies and objects. Aside from {224} allowing us to predict people's behavior from their beliefs and desires, our theory of mind is tied to our ability to empathize and to our conception of life and death. The difference between a dead body and a living one is that a dead body no longer contains the vital force we call a mind. Our theory of mind is the source of the concept of the soul. The ghost in the machine is deeply rooted in our way of thinking about people.
A belief in the soul, in turn, meshes with our moral convictions. The core of morality is the recognition that others have interests as we do -- that they "feel want, taste grief, need friends," as Shakespeare put it -- and therefore that they have a right to life, liberty, and the pursuit of their interests. But who are those "others"? We need a boundary that allows us to be callous to rocks and plants but forces us to treat other humans as "persons" that possess inalienable rights. Otherwise, it seems, we would place ourselves on a slippery slope that ends in the disposal of inconvenient people or in grotesque deliberations on the value of individual lives. As Pope John Paul II pointed out, the notion that every human carries infinite value by virtue of possessing a soul would seem to give us that boundary. Until recently the intuitive concept of the soul served us pretty well. Living people had souls, which come into existence at the moment of conception and leave their bodies when they die. Animals, plants, and inanimate objects do not have souls at all. But science is showing that what we call the soul -- the locus of sentience, reason, and will -- consists of the information-processing activity of the brain, an organ governed by the laws of biology. In an individual person it comes into existence gradually through the differentiation of tissues growing from a single cell.
In the species it came into existence gradually as the forces of evolution modified the brains of simpler animals. And though our concept of souls used to fit pretty well with natural phenomena -- a woman was either pregnant or not, a person was either dead or alive -- biomedical research is now presenting us with cases where the two are out of register. These cases are not just scientific curiosities but are intertwined with pressing issues such as contraception, abortion, infanticide, animal rights, cloning, euthanasia, and research involving human embryos, especially the harvesting of stem cells.
In the face of these difficult choices it is tempting to look to biology to find or ratify boundaries such as "when life begins. " But that only highlights the clash between two incommensurable ways of conceiving life and mind. The intuitive and morally useful concept of an immaterial spirit simply cannot be reconciled with the scientific concept of brain activity emerging gradually in ontogeny and phylogeny. No matter where we try to draw the line between life and nonlife, or between mind and nonmind, ambiguous cases pop up to challenge our moral intuitions.
The closest event we can find to a thunderclap marking the entry of a soul {225} into the world is the moment of conception. At that instant a new human genome is determined, and we have an entity destined to develop into a unique individual. The Catholic Church and certain other Christian denominations designate conception as the moment of ensoulment and the beginning of life (which, of course, makes abortion a form of murder). But just as a microscope reveals that a straight edge is really ragged, research on human reproduction shows that the "moment of conception" is not a moment at all. Sometimes several sperm penetrate the outer membrane of the egg, and it takes time for the egg to eject the extra chromosomes. What and where is the soul during this interval? Even when a single sperm enters, its genes remain separate from those of the egg for a day or more, and it takes yet another day or so for the newly merged genome to control the cell. So the "moment" of conception is in fact a span of twenty-four to forty-eight hours. 22 Nor is the conceptus destined to become a baby. Between two-thirds and three-quarters of them never implant in the uterus and are spontaneously aborted, some because they are genetically defective, others for no discernible reason.
Still, one might say that at whatever point during this interlude the new genome is formed, the specification of a unique new person has come into existence. The soul, by this reasoning, may be identified with the genome. But during the next few days, as the embryo's cells begin to divide, they can split into several embryos, which develop into identical twins, triplets, and so on. Do identical twins share a soul? Did the Dionne quintuplets make do with one-fifth of a soul each? If not, where did the four extra souls come from? Indeed, every cell in the growing embryo is capable, with the right manipulations, of becoming a new embryo that can grow into a child. Does a multicell embryo consist of one soul per cell, and if so, where do the other souls go when the cells lose that ability? And not only can one embryo become two people, but two embryos can become one person. Occasionally two fertilized eggs, which ordinarily would go on to become fraternal twins, merge into a single embryo that develops into a person who is a genetic chimera: some of her cells have one genome, others have another genome. Does her body house two souls? For that matter, if human cloning ever became possible (and there appears to be no technical obstacle), every cell in a person's body would have the special ability that is supposedly unique to a conceptus, namely developing into a human being. True, the genes in a cheek cell can become a person only with unnatural intervention, but that is just as true for an egg that is fertilized in vitro. Yet no one would deny that children conceived by IVF have souls.
The idea that ensoulment takes place at conception is not only hard to reconcile with biology but does not have the moral superiority credited to it. It implies that we should prosecute users of intrauterine contraceptive devices and the "morning-after pill" for murder, because they prevent the conceptus from implanting. It implies that we should divert medical research from {226} curing cancer and heart disease to preventing the spontaneous miscarriages of vast numbers of microscopic conceptuses. It impels us to find surrogate mothers for the large number of embryos left over from IVF that are currently sitting in fertility clinic freezers. It would outlaw research on conception and early embryonic development that promises to reduce infertility, birth defects, and pediatric cancer, and research on stem cells that could lead to treatments for Alzheimer's disease, Parkinson's disease, diabetes, and spinal-cord injuries. And it flouts the key moral intuition that other people are worthy of moral consideration because of their feelings -- their ability to love, think, plan, enjoy, and suffer -- all of which depend on a functioning nervous system.
The enormous moral costs of equating a person with a conceptus, and the cognitive gymnastics required to maintain that belief in the face of modern biology, can sometimes lead to an agonizing reconsideration of deeply held beliefs. In 2001, Senator Orrin Hatch of Utah broke with his longtime allies in the anti-abortion movement and came out in favor of stem-cell research after studying the science of reproduction and meditating on his Mormon faith. "I have searched my conscience," he said. "I just cannot equate a child living in the womb, with moving toes and fingers and a beating heart, with an embryo in a freezer. "23
The belief that bodies are invested with souls is not just a product of religious doctrine but embedded in people's psychology and likely to emerge whenever they have not digested the findings of biology. The public reaction to cloning is a case in point. Some people fear that cloning would present us with the option of becoming immortal, others that it could produce an army of obedient zombies, or a source of organs for the original person to harvest when needed. In the recent Arnold Schwarzenegger movie The Sixth Day, clones are called "blanks," and their DNA gives them only a physical form, not a mind; they acquire a mind when a neural recording of the original person is downloaded into them. When Dolly the sheep was cloned in 1997, the cover of Der Spiegel showed a parade of Claudia Schiffers, Hitlers, and Einsteins, as if being a supermodel, fascist dictator, or scientific genius could be copied along with the DNA.
Clones, in fact, are just identical twins born at different times. If Einstein had a twin, he would not have been a zombie, would not have continued Einstein's stream of consciousness if Einstein had predeceased him, would not have given up his vital organs without a struggle, and probably would have been no Einstein (since intelligence is only partly heritable). The same would be true of a person cloned from a speck of Einstein. The bizarre misconceptions of cloning can be traced to the persistent belief that the body is suffused with a soul. One conception of cloning, which sets off a fear of an army of zombies, blanks, or organ farms, imagines the process to be the duplication of a body without a soul. The other, which sets off fears of a Faustian grab at {227} immortality or of a resurrected Hitler, conceives of cloning as duplicating the body together with the soul. This conception may also underlie the longing of some bereaved parents for a dead child to be cloned, as if that would bring the child back to life. In fact, the clone would not only grow up in a different world from the one the dead sibling grew up in, but would have different brain tissue and would traverse a different line of sentient experience.
The discovery that what we call "the person" emerges piecemeal from a gradually developing brain forces us to reframe problems in bioethics. It would have been convenient if biologists had discovered a point at which the brain is fully assembled and is plugged in and turned on for the first time, but that is not how brains work. The nervous system emerges in the embryo as a simple tube and differentiates into a brain and spinal cord. The brain begins to function in the fetus, but it continues to wire itself well into childhood and even adolescence. The demand by both religious and secular ethicists that we identify the "criteria for personhood" assumes that a dividing line in brain development can be found. But any claim that such a line has been sighted leads to moral absurdities.
If we set the boundary for personhood at birth, we should be prepared to allow an abortion minutes before birth, despite the lack of any significant difference between a late-term fetus and a neonate. It seems more reasonable to draw the line at viability. But viability is a continuum that depends on the state of current biomedical technology and on the risks of impairment that parents are willing to tolerate in their child. And it invites the obvious rejoinder: if it is all right to abort a twenty-four-week fetus, then why not the barely distinguishable fetus of twenty-four weeks plus one day? And if that is permissible, why not a fetus of twenty-four weeks plus two days, or three days, and so on until birth? On the other hand, if it is impermissible to abort a fetus the day before its birth, then what about two days before, and three days, and so on, all the way back to conception?
We face the same problem in reverse when considering euthanasia and living wills at the end of life. Most people do not depart this world in a puff of smoke but suffer a gradual and uneven breakdown of the various parts of the brain and body. Many kinds and degrees of existence lie between the living and the dead, and that will become even more true as medical technology improves.
We face the problem again in grappling with demands for animal rights. Activists who grant the right to life to any sentient being must conclude that a hamburger eater is a party to murder and that a rodent exterminator is a perpetrator of mass murder. They must outlaw medical research that would sacrifice a few mice but save a million children from painful deaths (since no one would agree to drafting a few human beings for such experiments, and on this view mice have the rights we ordinarily grant to people). On the other hand, {228} an opponent of animal rights who maintains that personhood comes from being a member of Homo sapiens is just a species bigot, no more thoughtful than the race bigots who value the lives of whites more than blacks. After all, other mammals fight to stay alive, appear to experience pleasure, and undergo pain, fear, and stress when their well-being is compromised. The great apes also share our higher pleasures of curiosity and love of kin, and our deeper aches of boredom, loneliness, and grief. Why should those interests be respected for our species but not for others?
Some moral philosophers try to thread a boundary across this treacherous landscape by equating personhood with cognitive traits that humans happen to possess. These include an ability to reflect upon oneself as a continuous locus of consciousness, to form and savor plans for the future, to dread death, and to express a choice not to die. 24 At first glance the boundary is appealing because it puts humans on one side and animals and conceptuses on the other. But it also implies that nothing is wrong with killing unwanted newborns, the senile, and the mentally handicapped, who lack the qualifying traits. Almost no one is willing to accept a criterion with those implications.
There is no solution to these dilemmas, because they arise out of a fundamental incommensurability: between our intuitive psychology, with its all-or-none concept of a person or soul, and the brute facts of biology, which tell us that the human brain evolved gradually, develops gradually, and can die gradually. And that means that moral conundrums such as abortion, euthanasia, and animal rights will never be resolved in a decisive and intuitively satisfying way. This does not mean that no policy is defensible and that the whole matter should be left to personal taste, political power, or religious dogma. As the bioethicist Ronald Green has pointed out, it just means we have to reconceptualize the problem: from finding a boundary in nature to choosing a boundary that best trades off the conflicting goods and evils for each policy dilemma. 25 We should make decisions in each case that can be practically implemented, that maximize happiness, and that minimize current and future suffering. Many of our current policies are already compromises of this sort: research on animals is permitted but regulated; a late-term fetus is not awarded full legal status as a person but may not be aborted unless it is necessary to protect the mother's life or health. Green notes that the shift from finding boundaries to choosing boundaries is a conceptual revolution of Copernican proportions. But the old conceptualization, which amounts to trying to pinpoint when the ghost enters the machine, is scientifically untenable and has no business guiding policy in the twenty-first century.
The traditional argument against pragmatic, case-by-case decisions is that they lead to slippery slopes. If we allow abortion, we will soon allow infanticide; if we permit research on stem cells, we will bring on a Brave New World of government-engineered humans. But here, I think, the nature of human {229} cognition can get us out of the dilemma rather than pushing us into one. A slippery slope assumes that conceptual categories must have crisp boundaries that allow in-or-out decisions, or else anything goes. But that is not how human concepts work. As we have seen, many everyday concepts have fuzzy boundaries, and the mind distinguishes between a fuzzy boundary and no boundary at all. "Adult" and "child" are fuzzy categories, which is why we could raise the drinking age to twenty-one or lower the voting age to eighteen. But that did not put us on a slippery slope in which we eventually raised the drinking age to fifty or lowered the voting age to five. Those policies really would violate our concepts of "child" and "adult," fuzzy though their boundaries may be. In the same way, we can bring our concepts of life and mind into register with biological reality without necessarily slipping down a slope.
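The difference between a fuzzy boundary and no boundary at all can even be written down numerically, in the spirit of fuzzy set theory. Here is a toy sketch; the membership curve for "adult," including its ages and linear ramp, is an arbitrary invention for illustration.

```python
def adulthood(age):
    """A fuzzy membership function for the category "adult": 0 for young
    children, 1 for the clearly grown, graded in between. The ramp from
    12 to 21 is an arbitrary choice made for illustration."""
    if age <= 12:
        return 0.0
    if age >= 21:
        return 1.0
    return (age - 12) / (21 - 12)

# The boundary is fuzzy, but it is still a boundary: the gradations never
# license calling a five-year-old an adult, which is why fuzzy categories
# do not by themselves create slippery slopes.
for age in (5, 16, 18, 30):
    print(age, round(adulthood(age), 2))   # 0.0, 0.44, 0.67, 1.0
```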
~
When a 1999 cyclone in India left millions of people in danger of starvation, some activists denounced relief societies for distributing a nutritious grain meal because it contained genetically modified varieties of corn and soybeans (varieties that had been eaten without apparent harm in the United States). These activists are also opposed to "golden rice," a genetically modified variety that could prevent blindness in millions of children in the developing world and alleviate vitamin A deficiency in a quarter of a billion more. 26 Other activists have vandalized research facilities at which the safety of genetically modified foods is tested and new varieties are developed. For these people, even the possibility that such foods could be safe is unacceptable.
A 2001 report by the European Union reviewed eighty-one research projects conducted over fifteen years and failed to find any new risks to human health or to the environment posed by genetically modified crops. 27 This is no surprise to a biologist. Genetically modified foods are no more dangerous than "natural" foods because they are not fundamentally different from natural foods. Virtually every animal and vegetable sold in a health-food store has been "genetically modified" for millennia by selective breeding and hybridization. The wild ancestor of carrots was a thin, bitter white root; the ancestor of corn had an inch-long, easily shattered cob with a few small, rock-hard kernels. Plants are Darwinian creatures with no particular desire to be eaten, so they did not go out of their way to be tasty, healthy, or easy for us to grow and harvest. On the contrary: they did go out of their way to deter us from eating them, by evolving irritants, toxins, and bitter-tasting compounds. 28 So there is nothing especially safe about natural foods. The "natural" method of selective breeding for pest resistance simply increases the concentration of the plant's own poisons; one variety of natural potato had to be withdrawn from the market because it proved to be toxic to people. 29 Similarly, natural flavors -- defined by one food scientist as "a flavor that's been derived with an out-of-date technology" -- are often chemically indistinguishable from their artificial {230} counterparts, and when they are distinguishable, sometimes the natural flavor is the more dangerous one. When "natural" almond flavor, benzaldehyde, is derived from peach pits, it is accompanied by traces of cyanide; when it is synthesized as an "artificial flavor," it is not. 30
A blanket fear of all artificial and genetically modified foods is patently irrational on health grounds, and it could make food more expensive and hence less available to the poor. Where do these specious fears come from? Partly they arise from the carcinogen-du-jour school of journalism that uncritically reports any study showing elevated cancer rates in rats fed megadoses of chemicals. But partly they come from an intuition about living things that was first identified by the anthropologist James George Frazer in 1890 and has recently been studied in the lab by Paul Rozin, Susan Gelman, Frank Keil, Scott Atran, and other cognitive scientists. 31
People's intuitive biology begins with the concept of an invisible essence residing in living things, which gives them their form and powers. These essentialist beliefs emerge early in childhood, and in traditional cultures they dominate reasoning about plants and animals. Often the intuitions serve people well. They allow preschoolers to deduce that a raccoon that looks like a skunk will have raccoon babies, that a seed taken from an apple and planted with flowers in a pot will produce an apple tree, and that an animal's behavior depends on its innards, not on its appearance. They allow traditional peoples to deduce that different-looking creatures (such as a caterpillar and a butterfly) can belong to the same kind, and they impel them to extract juices and powders from living things and try them as medicines, poisons, and food supplements. They can prevent people from sickening themselves by eating things that have been in contact with infectious substances such as feces, sick people, and rotting meat. 32
But intuitive essentialism can also lead people into error. 33 Children falsely believe that a child of English-speaking parents will speak English even if brought up in a French-speaking family, and that boys will have short hair and girls will wear dresses even if they are brought up with no other member of their sex from which they can learn those habits. Traditional peoples believe in sympathetic magic, otherwise known as voodoo. They think similar-looking objects have similar powers, so that a ground-up rhinoceros horn is a cure for erectile dysfunction. And they think that animal parts can transmit their powers to anything they mingle with, so that eating or wearing a part of a fierce animal will make one fierce.
Educated Westerners should not feel too smug. Rozin has shown that we have voodoolike intuitions ourselves. Most Americans won't touch a sterilized cockroach, or even a plastic one, and won't drink juice that the roach has touched for even a fraction of a second. 34 And even Ivy League students believe that you are what you eat. They judge that a tribe that hunts turtles for their {231} meat and wild boar for their bristles will be good swimmers, and that a tribe that hunts turtles for their shells and wild boar for their meat will be tough fighters. 35 In his history of biology, Ernst Mayr showed that many biologists originally rejected the theory of natural selection because of their belief that a species was a pure type defined by an essence. They could not wrap their minds around the concept that species are populations of variable individuals and that one can blend into another over evolutionary time. 36
In this context, the fear of genetically modified foods no longer seems so strange: it is simply the standard human intuition that every living thing has an essence. Natural foods are thought to have the pure essence of the plant or animal and to carry with them the rejuvenating powers of the pastoral environment in which they grew. Genetically modified foods, or foods containing artificial additives, are thought of as being deliberately laced with a contaminant tainted by its origins in an acrid laboratory or factory. Arguments that invoke genetics, biochemistry, evolution, and risk analysis are likely to fall on deaf ears when pitted against this deep-rooted way of thinking.
Essentialist intuitions are not the only reason that perceptions of danger can be off the mark. Risk analysts have discovered to their bemusement that people's fears are often way out of line with objective hazards. Many people avoid flying, though car travel is eleven times more dangerous. They fear getting eaten by a shark, though they are four hundred times more likely to drown in their bathtub. They clamor for expensive measures to get chloroform and trichloroethylene out of drinking water, though they are hundreds of times more likely to get cancer from a daily peanut butter sandwich (since peanuts can carry a highly carcinogenic mold).
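Claims such as "car travel is eleven times more dangerous than flying" are claims about rates: deaths divided by a common measure of exposure. Here is a minimal sketch of that computation; the figures in it are placeholders invented purely to show the form of the calculation, not real statistics.

```python
def fatality_rate(deaths, exposure):
    """Deaths per unit of exposure (e.g., per billion passenger-miles)."""
    return deaths / exposure

def relative_risk(deaths_a, exposure_a, deaths_b, exposure_b):
    """How many times more dangerous activity A is than activity B once
    both are expressed on the same exposure scale."""
    return fatality_rate(deaths_a, exposure_a) / fatality_rate(deaths_b, exposure_b)

# Placeholder inputs, invented purely to show the form of the comparison:
# 330 driving deaths vs. 30 flying deaths over the same 100 units of travel
# would make driving 11 times more dangerous per unit traveled.
print(relative_risk(330, 100, 30, 100))   # 11.0
```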
