The point of this section is not to refute his doctrines, nor is it to condemn religion or argue against the existence of God.
Religions have provided comfort, community, and moral guidance to countless people, and some biologists argue that a sophisticated deism, toward which many religions are evolving, can be made compatible with an evolutionary understanding of the mind and human nature. 2
My goal is defensive: to refute the accusation that a materialistic view of the mind is inherently amoral and that religious conceptions are to be favored because they are inherently more humane.
Even the most atheistic scientists do not, of course, advocate a callous amorality. The brain may be a physical system made of ordinary matter, but that matter is organized in such a way as to give rise to a sentient organism with a capacity to feel pleasure and pain. And that in turn sets the stage for the emergence of morality. The reason is succinctly explained in the comic strip Calvin and Hobbes (see p. 188).
The feline Hobbes, like his human namesake, has shown why an amoral egoist is in an untenable position. He is better off if he never gets shoved into the mud, but he can hardly demand that others refrain from shoving him if he himself is not willing to forgo shoving others. And since one is better off not shoving and not getting shoved than shoving and getting shoved, it pays to insist on a moral code, even if the price is adhering to it oneself. As moral philosophers through the ages have pointed out, a philosophy of living based on "Not everyone, just me! " falls apart as soon as one sees oneself from an objective standpoint as a person just like others. It is like insisting that "here," the point in space one happens to be occupying at the moment, is a special place in the universe. 3
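To see the structure of the predicament at a glance, here is a minimal sketch of the shoving game as a non-zero-sum matrix. The payoff numbers are invented purely for illustration (the argument only requires that being shoved costs more than shoving gains); mutual restraint then beats mutual shoving, even though each party would most prefer to shove without being shoved.

```python
# A toy payoff matrix for the "shoving" game described above.
# The numbers are illustrative assumptions, not taken from the text:
# shoving someone yields a small gain, being shoved costs more.
GAIN_FROM_SHOVING = 1
COST_OF_BEING_SHOVED = 3

def payoff(i_shove: bool, other_shoves: bool) -> int:
    """Net payoff to 'me' given what each party chooses to do."""
    return (GAIN_FROM_SHOVING if i_shove else 0) - \
           (COST_OF_BEING_SHOVED if other_shoves else 0)

outcomes = {
    (me, other): (payoff(me, other), payoff(other, me))
    for me in (True, False) for other in (True, False)
}

# Mutual restraint (0, 0) beats mutual shoving (-2, -2), which is why it
# pays to insist on a code that binds everyone, oneself included.
for (me, other), (mine, theirs) in outcomes.items():
    print(f"I shove={me!s:5} other shoves={other!s:5} -> me {mine:+}, them {theirs:+}")
```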
The dynamic between Calvin and Hobbes (the cartoon characters) is inherent to social organisms, and there are reasons to believe that the solution {188}
to it -- a moral sense -- evolved in our species rather than having to be deduced from scratch by each of us after we've picked ourselves up out of the mud. 4 Children as young as a year and a half spontaneously give toys, proffer help, and try to comfort adults or other children who are visibly distressed. 5 People in all cultures distinguish right from wrong, have a sense of fairness, help one another, impose rights and obligations, believe that wrongs should be redressed, and proscribe rape, murder, and some kinds of violence. 6 These normal sentiments are conspicuous by their absence in the aberrant individuals we call psychopaths. 7 The alternative, then, to the religious theory of the source of values is that evolution endowed us with a moral sense, and we have expanded its circle of application over the course of history through reason (grasping the logical interchangeability of our interests and others'), knowledge (learning of the advantages of cooperation over the long term), and sympathy (having experiences that allow us to feel other people's pain). {189}
How can we tell which theory is preferable? A thought experiment can pit them against each other. What would be the right thing to do if God had commanded people to be selfish and cruel rather than generous and kind? Those who root their values in religion would have to say that we ought to be selfish and cruel. Those who appeal to a moral sense would say that we ought to reject God's command. This shows -- I hope -- that it is our moral sense that deserves priority. 8
This thought experiment is not just a logical brainteaser of the kind beloved by thirteen-year-old atheists, such as why God cares how we behave if he can see the future and already knows. The history of religion shows that God has commanded people to do all manner of selfish and cruel acts: massacre Midianites and abduct their women, stone prostitutes, execute homosexuals, burn witches, slay heretics and infidels, throw Protestants out of windows, withhold medicine from dying children, shoot up abortion clinics, hunt down Salman Rushdie, blow themselves up in marketplaces, and crash airplanes into skyscrapers. Recall that even Hitler thought he was carrying out the will of God. 9 The recurrence of evil acts committed in the name of God shows that they are not random perversions. An omnipotent authority that no one can see is a useful backer for malevolent leaders hoping to enlist holy warriors. And since unverifiable beliefs have to be passed along from parents and peers rather than discovered in the world, they
differ from group to group and become divisive identity badges.
And who says the doctrine of the soul is more humane than the understanding of the mind as a physical organ? I see no dignity in letting people die of hepatitis or be ravaged by Parkinson's disease when a cure may lie in research on stem cells that religious movements seek to ban because it uses balls of cells that have made the "ontological leap" to "spiritual souls. " Sources of immense misery such as Alzheimer's disease, major depression, and schizophrenia will be alleviated not by treating thought and emotion as manifestations of an immaterial soul but by treating them as manifestations of physiology and genetics. 10
Finally, the doctrine of a soul that outlives the body is anything but righteous, because it necessarily devalues the lives we live on this earth. When Susan Smith sent her two young sons to the bottom of a lake, she eased her conscience with the rationalization that "my children deserve to have the best, and now they will. " Allusions to a happy afterlife are typical in the final letters of parents who take their children's lives before taking their own,11 and we have recently been reminded of how such beliefs embolden suicide bombers and kamikaze hijackers. This is why we should reject the argument that if people stopped believing in divine retribution they would do evil with impunity. Yes, if nonbelievers thought they could elude the legal system, the opprobrium of their {190} communities, and their own consciences, they would not be deterred by the threat of spending eternity in hell. But they would also not be tempted to massacre thousands of people by the promise of spending eternity in heaven.
Even the emotional comfort of a belief in an afterlife can go both ways. Would life lose its purpose if we ceased to exist when our brains die? On the contrary, nothing invests life with more meaning than the realization that every moment of sentience is a precious gift. How many fights have been averted, how many friendships renewed, how many hours not squandered, how many gestures of affection offered, because we sometimes remind ourselves that "life is short"?

~
Why do secular thinkers fear that biology drains life of meaning? It is because biology seems to deflate the values we most cherish. If the reason we love our children is that a squirt of oxytocin in the brain compels us to protect our genetic investment, wouldn't the nobility of parenthood be undermined and its sacrifices devalued? If sympathy, trust, and a yearning for justice evolved as a way to earn favors and deter cheaters, wouldn't that imply that there are really no such things as altruism and justice for their own sake? We sneer at the philanthropist who profits from his donation because of the tax savings, the televangelist who thunders against sin but visits prostitutes, the politician who defends the downtrodden only when the cameras are rolling, and the sensitive new-age guy who backs feminism because it's a good way to attract women. Evolutionary psychology seems to be saying that we are all such hypocrites, all the time.
The fear that scientific knowledge undermines human values reminds me of the opening scene in Annie Hall, in which the young Alvy Singer has been taken to the family doctor:
Mother: He's been depressed. All of a sudden, he can't do anything.
Doctor: Why are you depressed, Alvy?
Mother: Tell Dr. Flicker. [Answers for him.] It's something he read.
Doctor: Something he read, huh?
Alvy: [Head down.] The universe is expanding.
Doctor: The universe is expanding?
Alvy: Well, the universe is everything, and if it's expanding, someday it will break apart and that would be the end of everything!
Mother: What is that your business? [To the doctor.] He stopped doing his homework.
Alvy: What's the point?
The scene is funny because Alvy has confused two levels of analysis: the scale of billions of years with which we measure the universe, and the scale of {191} decades, years, and days with which we measure our lives. As Alvy's mother points out, "What has the universe got to do with it? You're here in Brooklyn! Brooklyn is not expanding! " People who are depressed at the thought that all our motives are selfish are as confused as Alvy. They have mixed up ultimate causation (why something evolved by natural selection) with proximate causation (how the entity works here and now). The mix-up is natural because the two explanations can look so much alike.
Richard Dawkins showed that a good way to understand the logic of natural selection is to imagine that genes are agents with selfish motives. No one should begrudge him the metaphor, but it contains a trap for the unwary. The genes have metaphorical motives -- making copies of themselves -- and the organisms they design have real motives. But they are not the same motives. Sometimes the most selfish thing a gene can do is wire unselfish motives into a human brain -- heartfelt, unstinting, deep-in-the-marrow unselfishness. The love of children (who carry one's
genes into posterity), a faithful spouse (whose genetic fate is identical to one's own), and friends and allies (who trust you if you're trustworthy) can be bottomless and unimpeachable as far as we humans are concerned (proximate level), even if it is metaphorically self-serving as far as the genes are concerned (ultimate level).
I suspect there is another reason why the explanations are so easily confused. We all know that people sometimes have ulterior motives. They may be publicly generous but privately greedy, publicly pious but privately cynical, publicly platonic but privately lusting. Freud accustomed us to the idea that ulterior motives are pervasive in behavior, exerting their effects from an inaccessible stratum of the mind. Combine this with the common misconception that the genes are a kind of essence or core of the person, and you get a mongrel of Dawkins and Freud: the idea that the metaphorical motives of the genes are the deep, unconscious, ulterior motives of the person. That is an error. Brooklyn is not expanding.
Even people who can keep genes and people apart in their minds might find themselves depressed. Psychology has taught us that aspects of our experience may be figments, artifacts of how information is processed in the brain. The difference in kind between our experience of red and our experience of green does not mirror any difference in kind in lightwaves in the world -- the wavelengths of light, which give rise to our perception of hue, form a smooth continuum. Red and green, perceived as qualitatively different properties, are constructs of the chemistry and circuitry of our nervous system. They could be absent in an organism with different photopigments or wiring; indeed, people with the most common form of colorblindness are just such organisms. And the emotional coloring of an object is as much a figment as its {192} physical coloring. The sweetness of fruit, the scariness of heights, and the vileness of carrion are fancies of a nervous system that evolved to react to those objects in adaptive ways.
The sciences of human nature seem to imply that the same is true of right and wrong, merit and worthlessness, beauty and ugliness, holiness and baseness. They are neural constructs, movies we project onto the interior of our skulls, ways to tickle the pleasure centers of the brain, with no more reality than the difference between red and green. When Marley's ghost asked Scrooge why he doubted his senses, he said, "Because a little thing affects them. A slight disorder of the stomach makes them cheats. You may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato. There's more of gravy than of grave about you, whatever you are! " Science seems to be saying that the same is true of everything we value.
But just because our brains are prepared to think in certain ways, it does not follow that the objects of those thoughts are fictitious. Many of our faculties evolved to mesh with real entities in the world. Our perception of depth is the product of complicated circuitry in the brain, circuitry that is absent from other species. But that does not mean that there aren't real trees and cliffs out there, or that the world is as flat as a pancake. And so it may be with more abstract entities. Humans, like many animals, appear to have an innate sense of number, which can be explained by the advantages of reasoning about numerosity during our evolutionary history. (For example, if three bears go into a cave and two come out, is it safe to enter? ) But the mere fact that a number faculty evolved does not mean that numbers are hallucinations. According to the Platonist conception of number favored by many mathematicians and philosophers, entities such as numbers and shapes have an existence independent of minds. The number three is not invented out of whole cloth; it has real properties that can be discovered and explored. No rational creature equipped with circuitry to understand the concept "two" and the concept of addition could discover that two plus one equals anything other than three. That is why we expect similar bodies of mathematical results to emerge from different cultures or even different planets. If so, the number sense evolved to grasp abstract truths in the world that exist independently of the minds that grasp them.
Perhaps the same argument can be made for morality. According to the theory of moral realism, right and wrong exist, and have an inherent logic that licenses some moral arguments and not others. 12 The world presents us with non-zero-sum games in which it is better for both parties to act unselfishly than for both to act selfishly (better not to shove and not to be shoved than to shove and be shoved). Given the goal of being better off, certain conditions
{193} follow necessarily. No creature equipped with circuitry to understand that it is immoral for you to hurt me could discover anything but that it is immoral for me to hurt you. As with numbers and the number sense, we would expect moral systems to evolve toward similar conclusions in different cultures or even different planets. And in fact the Golden Rule has been rediscovered many times: by the authors of Leviticus and the Mahabharata; by Hillel, Jesus, and Confucius; by the Stoic philosophers of the Roman Empire; by social contract theorists such as Hobbes, Rousseau, and Locke; and by moral philosophers such as Kant in his categorical imperative. 13 Our moral sense may have evolved to mesh with an intrinsic logic of ethics rather than concocting it in our heads out of nothing.
But even if the Platonic existence of moral logic is too rich for your blood, you can still see morality as something more than a social convention or religious dogma. Whatever its ontological status may be, a moral sense is part of the standard equipment of the human mind. It's the only mind we've got, and we have no choice but to take its intuitions seriously. If we are so constituted that we cannot help but think in moral terms (at least some of the time and toward some people), then morality is as real for us as if it were decreed by the Almighty or written into the cosmos. And so
it is with other human values like love, truth, and beauty. Could we ever know whether they are really "out there" or whether we just think they are out there because the human brain makes it impossible not to think they are out there? And how bad would it be if they were inherent to the human way of thinking? Perhaps we should reflect on our condition as Kant did in his Critique of Practical Reason: "Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them: the starry heavens above and the moral law within."
~
In the past four chapters I have shown why new ideas from the sciences of human nature do not undermine humane values. On the contrary, they present opportunities to sharpen our ethical reasoning and put those values on a firmer foundation. In a nutshell:
- It is a bad idea to say that discrimination is wrong only because the traits of all people are indistinguishable.
- It is a bad idea to say that violence and exploitation are wrong only because people are not naturally inclined to them.
- It is a bad idea to say that people are responsible for their actions only because the causes of those actions are mysterious.
- And it is a bad idea to say that our motives are meaningful in a personal sense only because they are inexplicable in a biological sense. {194}
These are bad ideas because they make our values hostages to fortune, implying that someday factual discoveries could make them obsolete. And they are bad ideas because they conceal the downsides of denying human nature: persecution of the successful, intrusive social engineering, the writing off of suffering in other cultures, an incomprehension of the logic of justice, and the devaluing of human life on earth.
{195}
KNOW THYSELF
Now that I have attempted to make the very idea of human nature respectable, it is time to say something about what it is and what difference it makes for our public and private lives. The chapters in Part IV present some current ideas about the design specs of the basic human
faculties. These are not just topics in a psychology curriculum but have implications for many arenas of public discourse. Ideas about the contents of cognition -- concepts, words, and images -- shed light on the roots of prejudice, on the media, and on the arts. Ideas about the capacity for reason can enter into our policies of education and applications of technology. Ideas about social relations are relevant to the family, to sexuality, to social organization, and to crime. Ideas about the moral sense inform the way we evaluate political movements and how we trade off one value against another.
In each of these arenas, people always appeal to some conception of human nature, whether they acknowledge it or not. The problem is that the conceptions are often based on gut feelings, folk theories, and archaic versions of biology. My goal is to make these conceptions explicit, to suggest what is right and wrong about them, and to spell out some of the implications. Ideas about human nature cannot, on their own, resolve perplexing controversies or determine public policy. But without such ideas we are not playing with a full deck and are vulnerable to unnecessary befuddlement. As the biologist Richard Alexander has noted, "Evolution is surely most deterministic for those still unaware of it. "1
{197}
Chapter 12
In Touch with Reality
What a piece of work is a man!
How noble in reason!
How infinite in faculty!
In form, in moving, how express and admirable!
In action, how like an angel!
In apprehension, how like a god!
-- William Shakespeare
The starting point for acknowledging human nature is a sheer awe and humility in the face of the staggering complexity of its source, the brain. Organized by the three billion bases of our genome and shaped by hundreds of millions of years of evolution, the brain is a network of unimaginable intricacy: a hundred billion neurons linked by a hundred trillion connections, woven into a convoluted three-dimensional architecture. Humbling, too, is the complexity of what it does. Even the mundane talents we share with other primates -- walking, grasping, recognizing -- are solutions to engineering problems at or beyond the cutting edge of artificial intelligence.
The talents that are human birthrights -- speaking and understanding, using common sense, teaching children, inferring other people's motives -- will probably not be duplicated by machines in our lifetime, if ever. All this should serve as a counterweight to the image of the mind as formless raw material and to people as insignificant atoms making up the complex being we call "society. "
The human brain equips us to thrive in a world of objects, living things, and other people. Those entities have a large impact on our well-being, and one would expect the brain to be well suited to detecting them and their powers. Failing to recognize a steep precipice or a hungry panther or a jealous spouse can have significant negative consequences for biological fitness, to put it mildly. The fantastic complexity of the brain is there in part to register consequential facts about the world around us. {198}
But this truism has been rejected by many sectors of modern intellectual life. According to the relativistic wisdom prevailing in much of academia today, reality is socially constructed by the use of language, stereotypes, and media images. The idea that people have access to facts about the world is naïve, say the proponents of social constructionism, science studies, cultural studies, critical theory, postmodernism, and deconstructionism. In their view, observations are always infected by theories, and theories are saturated with ideology and political doctrines, so anyone who claims to have the facts or know the truth is just trying to exert power over everyone else.
Relativism is entwined with the doctrine of the Blank Slate in two ways. One is that relativists have a penny-pinching theory of psychology in which the mind has no mechanisms designed to grasp reality; all it can do is passively download words, images, and stereotypes from the surrounding culture. The other is the relativists' attitude toward science. Most scientists regard their work as an extension of our everyday ability to figure out what is out there and how things work. Telescopes and microscopes amplify the visual system; theories formalize our hunches about cause and effect; experiments refine our drive to gather evidence about events we cannot witness directly. Relativist movements agree that science is perception and cognition writ large, but they draw the opposite conclusion: that scientists, like laypeople, are unequipped to grasp an objective reality. Instead, their advocates say, "Western science is only one way of describing reality, nature, and the way things work -- a very effective way, certainly, for the production of goods and profits, but unsatisfactory in most other respects. It is an imperialist arrogance which ignores the sciences and insights of most other cultures and times. "1 Nowhere is this more significant than in the scientific study of politically charged topics such as race, gender, violence, and social organization. Appealing to "facts" or "the truth" in connection with these topics is just a ruse, the relativists say, because there is no "truth" in the sense of an objective yardstick independent of cultural and political presuppositions.
Skepticism about the soundness of people's mental faculties also determines whether one should respect ordinary people's tastes and opinions (even those we don't much like) or treat the people as dupes of an insidious commercial culture. According to relativist doctrines like "false consciousness," "inauthentic preferences," and "interiorized authority," people may be mistaken about their own desires. If so, it would undermine the assumptions behind democracy, which gives ultimate authority to the preferences of the majority of a population, and the assumptions behind market economies, which treat people as the best judges of how they should allocate their own resources. Perhaps not coincidentally, it elevates the scholars and artists who analyze the use of language and images in society, because only they can unmask the ways in which such media mislead and corrupt. {199}
This chapter is about the assumptions about cognition -- in particular, concepts, words, and images -- that underlie recent relativistic movements in intellectual life. The best way to introduce the argument is with examples from the study of perception, our most immediate connection to the world. They immediately show that the question of whether reality is socially constructed or directly available has not been properly framed. Neither alternative is correct.
Relativists have a point when they say that we don't just open our eyes and apprehend reality, as if perception were a window through which the soul gazes at the world. The idea that we just see things as they are is called naïve realism, and it was refuted by skeptical philosophers thousands of years ago with the help of a simple phenomenon: visual illusions. Our visual systems can play tricks on us, and that is enough to prove they are gadgets, not pipelines to the truth. Here are two of my favorites. In Roger Shepard's "Turning the Tables"2 (right), the two parallelograms are identical in size and shape. In Edward Adelson's "Checker Shadow Illusion"3 (below) the light square in the middle of the shadow (B) is the same shade of gray as the dark squares outside the shadow (A):
But just because the world we know is a construct of our brain, that does not mean it is an arbitrary construct -- a phantasm created by expectations or the social context. Our perceptual systems are designed to register aspects of the external world that were important to our survival, like the sizes, shapes, and materials of objects. They need a complex design to accomplish this feat because the retinal image is not a replica of the world. The projection of an object on the retina grows, shrinks, and warps as the object moves around; color and brightness fluctuate as the lighting changes from sun to clouds or from indoor to outdoor light. But somehow the brain solves these maddening problems. It works as if it were reasoning backwards from the retinal image to hypotheses about reality, using {200} geometry, optics, probability theory, and assumptions about the world. Most of the time the system works: people don't usually bump into trees or bite into rocks.

But occasionally the brain is fooled. The ground stretching away from our feet projects an image from the bottom to the center of our visual field. As a result, the brain often interprets down-up in the visual field as near-far in the world, especially when reinforced by other perspective cues such as occluded parts (like the hidden table legs). Objects stretching away from the viewer get foreshortened by projection, and the brain compensates for this, so we tend to see a given distance running up-and-down in the visual field as coming from a longer object than the same distance running left-to-right. And that makes us see the lengths and widths differently in the turned tables. By similar logic, objects in shadow reflect less light onto our retinas than objects in full illumination. Our brains compensate, making us see a given shade of gray as lighter when it is in shadow than when it is in sunshine. In each case we may see the lines and patches on the page incorrectly, but that is only because our visual systems are working very hard to see them as coming from a real world. Like a policeman framing a suspect, Shepard and Adelson have planted evidence that would lead a rational but unsuspecting observer to an incorrect conclusion. If we were in a world of ordinary 3-D objects that had projected those images onto our retinas, our perceptual experience would be accurate. Adelson explains: "As with many so-called illusions, this effect really demonstrates the success rather than the failure of the visual system. The visual system is not very good at being a physical light meter, but that is not its purpose. The important task is to break the image information down into meaningful components, and thereby perceive the nature of the objects in view."4

It's not that expectations from past experience are irrelevant to perception. But their influence is to make our perceptual systems more accurate, not more arbitrary. In the two words below, we perceive the same shape as an "H" in the first word and as an "A" in the second:5

We see the shapes that way because experience tells us -- correctly -- that the odds are high that there really is an "H" in the middle of the first word and an "A" in the middle of the second, even if that is not true in an atypical case. The mechanisms of perception go to a lot of trouble to ensure that what we see corresponds to what is usually out there.
So the demonstrations that refute naïve realism most decisively also refute the idea that the mind is disconnected from reality. There is a third alternative: {201} that the brain evolved fallible yet intelligent mechanisms that work to keep us in touch with aspects of reality that were relevant to the survival and reproduction of our ancestors. And that is true not just of our perceptual faculties but of our cognitive faculties. The fact that our cognitive faculties (like our perceptual faculties) are attuned to the real world is most obvious from their response to illusions: they recognize the possibility of a breach with reality and find a way to get at the truth behind the false impression. When we see an oar that appears to be severed at the water's surface, we know how to tell whether it really is severed or just looks that way: we can palpate the oar, slide a straight object along it, or pull on it to see if the submerged part gets left behind. The concept of truth and reality behind such tests appears to be universal. People in all cultures distinguish truth from falsity and inner mental life from overt reality, and try to deduce the presence of unobservable objects from the perceptible clues they leave behind. 6
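One way to see how "reasoning backwards" produces the checker-shadow effect is the textbook lightness-constancy relation: the visual system in effect divides the light reaching the eye by its estimate of the illumination falling on the surface. The sketch below illustrates only that one idea, with invented numbers; it is not a model of real visual circuitry.

```python
# Simplified lightness-constancy sketch: perceived reflectance is (roughly)
# the luminance reaching the eye divided by the estimated illumination.
# All numbers are invented for illustration.

def perceived_reflectance(luminance: float, estimated_illumination: float) -> float:
    """Infer a surface's reflectance by discounting the estimated illuminant."""
    return luminance / estimated_illumination

# Squares A and B in Adelson's figure send the *same* luminance to the eye...
luminance_A = luminance_B = 40.0

# ...but the visual system judges that B sits in a shadow (less light falls on it).
illumination_on_A = 100.0   # judged to be in full light
illumination_on_B = 50.0    # judged to be in shadow

print(perceived_reflectance(luminance_A, illumination_on_A))  # 0.4 -> looks dark
print(perceived_reflectance(luminance_B, illumination_on_B))  # 0.8 -> looks light
```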
~
Visual perception is the most piquant form of knowledge of the world, but relativists are less concerned with how we see objects than with how we categorize them: how we sort our experiences into conceptual categories like birds, tools, and people. The seemingly innocuous suggestion that the categories of the mind correspond to something in reality became a contentious idea in the twentieth century because some categories -- stereotypes of race, gender, ethnicity, and sexual orientation -- can be harmful when they are used to discriminate or oppress.
The word stereotype originally referred to a kind of printing plate. Its current sense as a pejorative and inaccurate image standing for a category of people was introduced in 1922 by the journalist Walter Lippmann. Lippmann was an important public intellectual who, among other things, helped to found The New Republic, influenced Woodrow Wilson's policies at the end of World War I, and wrote some of the first attacks on IQ testing. In his book Public Opinion, Lippmann fretted about the difficulty of achieving true democracy in an age in which ordinary people could no longer judge public issues rationally because they got their information in what we today call sound bites. As part of this argument, Lippmann proposed that ordinary people's concepts of social groups were stereotypes: mental pictures that are incomplete, biased, insensitive to variation, and resistant to disconfirming information.
Lippmann had an immediate influence on social science (though the subtleties and qualifications of his original argument were forgotten). Psychologists gave people lists of ethnic groups and lists of traits and asked them to pair them up. Sure enough, people linked Jews with "shrewd" and "mercenary," Germans with "efficient" and "nationalistic," Negroes with "superstitious" and "happy-go-lucky," and so on. 7 Such generalizations are pernicious when applied to individuals, and though they are still lamentably common in much of {202} the world, they are now actively avoided by educated people and by mainstream public figures.
By the 1970s, many thinkers were not content to note that stereotypes about categories of people can be inaccurate. They began to insist that the categories themselves don't exist other than in our stereotypes. An effective way to fight racism, sexism, and other kinds of prejudice, in this view, is to deny that conceptual categories about people have any claim to objective reality. It would be impossible to believe that homosexuals are effeminate, blacks superstitious, and women passive if there were no such things as categories of homosexuals, blacks, or women to begin with. For example, the philosopher Richard Rorty has written, "'The homosexual,' 'the Negro,' and 'the female' are best seen not as inevitable classifications of human beings but rather as inventions that have done more harm than good."8
For that matter, many writers think, why stop there? Better still to insist that all categories are social constructions and therefore figments, because that would really make invidious stereotypes figments. Rorty notes with approval that many thinkers today "go on to suggest that quarks and genes probably are [inventions] too. " Postmodernists and other relativists attack truth and objectivity not so much because they are interested in philosophical problems of ontology and epistemology but because they feel it is the best way to pull the rug out from under racists, sexists, and homophobes. The philosopher Ian Hacking provides a list of almost forty categories that have recently been claimed to be "socially constructed. " The prime examples are race, gender, masculinity, nature, facts, reality, and the past.
But the list has been growing and now includes authorship, AIDS, brotherhood, choice, danger, dementia, illness, Indian forests, inequality, the Landsat satellite system, the medicalized immigrant, the nation-state, quarks, school success, serial homicide, technological systems, white-collar crime, women refugees, and Zulu nationalism. According to Hacking, the common thread is a conviction that the category is not determined by the nature of things and therefore is not inevitable. The further implication is that we would be much better off if it were done away with or radically transformed. 9
This whole enterprise is based on an unstated theory of human concept formation: that conceptual categories bear no systematic relation to things in the world but are socially constructed (and can therefore be reconstructed). Is it a correct theory? In some cases it has a grain of truth. As we saw in Chapter 4, some categories really are social constructions: they exist only because people tacitly agree to act as if they exist. Examples include money, tenure, citizenship, decorations for bravery, and the presidency of the United States. 10 But that does not mean that all conceptual categories are socially constructed. Concept formation has been studied for decades by cognitive psychologists, and they conclude that most concepts pick out categories of objects in the {203} world which had some kind of reality before we ever stopped to think about them. 11
Yes, every snowflake is unique, and no category will do complete justice to every one of its members. But intelligence depends on lumping together things that share properties, so that we are not flabbergasted by every new thing we encounter. As William James wrote, "A polyp would be a conceptual thinker if a feeling of 'Hollo! thingumbob again! ' ever flitted through its mind. " We perceive some traits of a new object, place it in a mental category, and infer that it is likely to have the other traits typical of that category, ones we cannot perceive. If it walks like a duck and quacks like a duck, it probably is a duck. If it's a duck, it's likely to swim, fly, have a back off which water rolls, and contain meat that's tasty when wrapped in a pancake with scallions and hoisin sauce.
This kind of inference works because the world really does contain ducks, which really do share properties. If we lived in a world in which walking quacking objects were no more likely to contain meat than did any other object, the category "duck" would be useless and we probably would not have evolved the ability to form it. If you were to construct a giant spreadsheet in which the rows and columns were traits that people notice and the cells were filled in by objects that possess that combination of traits, the pattern of filled cells would be lumpy. You would find lots of entries at the intersection of the "quacks" row and the "waddles" column but none at the "quacks" row and the "gallops" column. Once you specify the rows and columns, the lumpiness comes from the world, not from society or language. It is no coincidence that the same living things tend to be classified together by the words in European cultures, the words for plant and animal kinds in other cultures (including preliterate cultures), and the Linnaean taxa of professional biologists equipped with calipers, dissecting tools, and DNA sequencers. Ducks, biologists say, are several dozen species in the subfamily Anatinae, each with a distinct anatomy, an ability to interbreed with other members of their species, and a common ancestor in evolutionary history.
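The lumpiness can be made concrete with a toy feature table. The objects and traits below are invented stand-ins, not data from the text, but they show the two points at issue: some combinations of traits recur while others never occur, and membership in a clump licenses inferences about traits one has not observed.

```python
# Toy trait-by-object table illustrating the "lumpy spreadsheet" point.
# Objects and traits are invented examples.
objects = {
    "mallard": {"quacks", "waddles", "swims", "feathered"},
    "teal":    {"quacks", "waddles", "swims", "feathered"},
    "goose":   {"waddles", "swims", "feathered", "honks"},
    "horse":   {"gallops", "neighs", "furry"},
    "zebra":   {"gallops", "neighs", "furry"},
}

def cooccurrence(trait_a: str, trait_b: str) -> int:
    """How many objects have both traits? 'Lumpiness' means some pairs are common, others never occur."""
    return sum(1 for traits in objects.values() if trait_a in traits and trait_b in traits)

print(cooccurrence("quacks", "waddles"))   # 2 -- these traits clump together
print(cooccurrence("quacks", "gallops"))   # 0 -- this cell of the "spreadsheet" stays empty

# Inference from a partial observation: an unknown thing that quacks and waddles
# is probably like the other quacking waddlers, so we predict its unseen traits.
observed = {"quacks", "waddles"}
candidates = [traits for traits in objects.values() if observed <= traits]
predicted = set.intersection(*candidates) - observed
print(predicted)   # {'swims', 'feathered'} -- traits we infer without having seen them
```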
Most cognitive psychologists believe that conceptual categories come from two mental processes. 12 One of them notices clumps of entries in the mental spreadsheet and treats them as categories with fuzzy boundaries, prototypical members, and overlapping similarities, like the members of a family. That's why our mental category "duck" can embrace odd ducks that don't match the prototypical duck, such as lame ducks, who cannot swim or fly, Muscovy ducks, which have claws and spurs on their feet, and Donald Duck, who talks and wears clothing. The other mental process looks for crisp rules and definitions and enters them into chains of reasoning. The second system can learn that true ducks molt twice a season and have overlapping scales on their legs and hence that certain birds that look like geese and are called geese really are ducks. Even when people don't know these facts from academic {204} biology, they have a strong intuition that species are defined by an internal essence or hidden trait that lawfully gives rise to its visible features. 13
Anyone who teaches the psychology of categorization has been hit with this question from a puzzled student: "You're telling us that putting things into categories is rational and makes us smart. But we've always been taught that putting people into categories is irrational and makes us sexist and racist. If categorization is so great when we think about ducks and chairs, why is it so terrible when we think about genders and ethnic groups? " As with many ingenuous questions from students, this one uncovers a shortcoming in the literature, not a flaw in their understanding.
The idea that stereotypes are inherently irrational owes more to a condescension toward ordinary people than it does to good psychological research. Many researchers, having shown that stereotypes existed in the minds of their subjects, assumed that the stereotypes had to be irrational, because they were uncomfortable with the possibility that some trait might be statistically true of some group. They never actually checked. That began to change in the 1980s, and now a fair amount is known about the accuracy of stereotypes. 14
With some important exceptions, stereotypes are in fact not inaccurate when assessed against objective benchmarks such as census figures or the reports of the stereotyped people themselves. People who believe that African Americans are more likely to be on welfare than whites, that Jews have higher average incomes than WASPs, that
business students are more conservative than students in the arts, that women are more likely than men to want to lose weight, and that men are more likely than women to swat a fly with their bare hands, are not being irrational or bigoted. Those beliefs are correct. People's stereotypes are generally consistent with the statistics, and in many cases their bias is to underestimate the real differences between sexes or ethnic groups. 15 This does not mean that the stereotyped traits are unchangeable, of course, or that people think they are unchangeable, only that people perceive the traits fairly accurately at the time.
Moreover, even when people believe that ethnic groups have characteristic traits, they are never mindless stereotypers who literally believe that each and every member of the group possesses those traits. People may think that Germans are, on average, more efficient than non-Germans, but no one believes that every last German is more efficient than every non-German. 16 And people have no trouble overriding a stereotype when they have good information about an individual. Contrary to a common accusation, teachers' impressions of their individual pupils are not contaminated by their stereotypes of race, gender, or socioeconomic status. The teachers' impressions accurately reflect the pupil's performance as measured by objective tests. 17
Now for the important exceptions. Stereotypes can be downright inaccurate when a person has few or no firsthand encounters with the stereotyped {205} group, or belongs to a group that is overtly hostile to the one being judged. During World War II, when the Russians were allies of the United States and the Germans were enemies, Americans judged Russians to have more positive traits than Germans. Soon afterward, when the alliances reversed, Americans judged Germans to have more positive traits than Russians. 18
Also, people's ability to set aside stereotypes when judging an individual is accomplished by their conscious, deliberate reasoning. When people are distracted or put under pressure to respond quickly, they are more likely to judge that a member of an ethnic group has all the stereotyped traits of the group. 19 This comes from the two-part design of the human categorization system mentioned earlier. Our network of fuzzy associations naturally reverts to a stereotype when we first encounter an individual. But our rule-based categorizer can block out those associations and make deductions based on the relevant facts about that individual. It can do so either for practical reasons, when information about a group-wide average is less diagnostic than information about the individual, or for social and moral reasons, out of respect for the imperative that one ought to ignore certain group-wide averages when judging an individual.
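A toy sketch of that two-part design might look like the following. The groups, scores, and rules are invented for illustration, not drawn from the studies cited here: a fast associative system defaults to the group-wide average, while a deliberate rule-based system overrides it when individual information is available or when a policy says group averages must be ignored.

```python
# Toy sketch of the two-part categorizer described above.
# Group averages, scores, and thresholds are invented for illustration only.
from typing import Optional

GROUP_AVERAGE_SCORE = {"group_x": 55.0, "group_y": 60.0}   # hypothetical base rates

def associative_guess(group: str) -> float:
    """Fast, fuzzy default: fall back on the stereotype (the group-wide average)."""
    return GROUP_AVERAGE_SCORE[group]

def considered_judgment(group: str,
                        individual_score: Optional[float] = None,
                        ignore_group_info: bool = False) -> float:
    """Deliberate, rule-based system: use individual evidence when available,
    or suppress group information entirely when a moral/policy rule demands it."""
    if individual_score is not None:
        return individual_score          # individual facts are more diagnostic
    if ignore_group_info:
        return sum(GROUP_AVERAGE_SCORE.values()) / len(GROUP_AVERAGE_SCORE)  # population-wide prior
    return associative_guess(group)      # only then revert to the stereotype

print(considered_judgment("group_x"))                          # 55.0 -- default under time pressure
print(considered_judgment("group_x", individual_score=80.0))   # 80.0 -- individual facts override
print(considered_judgment("group_x", ignore_group_info=True))  # 57.5 -- group averages set aside
```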
The upshot of this research is not that stereotypes are always accurate but that they are not always false, or even usually false. This is just what we would expect if human categorization -- like the rest of the mind -- is an adaptation that keeps track of aspects of the world that are relevant to our long-term well-being. As the social psychologist Roger Brown pointed out, the main difference between categories of people and categories of other things is that when you use a prototypical exemplar to stand for a category of things, no one takes offense. When Webster's dictionary used a sparrow to stand for all birds, "emus and ostriches and penguins and eagles did not go on the attack. " But just imagine what would have happened if Webster's had used a picture of a soccer mom to illustrate woman and a picture of a business executive to illustrate man. Brown remarks, "Of course, people would be right to take offense since a prototype can never represent the variation that exists in natural categories. It's just that birds don't care but people do. "20
What are the implications of the fact that many stereotypes are statistically accurate? One is that contemporary scientific research on sex differences cannot be dismissed just because some of the findings are consistent with traditional stereotypes of men and women. Some parts of those stereotypes may be false, but the mere fact that they are stereotypes does not prove that they are false in every respect.
The partial accuracy of many stereotypes does not, of course, mean that racism, sexism, and ethnic prejudice are acceptable. Quite apart from the democratic principle that in the public sphere people should be treated as individuals, there are good reasons to be concerned about stereotypes. {206} Stereotypes based on hostile depictions rather than on firsthand experience are bound to be inaccurate. And some stereotypes are accurate only because of self-fulfilling prophecies. Forty years ago it may have been factually correct that few women and African Americans were qualified to be chief executives or presidential candidates. But that was only because of barriers that prevented them from attaining those qualifications, such as university policies that refused them admission out of a belief that they were not qualified. The institutional barriers had to be dismantled before the facts could change. The good news is that when the facts do change, people's stereotypes can change with them.
What about policies that go farther and actively compensate for prejudicial stereotypes, such as quotas and preferences that favor underrepresented groups? Some defenders of these policies assume that gatekeepers are incurably afflicted with baseless prejudices, and that quotas must be kept in place forever to neutralize their effects. The research on stereotype accuracy refutes that argument. Nonetheless, the research might support a different argument for preferences and other gender- and color-sensitive policies. Stereotypes, even when they are accurate, might be self-fulfilling, and not just in the obvious case of institutionalized barriers like those that kept women and
? ? ? ? ? ?
Even the most atheistic scientists do not, of course, advocate a callous amorality. The brain may be a physical system made of ordinary matter, but that matter is organized in such a way as to give rise to a sentient organism with a capacity to feel pleasure and pain. And that in turn sets the stage for the emergence of morality. The reason is succinctly explained in the comic strip Calvin and Hobbes (see p. 188).
The feline Hobbes, like his human namesake, has shown why an amoral egoist is in an untenable position. He is better off if he never gets shoved into the mud, but he can hardly demand that others refrain from shoving him if he himself is not willing to forgo shoving others. And since one is better off not shoving and not getting shoved than shoving and getting shoved, it pays to insist on a moral code, even if the price is adhering to it oneself. As moral philosophers through the ages have pointed out, a philosophy of living based on "Not everyone, just me! " falls apart as soon as one sees oneself from an objective standpoint as a person just like others. It is like insisting that "here," the point in space one happens to be occupying at the moment, is a special place in the universe. 3
The dynamic between Calvin and Hobbes (the cartoon characters) is inherent to social organisms, and there are reasons to believe that the solution {188}
? ? ? ? ? ? to it -- a moral sense -- evolved in our species rather than having to be deduced from scratch by each of us after we've picked ourselves up out of the mud. 4 Children as young as a year and a half spontaneously give toys, proffer help, and try to comfort adults or other children who are visibly distressed. 5 People in all cultures distinguish right from wrong, have a sense of fairness, help one another, impose rights and obligations, believe that wrongs should be redressed, and proscribe rape, murder, and some kinds of violence. 6 These normal sentiments are conspicuous by their absence in the aberrant individuals we call psychopaths. 7 The alternative, then, to the religious theory of the source of values is that evolution endowed us with a moral sense, and we have expanded its circle of application over the course of history through reason (grasping the logical interchangeability of our interests and others'), knowledge (learning of the advantages of cooperation over the long term), and sympathy (having experiences that allow us to feel other people's pain). {189}
How can we tell which theory is preferable? A thought experiment can pit them against each other. What would be the right thing to do if God had commanded people to be selfish and cruel rather than generous and kind? Those who root their values in religion would have to say that we ought to be selfish and cruel. Those who appeal to a moral sense would say that we ought to reject God's command. This shows -- I hope -- that it is our moral sense that deserves priority. 8
This thought experiment is not just a logical brainteaser of the kind beloved by thirteen-year-old atheists, such as why God cares how we behave if he can see the future and already knows. The history of religion shows that God has commanded people to do all manner of selfish and cruel acts: massacre Midianites and abduct their women, stone prostitutes, execute homosexuals, burn witches, slay heretics and infidels, throw Protestants out of windows, withhold medicine from dying children, shoot up abortion clinics, hunt down Salman Rushdie, blow themselves up in marketplaces, and crash airplanes into skyscrapers. Recall that even Hitler thought he was carrying out the will of God. 9 The recurrence of evil acts committed in the name of God shows that they are not random perversions. An omnipotent authority that no one can see is a useful backer for malevolent leaders hoping to enlist holy warriors. And since unverifiable beliefs have to be passed along from parents and peers rather than discovered in the world, they
? ? ? ? ? ? ? differ from group to group and become divisive identity badges.
And who says the doctrine of the soul is more humane than the understanding of the mind as a physical organ? I see no dignity in letting people die of hepatitis or be ravaged by Parkinson's disease when a cure may lie in research on stem cells that religious movements seek to ban because it uses balls of cells that have made the "ontological leap" to "spiritual souls. " Sources of immense misery such as Alzheimer's disease, major depression, and schizophrenia will be alleviated not by treating thought and emotion as manifestations of an immaterial soul but by treating them as manifestations of physiology and genetics. 10
Finally, the doctrine of a soul that outlives the body is anything but righteous, because it necessarily devalues the lives we live on this earth. When Susan Smith sent her two young sons to the bottom of a lake, she eased her conscience with the rationalization that "my children deserve to have the best, and now they will. " Allusions to a happy afterlife are typical in the final letters of parents who take their children's lives before taking their own,11 and we have recently been reminded of how such beliefs embolden suicide bombers and kamikaze hijackers. This is why we should reject the argument that if people stopped believing in divine retribution they would do evil with impunity. Yes, if nonbelievers thought they could elude the legal system, the opprobrium of their {190} communities, and their own consciences, they would not be deterred by the threat of spending eternity in hell. But they would also not be tempted to massacre thousands of people by the promise of spending eternity in heaven.
Even the emotional comfort of a belief in an afterlife can go both ways. Would life lose its purpose if we ceased to exist when our brains die? On the contrary, nothing invests life with more meaning than the realization that every moment of sentience is a precious gift. How many fights have been averted, how many friendships renewed, how many hours not squandered, how many gestures of affection offered, because we sometimes remind ourselves that "life is short"? ~
Why do secular thinkers fear that biology drains life of meaning? It is because biology seems to deflate the values we most cherish. If the reason we love our children is that a squirt of oxytocin in the brain compels us to protect our genetic investment, wouldn't the nobility of parenthood be undermined and its sacrifices devalued? If sympathy, trust, and a yearning for justice evolved as a way to earn favors and deter cheaters, wouldn't that imply that there are really no such things as altruism and justice for their own sake? We sneer at the philanthropist who profits from his donation because of the tax savings, the televangelist who thunders against sin but visits prostitutes, the politician who defends the downtrodden only when the cameras are rolling, and the sensitive new-age guy who backs feminism because it's a good way to attract women. Evolutionary psychology seems to be saying that we are all such hypocrites, all the time.
The fear that scientific knowledge undermines human values reminds me of the opening scene in Annie Hall, in which the young Alvy Singer has been taken to the family doctor:
mother: He's been depressed. All of a sudden, he can't do anything. doctor: Why are you depressed, Alvy?
mother: Tell Dr. Flicker. [Answers for him. ] It's something he read. doctor: Something he read, huh?
alvy: [Head down. ] The universe is expanding.
doctor: The universe is expanding?
alvy: Well, the universe is everything, and if it's expanding, someday it will break apart and that would be the end of everything!
mother: What is that your business? [To the doctor. ] He stopped doing his homework.
alvy: What's the point?
The scene is funny because Alvy has confused two levels of analysis: the scale of billions of years with which we measure the universe, and the scale of {191} decades, years, and days with which we measure our lives. As Alvy's mother points out, "What has the universe got to do with it? You're here in Brooklyn! Brooklyn is not expanding! " People who are depressed at the thought that all our motives are selfish are as confused as Alvy. They have mixed up ultimate causation (why something evolved by natural selection) with proximate causation (how the entity works here and now). The mix-up is natural because the two explanations can look so much alike.
Richard Dawkins showed that a good way to understand the logic of natural selection is to imagine that genes are agents with selfish motives. No one should begrudge him the metaphor, but it contains a trap for the unwary. The genes have metaphorical motives -- making copies of themselves -- and the organisms they design have real motives. But they are not the same motives. Sometimes the most selfish thing a gene can do is wire unselfish motives into a human brain -- heartfelt, unstinting, deep-in-the-marrow unselfishness. The love of children (who carry one's
? ? ? ? genes into posterity), a faithful spouse (whose genetic fate is identical to one's own), and friends and allies (who trust you if you're trustworthy) can be bottomless and unimpeachable as far as we humans are concerned (proximate level), even if it is metaphorically self-serving as far as the genes are concerned (ultimate level).
I suspect there is another reason why the explanations are so easily confused. We all know that people sometimes have ulterior motives. They may be publicly generous but privately greedy, publicly pious but privately cynical, publicly platonic but privately lusting. Freud accustomed us to the idea that ulterior motives are pervasive in behavior, exerting their effects from an inaccessible stratum of the mind. Combine this with the common misconception that the genes are a kind of essence or core of the person, and you get a mongrel of Dawkins and Freud: the idea that the metaphorical motives of the genes are the deep, unconscious, ulterior motives of the person. That is an error. Brooklyn is not expanding.
Even people who can keep genes and people apart in their minds might find themselves depressed. Psychology has taught us that aspects of our experience may be figments, artifacts of how information is processed in the brain. The difference in kind between our experience of red and our experience of green does not mirror any difference in kind in lightwaves in the world -- the wavelengths of light, which give rise to our perception of hue, form a smooth continuum. Red and green, perceived as qualitatively different properties, are constructs of the chemistry and circuitry of our nervous system. They could be absent in an organism with different photopigments or wiring; indeed, people with the most common form of colorblindness are just such organisms. And the emotional coloring of an object is as much a figment as its {192} physical coloring. The sweetness of fruit, the scariness of heights, and the vileness of carrion are fancies of a nervous system that evolved to react to those objects in adaptive ways.
The sciences of human nature seem to imply that the same is true of right and wrong, merit and worthlessness, beauty and ugliness, holiness and baseness. They are neural constructs, movies we project onto the interior of our skulls, ways to tickle the pleasure centers of the brain, with no more reality than the difference between red and green. When Marley's ghost asked Scrooge why he doubted his senses, he said, "Because a little thing affects them. A slight disorder of the stomach makes them cheats. You may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato. There's more of gravy than of grave about you, whatever you are! " Science seems to be saying that the same is true of everything we value.
But just because our brains are prepared to think in certain ways, it does not follow that the objects of those thoughts are fictitious. Many of our faculties evolved to mesh with real entities in the world. Our perception of depth is the product of complicated circuitry in the brain, circuitry that is absent from other species. But that does not mean that there aren't real trees and cliffs out there, or that the world is as flat as a pancake. And so it may be with more abstract entities. Humans, like many animals, appear to have an innate sense of number, which can be explained by the advantages of reasoning about numerosity during our evolutionary history. (For example, if three bears go into a cave and two come out, is it safe to enter? ) But the mere fact that a number faculty evolved does not mean that numbers are hallucinations. According to the Platonist conception of number favored by many mathematicians and philosophers, entities such as numbers and shapes have an existence independent of minds. The number three is not invented out of whole cloth; it has real properties that can be discovered and explored. No rational creature equipped with circuitry to understand the concept "two" and the concept of addition could discover that two plus one equals anything other than three. That is why we expect similar bodies of mathematical results to emerge from different cultures or even different planets. If so, the number sense evolved to grasp abstract truths in the world that exist independently of the minds that grasp them.
Perhaps the same argument can be made for morality. According to the theory of moral realism, right and wrong exist, and have an inherent logic that licenses some moral arguments and not others. 12 The world presents us with non-zero-sum games in which it is better for both parties to act unselfishly than for both to act selfishly (better not to shove and not to be shoved than to shove and be shoved). Given the goal of being better off, certain conditions {193} follow necessarily. No creature equipped with circuitry to understand that it is immoral for you to hurt me could discover anything but that it is immoral for me to hurt you. As with numbers and the number sense, we would expect moral systems to evolve toward similar conclusions in different cultures or even different planets. And in fact the Golden Rule has been rediscovered many times: by the authors of Leviticus and the Mahabharata; by Hillel, Jesus, and Confucius; by the Stoic philosophers of the Roman Empire; by social contract theorists such as Hobbes, Rousseau, and Locke; and by moral philosophers such as Kant in his categorical imperative. 13 Our moral sense may have evolved to mesh with an intrinsic logic of ethics rather than concocting it in our heads out of nothing.
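To make the structure of that game concrete, here is a minimal sketch in Python. The payoff numbers are invented for illustration and are not taken from the text; only their ordering matters (shoving yields a small gain, being shoved a large loss).

```python
# A hypothetical payoff table for the "shoving game" described above.
# Each key is (my_choice, your_choice); each value is (my_payoff, your_payoff).
# The numbers are illustrative; only their ordering carries the argument.
PAYOFFS = {
    ("refrain", "refrain"): (0, 0),
    ("shove",   "refrain"): (1, -3),
    ("refrain", "shove"):   (-3, 1),
    ("shove",   "shove"):   (-2, -2),
}

# The game is non-zero-sum: payoffs within an outcome need not cancel out.
assert any(sum(pair) != 0 for pair in PAYOFFS.values())

# Mutual restraint leaves both players better off than mutual shoving,
# which is the inequality the argument in the text relies on.
both_refrain = PAYOFFS[("refrain", "refrain")]
both_shove = PAYOFFS[("shove", "shove")]
assert both_refrain[0] > both_shove[0] and both_refrain[1] > both_shove[1]
```

The asserts check nothing more than the two facts the paragraph appeals to: the payoffs do not sum to zero, and both parties prefer mutual restraint to mutual shoving.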
But even if the Platonic existence of moral logic is too rich for your blood, you can still see morality as something more than a social convention or religious dogma. Whatever its ontological status may be, a moral sense is part of the standard equipment of the human mind. It's the only mind we've got, and we have no choice but to take its intuitions seriously. If we are so constituted that we cannot help but think in moral terms (at least some of the time and toward some people), then morality is as real for us as if it were decreed by the Almighty or written into the cosmos. And so it is with other human values like love, truth, and beauty. Could we ever know whether they are really "out there" or whether we just think they are out there because the human brain makes it impossible not to think they are out there? And how bad would it be if they were inherent to the human way of thinking? Perhaps we should reflect on our condition as Kant did in his Critique of Practical Reason: "Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them: the starry heavens above and the moral law within."
~
In the past four chapters I have shown why new ideas from the sciences of human nature do not undermine humane values. On the contrary, they present opportunities to sharpen our ethical reasoning and put those values on a firmer foundation. In a nutshell:
• It is a bad idea to say that discrimination is wrong only because the traits of all people are indistinguishable.
• It is a bad idea to say that violence and exploitation are wrong only because people are not naturally inclined to them.
• It is a bad idea to say that people are responsible for their actions only because the causes of those actions are mysterious.
• And it is a bad idea to say that our motives are meaningful in a personal sense only because they are inexplicable in a biological sense. {194}
These are bad ideas because they make our values hostages to fortune, implying that someday factual discoveries could make them obsolete. And they are bad ideas because they conceal the downsides of denying human nature: persecution of the successful, intrusive social engineering, the writing off of suffering in other cultures, an incomprehension of the logic of justice, and the devaluing of human life on earth.
{195}
KNOW THYSELF
Now that I have attempted to make the very idea of human nature respectable, it is time to say something about what it is and what difference it makes for our public and private lives. The chapters in Part IV present some current ideas about the design specs of the basic human
faculties. These are not just topics in a psychology curriculum but have implications for many arenas of public discourse. Ideas about the contents of cognition -- concepts, words, and images -- shed light on the roots of prejudice, on the media, and on the arts. Ideas about the capacity for reason can enter into our policies of education and applications of technology. Ideas about social relations are relevant to the family, to sexuality, to social organization, and to crime. Ideas about the moral sense inform the way we evaluate political movements and how we trade off one value against another.
In each of these arenas, people always appeal to some conception of human nature, whether they acknowledge it or not. The problem is that the conceptions are often based on gut feelings, folk theories, and archaic versions of biology. My goal is to make these conceptions explicit, to suggest what is right and wrong about them, and to spell out some of the implications. Ideas about human nature cannot, on their own, resolve perplexing controversies or determine public policy. But without such ideas we are not playing with a full deck and are vulnerable to unnecessary befuddlement. As the biologist Richard Alexander has noted, "Evolution is surely most deterministic for those still unaware of it. "1
{197} Chapter 12
In Touch with Reality
What a piece of work is a man!
How noble in reason!
How infinite in faculty!
In form, in moving, how express and admirable!
In action, how like an angel!
In apprehension, how like a god!
-- William Shakespeare
The starting point for acknowledging human nature is a sheer awe and humility in the face of the staggering complexity of its source, the brain. Organized by the three billion bases of our genome and shaped by hundreds of millions of years of evolution, the brain is a network of unimaginable intricacy: a hundred billion neurons linked by a hundred trillion connections, woven into a convoluted three-dimensional architecture. Humbling, too, is the complexity of what it does. Even the mundane talents we share with other primates -- walking, grasping, recognizing -- are solutions to engineering problems at or beyond the cutting edge of artificial intelligence.
The talents that are human birthrights -- speaking and understanding, using common sense, teaching children, inferring other people's motives -- will probably not be duplicated by machines in our lifetime, if ever. All this should serve as a counterweight to the image of the mind as formless raw material and to people as insignificant atoms making up the complex being we call "society. "
The human brain equips us to thrive in a world of objects, living things, and other people. Those entities have a large impact on our well-being, and one would expect the brain to be well suited to detecting them and their powers. Failing to recognize a steep precipice or a hungry panther or a jealous spouse can have significant negative consequences for biological fitness, to put it mildly. The fantastic complexity of the brain is there in part to register consequential facts about the world around us. {198}
But this truism has been rejected by many sectors of modern intellectual life. According to the relativistic wisdom prevailing in much of academia today, reality is socially constructed by the use of language, stereotypes, and media images. The idea that people have access to facts about the world is naïve, say the proponents of social constructionism, science studies, cultural studies, critical theory, postmodernism, and deconstructionism. In their view, observations are always infected by theories, and theories are saturated with ideology and political doctrines, so anyone who claims to have the facts or know the truth is just trying to exert power over everyone else.
Relativism is entwined with the doctrine of the Blank Slate in two ways. One is that relativists have a penny-pinching theory of psychology in which the mind has no mechanisms designed to grasp reality; all it can do is passively download words, images, and stereotypes from the surrounding culture. The other is the relativists' attitude toward science. Most scientists regard their work as an extension of our everyday ability to figure out what is out there and how things work. Telescopes and microscopes amplify the visual system; theories formalize our hunches about cause and effect; experiments refine our drive to gather evidence about events we cannot witness directly. Relativist movements agree that science is perception and cognition writ large, but they draw the opposite conclusion: that scientists, like laypeople, are unequipped to grasp an objective reality. Instead, their advocates say, "Western science is only one way of describing reality, nature, and the way things work -- a very effective way, certainly, for the production of goods and profits, but unsatisfactory in most other respects. It is an imperialist arrogance which ignores the sciences and insights of most other cultures and times. "1 Nowhere is this more significant than in the scientific study of politically charged topics such as race, gender, violence, and social organization. Appealing to "facts" or "the truth" in connection with these topics is just a ruse, the relativists say, because there is no "truth" in the sense of an objective yardstick independent of cultural and political presuppositions.
Skepticism about the soundness of people's mental faculties also determines whether one should respect ordinary people's tastes and opinions (even those we don't much like) or treat the people as dupes of an insidious commercial culture. According to relativist doctrines like "false consciousness," "inauthentic preferences," and "interiorized authority," people may be mistaken about their own desires. If so, it would undermine the assumptions behind democracy, which gives ultimate authority to the preferences of the majority of a population, and the assumptions behind market economies, which treat people as the best judges of how they should allocate their own resources. Perhaps not coincidentally, it elevates the scholars and artists who analyze the use of language and images in society, because only they can unmask the ways in which such media mislead and corrupt. {199}
This chapter is about the assumptions about cognition -- in particular, concepts, words, and images -- that underlie recent relativistic movements in intellectual life. The best way to introduce the argument is with examples from the study of perception, our most immediate connection to the world. They immediately show that the question of whether reality is socially constructed or directly available has not been properly framed. Neither alternative is correct.
Relativists have a point when they say that we don't just open our eyes and apprehend reality, as if perception were a window through which the soul gazes at the world. The idea that we just see things as they are is called naïve realism, and it was refuted by skeptical philosophers thousands of years ago with the help of a simple phenomenon: visual illusions. Our visual systems can play tricks on us, and that is enough to prove they are gadgets, not pipelines to the truth. Here are two of my favorites. In Roger Shepard's "Turning the Tables"2 (right), the two parallelograms are identical in size and shape. In Edward Adelson's "Checker Shadow Illusion"3 (below) the light square in the middle of the shadow (B) is the same shade of gray as the dark squares outside the shadow (A):
But just because the world we know is a construct of our brain, that does not mean it is an arbitrary construct -- a phantasm created by expectations or the social context. Our perceptual systems are designed to register aspects of the external world that were important to our survival, like the sizes, shapes, and materials of objects. They need a complex design to accomplish this feat because the retinal image is not a replica of the world. The projection of an object on the retina grows, shrinks, and warps as the object moves around; color and brightness fluctuate as the lighting changes from sun to clouds or from indoor to outdoor light. But somehow the brain solves these maddening problems. It works as if it were reasoning backwards from the retinal image to hypotheses about reality, using {200} geometry, optics, probability theory, and assumptions about the world. Most of the time the system works: people don't usually bump into trees or bite into rocks.
But occasionally the brain is fooled. The ground stretching away from our feet projects an image from the bottom to the center of our visual field. As a result, the brain often interprets down-up in the visual field as near-far in the world, especially when reinforced by other perspective cues such as occluded parts (like the hidden table legs). Objects stretching away from the viewer get foreshortened by projection, and the brain compensates for this, so we tend to see a given distance running up-and-down in the visual field as coming from a longer object than the same distance running left-to-right. And that makes us see the lengths and widths differently in the turned tables. By similar logic, objects in shadow reflect less light onto our retinas than objects in full illumination. Our brains compensate, making us see a given shade of gray as lighter when it is in shadow than when it is in sunshine. In each case we may see the lines and patches on the page incorrectly, but that is only because our visual systems are working very hard to see them as coming from a real world. Like a policeman framing a suspect, Shepard and Adelson have planted evidence that would lead a rational but unsuspecting observer to an incorrect conclusion. If we were in a world of ordinary 3-D objects that had projected those images onto our retinas, our perceptual experience would be accurate. Adelson explains: "As with many so-called illusions, this effect really demonstrates the success rather than the failure of the visual system. The visual system is not very good at being a physical light meter, but that is not its purpose. The important task is to break the image information down into meaningful components, and thereby perceive the nature of the objects in view."4
It's not that expectations from past experience are irrelevant to perception. But their influence is to make our perceptual systems more accurate, not more arbitrary. In the two words below, we perceive the same shape as an "H" in the first word and as an "A" in the second:5
We see the shapes that way because experience tells us -- correctly -- that the odds are high that there really is an "H" in the middle of the first word and an "A" in the middle of the second, even if that is not true in an atypical case. The mechanisms of perception go to a lot of trouble to ensure that what we see corresponds to what is usually out there.
So the demonstrations that refute naïve realism most decisively also refute the idea that the mind is disconnected from reality. There is a third alternative: {201} that the brain evolved fallible yet intelligent mechanisms that work to keep us in touch with aspects of reality that were relevant to the survival and reproduction of our ancestors. And that is true not just of our perceptual faculties but of our cognitive faculties. The fact that our cognitive faculties (like our perceptual faculties) are attuned to the real world is most obvious from their response to illusions: they recognize the possibility of a breach with reality and find a way to get at the truth behind the false impression. When we see an oar that appears to be severed at the water's surface, we know how to tell whether it really is severed or just looks that way: we can palpate the oar, slide a straight object along it, or pull on it to see if the submerged part gets left behind. The concept of truth and reality behind such tests appears to be universal. People in all cultures distinguish truth from falsity and inner mental life from overt reality, and try to deduce the presence of unobservable objects from the perceptible clues they leave behind. 6
~
Visual perception is the most piquant form of knowledge of the world, but relativists are less concerned with how we see objects than with how we categorize them: how we sort our experiences into conceptual categories like birds, tools, and people. The seemingly innocuous suggestion that the categories of the mind correspond to something in reality became a contentious idea in the twentieth century because some categories -- stereotypes of race, gender, ethnicity, and sexual orientation -- can be harmful when they are used to discriminate or oppress.
The word stereotype originally referred to a kind of printing plate. Its current sense as a pejorative and inaccurate image standing for a category of people was introduced in 1922 by the journalist Walter Lippmann. Lippmann was an important public intellectual who, among other things, helped to found The New Republic, influenced Woodrow Wilson's policies at the end of World War I, and wrote some of the first attacks on IQ testing. In his book Public Opinion, Lippmann fretted about the difficulty of achieving true democracy in an age in which ordinary people could no longer judge public issues rationally because they got their information in what we today call sound bites. As part of this argument, Lippmann proposed that ordinary people's concepts of social groups were stereotypes: mental pictures that are incomplete, biased, insensitive to variation, and resistant to disconfirming information.
Lippmann had an immediate influence on social science (though the subtleties and qualifications of his original argument were forgotten). Psychologists gave people lists of ethnic groups and lists of traits and asked them to pair them up. Sure enough, people linked Jews with "shrewd" and "mercenary," Germans with "efficient" and "nationalistic," Negroes with "superstitious" and "happy-go-lucky," and so on. 7 Such generalizations are pernicious when applied to individuals, and though they are still lamentably common in much of {202} the world, they are now actively avoided by educated people and by mainstream public figures.
By the 1970s, many thinkers were not content to note that stereotypes about categories of people can be inaccurate. They began to insist that the categories themselves don't exist other than in our stereotypes. An effective way to fight racism, sexism, and other kinds of prejudice, in this view, is to deny that conceptual categories about people have any claim to objective reality. It would be impossible to believe that homosexuals are effeminate, blacks superstitious, and women passive if there were no such things as categories of homosexuals, blacks, or women to begin with. For example, the philosopher Richard Rorty has written, "'The homosexual,' 'the Negro,' and 'the female' are best seen not as inevitable classifications of human beings but rather as inventions that have done more harm than good."8
For that matter, many writers think, why stop there? Better still to insist that all categories are social constructions and therefore figments, because that would really make invidious stereotypes figments. Rorty notes with approval that many thinkers today "go on to suggest that quarks and genes probably are [inventions] too. " Postmodernists and other relativists attack truth and objectivity not so much because they are interested in philosophical problems of ontology and epistemology but because they feel it is the best way to pull the rug out from under racists, sexists, and homophobes. The philosopher Ian Hacking provides a list of almost forty categories that have recently been claimed to be "socially constructed. " The prime examples are race, gender, masculinity, nature, facts, reality, and the past.
But the list has been growing and now includes authorship, AIDS, brotherhood, choice, danger, dementia, illness, Indian forests, inequality, the Landsat satellite system, the medicalized immigrant, the nation-state, quarks, school success, serial homicide, technological systems, white-collar crime, women refugees, and Zulu nationalism. According to Hacking, the common thread is a conviction that the category is not determined by the nature of things and therefore is not inevitable. The further implication is that we would be much better off if it were done away with or radically transformed. 9
This whole enterprise is based on an unstated theory of human concept formation: that conceptual categories bear no systematic relation to things in the world but are socially constructed (and can therefore be reconstructed). Is it a correct theory? In some cases it has a grain of truth. As we saw in Chapter 4, some categories really are social constructions: they exist only because people tacitly agree to act as if they exist. Examples include money, tenure, citizenship, decorations for bravery, and the presidency of the United States. 10 But that does not mean that all conceptual categories are socially constructed. Concept formation has been studied for decades by cognitive psychologists, and they conclude that most concepts pick out categories of objects in the {203} world which had some kind of reality before we ever stopped to think about them. 11
Yes, every snowflake is unique, and no category will do complete justice to every one of its members. But intelligence depends on lumping together things that share properties, so that we are not flabbergasted by every new thing we encounter. As William James wrote, "A polyp would be a conceptual thinker if a feeling of 'Hollo! thingumbob again! ' ever flitted through its mind. " We perceive some traits of a new object, place it in a mental category, and infer that it is likely to have the other traits typical of that category, ones we cannot perceive. If it walks like a duck and quacks like a duck, it probably is a duck. If it's a duck, it's likely to swim, fly, have a back off which water rolls, and contain meat that's tasty when wrapped in a pancake with scallions and hoisin sauce.
This kind of inference works because the world really does contain ducks, which really do share properties. If we lived in a world in which walking quacking objects were no more likely to contain meat than any other object, the category "duck" would be useless and we probably would not have evolved the ability to form it. If you were to construct a giant spreadsheet in which the rows and columns were traits that people notice and the cells were filled in by objects that possess that combination of traits, the pattern of filled cells would be lumpy. You would find lots of entries at the intersection of the "quacks" row and the "waddles" column but none at the "quacks" row and the "gallops" column. Once you specify the rows and columns, the lumpiness comes from the world, not from society or language. It is no coincidence that the same living things tend to be classified together by the words for plant and animal kinds in European cultures, the words for plant and animal kinds in other cultures (including preliterate cultures), and the Linnaean taxa of professional biologists equipped with calipers, dissecting tools, and DNA sequencers. Ducks, biologists say, are several dozen species in the subfamily Anatinae, each with a distinct anatomy, an ability to interbreed with other members of their species, and a common ancestor in evolutionary history.
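As a toy rendering of that spreadsheet (the objects and traits below are invented for the example, not drawn from any real data), one can tabulate traits against objects and check that the filled cells clump rather than scatter:

```python
# A toy trait table: each invented object is listed with the traits it has.
OBJECTS = {
    "mallard": {"quacks", "waddles", "swims"},
    "teal":    {"quacks", "waddles", "swims"},
    "wigeon":  {"quacks", "waddles", "swims"},
    "horse":   {"gallops", "has_mane"},
    "zebra":   {"gallops", "has_mane"},
}

def co_occurrence(trait_a, trait_b):
    """Number of objects possessing both traits: one cell of the 'spreadsheet'."""
    return sum(1 for traits in OBJECTS.values()
               if trait_a in traits and trait_b in traits)

print(co_occurrence("quacks", "waddles"))  # 3: a heavily filled cell
print(co_occurrence("quacks", "gallops"))  # 0: an empty cell
```

Once the rows and columns are fixed, which cells are full and which are empty is settled by the listed objects themselves, not by anything in the bookkeeping.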
Most cognitive psychologists believe that conceptual categories come from two mental processes. 12 One of them notices clumps of entries in the mental spreadsheet and treats them as categories with fuzzy boundaries, prototypical members, and overlapping similarities, like the members of a family. That's why our mental category "duck" can embrace odd ducks that don't match the prototypical duck, such as lame ducks, who cannot swim or fly, Muscovy ducks, which have claws and spurs on their feet, and Donald Duck, who talks and wears clothing. The other mental process looks for crisp rules and definitions and enters them into chains of reasoning. The second system can learn that true ducks molt twice a season and have overlapping scales on their legs and hence that certain birds that look like geese and are called geese really are ducks. Even when people don't know these facts from academic {204} biology, they have a strong intuition that species are defined by an internal essence or hidden trait that lawfully gives rise to its visible features. 13
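A minimal sketch of those two processes, with invented features and an invented stand-in for the biologist's defining criteria, might look like this:

```python
# Process 1: fuzzy, similarity-to-prototype categorization.
DUCK_PROTOTYPE = {"quacks", "waddles", "swims", "flies"}

def duck_resemblance(features):
    """Graded overlap with the prototypical duck, from 0.0 to 1.0."""
    return len(features & DUCK_PROTOTYPE) / len(DUCK_PROTOTYPE)

# Process 2: crisp, rule-based categorization (a made-up stand-in for the
# defining criteria mentioned in the text).
def is_duck_by_rule(features):
    return {"molts_twice_a_season", "overlapping_leg_scales"} <= features

odd_bird = {"looks_like_a_goose", "molts_twice_a_season", "overlapping_leg_scales"}
print(duck_resemblance(odd_bird))   # 0.0: nothing like the prototype
print(is_duck_by_rule(odd_bird))    # True: the rule system counts it as a duck anyway
```

The useful property is that the two systems can disagree: the rule-based one can override a weak resemblance when the defining facts about an individual case are known.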
Anyone who teaches the psychology of categorization has been hit with this question from a puzzled student: "You're telling us that putting things into categories is rational and makes us smart. But we've always been taught that putting people into categories is irrational and makes us sexist and racist. If categorization is so great when we think about ducks and chairs, why is it so terrible when we think about genders and ethnic groups? " As with many ingenuous questions from students, this one uncovers a shortcoming in the literature, not a flaw in their understanding.
The idea that stereotypes are inherently irrational owes more to a condescension toward ordinary people than it does to good psychological research. Many researchers, having shown that stereotypes existed in the minds of their subjects, assumed that the stereotypes had to be irrational, because they were uncomfortable with the possibility that some trait might be statistically true of some group. They never actually checked. That began to change in the 1980s, and now a fair amount is known about the accuracy of stereotypes. 14
With some important exceptions, stereotypes are in fact not inaccurate when assessed against objective benchmarks such as census figures or the reports of the stereotyped people themselves. People who believe that African Americans are more likely to be on welfare than whites, that Jews have higher average incomes than WASPs, that
business students are more conservative than students in the arts, that women are more likely than men to want to lose weight, and that men are more likely than women to swat a fly with their bare hands, are not being irrational or bigoted. Those beliefs are correct. People's stereotypes are generally consistent with the statistics, and in many cases their bias is to underestimate the real differences between sexes or ethnic groups. 15 This does not mean that the stereotyped traits are unchangeable, of course, or that people think they are unchangeable, only that people perceive the traits fairly accurately at the time.
Moreover, even when people believe that ethnic groups have characteristic traits, they are never mindless stereotypers who literally believe that each and every member of the group possesses those traits. People may think that Germans are, on average, more efficient than non-Germans, but no one believes that every last German is more efficient than every non-German. 16 And people have no trouble overriding a stereotype when they have good information about an individual. Contrary to a common accusation, teachers' impressions of their individual pupils are not contaminated by their stereotypes of race, gender, or socioeconomic status. The teachers' impressions accurately reflect the pupil's performance as measured by objective tests. 17
Now for the important exceptions. Stereotypes can be downright inaccurate when a person has few or no firsthand encounters with the stereotyped {205} group, or belongs to a group that is overtly hostile to the one being judged. During World War II, when the Russians were allies of the United States and the Germans were enemies, Americans judged Russians to have more positive traits than Germans. Soon afterward, when the alliances reversed, Americans judged Germans to have more positive traits than Russians. 18
Also, people's ability to set aside stereotypes when judging an individual is accomplished by their conscious, deliberate reasoning. When people are distracted or put under pressure to respond quickly, they are more likely to judge that a member of an ethnic group has all the stereotyped traits of the group. 19 This comes from the two-part design of the human categorization system mentioned earlier. Our network of fuzzy associations naturally reverts to a stereotype when we first encounter an individual. But our rule-based categorizer can block out those associations and make deductions based on the relevant facts about that individual. It can do so either for practical reasons, when information about a group-wide average is less diagnostic than information about the individual, or for social and moral reasons, out of respect for the imperative that one ought to ignore certain group-wide averages when judging an individual.
The upshot of this research is not that stereotypes are always accurate but that they are not always false, or even usually false. This is just what we would expect if human categorization -- like the rest of the mind -- is an adaptation that keeps track of aspects of the world that are relevant to our long-term well-being. As the social psychologist Roger Brown pointed out, the main difference between categories of people and categories of other things is that when you use a prototypical exemplar to stand for a category of things, no one takes offense. When Webster's dictionary used a sparrow to stand for all birds, "emus and ostriches and penguins and eagles did not go on the attack. " But just imagine what would have happened if Webster's had used a picture of a soccer mom to illustrate woman and a picture of a business executive to illustrate man. Brown remarks, "Of course, people would be right to take offense since a prototype can never represent the variation that exists in natural categories. It's just that birds don't care but people do. "20
What are the implications of the fact that many stereotypes are statistically accurate? One is that contemporary scientific research on sex differences cannot be dismissed just because some of the findings are consistent with traditional stereotypes of men and women. Some parts of those stereotypes may be false, but the mere fact that they are stereotypes does not prove that they are false in every respect.
The partial accuracy of many stereotypes does not, of course, mean that racism, sexism, and ethnic prejudice are acceptable. Quite apart from the democratic principle that in the public sphere people should be treated as individuals, there are good reasons to be concerned about stereotypes. {206} Stereotypes based on hostile depictions rather than on firsthand experience are bound to be inaccurate. And some stereotypes are accurate only because of self-fulfilling prophecies. Forty years ago it may have been factually correct that few women and African Americans were qualified to be chief executives or presidential candidates. But that was only because of barriers that prevented them from attaining those qualifications, such as university policies that refused them admission out of a belief that they were not qualified. The institutional barriers had to be dismantled before the facts could change. The good news is that when the facts do change, people's stereotypes can change with them.
What about policies that go farther and actively compensate for prejudicial stereotypes, such as quotas and preferences that favor underrepresented groups? Some defenders of these policies assume that gatekeepers are incurably afflicted with baseless prejudices, and that quotas must be kept in place forever to neutralize their effects. The research on stereotype accuracy refutes that argument. Nonetheless, the research might support a different argument for preferences and other gender- and color-sensitive policies. Stereotypes, even when they are accurate, might be self-fulfilling, and not just in the obvious case of institutionalized barriers like those that kept women and