Richard Dawkins, Unweaving the Rainbow
Robins are an intermediate case.
Their cuckoos lay eggs which are slightly robin-like, but not very.
Perhaps the arms race between robins and the robin gens of cuckoos is of intermediate antiquity.
On this view, the Y chromosomes of robin cuckoos are somewhat experienced, but their description of recent (robin) ancestral environments is still sketchy and contaminated by earlier descriptions of other species, previously 'experienced'.
Davies and Brooke did experiments deliberately putting extra eggs, of various kinds, in nests belonging to different species of birds. They wanted to see which species would accept, or reject, strange eggs. Their hypothesis was that species that have been through an arms race with cuckoos would, as a consequence of their genetic 'experience', be most likely to reject foreign eggs. One way to test this was to look at species which are not even suitable as cuckoo hosts. Baby cuckoos need to eat insects or worms. Species that feed their young on seeds, or species that nest in holes that female cuckoos can't reach, have never been at risk. Davies and Brooke predicted that such birds would not worry if they experimentally introduced strange eggs into their nests. And so it proved. Species that are suitable for cuckoos, however, like chaffinches, song thrushes and blackbirds, showed a stronger tendency to reject the experimental eggs that Davies and Brooke, playing cuckoo, placed in their nests. Flycatchers are potentially vulnerable in that they feed their young on a cuckoo-friendly diet. But whereas spotted flycatchers have open and accessible nests, pied flycatchers nest in holes which female cuckoos are too large to penetrate. Sure enough, when the experimenters dumped foreign eggs in their nests, pied flycatchers, with their 'inexperienced' gene pools, accepted foreign eggs without protest; spotted flycatchers, by contrast, rejected them, suggesting that their gene pools were wise to the cuckoo menace from long ago.
Davies and Brooke did similar experiments with species that cuckoos actually do parasitize. Meadow pipits, reed warblers and pied wagtails usually rejected artificially added eggs. As befits the 'lack of ancestral experience' hypothesis, dunnocks did not; nor did wrens. Robins and sedge warblers were intermediate. At the other extreme, reed buntings, which are suitable for cuckoos but not much parasitized by them, showed total rejection of foreign eggs. No wonder cuckoos don't parasitize them. Davies and Brooke's interpretation would presumably be that reed buntings have come out the other side of a long ancestral arms race with cuckoos, which they eventually won. Dunnocks are near the beginning of
their arms race. Robins are slightly more advanced in theirs. Meadow pipits, reed warblers and pied wagtails are in the middle of theirs. When we say dunnocks have only just begun their arms race with cuckoos, 'only just' has to be interpreted with evolutionary timescales in mind. By human standards the association could still be quite old. The Oxford English Dictionary quotes a 1616 reference to the Heisugge (archaic word for hedge sparrow or dunnock) as 'a bird which hatcheth the Cuckooes egges'. Davies notes the following lines in King Lear I, iv, written a decade earlier:
For, you trow, nuncle,
The hedge-sparrow fed the cuckoo so long,
That it's had it head bit off by it young.
And in the fourteenth century Chaucer wrote of the cuckoo's treatment of the dunnock in The Parliament of Fowls:
'Thou mortherere of the heysoge on the braunche
That broughte the forth, thow rewthelees glotoun!'
Although dunnock, hedge sparrow and heysoge are all given as synonyms in the dictionary, I can't help wondering how far we should rely on medieval ornithology. Chaucer himself was usually a rather precise user of language, but nevertheless the name sparrow has at times been given to what today is technically called an LBB (little brown bird). This may have been Shakespeare's meaning in the following, from Henry IV Part I, V, i:
And, being fed by us, you used us so
As that ungentle gull the cuckoo's bird,
Useth the sparrow - did oppress our nest
Grew by our feeding to so great a bulk
That even our love durst not come near your sight
For fear of swallowing;
Sparrow, on its own, would nowadays mean the house sparrow, Passer domesticus, which is never parasitized by cuckoos. Despite its alternative name hedge sparrow, the dunnock, Prunella modularis, is unrelated; it is a 'sparrow' only in the loose sense of being a little brown bird. But
anyway, even if we take Chaucer's evidence as showing that the arms race between cuckoos and dunnocks really does go back at least to the fourteenth century, Davies and Brooke cite theoretical calculations, taking into account the comparative rarity of cuckoos, suggesting that this is still sufficiently recent in evolutionary terms to account for the apparent naivety of dunnocks when faced with cuckoos.
Before we leave cuckoos, here's an interesting thought. There could be, simultaneously existing, more than one gens of, say, robin cuckoos, who have built up their egg mimicry independently. Since there is no gene flow between them as far as Y chromosomes are concerned, there could be accurate egg mimics coexisting with less accurate egg mimics. All are capable of mating with the same males but they don't share the same Y chromosomes. The accurate mimics would be descended from a female who moved into parasitizing robins a long time ago. The less accurate ones would be descended from a different female who moved into robins, possibly from a different predecessor host species, more recently.
Ants, termites and other social insect species are odd in a different way. They have sterile workers, often divided into several 'castes' - soldiers, media (middle-sized) workers, minor (small) workers, and so on. Every worker, whatever its caste, contains the genes that could have turned it into any other caste. Different sets of genes are switched on under different rearing conditions. It is by regulating these rearing conditions that the colony engineers a useful balance of different castes. Often the differences among castes are dramatic.
In the Asian ant species Pheidologeton diversus, the large worker caste (specialized for bulldozing smooth paths for other colony members) is 500 times heavier than the small caste, who do all the normal duties of a worker ant. The same set of genes equips a larva to grow up into either a Brobdingnagian or a Lilliputian, depending upon which ones are switched on. Honeypot ants are immobile storage vats, abdomens pumped up with nectar to transparent yellow spheres, hanging from the ceiling of the nest. The normal duties of an ants' nest, defence, foraging and, in this case, filling up the living vats, are done by normal workers whose abdomens are not swollen. The normal workers have genes that equip them to be honeypots, and honeypots, as far as their genes are concerned, could equally well be normal workers. As in the case of male and female, the visible differences in bodily form depend upon which genes are switched on. In this case it is determined by environmental factors, perhaps diet. Once again, the zoologist of the future could read out from the genes, but not the body, of any one member of the species a complete picture of the disparate lives of the different castes.
The European snail Cepaea nemoralis comes in a number of colours and patterns. The background shell colour can be any of six distinct shades (in order of dominance, in the technical genetic sense): brown, dark pink, light pink, very pale pink, dark yellow, light yellow. Overlaying this, there may be any number of stripes from zero to five. Unlike the case of the social insects, it is not true that every individual snail is genetically equipped to assume any of the different forms. Nor are these differences among snails determined by different environments of upbringing. Striped snails have genes that determine their number of stripes, dark pink individuals have genes that make them dark pink. But all the kinds can mate with each other.
The reasons for the persistence of many different types of snail (polymorphism), as well as the detailed genetics of the polymorphism itself, have been exhaustively studied by the English zoologists A. J. Cain and the late P. M. Sheppard with their school. A major part of the evolutionary explanation is that the species ranges over different habitats - woodland, grassland, bare soil - and you need a different colour pattern to be camouflaged against birds in each place. Beechwood snails contain an admixture of genes from grassland because they interbreed at the margins. A chalk downland snail has some genes that previously survived in the bodies of woodland ancestors; and their legacy, depending on the other genes in the snail, may be stripes. Our zoologist of the future would need to look at the gene pool of the species as a whole to reconstruct the full range of its ancestral worlds.
Just as Cepaea snails range over different habitats in space, so the ancestors of any species have changed their way of life from time to time. House mice, Mus musculus, today live almost exclusively in or around human habitations, as unwanted beneficiaries of human agriculture. But by evolutionary standards their way of life is recent. They must have fed on something else before there was human agriculture. Doubtless that something was sufficiently similar for their genetic skills to be pressed into service when the agricultural bonanza came along. Mice and rats have been described as animal weeds (incidentally, a good piece of poetic imagery, genuinely illuminating). They are generalists, opportunists, carrying genes that helped their ancestors to survive through probably a considerable range of ways of life; and pre-agricultural genes are in them yet. Anybody attempting to 'read' their genes may find a confusing palimpsest of ancestral world descriptions.
From earlier still, the DNA of all mammals must describe aspects of very ancient environments as well as more recent ones. The DNA of a camel was once in the sea, but it hasn't been there for a good 300 million years. It has spent most of recent geological history in deserts, programming bodies to withstand dust and conserve water. Like sandbluffs carved into
fantastic shapes by the desert winds, like rocks shaped by ocean waves, camel DNA has been sculpted by survival in ancient deserts, and even more ancient seas, to yield modern camels. Camel DNA speaks - if only we could understand the language - of the changing worlds of camel ancestors. If only we could read the language, the DNA of tuna and starfish would have 'sea' written into the text. The DNA of moles and earthworms would spell 'underground'. Of course all the DNA would spell many other things as well. Shark and cheetah DNA would spell 'hunt', as well as separate messages about sea and land. Monkey and cheetah DNA would spell 'milk'. Monkey and sloth DNA would spell 'trees'. Whale and dugong DNA presumably describes very ancient seas, fairly ancient lands and more recent seas: complicated palimpsests again.
Features of the environment that occur frequently or importantly are heavily emphasized or 'weighted' in the genetic description, compared with rare or trivial features. Environments that lie in the remote past have a different weighting from recent ones, presumably lower, though not in any obvious way. Environments that lasted a long time in the species' history will have a more prominent weighting in the genetic description than environmental events that, however drastic they may have seemed at the time, were geological flashes in the pan.
It has been poetically suggested that the remote marine apprenticeship of all land life is reflected in the biochemistry of the blood, which is said to resemble a primeval salt sea. Or the liquid in a reptile's egg has been described as a private pond, relic of the actual ponds in which the larvae of distant, amphibious ancestors would have grown. To the extent that animals and their genes bear such a stamp of ancient history it will be for good functional reasons. It won't be history for history's sake. Here is the kind of thing I mean by this. When our remote ancestors lived in the sea, many of our biochemical and metabolic processes became geared to the chemistry of the sea - and our genes became a description of marine chemistry - for functional reasons. But (this is an aspect of our 'selfish Cooperator' argument) biochemical processes become geared not only to the external world but to each other. The world to which they became fitted included the other molecules in the body and the chemical processes in which they partook. Thereafter, when remote descendants of these marine animals moved out on to the land and became gradually more and more fitted to a dry airy world, the old mutual adaptation of biochemical processes to each other - and incidentally to the chemical 'memory' of the sea - persisted. Why should it not, when the different kinds of molecules in the cells and blood so greatly outnumber the different kinds of molecules encountered in the outside world? It is only in a very indirect sense that the genes spell out descriptions of ancestral environments. What they directly describe, after being translated into the parallel language of protein molecules, is instructions for individual
embryonic development. It is the gene pool of the species as a whole that becomes carved to fit the environments that its ancestors have encountered - which is why I said that the species is a statistical averaging device. It is in this indirect sense that our DNA is a coded description of the worlds in which our ancestors survived. And isn't it an arresting thought? We are digital archives of the African Pliocene, even of Devonian seas; walking repositories of wisdom out of the old days. You could spend a lifetime reading in this ancient library and die unsated by the wonder of it.
11
REWEAVING THE WORLD
Since my education began I have always had things described to me with their colors and sounds, by one with keen senses and a fine feeling for the significant.
Therefore, I habitually think of things as colored and resonant. Habit accounts for part.
The soul sense accounts for another part.
The brain with its five-sensed construction asserts its right and accounts for the rest.
Inclusive of all, the unity of the world demands that color be kept in it whether I have cognizance of it or not.
Rather than be shut out, I take part in it by discussing it, happy in the happiness of those near to me who gaze at the lovely hues of the sunset or the rainbow.
HELEN KELLER, The Story of My Life (1902)
Where the gene pool of a species is sculpted into a set of models of ancestral worlds, the brain of an individual houses a parallel set of models of the animal's own world.
Both are equivalent to descriptions of the past, and both are used to aid survival into the future. The difference is one of timescale and of relative privacy. The genetic description is a collective memory belonging to the species as a whole, going back into the indefinite past. The memory of the brain is private and contains the individual's experiences since it was born.
Our subjective knowledge of a familiar place does indeed feel to us like a model of the place. Not an accurate scale model, certainly less accurate
than we think it is, but a serviceable model for the purposes required. One way to approach this idea was proposed some years ago by the Cambridge physiologist Horace Barlow, incidentally a direct descendant of Charles Darwin. Barlow is especially interested in vision and his argument starts from the realization that to recognize an object is a much more difficult problem than we, who seem to see so effortlessly, ordinarily understand.
For we are blissfully unaware of what a formidably clever thing we do every second of our waking lives when we see and recognize objects. The sense organs' task of unweaving the physical stimuli that bombard them is easy compared with the brain's task of reweaving an internal model of the world that it can then make use of. The argument holds for all our sensory systems, but I'll stick mostly to vision because that is the one that means the most to us. Think what a problem our brain solves when it recognizes something, say a letter A. Or think of the problem of recognizing a particular person's face. By long in-group convention, the hypothetical face we are talking about is assumed to belong to the grandmother of the distinguished neurobiologist J. Lettvin, but substitute any face you know, or indeed any object you can recognize. We are not concerned here with subjective consciousness, with the philosophically hard problem of what it means to be aware of your grandmother's face. Just a cell in the brain which fires if and only if the grandmother's face appears on the retina will do nicely for a start, and it is very difficult to arrange. It would be easy if we could assume that the face would always fall exactly on a particular part of the retina. There could be a keyhole arrangement, with a grandmother-shaped region of cells on the retina wired up to a grandmother-signalling cell in the brain. Other cells - members of the 'anti-keyhole' - would have to be wired up in inhibitory fashion, otherwise the central nervous cell would respond to a white sheet just as strongly as to the grandmother's face which - together with all other conceivable images - it would necessarily 'contain'. The essence of responding to a key image is to avoid responding to everything else.
The keyhole strategy is ruled out by sheer force of numbers.
Even if Lettvin needed to recognize nothing but his grandmother, how could he cope when her image falls on a different part of the retina? How cope with her image's changing size and shape as she approaches or recedes, as she turns sideways, or cants to the rear, as she smiles or as she frowns? If we add up all possible combinations of keyholes and anti-keyholes, the number enters the astronomical range. When you realize that Lettvin can recognize not only his grandmother's face but hundreds of other faces, the other bits of his grandmother and of other people, all the letters of the alphabet, all the thousands of objects to which a normal person can instantly give a name, in all possible orientations and
apparent sizes, the explosion of triggering cells gets rapidly out of hand. The American psychologist Fred Attneave, who had come up with the same general idea as Barlow, dramatized the point by the following calculation: if there were just one brain cell to cope, keyhole fashion, with each image that we can distinguish in all its presentations, the volume of the brain would have to be measured in cubic light years.
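Attneave's point can be made vivid with simple arithmetic. The following toy calculation is mine, not Attneave's, and the retina size is an invented assumption; it merely counts the distinct images that even a crude black-and-white retina could register, one hypothetical 'keyhole' cell per image:

```python
# Back-of-envelope version of Attneave's point (the retina size is
# an invented, modest assumption). One 'keyhole' cell per image
# means one cell per possible pattern of light and dark.
cells = 100 * 100            # a toy 100 x 100 black-and-white retina
patterns = 2 ** cells        # every distinct image it could register
print(len(str(patterns)))    # the count of images has 3011 digits
```

A number with more than three thousand digits, for a toy retina that registers only black and white: the keyhole strategy is doomed before we even allow for shades, colours or multiple objects.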
How then, with a brain capacity measured only in hundreds of cubic centimetres, do we do it? The answer was proposed in the 1950s by Barlow and Attneave independently. They suggested that nervous systems exploit the massive redundancy in all sensory information. Redundancy is jargon from the world of information theory, originally developed by engineers concerned with the economics of telephone line capacity. Information, in the technical sense, is surprise value, measured as the inverse of expected probability. Redundancy is the opposite of information, a measure of unsurprisingness, of old-hatitude. Redundant messages or parts of messages are not informative because the receiver, in some sense, already knows what is coming. Newspapers do not carry headlines saying, 'The sun rose this morning'. That would convey almost zero information. But if a morning came when the sun did not rise, headline writers would, if any survived, make much of it. The information content would be high, measured as the surprise value of the message. Much of spoken and written language is redundant - hence the possibility of condensed telegraphese: redundancy lost, information preserved.
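The technical notion of information as surprise value fits in a couple of lines of code. This sketch uses Shannon's standard measure, the negative logarithm of a message's prior probability; the probabilities themselves are invented for illustration:

```python
import math

def information_bits(p):
    """Shannon information (surprise value) of a message whose
    prior probability is p: the less expected, the more bits."""
    return -math.log2(p)

# 'The sun rose this morning': near-certain, near-zero information.
print(information_bits(0.999999))   # a tiny fraction of a bit
# 'The sun did not rise': astonishing, hence highly informative.
print(information_bits(0.000001))   # nearly 20 bits
```

The same headline carries almost nothing or a great deal, depending entirely on how expected it was.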
Everything that we know about the world outside our skulls comes to us via nerve cells whose impulses chatter like machine guns. What passes along a nerve cell is a volleying of 'spikes', impulses whose voltage is
fixed (or at least irrelevant) but whose rate of arriving varies meaningfully. Now let's think about coding principles. How would you translate information from the outside world, say, the sound of an oboe or the temperature of a bath, into a pulse code? A first thought is a simple rate code: the hotter the bath, the faster the machine gun should fire. The brain, in other words, would have a thermometer calibrated in pulse rates. Actually, this is not a good code because it is uneconomical with pulses. By exploiting redundancy, it is possible to devise codes that convey the same information at a cost of fewer pulses. Temperatures in the world mostly stay the same for long periods at a time. To signal 'It is hot, it is hot, it is still hot. . . ' by a continuously high rate of machine-gun pulses is wasteful; it is better to say, 'It has suddenly become hot' (now you can assume that it will stay the same until further notice).
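A toy simulation makes the economy concrete. The sketch below is an illustrative caricature, not a model of real neurons: it compares a naive rate code with a change-only code for a bath whose temperature jumps once:

```python
def rate_code(temps):
    """Naive code: one signal (burst of pulses) per time step,
    proportional to the current temperature."""
    return list(temps)

def change_code(temps):
    """Economical code: signal only when the temperature changes;
    silence means 'same as before, until further notice'."""
    signals, last = [], None
    for t in temps:
        if t != last:
            signals.append(t)
            last = t
    return signals

bath = [40] * 50 + [42] * 50        # hot, then suddenly hotter
print(len(rate_code(bath)))          # 100 signals
print(len(change_code(bath)))        # 2 signals: the two changes
```

The change code carries exactly the same information as the rate code, at a fiftieth of the cost.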
And, satisfyingly, this is what nerve cells mostly do, not just for signalling temperature but for signalling almost everything about the world. Most nerve cells are biased to signal changes in the world. If a trumpet plays a long sustained note, a typical nerve cell telling the brain
about it would show the following pattern of impulses: Before the trumpet starts, low firing rate; immediately after the trumpet starts, high firing rate; as the trumpet carries on sustaining its note, the firing rate dies away to an infrequent mutter; at the moment when the trumpet stops, high firing rate, dying away to a resting mutter again. Or there might be one class of nerve cells that fire only at the onset of sounds and a different class of cells that fire only when sounds go off. Similar exploitation of redundancy - screening out of the sameness in the world - goes on in cells that tell the brain about changes in light, changes in temperature, changes in pressure. Everything about the world is signalled as change, and this is a major economy.
But you and I don't seem to hear the trumpet die away. To us the trumpet seems to carry on at the same volume and then to stop abruptly. Yes, of course. That's what you'd expect because the coding system is ingenious. It doesn't throw away information, it only throws away redundancy. The brain is told only about changes, and it is then in a position to reconstruct the rest. Barlow doesn't put it like this, but we could say that the brain constructs a virtual sound, using the messages supplied by the nerves coming from the ears. The reconstructed virtual sound is complete and unabridged, even though the messages
themselves are economically stripped down to information about changes. The system works because the state of the world at a given time is
usually not greatly different from the preceding second. Only if the world changed capriciously, randomly and frequently, would it be economical for sense organs to signal continuously the state of the world. As it is, sense organs are set up to signal, economically, the discontinuities in the world, and the brain, assuming correctly that the world doesn't change capriciously and at random, uses the information to construct an
internal virtual reality in which the continuity is restored.
The world presents an equivalent kind of redundancy in space, and the nervous system uses the corresponding trick. Sense organs tell the brain about edges and the brain fills in the boring bits between. Suppose you are looking at a black rectangle on a white background. The whole scene is projected on to your retina - you can think of the retina as a screen covered with a dense carpet of tiny photocells, the rods and cones. In theory, each photocell could report to the brain the exact state of the light falling upon it. But the scene we are looking at is massively redundant. Cells registering black are overwhelmingly likely to be surrounded by other cells registering black. Cells registering white are nearly all surrounded by other white-signalling cells. The important exceptions are cells on edges. Those on the white side of an edge signal white themselves and so do their neighbours that sit further into the white area. But their neighbours on the other side are in the black area. The brain can theoretically reconstruct the whole scene if just the retinal
cells on edges fire. If this could be achieved there would be massive savings in nerve impulses.
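The saving can be demonstrated with a toy encoder. In this sketch (illustrative only, and one-dimensional for brevity) a row of photocells is reported as nothing but its first value plus the positions where the value flips, and the full scene is rebuilt from that stripped-down message:

```python
def to_edges(row):
    """Encode a row of black/white pixels (1/0) as its first pixel
    plus the positions where the value flips (the 'edges')."""
    edges = [i for i in range(1, len(row)) if row[i] != row[i - 1]]
    return row[0], len(row), edges

def from_edges(first, length, edges):
    """Reconstruct the full row from the edge description alone."""
    row, value, flips = [], first, set(edges)
    for i in range(length):
        if i in flips:
            value = 1 - value
        row.append(value)
    return row

# A black rectangle (1s) on a white background (0s):
scene = [0] * 40 + [1] * 20 + [0] * 40
first, length, edges = to_edges(scene)
print(edges)                                      # [40, 60]
assert from_edges(first, length, edges) == scene  # nothing lost
```

A hundred pixel values are replaced by two edge positions, yet the reconstruction is pixel-perfect: redundancy discarded, information intact.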
Once again, redundancy is removed and only information gets through.
Elegantly, the economy is achieved in practice by the mechanism known as 'lateral inhibition'. Here's a simplified version of the principle, using our analogy of the screen of photocells. Each photocell sends one long wire to the central computer (brain) and also short wires to its immediate neighbours in the photocell screen. The short connections to the neighbours inhibit them, that is, turn down their firing rate. It is easy to see that maximal firing will come only from cells that lie along edges, for they are inhibited from one side only. Lateral inhibition of this kind is common among the low-level units of both vertebrate and invertebrate eyes.
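The principle is easy to simulate. In this toy sketch (a caricature of real neural circuitry, with an arbitrary inhibition weight of one half) each 'photocell' subtracts half the excitation of its two immediate neighbours; only the cells sitting on an edge emerge with a strong signal:

```python
def lateral_inhibition(photocells, weight=0.5):
    """Each cell's output is its own excitation minus a fraction of
    its immediate neighbours' excitation. Cells inside a uniform
    patch are inhibited from both sides; cells on an edge are
    inhibited from one side only, so they fire hardest."""
    out = []
    for i, x in enumerate(photocells):
        left = photocells[i - 1] if i > 0 else 0
        right = photocells[i + 1] if i < len(photocells) - 1 else 0
        out.append(x - weight * (left + right))
    return out

row = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]   # a bright bar on a dark field
print(lateral_inhibition(row))
# Interior bright cells give 1 - 0.5*(1+1) = 0; the two cells at the
# edges give 1 - 0.5*(1+0) = 0.5 -> only the edges stand out.
```

The uniform interior of the bar is silenced; the boundary cells alone shout, which is exactly the edge-only message the brain needs.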
Once again, we could say that the brain constructs a virtual world which is more complete than the picture relayed to it by the senses. The information which the senses supply to the brain is mostly information about edges. But the model in the brain is able to reconstruct the bits between the edges. As in the case of discontinuities in time, an economy is achieved by the elimination - and later reconstruction in the brain - of redundancy. This economy is possible only because uniform patches exist in the world. If the shades and colours in the world were randomly dotted about, no economical remodelling would be possible.
Another kind of redundancy stems from the fact that many lines in the real world are straight, or curved in smooth and therefore predictable (or mathematically reconstructable), ways. If the ends of a line are specified, the middle can be filled in using a simple rule that the brain already 'knows'. Among the nerve cells that have been discovered in the brains of mammals are the so-called 'line-detectors', neurones that fire whenever a straight line, aligned in a particular direction, falls on a particular place in the retina, the so-called 'retinal field' of the brain cell. Each of these line-detector cells has its own preferred direction. In the cat brain, there are only two preferred directions, horizontal and vertical, with an approximately equal number favouring each direction; however, in monkeys other angles are accommodated. From the point of view of the redundancy argument, what is going on here is as follows. In the retina, all the cells along a straight line fire and most of these impulses are redundant. The nervous system economizes by using a single cell to register the line, labelled with its angle. Straight lines are economically specified by their position and direction alone, or by their ends, not by the light value of every point along their length. The brain reweaves a virtual line in which the points along the line are reconstructed.
However, if a part of a scene suddenly detaches itself from the rest and starts to crawl over the background, it is news and should be signalled. Biologists have indeed discovered nerve cells that are silent until something moves against a still background. These cells don't respond when the entire scene moves - that would correspond to the sort of apparent movement the animal would see when it itself moves. But movement of a small object against a still background is information-rich and there are nerve cells tuned to detect it. The most famous of these are the so-called 'bug-detectors' discovered in frogs by Lettvin (he of the grandmother) and his colleagues. A bug-detector is a cell which is apparently blind to everything except the movement of small objects against their background. As soon as an insect moves in the field covered by a bug-detector, the cell immediately initiates massive signalling and the frog's tongue is likely to shoot out to catch the insect. To a sufficiently sophisticated nervous system, though, even the movement of a bug is redundant if it is movement in a straight line. Once you've been told that a bug is moving steadily in a northerly direction, you can assume that it will continue to move in this direction until further notice. Carrying the logic a step further, we should expect to find higher-order movement detector cells in the brain that are especially sensitive to change in movement, say, change in direction or change in speed. Lettvin and his colleagues found a cell that seems to do this, again in the frog. In their paper in Sensory Communication (1961) they describe a particular experiment as follows:
Let us begin with an empty gray hemisphere for the visual field. There is usually no response of the cell to turning on and off the illumination. It is silent. We bring in a small dark object, say 1 to 2 degrees in diameter, and at a certain point in its travel, almost anywhere in the field, the cell suddenly 'notices' it. Thereafter, wherever that object is moved it is tracked by the cell. Every time it moves, with even the faintest jerk, there is a burst of impulses that dies down to a mutter that continues as long as the object is visible. If the object is kept moving, the bursts signal discontinuities in the movement, such as the turning of corners, reversals, and so forth, and these bursts occur against a continuous background mutter that tells us the object is visible to the cell. . .
To summarize, it is as if the nervous system is tuned at successive hierarchical levels to respond strongly to the unexpected, weakly or not at all to the expected. What happens at successively higher levels is that the definition of that which is expected becomes progressively more sophisticated. At the lowest level, every spot of light is news. At the next level up, only edges are 'news'. At a higher level still, since so many edges are straight, only the ends of edges are news. Higher again, only movement is news. Then only changes in rate or direction of movement. In Barlow's terms derived from the theory of codes, we could say that the
nervous system uses short, economical words for messages that occur frequently and are expected; long, expensive words for messages that occur rarely and are not expected. It is a bit like language, in which (the generalization is called Zipf's Law) the shortest words in the dictionary are the ones most often used in speech. To push the idea to an extreme, most of the time the brain does not need to be told anything because what is going on is the norm. The message would be redundant. The brain is protected from redundancy by a hierarchy of filters, each filter tuned to remove expected features of a certain kind.
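The economical-codes idea is precisely what Huffman coding, a standard construction from the coding theory Barlow drew on (not something described in this chapter), achieves mechanically. This sketch, with invented word frequencies, assigns short codewords to common messages and long ones to rare messages:

```python
import heapq

def huffman_code_lengths(freqs):
    """Build a Huffman code: frequent symbols get short codewords,
    rare symbols long ones. Returns {symbol: codeword length in bits}.
    Each heap entry is (total frequency, tie-breaker, {symbol: depth})."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two rarest subtrees...
        f2, _, c2 = heapq.heappop(heap)
        # ...are merged, pushing every symbol in them one bit deeper.
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Invented frequencies: 'the' is vastly commoner than 'rarer'.
lengths = huffman_code_lengths({'the': 60, 'of': 25, 'and': 10,
                                'rare': 4, 'rarer': 1})
print(lengths)   # 'the' gets 1 bit; 'rare' and 'rarer' get 4 each
```

The commonest message costs a single bit; the rarest cost four, the same inverse relation between frequency and length that Zipf observed in natural vocabularies.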
It follows that the set of nervous filters constitutes a kind of summary description of the norm, of the statistical properties of the world in which the animal lives. It is the nervous equivalent of our insight of the previous chapter: that the genes of a species come to constitute a statistical description of the worlds in which its ancestors were naturally selected. Now we see that the sensory coding units with which the brain confronts the environment also constitute a statistical description of that environment. They are tuned to discount the common and emphasize the rare. Our hypothetical zoologist of the future should therefore be able, by inspecting the nervous system of an unknown animal and measuring the statistical biases in its tuning, to reconstruct the statistical properties of the world in which the animal lived, to read off what is common and what rare in the animal's world.
The inference would be indirect, in the same way as for the case of the genes. We would not be reading the animal's world as a direct description. Rather, we'd infer things about the animal's world by inspecting the glossary of abbreviations that its brain used to describe it. Civil servants love acronyms like CAP (Common Agricultural Policy) and HEFCE
(Higher Education Funding Council for England); fledgling bureaucrats surely need a glossary of such abbreviations, a codebook. If you find
such a codebook dropped in the street, you could work out which ministry it came from by seeing which phrases have been granted abbreviations, presumably because they are commonly used in that ministry. An intercepted codebook is not a particular message about the world, but it is a statistical summary of the kind of world which this code was designed to describe economically.
We can think of each brain as equipped with a store cupboard of basic images, useful for modelling important or common features of the animal's world. Although, following Barlow, I have emphasized learning as the means by which the store cupboard is stocked, there is no reason why natural selection itself, working on genes, should not do some of the work of filling up the cupboard. In this case, following the logic of the previous chapter, we should say that the store cupboard in the brain contains images from the ancestral past of the species. We could call it a
collective unconscious, if the phrase had not become tarnished by association.
But the biases of the image kit in the cupboard will not only reflect what is statistically unexpected in the world. Natural selection will ensure that the repertoire of virtual representations is also well endowed with images that are of particular salience or importance in the life of the particular kind of animal and in the world of its ancestors, even if these are not especially common. An animal may need only once in its life to recognize a complicated pattern, say the shape of a female of its species, but on that occasion it is vitally important to get it right, and do so without delay. For humans, faces are of special importance, as well as being common in our world. The same is true of social monkeys. Monkey brains have been found to possess a special class of cells which fire at full strength only when presented with a complete face. We've already seen that humans with particular kinds of localized brain damage experience a very peculiar, and revealing, kind of selective blindness. They can't recognize faces. They can see everything else, apparently normally, and they can see that a face has a shape, with features. They can describe the nose, the eyes and the mouth. But they can't recognize the face even of the person they love best in all the world.
Normal people not only recognize faces. We seem to have an almost indecent eagerness to see faces, whether they are really there or not. We see faces in damp patches on the ceiling, in the contours of a hillside, in clouds or in Martian rocks. Generations of moongazers have been led, by the most unpromising of raw materials, to invent a face in the pattern of craters on the moon. The Daily Express (London) of 15 January 1998 bestowed most of a page, complete with banner headline, on the story that an Irish cleaning woman saw the face of Jesus in her duster: 'Now a stream of pilgrims is expected at her semi-detached home . . . The woman's parish priest said, "I've never seen anything like it before in my 34 years in the priesthood."' The accompanying photograph shows a pattern of dirty polish on a cloth which slightly resembles a face of some kind: there is a faint suggestion of an eye on one side of what could be a nose; there is also a sloping eyebrow on the other side which gives it a look of Harold Macmillan, although I suppose even Harold Macmillan might look like Jesus to a suitably prepared mind. The Express reminds us of similar stories, including the 'nun bun' served up in a Nashville cafe, which 'resembled the face of Mother Teresa, 86' and caused great excitement until 'the aged nun wrote to the cafe demanding the bun be removed'.
The eagerness of the brain to construct a face, when offered the slightest encouragement, fosters a remarkable illusion. Get an ordinary mask of a
human face - President Clinton's face, or whatever is on sale for fancy dress parties. Stand it up in a good light and look at it from the far side of the room. If you look at it the normal way round, not surprisingly it looks solid. But now turn the mask so that it is facing away from you and look at the hollow side from across the room. Most people see the illusion immediately. If you don't, try adjusting the light. It may help if you shut one eye, but it is by no means necessary. The illusion is that the hollow side of the mask looks solid. The nose, brows and mouth stick out towards you and seem nearer than the ears. It is even more striking if you move from side to side, or up and down. The apparently solid face seems to turn with you, in an odd, almost magical way. I'm not talking about the ordinary experience we have when the eyes of a good portrait seem to follow you around the room. The hollow mask illusion is far more spooky. It seems to hover, luminously, in space. The face really, really seems to turn. I have a mask of Einstein's face mounted in my room, hollow side out, and visitors gasp when they glimpse it. The illusion is most strikingly displayed if you set the mask on a slowly rotating turntable. As the solid side turns before you, you'll see it move in a sensible 'normal reality' way. Now the hollow side comes into view and something extraordinary happens. You see another solid face, but it is rotating in the opposite direction. Because one face (say, the real solid face) is turning clockwise while the other, pseudo-solid face appears to be turning anticlockwise, the face that is rotating into view seems to swallow up the face that is rotating away from view. As the turning continues, you then see the really hollow but apparently solid face rotating firmly in the wrong direction for a while, before the really solid face reappears and swallows up the virtual face.
The whole experience of watching the illusion is quite unsettling and it remains so no matter how long you go on watching it. You don't get used to it and don't lose the illusion.
What is happening? We can take the answer in two stages. First, why do we see the hollow mask as solid? And second, why does it seem to rotate in the wrong direction? We've already agreed that the brain is very good at - and very keen on - constructing faces in its internal simulation room. The information that the eyes are feeding to the brain is of course compatible with the mask's being hollow, but it is also compatible - just - with an alternative hypothesis, that it is solid. And the brain, in its simulation, goes for the second alternative, presumably because of its eagerness to see faces. So it overrules the messages from the eyes that say, 'This is hollow'; instead, it listens to the messages that say, 'This is a face, this is a face, face, face, face.' Faces are always solid. So the brain takes a face model out of its cupboard which is, by its nature, solid.
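One way to make this choice between hypotheses concrete is Bayesian: treat 'faces are solid' as an overwhelmingly strong prior belief that outweighs the eyes' mild evidence for hollowness. The sketch below is a toy model only - every number in it is invented for illustration.

```python
def posterior_solid(prior_solid, likelihood_if_solid, likelihood_if_hollow):
    """Posterior probability that the mask is solid, given the retinal data."""
    solid = prior_solid * likelihood_if_solid
    hollow = (1 - prior_solid) * likelihood_if_hollow
    return solid / (solid + hollow)

# The retinal data mildly favour 'hollow' (likelihood 0.8 vs 0.2), but a
# lifetime of solid faces gives 'solid' a huge prior - and the prior wins.
p = posterior_solid(prior_solid=0.99,
                    likelihood_if_solid=0.2,
                    likelihood_if_hollow=0.8)
```

With these made-up numbers the posterior belief in 'solid' stays above 95 per cent even though the data point the other way - a cartoon of the brain overruling its own eyes.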
But having constructed its apparently solid face model, the brain is caught in a contradiction when the mask starts to rotate. To simplify the
explanation, suppose that the mask is that of Oliver Cromwell and that his famous warts are visible from both sides of the mask. When looking
at the hollow interior of the nose, which is really pointing away from the viewer, the eye looks straight across to the right side of the nose where there is a prominent wart. But the constructed virtual nose is apparently pointing towards the viewer, not away, and the wart is on what, from the virtual Cromwell's point of view, would be his left side, as if we were looking at Cromwell's mirror image. As the mask rotates, if the face were really solid, our eye would see more of the side that it expected to see more of and less of the side that it expected to see less of. But because the mask is actually hollow, the reverse happens. The relative
proportions of the retinal image change in the way the brain would
expect if the face were solid but rotating in the opposite direction. And that is the illusion that we see. The brain resolves the inevitable contradiction, as one side gives way to the other, in the only way possible, given its stubborn insistence on the mask's being a solid face: it
simulates a virtual model of one face swallowing up the other face.
The rare brain disorder that destroys our ability to recognize faces is called prosopagnosia. It is caused by injury to specific parts of the brain. This very fact supports the importance of a 'face cupboard' in the brain. I don't know, but I'd bet that prosopagnosics wouldn't see the hollow mask illusion. Francis Crick discusses prosopagnosia in his book The Astonishing Hypothesis (1994), together with other revealing clinical conditions. For instance, one patient found the following condition very frightening which, as Crick observes, is not surprising:
. . . objects or persons she saw in one place suddenly appeared in another without her being aware they were moving. This was particularly distressing if she wanted to cross a road, since a car that at first seemed far away would suddenly be very close . . . She experienced the world rather as some of us might see the dance floor in the strobe lighting of a discotheque.
This woman had a mental cupboard full of images for assembling her virtual world, just as we all do. The images themselves were probably perfectly good. But something had gone wrong with her software for deploying them in a smoothly changing virtual world. Other patients have lost their ability to construct virtual depth. They see the world as though it was made of flat, cardboard cut-outs. Yet other patients can recognize objects only if they are presented from a familiar angle. The rest of us, having seen, say, a saucepan from the side, can effortlessly recognize it from above. These patients have presumably lost some ability to manipulate virtual images and turn them around. The technology of virtual reality gives us a language to think about such skills, and this will be my next topic.
I shall not dwell on the details of today's virtual reality, which is certain,
in any case, to become obsolete. The technology changes as rapidly as everything else in the world of computers. Essentially what happens is as follows. You don a headset which presents to each of your eyes a miniature computer screen. The images on the two screens are nearly
the same as each other, but offset to give the stereo illusion of three dimensions. The scene is whatever has been programmed into the computer: the Parthenon, perhaps, intact and in its original garish colours; an imagined landscape on Mars; the inside of a cell, hugely magnified. So far, I might have been describing an ordinary 3-D movie. But the virtual reality machine provides a two-way street. The computer doesn't just present you with scenes, it responds to you. The headset is wired up to register all turnings of your head, and other body movements, which would, in the normal course of events, affect your viewpoint. The computer is continuously informed of all such movements and - here is the cunning part - it is programmed to change the scene presented to the eyes, in exactly the way it would change if you were really moving your head. As you turn your head, the pillars of the Parthenon, say, swing round and you find yourself looking at a statue which, previously, had been 'behind' you.
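The 'cunning part' is, at heart, a tiny feedback loop: every measured head rotation is applied directly to the virtual camera before the next frame is drawn. The sketch below shows only that essential step; the class and function names are hypothetical, invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    yaw: float = 0.0  # degrees; 0 means facing the 'front' of the scene

def on_head_moved(camera, measured_yaw_delta):
    """Mirror a measured head rotation in the virtual viewpoint."""
    camera.yaw = (camera.yaw + measured_yaw_delta) % 360
    # ...a real system would now re-render the scene from the new yaw...
    return camera

cam = VirtualCamera()
on_head_moved(cam, 180.0)  # turn right round: the statue that was
                           # 'behind' you is now in front
```

Everything else in a virtual reality system - body stockings, strain gauges, avatars - is elaboration of this one loop: measure the body, update the model, redraw.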
A more advanced system might have you in a body stocking, laced with strain gauges to monitor the positions of all your limbs. The computer can now tell whenever you take a step, whenever you sit down, stand up, or wave your arms. You can now walk from one end of the Parthenon to the other, watching the pillars pass by as the computer changes the images in sympathy with your steps. Tread carefully because, remember, you are not really in the Parthenon but in a cluttered computer room. Present day virtual reality systems, indeed, are likely to tether you to the computer by a complicated umbilicus of cables, so let's postulate a future tangle-free radio link, or infrared data beam. Now you can walk freely in an empty real world and explore the fantasy virtual world that has been programmed for you. Since the computer knows where your body stocking is, there is no reason why it shouldn't represent you to yourself as a complete human form, an avatar, allowing you to look down at your 'legs', which might be very different from your real legs. You can watch your avatar's hands as they move in imitation of your real hands. If you use these hands to pick up a virtual object, say a Grecian urn, the urn will seem to rise into the air as you 'lift' it.
If somebody else, who could be in another country, dons another set of kit hooked up to the same computer, in principle you should be able to see their avatar and even shake hands - though with present day technology you might find yourself passing through each other like ghosts. The technicians and programmers are still working on how to
create the illusion of texture and the 'feel' of solid resistance. When I visited England's leading virtual reality company, they told me they get many letters from people wanting a virtual sexual partner. Perhaps in the future, lovers separated by the Atlantic will caress each other over the Internet, albeit incommoded by the need to wear gloves and a body stocking wired up with strain gauges and pressure pads.
Now let's take virtual reality a shade away from dreams and closer to practical usefulness. Present day doctors have recourse to the ingenious endoscope, a sophisticated tube that is inserted into a patient's body through, say, the mouth or the rectum and used for diagnosis and even surgical intervention. By the equivalent of pulling wires, the surgeon steers the long tube round the bends of the intestine. The tube itself has a tiny television camera lens at its tip and a light pipe to illuminate the way. The tip of the tube may also be furnished with various remote-control instruments which the surgeon can control, such as micro-scalpels and forceps.
In conventional endoscopy, the surgeon sees what he is doing using an ordinary television screen, and he operates the remote controls using his fingers. But as various people have realized (not least Jaron Lanier, who coined the phrase 'virtual reality' itself) it is in principle possible to give the surgeon the illusion of being shrunk and actually inside the patient's body. This idea is in the research stage, so I shall resort to a fantasy of how the technique might work in the next century. The surgeon of the future has no need to scrub up, for she need not go near her patient. She stands in a wide open area, connected by radio to the endoscope inside the patient's intestine. The miniature screens in front of her two eyes present a magnified stereo image of the interior of the patient
immediately in front of the tip of the endoscope. When she moves her head to the left, the computer automatically swivels the tip of the endoscope to the left. The angle of view of the camera inside the intestine faithfully moves to follow the surgeon's head movements in all three planes. She drives the endoscope forward along the intestine by her footsteps. Slowly, slowly, for fear of damaging the patient, the computer pushes the endoscope forwards, its direction always controlled by the direction in which, in a completely different room, the surgeon is walking. It feels to her as though she is actually walking through the intestine. It doesn't even feel claustrophobic. Following present day endoscopic practice, the gut has been carefully inflated with air, otherwise the walls would press in upon the surgeon and force her to crawl rather than walk.
When she finds what she is looking for, say a malignant tumour, the surgeon selects an instrument from her virtual toolbag.
Davies and Brooke did similar experiments with species that cuckoos actually do parasitize. Meadow pipits, reed warblers and pied wagtails usually rejected artificially added eggs. As befits the 'lack of ancestral experience' hypothesis, dunnocks did not; nor did wrens. Robins and sedge warblers were intermediate. At the other extreme, reed buntings, which are suitable for cuckoos but not much parasitized by them, showed total rejection of foreign eggs. No wonder cuckoos don't parasitize them. Davies and Brooke's interpretation would presumably be that reed buntings have come out the other side of a long ancestral arms race with cuckoos, which they eventually won. Dunnocks are near the beginning of
their arms race. Robins are slightly more advanced in theirs. Meadow pipits, reed warblers and pied wagtails are in the middle of theirs. When we say dunnocks have only just begun their arms race with cuckoos, 'only just' has to be interpreted with evolutionary timescales in mind. By human standards the association could still be quite old. The Oxford English Dictionary quotes a 1616 reference to the Heisugge (archaic word for hedge sparrow or dunnock) as 'a bird which hatcheth the Cuckooes egges'. Davies notes the following lines in King Lear I, iv, written a decade earlier:
For, you trow, nuncle,
The hedge-sparrow fed the cuckoo so long,
That it's had it head bit off by it young.
And in the fourteenth century Chaucer wrote of the cuckoo's treatment of the dunnock in The Parliament of Fowls:
'Thou mortherere of the heysoge on the braunche
That broughte the forth, thow rewthelees glotoun!'
Although dunnock, hedge sparrow and heysoge are all given as synonyms in the dictionary, I can't help wondering how far we should rely on medieval ornithology. Chaucer himself was usually a rather precise user of language, but nevertheless the name sparrow has at times been given to what today is technically called an LBB (little brown bird). This may have been Shakespeare's meaning in the following, from Henry IV Part I, V, i:
And, being fed by us, you used us so
As that ungentle gull, the cuckoo's bird,
Useth the sparrow - did oppress our nest
Grew by our feeding to so great a bulk
That even our love durst not come near your sight
For fear of swallowing;
Sparrow, on its own, would nowadays mean the house sparrow, Passer domesticus, which is never parasitized by cuckoos. Despite its alternative name hedge sparrow, the dunnock, Prunella modularis, is unrelated; it is a 'sparrow' only in the loose sense of being a little brown bird. But
anyway, even if we take Chaucer's evidence as showing that the arms race between cuckoos and dunnocks really does go back at least to the fourteenth century, Davies and Brooke cite theoretical calculations, taking into account the comparative rarity of cuckoos, suggesting that this is still sufficiently recent in evolutionary terms to account for the apparent naivety of dunnocks when faced with cuckoos.
Before we leave cuckoos, here's an interesting thought. There could be, simultaneously existing, more than one gens of, say, robin cuckoos, who have built up their egg mimicry independently. Since there is no gene flow between them as far as Y chromosomes are concerned, there could be accurate egg mimics coexisting with less accurate egg mimics. All are capable of mating with the same males but they don't share the same Y chromosomes. The accurate mimics would be descended from a female who moved into parasitizing robins a long time ago. The less accurate ones would be descended from a different female who moved into robins, possibly from a different predecessor host species, more recently.
Ants, termites and other social insect species are odd in a different way. They have sterile workers, often divided into several 'castes' - soldiers, media (middle-sized) workers, minor (small) workers, and so on. Every worker, whatever its caste, contains the genes that could have turned it into any other caste. Different sets of genes are switched on under different rearing conditions. It is by regulating these rearing conditions that the colony engineers a useful balance of different castes. Often the differences among castes are dramatic.
In the Asian ant species Pheidologeton diversus, the large worker caste (specialized for bulldozing smooth paths for other colony members) is 500 times heavier than the small caste, who do all the normal duties of a worker ant. The same set of genes equips a larva to grow up into either a Brobdingnagian or a Lilliputian, depending upon which ones are switched on. Honeypot ants are immobile storage vats, abdomens pumped up with nectar to transparent yellow spheres, hanging from the ceiling of the nest. The normal duties of an ants' nest, defence, foraging and, in this case, filling up the living vats, are done by normal workers whose abdomens are not swollen. The normal workers have genes that equip them to be honeypots, and honeypots, as far as their genes are concerned, could equally well be normal workers. As in the case of male and female, the visible differences in bodily form depend upon which genes are switched on. In this case it is determined by environmental factors, perhaps diet. Once again, the zoologist of the future could read out from the genes, but not the body, of any one member of the species a complete picture of the disparate lives of the different castes.
The European snail Cepaea nemoralis comes in a number of colours and patterns. The background shell colour can be any of six distinct shades (in order of dominance, in the technical genetic sense): brown, dark pink, light pink, very pale pink, dark yellow, light yellow. Overlaying this, there may be any number of stripes from zero to five. Unlike the case of the social insects, it is not true that every individual snail is genetically equipped to assume any of the different forms. Nor are these differences among snails determined by different environments of upbringing. Striped snails have genes that determine their number of stripes, dark pink individuals have genes that make them dark pink. But all the kinds can mate with each other.
The reasons for the persistence of many different types of snail (polymorphism), as well as the detailed genetics of the polymorphism itself, have been exhaustively studied by the English zoologists A. J. Cain and the late P. M. Sheppard with their school. A major part of the evolutionary explanation is that the species ranges over different habitats - woodland, grassland, bare soil - and you need a different colour pattern to be camouflaged against birds in each place. Beechwood snails contain an admixture of genes from grassland because they interbreed at the margins. A chalk downland snail has some genes that previously survived in the bodies of woodland ancestors; and their legacy, depending on the other genes in the snail, may be stripes. Our zoologist of the future would need to look at the gene pool of the species as a whole to reconstruct the full range of its ancestral worlds.
Just as Cepaea snails range over different habitats in space, so the ancestors of any species have changed their way of life from time to time. House mice, Mus musculus, today live almost exclusively in or around human habitations, as unwanted beneficiaries of human agriculture. But by evolutionary standards their way of life is recent. They must have fed on something else before there was human agriculture. Doubtless that something was sufficiently similar for their genetic skills to be pressed into service when the agricultural bonanza came along. Mice and rats have been described as animal weeds (incidentally, a good piece of poetic imagery, genuinely illuminating). They are generalists, opportunists, carrying genes that helped their ancestors to survive through probably a considerable range of ways of life; and pre-agricultural genes are in them yet. Anybody attempting to 'read' their genes may find a confusing palimpsest of ancestral world descriptions.
From earlier still, the DNA of all mammals must describe aspects of very ancient environments as well as more recent ones. The DNA of a camel was once in the sea, but it hasn't been there for a good 300 million years. It has spent most of recent geological history in deserts, programming bodies to withstand dust and conserve water. Like sandbluffs carved into
fantastic shapes by the desert winds, like rocks shaped by ocean waves, camel DNA has been sculpted by survival in ancient deserts, and even more ancient seas, to yield modern camels. Camel DNA speaks - if only we could understand the language - of the changing worlds of camel ancestors. If only we could read the language, the DNA of tuna and starfish would have 'sea' written into the text. The DNA of moles and earthworms would spell 'underground'. Of course all the DNA would spell many other things as well. Shark and cheetah DNA would spell 'hunt', as well as separate messages about sea and land. Monkey and cheetah DNA would spell 'milk'. Monkey and sloth DNA would spell 'trees'. Whale and dugong DNA presumably describes very ancient seas, fairly ancient lands and more recent seas: complicated palimpsests again.
Features of the environment that occur frequently or importantly are heavily emphasized or 'weighted' in the genetic description, compared with rare or trivial features. Environments that lie in the remote past have a different weighting from recent ones, presumably lower, though not in any obvious way. Environments that lasted a long time in the species' history will have a more prominent weighting in the genetic description than environmental events that, however drastic they may have seemed at the time, were geological flashes in the pan.
It has been poetically suggested that the remote marine apprenticeship of all land life is reflected in the biochemistry of the blood, which is said to resemble a primeval salt sea. Or the liquid in a reptile's egg has been described as a private pond, relic of the actual ponds in which the larvae of distant, amphibious ancestors would have grown. To the extent that animals and their genes bear such a stamp of ancient history it will be for good functional reasons. It won't be history for history's sake. Here is the kind of thing I mean by this. When our remote ancestors lived in the sea, many of our biochemical and metabolic processes became geared to the chemistry of the sea - and our genes became a description of marine chemistry - for functional reasons. But (this is an aspect of our 'selfish Cooperator' argument) biochemical processes become geared not only to the external world but to each other. The world to which they became fitted included the other molecules in the body and the chemical processes in which they partook. Thereafter, when remote descendants of these marine animals moved out on to the land and became gradually more and more fitted to a dry airy world, the old mutual adaptation of biochemical processes to each other - and incidentally to the chemical 'memory' of the sea - persisted. Why should it not, when the different kinds of molecules in the cells and blood so greatly outnumber the different kinds of molecules encountered in the outside world? It is only in a very indirect sense that the genes spell out descriptions of ancestral environments. What they directly describe, after being translated into the parallel language of protein molecules, is instructions for individual
embryonic development. It is the gene pool of the species as a whole that becomes carved to fit the environments that its ancestors have encountered - which is why I said that the species is a statistical averaging device. It is in this indirect sense that our DNA is a coded description of the worlds in which our ancestors survived. And isn't it an arresting thought? We are digital archives of the African Pliocene, even of Devonian seas; walking repositories of wisdom out of the old days. You could spend a lifetime reading in this ancient library and die unsated by the wonder of it.
11
REWEAVING THE WORLD
Since my education began I have always had things described to me with their colors and sounds, by one with keen senses and a fine feeling for the significant.
Therefore, I habitually think of things as colored and resonant. Habit accounts for part.
The soul sense accounts for another part.
The brain with its five-sensed construction asserts its right and accounts for the rest.
Inclusive of all, the unity of the world demands that color be kept in it whether I have cognizance of it or not.
Rather than be shut out, I take part in it by discussing it, happy in the happiness of those near to me who gaze at the lovely hues of the sunset or the rainbow.
HELEN KELLER, The Story of My Life (1902)
Where the gene pool of a species is sculpted into a set of models of ancestral worlds, the brain of an individual houses a parallel set of models of the animal's own world.
Both are equivalent to descriptions of the past, and both are used to aid survival into the future. The difference is one of timescale and of relative privacy. The genetic description is a collective memory belonging to the species as a whole, going back into the indefinite past. The memory of the brain is private and contains the individual's experiences since it was born.
Our subjective knowledge of a familiar place does indeed feel to us like a model of the place. Not an accurate scale model, certainly less accurate
than we think it is, but a serviceable model for the purposes required. One way to approach this idea was proposed some years ago by the Cambridge physiologist Horace Barlow, incidentally a direct descendant of Charles Darwin. Barlow is especially interested in vision and his argument starts from the realization that to recognize an object is a much more difficult problem than we, who seem to see so effortlessly, ordinarily understand.
For we are blissfully unaware of what a formidably clever thing we do every second of our waking lives when we see and recognize objects. The sense organs' task of unweaving the physical stimuli that bombard them is easy compared with the brain's task of reweaving an internal model of the world that it can then make use of. The argument holds for all our sensory systems, but I'll stick mostly to vision because that is the one that means the most to us. Think what a problem our brain solves when it recognizes something, say a letter A. Or think of the problem of recognizing a particular person's face. By long in-group convention, the hypothetical face we are talking about is assumed to belong to the grandmother of the distinguished neurobiologist J. Lettvin, but substitute any face you know, or indeed any object you can recognize. We are not concerned here with subjective consciousness, with the philosophically hard problem of what it means to be aware of your grandmother's face. Just a cell in the brain which fires if and only if the grandmother's face appears on the retina will do nicely for a start, and it is very difficult to arrange. It would be easy if we could assume that the face would always fall exactly on a particular part of the retina. There could be a keyhole arrangement, with a grandmother-shaped region of cells on the retina wired up to a grandmother-signalling cell in the brain. Other cells - members of the 'anti-keyhole' - would have to be wired up in inhibitory fashion, otherwise the central nervous cell would respond to a white sheet just as strongly as to the grandmother's face which - together with all other conceivable images - it would necessarily 'contain'. The essence of responding to a key image is to avoid responding to everything else.
The keyhole strategy is ruled out by sheer force of numbers.
Even if Lettvin needed to recognize nothing but his grandmother, how could he cope when her image falls on a different part of the retina? How cope with her image's changing size and shape as she approaches or recedes, as she turns sideways, or cants to the rear, as she smiles or as she frowns? If we add up all possible combinations of keyholes and anti-keyholes, the number enters the astronomical range. When you realize that Lettvin can recognize not only his grandmother's face but hundreds of other faces, the other bits of his grandmother and of other people, all the letters of the alphabet, all the thousands of objects to which a normal person can instantly give a name, in all possible orientations and
apparent sizes, the explosion of triggering cells gets rapidly out of hand. The American psychologist Fred Attneave, who had come up with the same general idea as Barlow, dramatized the point by the following calculation: if there were just one brain cell to cope, keyhole fashion, with each image that we can distinguish in all its presentations, the volume of the brain would have to be measured in cubic light years.
How then, with a brain capacity measured only in hundreds of cubic centimetres, do we do it? The answer was proposed in the 1950s by Barlow and Attneave independently. They suggested that nervous systems exploit the massive redundancy in all sensory information. Redundancy is jargon from the world of information theory, originally developed by engineers concerned with the economics of telephone line capacity. Information, in the technical sense, is surprise value, measured as the inverse of expected probability. Redundancy is the opposite of information, a measure of unsurprisingness, of old-hatitude. Redundant messages or parts of messages are not informative because the receiver, in some sense, already knows what is coming. Newspapers do not carry headlines saying, 'The sun rose this morning'. That would convey almost zero information. But if a morning came when the sun did not rise, headline writers would, if any survived, make much of it. The information content would be high, measured as the surprise value of the message. Much of spoken and written language is redundant - hence the possibility of condensed telegraphese: redundancy lost, information preserved.
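The surprise-value measure can be made concrete in a few lines of Python. This is only an illustrative sketch of the standard information-theoretic definition; the function name and the probabilities are inventions for the example:

```python
import math

def surprisal_bits(probability):
    """Information content of an event, in bits: the rarer the event, the more it tells us."""
    return -math.log2(probability)

# An expected event carries almost no information...
print(surprisal_bits(0.999))   # 'the sun rose this morning': a tiny fraction of a bit
# ...while a near-impossible one carries a great deal.
print(surprisal_bits(0.001))   # 'the sun failed to rise': nearly 10 bits
```

A certain event (probability 1) yields exactly zero bits, which is why the redundant parts of a message can be cut without loss.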
Everything that we know about the world outside our skulls comes to us via nerve cells whose impulses chatter like machine guns. What passes along a nerve cell is a volleying of 'spikes', impulses whose voltage is
fixed (or at least irrelevant) but whose rate of arrival varies meaningfully. Now let's think about coding principles. How would you translate information from the outside world, say, the sound of an oboe or the temperature of a bath, into a pulse code? A first thought is a simple rate code: the hotter the bath, the faster the machine gun should fire. The brain, in other words, would have a thermometer calibrated in pulse rates. Actually, this is not a good code because it is uneconomical with pulses. By exploiting redundancy, it is possible to devise codes that convey the same information at a cost of fewer pulses. Temperatures in the world mostly stay the same for long periods at a time. To signal 'It is hot, it is hot, it is still hot. . . ' by a continuously high rate of machine-gun pulses is wasteful; it is better to say, 'It has suddenly become hot' (now you can assume that it will stay the same until further notice).
And, satisfyingly, this is what nerve cells mostly do, not just for signalling temperature but for signalling almost everything about the world. Most nerve cells are biased to signal changes in the world. If a trumpet plays a long sustained note, a typical nerve cell telling the brain
about it would show the following pattern of impulses: Before the trumpet starts, low firing rate; immediately after the trumpet starts, high firing rate; as the trumpet carries on sustaining its note, the firing rate dies away to an infrequent mutter; at the moment when the trumpet stops, high firing rate, dying away to a resting mutter again. Or there might be one class of nerve cells that fire only at the onset of sounds and a different class of cells that fire only when sounds go off. Similar exploitation of redundancy - screening out of the sameness in the world - goes on in cells that tell the brain about changes in light, changes in temperature, changes in pressure. Everything about the world is signalled as change, and this is a major economy.
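The economy of signalling only changes can be sketched as a toy code. The function names and the bath-temperature figures below are invented for illustration; the point is that three messages reconstruct nine readings exactly:

```python
def change_code(samples):
    """Transmit a reading only when it differs from the last transmitted value."""
    messages = []
    last = None
    for t, value in enumerate(samples):
        if last is None or value != last:
            messages.append((t, value))   # 'it has suddenly become hot'
            last = value
    return messages

def reconstruct(messages, length):
    """The receiver fills in the silences: no news means 'no change'."""
    lookup = dict(messages)
    signal, current = [], None
    for t in range(length):
        current = lookup.get(t, current)
        signal.append(current)
    return signal

bath = [20, 20, 20, 40, 40, 40, 40, 25, 25]
msgs = change_code(bath)
print(msgs)                                   # [(0, 20), (3, 40), (7, 25)]
assert reconstruct(msgs, len(bath)) == bath   # nothing lost, only redundancy
```

Like the trumpet-signalling nerve cell, the transmitter is silent while the world stays the same, yet the receiver's reconstruction is complete and unabridged.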
But you and I don't seem to hear the trumpet die away. To us the trumpet seems to carry on at the same volume and then to stop abruptly. Yes, of course. That's what you'd expect because the coding system is ingenious. It doesn't throw away information, it only throws away redundancy. The brain is told only about changes, and it is then in a position to reconstruct the rest. Barlow doesn't put it like this, but we could say that the brain constructs a virtual sound, using the messages supplied by the nerves coming from the ears. The reconstructed virtual sound is complete and unabridged, even though the messages
themselves are economically stripped down to information about changes. The system works because the state of the world at a given time is
usually not greatly different from the preceding second. Only if the world changed capriciously, randomly and frequently, would it be economical for sense organs to signal continuously the state of the world. As it is, sense organs are set up to signal, economically, the discontinuities in the world, and the brain, assuming correctly that the world doesn't change capriciously and at random, uses the information to construct an
internal virtual reality in which the continuity is restored.
The world presents an equivalent kind of redundancy in space, and the nervous system uses the corresponding trick. Sense organs tell the brain about edges and the brain fills in the boring bits between. Suppose you are looking at a black rectangle on a white background. The whole scene is projected on to your retina - you can think of the retina as a screen covered with a dense carpet of tiny photocells, the rods and cones. In theory, each photocell could report to the brain the exact state of the light falling upon it. But the scene we are looking at is massively redundant. Cells registering black are overwhelmingly likely to be surrounded by other cells registering black. Cells registering white are nearly all surrounded by other white-signalling cells. The important exceptions are cells on edges. Those on the white side of an edge signal white themselves and so do their neighbours that sit further into the white area. But their neighbours on the other side are in the black area. The brain can theoretically reconstruct the whole scene if just the retinal
cells on edges fire. If this could be achieved there would be massive savings in nerve impulses.
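The saving can be sketched for a single scan line of the black-rectangle scene. This is a toy edge code, not a model of real retinal wiring; the function names and the fourteen-cell row are invented for the example:

```python
def encode_edges(row):
    """Report only the starting value and the positions where the value changes."""
    edges = [i for i in range(1, len(row)) if row[i] != row[i - 1]]
    return row[0], edges

def decode_edges(start, edges, length):
    """Reconstruct the full row by flipping the value at each reported edge."""
    row, value = [], start
    for i in range(length):
        if i in edges:
            value = 'B' if value == 'W' else 'W'
        row.append(value)
    return row

# one scan line across a black rectangle on a white background
scene = list('WWWWBBBBBBWWWW')
start, edges = encode_edges(scene)
print(start, edges)          # 'W' [4, 10]: two edges stand in for fourteen cells
assert decode_edges(start, edges, len(scene)) == scene
```

Only the cells at positions 4 and 10 need to fire; the uniform stretches between them are filled in by rule.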
Once again, redundancy is removed and only information gets through.
Elegantly, the economy is achieved in practice by the mechanism known as 'lateral inhibition'. Here's a simplified version of the principle, using our analogy of the screen of photocells. Each photocell sends one long wire to the central computer (brain) and also short wires to its immediate neighbours in the photocell screen. The short connections to the neighbours inhibit them, that is, turn down their firing rate. It is easy to see that maximal firing will come only from cells that lie along edges, for they are inhibited from one side only. Lateral inhibition of this kind is common among the low-level units of both vertebrate and invertebrate eyes.
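A minimal sketch of the principle, assuming a one-dimensional screen of photocells and an arbitrary inhibition weight of 0.5 (both assumptions are mine, for the example):

```python
def lateral_inhibition(photocells, weight=0.5):
    """Each cell's output is its own excitation minus a fraction of its neighbours'."""
    n = len(photocells)
    out = []
    for i, cell in enumerate(photocells):
        left = photocells[max(i - 1, 0)]        # replicate values at the screen's rim
        right = photocells[min(i + 1, n - 1)]
        out.append(cell - weight * (left + right))
    return out

# uniform white (1.0) meeting uniform black (0.0)
retina = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
print(lateral_inhibition(retina))
# [0.0, 0.0, 0.0, 0.5, -0.5, 0.0, 0.0, 0.0]
```

Cells deep inside a uniform patch are inhibited from both sides and fall silent; the cell on the white side of the edge, inhibited from one side only, fires strongest.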
Once again, we could say that the brain constructs a virtual world which is more complete than the picture relayed to it by the senses. The information which the senses supply to the brain is mostly information about edges. But the model in the brain is able to reconstruct the bits between the edges. As in the case of discontinuities in time, an economy is achieved by the elimination - and later reconstruction in the brain - of redundancy. This economy is possible only because uniform patches exist in the world. If the shades and colours in the world were randomly dotted about, no economical remodeling would be possible.
Another kind of redundancy stems from the fact that many lines in the real world are straight, or curved in smooth and therefore predictable (or mathematically reconstructable), ways. If the ends of a line are specified, the middle can be filled in using a simple rule that the brain already 'knows'. Among the nerve cells that have been discovered in the brains of mammals are the so-called 'line-detectors', neurones that fire whenever a straight line, aligned in a particular direction, falls on a particular place in the retina, the so-called 'retinal field' of the brain cell. Each of these line-detector cells has its own preferred direction. In the cat brain, there are only two preferred directions, horizontal and vertical, with an approximately equal number favouring each direction; however, in monkeys other angles are accommodated. From the point of view of the redundancy argument, what is going on here is as follows. In the retina, all the cells along a straight line fire and most of these impulses are redundant. The nervous system economizes by using a single cell to register the line, labelled with its angle. Straight lines are economically specified by their position and direction alone, or by their ends, not by the light value of every point along their length. The brain reweaves a virtual line in which the points along the line are reconstructed.
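The economy of specifying a straight line by its ends alone can be sketched as a simple linear interpolation. The function name is mine, and real line-detector neurones of course do nothing so tidy; the sketch only shows that the interior points carry no extra information:

```python
def reconstruct_line(start, end, n_points):
    """Fill in the points of a straight line from its two endpoints alone."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * i / (n_points - 1),
             y0 + (y1 - y0) * i / (n_points - 1)) for i in range(n_points)]

# the ends specify everything; every interior point is redundant
print(reconstruct_line((0, 0), (10, 5), 6))
# [(0.0, 0.0), (2.0, 1.0), (4.0, 2.0), (6.0, 3.0), (8.0, 4.0), (10.0, 5.0)]
```

Two coordinate pairs replace an arbitrarily long list of points, which is just the saving the line-detector achieves.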
However, if a part of a scene suddenly detaches itself from the rest and starts to crawl over the background, it is news and should be signalled. Biologists have indeed discovered nerve cells that are silent until something moves against a still background. These cells don't respond when the entire scene moves - that would correspond to the sort of apparent movement the animal would see when it itself moves. But movement of a small object against a still background is information-rich and there are nerve cells tuned to detect it. The most famous of these are the so-called 'bug-detectors' discovered in frogs by Lettvin (he of the grandmother) and his colleagues. A bug-detector is a cell which is apparently blind to everything except the movement of small objects against their background. As soon as an insect moves in the field covered by a bug-detector, the cell immediately initiates massive signalling and the frog's tongue is likely to shoot out to catch the insect. To a sufficiently sophisticated nervous system, though, even the movement of a bug is redundant if it is movement in a straight line. Once you've been told that a bug is moving steadily in a northerly direction, you can assume that it will continue to move in this direction until further notice. Carrying the logic a step further, we should expect to find higher-order movement detector cells in the brain that are especially sensitive to change in movement, say, change in direction or change in speed. Lettvin and his colleagues found a cell that seems to do this, again in the frog. In their paper in Sensory Communication (1961) they describe a particular experiment as follows:
Let us begin with an empty gray hemisphere for the visual field. There is usually no response of the cell to turning on and off the illumination. It is silent. We bring in a small dark object, say 1 to 2 degrees in diameter, and at a certain point in its travel, almost anywhere in the field, the cell suddenly 'notices' it. Thereafter, wherever that object is moved it is tracked by the cell. Every time it moves, with even the faintest jerk, there is a burst of impulses that dies down to a mutter that continues as long as the object is visible. If the object is kept moving, the bursts signal discontinuities in the movement, such as the turning of corners, reversals, and so forth, and these bursts occur against a continuous background mutter that tells us the object is visible to the cell. . .
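The logic of such a higher-order detector can be sketched as a toy program that 'bursts' only when the velocity of a tracked object changes. The function name, the track and the tolerance are all invented for the example:

```python
def direction_changes(positions, tolerance=1e-9):
    """Fire only when the velocity changes: steady straight-line motion is redundant."""
    bursts = []
    prev_v = None
    for t in range(1, len(positions)):
        (x0, y0), (x1, y1) = positions[t - 1], positions[t]
        v = (x1 - x0, y1 - y0)
        if prev_v is not None and (abs(v[0] - prev_v[0]) > tolerance or
                                   abs(v[1] - prev_v[1]) > tolerance):
            bursts.append(t)      # a burst of impulses: the bug turned a corner
        prev_v = v
    return bursts

# a bug walking north, turning east, then stopping
track = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 2)]
print(direction_changes(track))   # [3, 5] - bursts at the turn and at the stop
```

Like Lettvin's cell, the detector is quiet while the bug moves predictably and signals only the discontinuities, the turn and the stop.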
To summarize, it is as if the nervous system is tuned at successive hierarchical levels to respond strongly to the unexpected, weakly or not at all to the expected. What happens at successively higher levels is that the definition of that which is expected becomes progressively more sophisticated. At the lowest level, every spot of light is news. At the next level up, only edges are 'news'. At a higher level still, since so many edges are straight, only the ends of edges are news. Higher again, only movement is news. Then only changes in rate or direction of movement. In Barlow's terms derived from the theory of codes, we could say that the
nervous system uses short, economical words for messages that occur frequently and are expected; long, expensive words for messages that occur rarely and are not expected. It is a bit like language, in which (the generalization is called Zipf's Law) the shortest words in the dictionary are the ones most often used in speech. To push the idea to an extreme, most of the time the brain does not need to be told anything because what is going on is the norm. The message would be redundant. The brain is protected from redundancy by a hierarchy of filters, each filter tuned to remove expected features of a certain kind.
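The principle of short words for common messages is exactly what a Huffman code achieves. Here is a sketch, with invented message frequencies standing in for a sense organ's repertoire; nothing suggests the nervous system literally builds Huffman trees, but the economics are the same:

```python
import heapq

def huffman_code(frequencies):
    """Assign short binary words to frequent messages, long words to rare ones."""
    heap = [(freq, i, {symbol: ''}) for i, (symbol, freq) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)   # the two rarest groups...
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c0.items()}
        merged.update({s: '1' + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, counter, merged))   # ...get an extra bit each
        counter += 1
    return heap[0][2]

# 'same as before' is overwhelmingly the commonest message a sense organ could send
codes = huffman_code({'no change': 90, 'edge': 6, 'movement': 3, 'new object': 1})
print(codes)   # the commonest message gets a one-bit word, the rarest a three-bit word
```

As with Zipf's Law in language, the expected costs almost nothing to say, and only the surprising is expensive.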
It follows that the set of nervous filters constitutes a kind of summary description of the norm, of the statistical properties of the world in which the animal lives. It is the nervous equivalent of our insight of the previous chapter: that the genes of a species come to constitute a statistical description of the worlds in which its ancestors were naturally selected. Now we see that the sensory coding units with which the brain confronts the environment also constitute a statistical description of that environment. They are tuned to discount the common and emphasize the rare. Our hypothetical zoologist of the future should therefore be able, by inspecting the nervous system of an unknown animal and measuring the statistical biases in its tuning, to reconstruct the statistical properties of the world in which the animal lived, to read off what is common and what rare in the animal's world.
The inference would be indirect, in the same way as for the case of the genes. We would not be reading the animal's world as a direct description. Rather, we'd infer things about the animal's world by inspecting the glossary of abbreviations that its brain used to describe it. Civil servants love acronyms like CAP (Common Agricultural Policy) and HEFCE
(Higher Education Funding Council for England); fledgling bureaucrats surely need a glossary of such abbreviations, a codebook. If you find
such a codebook dropped in the street, you could work out which ministry it came from by seeing which phrases have been granted abbreviations, presumably because they are commonly used in that ministry. An intercepted codebook is not a particular message about the world, but it is a statistical summary of the kind of world which this code was designed to describe economically.
We can think of each brain as equipped with a store cupboard of basic images, useful for modelling important or common features of the animal's world. Although, following Barlow, I have emphasized learning as the means by which the store cupboard is stocked, there is no reason why natural selection itself, working on genes, should not do some of the work of filling up the cupboard. In this case, following the logic of the previous chapter, we should say that the store cupboard in the brain contains images from the ancestral past of the species. We could call it a
collective unconscious, if the phrase had not become tarnished by association.
But the biases of the image kit in the cupboard will not only reflect what is statistically unexpected in the world. Natural selection will ensure that the repertoire of virtual representations is also well endowed with images that are of particular salience or importance in the life of the particular kind of animal and in the world of its ancestors, even if these are not especially common. An animal may need only once in its life to recognize a complicated pattern, say the shape of a female of its species, but on that occasion it is vitally important to get it right, and do so without delay. For humans, faces are of special importance, as well as being common in our world. The same is true of social monkeys. Monkey brains have been found to possess a special class of cells which fire at full strength only when presented with a complete face. We've already seen that humans with particular kinds of localized brain damage experience a very peculiar, and revealing, kind of selective blindness. They can't recognize faces. They can see everything else, apparently normally, and they can see that a face has a shape, with features. They can describe the nose, the eyes and the mouth. But they can't recognize the face even of the person they love best in all the world.
Normal people not only recognize faces. We seem to have an almost indecent eagerness to see faces, whether they are really there or not. We see faces in damp patches on the ceiling, in the contours of a hillside, in clouds or in Martian rocks. Generations of moongazers have been led, by the most unpromising of raw materials, to invent a face in the pattern of craters on the moon. The Daily Express (London) of 15 January 1998 bestowed most of a page, complete with banner headline, on the story that an Irish cleaning woman saw the face of Jesus in her duster: 'Now a stream of pilgrims is expected at her semi-detached home . . . The woman's parish priest said, "I've never seen anything like it before in my 34 years in the priesthood."' The accompanying photograph shows a pattern of dirty polish on a cloth which slightly resembles a face of some kind: there is a faint suggestion of an eye on one side of what could be a nose; there is also a sloping eyebrow on the other side which gives it a look of Harold Macmillan, although I suppose even Harold Macmillan might look like Jesus to a suitably prepared mind. The Express reminds us of similar stories, including the 'nun bun' served up in a Nashville cafe, which 'resembled the face of Mother Teresa, 86' and caused great excitement until 'the aged nun wrote to the cafe demanding the bun be removed'.
The eagerness of the brain to construct a face, when offered the slightest encouragement, fosters a remarkable illusion. Get an ordinary mask of a
human face - President Clinton's face, or whatever is on sale for fancy dress parties. Stand it up in a good light and look at it from the far side of the room. If you look at it the normal way round, not surprisingly it looks solid. But now turn the mask so that it is facing away from you and look at the hollow side from across the room. Most people see the illusion immediately. If you don't, try adjusting the light. It may help if you shut one eye, but it is by no means necessary. The illusion is that the hollow side of the mask looks solid. The nose, brows and mouth stick out towards you and seem nearer than the ears. It is even more striking if you move from side to side, or up and down. The apparently solid face seems to turn with you, in an odd, almost magical way. I'm not talking about the ordinary experience we have when the eyes of a good portrait seem to follow you around the room. The hollow mask illusion is far more spooky. It seems to hover, luminously, in space. The face really, really seems to turn. I have a mask of Einstein's face mounted in my room, hollow side out, and visitors gasp when they glimpse it. The illusion is most strikingly displayed if you set the mask on a slowly rotating turntable. As the solid side turns before you, you'll see it move in a sensible 'normal reality' way. Now the hollow side comes into view and something extraordinary happens. You see another solid face, but it is rotating in the opposite direction. Because one face (say, the real solid face) is turning clockwise while the other, pseudo-solid face appears to be turning anticlockwise, the face that is rotating into view seems to swallow up the face that is rotating away from view. As the turning continues, you then see the really hollow but apparently solid face rotating firmly in the wrong direction for a while, before the really solid face reappears and swallows up the virtual face.
The whole experience of watching the illusion is quite unsettling and it remains so no matter how long you go on watching it. You don't get used to it and don't lose the illusion.
What is happening? We can take the answer in two stages. First, why do we see the hollow mask as solid? And second, why does it seem to rotate in the wrong direction? We've already agreed that the brain is very good at - and very keen on - constructing faces in its internal simulation room. The information that the eyes are feeding to the brain is of course compatible with the mask's being hollow, but it is also compatible - just - with an alternative hypothesis, that it is solid. And the brain, in its simulation, goes for the second alternative, presumably because of its eagerness to see faces. So it overrules the messages from the eyes that say, 'This is hollow'; instead, it listens to the messages that say, 'This is a face, this is a face, face, face, face. ' Faces are always solid. So the brain takes a face model out of its cupboard which is, by its nature, solid.
But having constructed its apparently solid face model, the brain is caught in a contradiction when the mask starts to rotate. To simplify the
explanation, suppose that the mask is that of Oliver Cromwell and that his famous warts are visible from both sides of the mask. When looking
at the hollow interior of the nose, which is really pointing away from the viewer, the eye looks straight across to the right side of the nose where there is a prominent wart. But the constructed virtual nose is apparently pointing towards the viewer, not away, and the wart is on what, from the virtual Cromwell's point of view, would be his left side, as if we were looking at Cromwell's mirror image. As the mask rotates, if the face were really solid, our eye would see more of the side that it expected to see more of and less of the side that it expected to see less of. But because the mask is actually hollow, the reverse happens. The relative
proportions of the retinal image change in the way the brain would
expect if the face were solid but rotating in the opposite direction. And that is the illusion that we see. The brain resolves the inevitable contradiction, as one side gives way to the other, in the only way possible, given its stubborn insistence on the mask's being a solid face: it
simulates a virtual model of one face swallowing up the other face.
The rare brain disorder that destroys our ability to recognize faces is called prosopagnosia. It is caused by injury to specific parts of the brain. This very fact supports the importance of a 'face cupboard' in the brain. I don't know, but I'd bet that prosopagnosics wouldn't see the hollow mask illusion. Francis Crick discusses prosopagnosia in his book The Astonishing Hypothesis (1994), together with other revealing clinical conditions. For instance, one patient found the following condition very frightening which, as Crick observes, is not surprising:
. . . objects or persons she saw in one place suddenly appeared in another without her being aware they were moving. This was particularly distressing if she wanted to cross a road, since a car that at first seemed far away would suddenly be very close . . . She experienced the world rather as some of us might see the dance floor in the strobe lighting of a discotheque.
This woman had a mental cupboard full of images for assembling her virtual world, just as we all do. The images themselves were probably perfectly good. But something had gone wrong with her software for deploying them in a smoothly changing virtual world. Other patients have lost their ability to construct virtual depth. They see the world as though it was made of flat, cardboard cut-outs. Yet other patients can recognize objects only if they are presented from a familiar angle. The rest of us, having seen, say, a saucepan from the side, can effortlessly recognize it from above. These patients have presumably lost some ability to manipulate virtual images and turn them around. The technology of virtual reality gives us a language to think about such skills, and this will be my next topic.
I shall not dwell on the details of today's virtual reality which is certain,
in any case, to become obsolete. The technology changes as rapidly as everything else in the world of computers. Essentially what happens is as follows. You don a headset which presents to each of your eyes a miniature computer screen. The images on the two screens are nearly
the same as each other, but offset to give the stereo illusion of three dimensions. The scene is whatever has been programmed into the computer: the Parthenon, perhaps, intact and in its original garish colours; an imagined landscape on Mars; the inside of a cell, hugely magnified. So far, I might have been describing an ordinary 3-D movie. But the virtual reality machine provides a two-way street. The computer doesn't just present you with scenes, it responds to you. The headset is wired up to register all turnings of your head, and other body movements, which would, in the normal course of events, affect your viewpoint. The computer is continuously informed of all such movements and - here is the cunning part - it is programmed to change the scene presented to the eyes, in exactly the way it would change if you were really moving your head. As you turn your head, the pillars of the Parthenon, say, swing round and you find yourself looking at a statue which, previously, had been 'behind' you.
A more advanced system might have you in a body stocking, laced with strain gauges to monitor the positions of all your limbs. The computer can now tell whenever you take a step, whenever you sit down, stand up, or wave your arms. You can now walk from one end of the Parthenon to the other, watching the pillars pass by as the computer changes the images in sympathy with your steps. Tread carefully because, remember, you are not really in the Parthenon but in a cluttered computer room. Present day virtual reality systems, indeed, are likely to tether you to the computer by a complicated umbilicus of cables, so let's postulate a future tangle-free radio link, or infrared data beam. Now you can walk freely in an empty real world and explore the fantasy virtual world that has been programmed for you. Since the computer knows where your body stocking is, there is no reason why it shouldn't represent you to yourself as a complete human form, an avatar, allowing you to look down at your 'legs', which might be very different from your real legs. You can watch your avatar's hands as they move in imitation of your real hands. If you use these hands to pick up a virtual object, say a Grecian urn, the urn will seem to rise into the air as you 'lift' it.
If somebody else, who could be in another country, dons another set of kit hooked up to the same computer, in principle you should be able to see their avatar and even shake hands - though with present day technology you might find yourself passing through each other like ghosts. The technicians and programmers are still working on how to
create the illusion of texture and the 'feel' of solid resistance. When I visited England's leading virtual reality company, they told me they get many letters from people wanting a virtual sexual partner. Perhaps in the future, lovers separated by the Atlantic will caress each other over the Internet, albeit incommoded by the need to wear gloves and a body stocking wired up with strain gauges and pressure pads.
Now let's take virtual reality a shade away from dreams and closer to practical usefulness. Present day doctors have recourse to the ingenious endoscope, a sophisticated tube that is inserted into a patient's body through, say, the mouth or the rectum and used for diagnosis and even surgical intervention. By the equivalent of pulling wires, the surgeon steers the long tube round the bends of the intestine. The tube itself has a tiny television camera lens at its tip and a light pipe to illuminate the way. The tip of the tube may also be furnished with various remote-control instruments which the surgeon can control, such as micro-scalpels and forceps.
In conventional endoscopy, the surgeon sees what he is doing using an ordinary television screen, and he operates the remote controls using his fingers. But as various people have realized (not least Jaron Lanier, who coined the phrase 'virtual reality' itself) it is in principle possible to give the surgeon the illusion of being shrunk and actually inside the patient's body. This idea is in the research stage, so I shall resort to a fantasy of how the technique might work in the next century. The surgeon of the future has no need to scrub up, for she need not go near her patient. She stands in a wide open area, connected by radio to the endoscope inside the patient's intestine. The miniature screens in front of her two eyes present a magnified stereo image of the interior of the patient
immediately in front of the tip of the endoscope. When she moves her head to the left, the computer automatically swivels the tip of the endoscope to the left. The angle of view of the camera inside the intestine faithfully moves to follow the surgeon's head movements in all three planes. She drives the endoscope forward along the intestine by her footsteps. Slowly, slowly, for fear of damaging the patient, the computer pushes the endoscope forwards, its direction always controlled by the direction in which, in a completely different room, the surgeon is walking. It feels to her as though she is actually walking through the intestine. It doesn't even feel claustrophobic. Following present day endoscopic practice, the gut has been carefully inflated with air, otherwise the walls would press in upon the surgeon and force her to crawl rather than walk.
When she finds what she is looking for, say a malignant tumour, the surgeon selects an instrument from her virtual toolbag.
